Apparently, Google's new AI-based search is quite honest.

For those not aware, Google is rolling out their new AI-based "Generative AI" search, which seems to mesh Bard with the standard experience.

I asked it today why Google no longer follows their "don't be evil" motto... The results are pretty hilarious.

BrainWorms @lemm.ee

127 comments
  • I tried to use Bard to write some code the other day, and found it amusing that it doesn't just make up shit that doesn't exist, it makes up the excuses as well when you call it out on its bullshit.

    Like you tell it a particular class doesn't exist, and it pulls an old version of the compiler out of its arse and tells you the class was deprecated in that version.

    AI doesn't know where its limits are. It's incapable of saying "I don't know". They have invented a digital politician.

    • Reminds me of the AlphaStar AI that played StarCraft 2. It was probably at the low grandmaster level, but a big problem was that it didn't know when to just say "GG" and quit. It would just start doing random shit, and a human on the AlphaStar team would have to intervene and end the match.

      It takes actual intelligence to know when you're out of ideas, which these so-called AIs are lacking.

      • But these things are NOT intelligent. Language is not intelligence. These are predictive language models.

        Language is compelling but intelligence doesn't require it.

    • In the future we'll be ruled (regulated) by AIs, to which legitimate citizens are allowed to upload one approved document to add to its training data.

  • Google reached a point where "not being evil" was incompatible with its business goals.

    You can't fault it for a lack of honesty. Google is evil because it's good business.

    • We can herp derp about capitalism all we want, but this wouldn't change in a government-run program. An organization is only as ethical as the people that make it up. The military question was an inflection point, where the organization was really forced to deal with the question of how to define evil.

      Suddenly every person in that organization was forced to answer some questions. Is the existence of a military evil? Is it evil if I don't directly support those solutions? What if something I build is used to develop them indirectly? Even if it is not, am I now complicit?

      Now, I'm a Soldier, so I have a massive bias here. I personally cannot see why anyone would intentionally want to contribute to us getting killed or losing a war. Tech products are already used in the process. Toyota is not complicit in terrorism just because their trucks are the trucks of choice for insurgent logistics. That being said, if they started accepting contracts with insurgents, there would be an issue.

      A lot of it comes down to attitudes about the war on terror at the time. The funny thing is that the solutions they built are now focused on Eastern Europe, in a conflict that most people support, and were not completed in time to be used for counterinsurgency.

      The funny thing about the COIN fight is that information products simply made things more accurate through better intelligence. That meant less terrorism, due to fewer insurgents, and fewer of the civilian casualties that result in blowback. If poorer information resulted in higher civilian casualties, are the pacifists complicit in that?

      Again, I'm biased, so my perspective is one of this issue detracting from doing my job better. In the end, defining evil is not black and white, even if you could theoretically come to a specific answer for a specific circumstance with the magical power of all the knowledge in the world. It broke the culture of the company.

  • It has about the same tone as a typical autistic tech worker with an overdeveloped sense of justice and a loose sense for when it's impolitic to drop truth bombs.

    (for context, I am an autistic dev that's worked for some big corporations in my career)
