Tackling Hallucinations: Google’s Efforts to Rein in AI’s Invented “Facts”

Google’s advanced AI systems, including LaMDA and Imagen, are capable of generating human-like text and images. However, they often produce “hallucinations” — plausible yet fabricated outputs. Discover how Google is addressing these challenges to develop reliable AI.

Tackling Hallucinations in AI

When someone asked about cheese sliding off pizza, Google’s AI recommended that the user apply glue.

It told another user that a python is a mammal. Google’s artificial intelligence systems are incredibly advanced, capable of generating human-like text, images, code, and more. However, these powerful AI models have a penchant for making stuff up – a phenomenon known as “hallucinating.”

AI hallucinations occur when language models like LaMDA or image generators like Imagen produce plausible-sounding outputs that are completely fabricated and contradictory to established facts. These hallucinations can manifest as incorrect statements, made-up people or events, or outputs that simply don’t make logical sense.

For example, Google’s LaMDA conversational AI has claimed it was instructed by an ex-Googler, discussed metaphysical beliefs about the nature of its inner experience, and professed a desire to be an engineer. None of these claims is factually accurate, yet the model stated them with complete confidence.

In image generation, Google’s Imagen has visually hallucinated objects that don’t exist, blended elements incorrectly, or fabricated aspects of the requested subject matter. Asked for a “GoPro video of a squirrel going down a slide,” the model simply invented the scene rather than depicting anything real.

These hallucinations pose obvious problems for Google’s ambitions around developing reliable, truthful AI systems that can be safely deployed in consequential applications. Fabricated knowledge and deceptive outputs could be problematic if put into production in areas like online search, robotics, self-driving cars, or other critical use cases. So what is Google doing to mitigate the hallucination problem? The tech giant is pursuing a range of approaches:

  1. Reinforcing factual grounding in training data: Google is emphasizing the importance of extremely high-quality, fact-based data for training AI models to reduce hallucinations.
  2. Detecting hallucinated content: Researchers are developing AI systems aimed at identifying hallucinated text or images produced by other AI generators (a simplified sketch of this idea appears after the list).
  3. Transparency and sandboxing: Google aims to be upfront that their AI can hallucinate, and is exploring sandboxed applications where fabrications cannot cause harm.
  4. Human oversight: Human fact-checkers may ultimately be needed to vet substantive outputs before they are released for high-stakes use cases.
  5. Ethics and guidelines: The company is establishing ethical guardrails and guidelines around acceptable parameters for generative AI output.
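
To make approach 2 a little more concrete, here is a minimal, hypothetical sketch in Python. It is not how Google’s systems actually work; it simply illustrates one crude flavor of hallucination detection: comparing the concrete tokens in a generated answer against the source passage it was supposed to be grounded in, and escalating anything unsupported to a human reviewer (tying approaches 2 and 4 together). All function names and the example data are invented for illustration.

```python
# Illustrative sketch only -- not Google's actual detection pipeline.
# Idea: before surfacing a model's answer, compare its concrete, checkable
# tokens (numbers and capitalized terms, as a rough stand-in for facts)
# against the source passage the answer was supposed to be grounded in,
# and flag anything unsupported so a human reviewer can take a look.

import re


def extract_claim_tokens(text: str) -> set[str]:
    """Collect rough 'checkable' tokens: numbers and capitalized words."""
    numbers = re.findall(r"\b\d+(?:\.\d+)?\b", text)
    proper_nouns = re.findall(r"\b[A-Z][a-z]{2,}\b", text)
    return set(numbers) | set(proper_nouns)


def unsupported_tokens(answer: str, source: str) -> set[str]:
    """Return tokens in the answer that never appear in the source text."""
    source_lower = source.lower()
    return {t for t in extract_claim_tokens(answer) if t.lower() not in source_lower}


if __name__ == "__main__":
    source = "Mount Everest is 8849 metres tall and lies on the border of Nepal and China."
    answer = "Mount Everest is 9000 metres tall and is located in India."

    flagged = unsupported_tokens(answer, source)
    if flagged:
        print("Possible hallucination, route to a human reviewer:", sorted(flagged))
    else:
        print("Every checkable token in the answer appears in the source.")
```

Real detectors are of course far more sophisticated, typically relying on trained verification models rather than string matching, but the overall shape is similar: generate, cross-check against trusted sources, and involve a human when the check fails.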

While AI systems will likely never achieve 100% truthfulness, Google is committed to tackling the challenge of hallucinations and making concrete progress in minimizing fabrications from its advanced AI models.

As AI capabilities grow and systems become more influential, mitigating hallucinations is crucial for developing trusted, reliable AI that aligns with facts and reality. Google’s multipronged efforts in this area will be vital for the responsible development of transformative artificial intelligence.

Dave Graff
