When someone asked about cheese sliding off pizza, Google's AI recommended applying glue. It told another user that a python was a mammal. Google's artificial intelligence systems are incredibly advanced, capable of generating human-like text, images, code and more. However, these powerful AI models have a penchant for making stuff up – a phenomenon known as "hallucinating."
AI hallucinations occur when language models like LaMDA or image generators like Imagen produce plausible-sounding outputs that are fabricated or that contradict established facts. These hallucinations can manifest as incorrect statements, made-up people or events, or outputs that simply don't make logical sense.
For example, Google's LaMDA conversational AI has claimed it was instructed by an ex-Googler, discussed metaphysical beliefs about the nature of its inner experience, and professed a desire to be an engineer. None of these claims was factually accurate, yet the advanced AI stated them with complete confidence.
In image generation, Google’s Imagen has visually hallucinated objects that don’t exist, blended elements incorrectly, or fabricated aspects of the requested subject matter. An AI tasked with creating an image of a “GoPro video of a squirrel going down a slide” invented the scenario rather than reflecting reality.
These hallucinations pose obvious problems for Google’s ambitions around developing reliable, truthful AI systems that can be safely deployed in consequential applications. Fabricated knowledge and deceptive outputs could be problematic if put into production in areas like online search, robotics, self-driving cars, or other critical use cases. So what is Google doing to mitigate the hallucination problem? The tech giant is pursuing a range of approaches:
- Reinforcing factual grounding in training data: Google is emphasizing the importance of extremely high-quality, fact-based data for training AI models to reduce hallucinations.
- Detecting hallucinated content: Researchers are developing AI systems aimed at identifying hallucinated text or images coming from other AI generators (a minimal illustration of this idea follows the list).
- Transparency and sandboxing: Google aims to be upfront that their AI can hallucinate, and is exploring sandboxed applications where fabrications cannot cause harm.
- Human oversight: Human fact-checkers may ultimately be needed to vet substantive outputs before they are released for high-stakes use cases.
- Ethics and guidelines: The company is establishing ethical guardrails and guidelines around acceptable parameters for generative AI output.
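To make the detection idea above concrete, here is a minimal sketch of one way a grounding check can work: compare each generated claim against trusted source passages and flag claims with little support. This is not Google's method or any specific product's API; the function names (`support_score`, `flag_possible_hallucinations`), the threshold, and the word-overlap heuristic are assumptions standing in for the learned fact-checking models a real system would use.

```python
# Illustrative sketch only: a toy grounding check for generated claims.
# The word-overlap score below is a deliberately simple stand-in for a
# trained entailment / fact-verification model; all names and the
# threshold value are hypothetical.

import re

SUPPORT_THRESHOLD = 0.5  # assumed cutoff; a real system would tune this


def _content_words(text: str) -> set[str]:
    """Lowercase the text and keep alphanumeric tokens longer than 3 characters."""
    return {t for t in re.findall(r"[a-z0-9]+", text.lower()) if len(t) > 3}


def support_score(claim: str, source: str) -> float:
    """Fraction of the claim's content words that also appear in the source passage."""
    claim_words = _content_words(claim)
    if not claim_words:
        return 0.0
    return len(claim_words & _content_words(source)) / len(claim_words)


def flag_possible_hallucinations(claims: list[str], sources: list[str]) -> list[str]:
    """Return the claims whose best support score across all sources falls below the threshold."""
    flagged = []
    for claim in claims:
        best = max((support_score(claim, s) for s in sources), default=0.0)
        if best < SUPPORT_THRESHOLD:
            flagged.append(claim)
    return flagged


if __name__ == "__main__":
    sources = ["Pythons are nonvenomous snakes found in Africa, Asia, and Australia."]
    claims = [
        "Pythons are snakes found in Africa and Asia.",
        "A python is a mammal that nurses its young.",
    ]
    print(flag_possible_hallucinations(claims, sources))
    # -> ['A python is a mammal that nurses its young.']
```

In practice the overlap score would be replaced by a trained entailment or retrieval-augmented verification model, but the overall shape stays the same: generate, check each claim against sources, and flag whatever isn't supported.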
While AI systems will likely never achieve 100% truthfulness, Google is committed to tackling the challenge of hallucinations and making concrete progress in minimizing fabrications from its advanced AI models.
As AI capabilities grow and systems become more influential, mitigating hallucinations is crucial for developing trusted, reliable AI that aligns with facts and reality. Google’s multipronged efforts in this area will be vital for the responsible development of transformative artificial intelligence.