Hollywood actress Scarlett Johansson has found herself at the center of a legal grey area concerning artificial intelligence (AI) and intellectual property rights. The controversy arose when OpenAI, the AI research company behind ChatGPT, launched a new version of its chatbot featuring a voice assistant that bore an uncanny resemblance to Johansson’s distinctive voice.
The crux of the legal issue lies in whether OpenAI infringed upon Johansson’s right of publicity, a legal doctrine that protects an individual’s commercial interest in the use of their name, voice, and likeness. Johansson’s voice holds significant commercial value, particularly after her critically acclaimed performance as an AI voice assistant in the 2013 film “Her.”
OpenAI has argued that the voice in question does not belong to Johansson but rather to a different actress, and that no imitation was intended. Johansson’s legal team, however, may contend that the similarity is close enough to mislead consumers into believing she endorses or is associated with OpenAI’s product.
The case highlights the nascent and complex legal landscape surrounding AI-generated content and intellectual property rights. Here are some of the key aspects at play:
- Right of Publicity: Did OpenAI misappropriate Johansson’s voice for commercial gain, potentially damaging her reputation or causing confusion among consumers?
- Intent vs. Impact: While OpenAI claims no deliberate imitation, the determining factor may be the effect on consumers: did the voice create a likelihood of confusion or a false impression of endorsement?
- Fair Use: Could OpenAI claim fair use by arguing that the use of the voice was transformative or served the purpose of criticism or commentary? This defense seems unlikely here, given the voice’s commercial role in a consumer product.
The resolution of this case could have far-reaching implications for future AI development and the use of AI-generated content. In response to the controversy, OpenAI has proposed a “Media Manager” tool, which would allow creators to control how their work is used to train AI models. This could be a step towards establishing a “social contract” for the AI age, balancing the interests of creators and the development of AI technologies.
While Johansson may not have a strong case for substantial damages, since OpenAI has already discontinued the voice in question, the incident underscores the need for clearer legal guidelines on AI-generated content and its impact on intellectual property rights, privacy, and consumer protection.