In a recent demo, OpenAI introduced a new voice for ChatGPT called Sky, which sounds remarkably like Scarlett Johansson's performance in the film "Her" - a story about a man who falls in love with an artificial intelligence.
Johansson revealed that OpenAI had initially approached her to perform as Sky, but she declined. Despite this, OpenAI later released a version of Sky so similar to Johansson's voice that even her closest friends couldn't tell the difference.
While OpenAI has emphasized that Johansson was not the source for Sky, they've chosen not to disclose the identity of the vocal talent behind the product. However, the company also announced that they'll be pausing the use of Sky's voice in ChatGPT.
It remains unclear whether Johansson will take legal action against OpenAI, but this incident serves as a high-profile example of what can go wrong in the era of deepfakes and AI. If she does decide to sue, it will certainly be a case worth following.
In April, eight US daily newspapers (including the New York Daily News and Chicago Tribune) filed a lawsuit against OpenAI and Microsoft, alleging the unlawful use of copyrighted news articles in developing their AI chatbots.
Similarly, in December last year, the New York Times sued OpenAI, asserting that the creation and training of ChatGPT involved unauthorized use of copyrighted material.
A growing number of authors, entertainers, and copyright holders are joining the legal battle against OpenAI, claiming that their works were used without permission to train AI models. This extends to owners of photos, artwork, and even software code.
OpenAI's proposed solution to this conflict is straightforward: pay copyright owners upfront. The company has already secured licensing agreements with several publishers and was reportedly in negotiations with the New York Times before the lawsuit was filed.
However, it may not be as simple as that. The potential consequences of losing the lawsuits and being compelled to cease using copyrighted work for model training could be substantial for the AI giant, both financially and operationally.
OpenAI is facing another privacy complaint in the European Union. Filed by Vienna-based privacy rights nonprofit Noyb on behalf of an individual, the complaint targets ChatGPT's inability to correct misinformation it generates about individuals. Under the GDPR, people have rights when it comes to their personal data, including the right to have their data corrected.
According to Noyb, OpenAI has failed to ensure the accuracy of personal data processed by ChatGPT, and OpenAI has admitted it cannot correct inaccurate personal data that ChatGPT generates. Additionally, Noyb highlights OpenAI's lack of transparency regarding its data sources and exactly what data ChatGPT stores about people.
Noyb has requested that the Austrian data protection authority investigate OpenAI’s data processing and impose a fine to ensure future compliance. They also noted that this case will likely involve cooperation across the EU.
OpenAI is already dealing with a similar complaint in Poland, and the Italian data protection authority has an open investigation into ChatGPT. In short, OpenAI is facing multiple GDPR cases across Europe.