Scarlett Johansson, the renowned Hollywood actress, has taken legal action against an AI app that used her likeness without her permission. The advertisement in question, posted on the social media platform formerly known as Twitter, featured a 22-second clip generated by an app called Lisa AI: 90’s Yearbook & Avatar. The ad used real footage of Johansson to fabricate an image of her and dialogue she never spoke.
Variety magazine confirmed that the 38-year-old actress is not affiliated with the app, and her representatives have taken appropriate legal measures since the ad was discovered on October 28th. Johansson’s lawyer, Kevin Yorn, stated, “We do not take these things lightly. Per our usual course of action in these circumstances, we will deal with it with all legal remedies that we will have.”
Upon review of the video, Variety noted that it began with behind-the-scenes footage of Johansson on the set of the Marvel movie Black Widow. The clip then seamlessly transitioned into AI-generated photos resembling the actress, accompanied by a voice imitating her. The ad concluded with a promotion for the Lisa AI app, encouraging users to create not only avatars but also images with text and AI videos.
Despite a disclaimer underneath the advertisement stating that the images were produced by Lisa AI and had nothing to do with Johansson, legal action was taken, and the ad has been removed. It is worth noting that several other apps by Lisa AI are still available on the App Store and Google Play.
Johansson isn’t the only actor to express concern about the unauthorized use of their likeness by AI. Recently, Tom Hanks warned his followers on Instagram about a dental plan that used an AI-generated image of him in a promotional video. In a similar vein, several authors, including comedian Sarah Silverman, have filed copyright infringement lawsuits against OpenAI, the maker of ChatGPT, and Meta, Facebook’s parent company, alleging that their AI models were trained on the authors’ work without consent.
This incident is not the first time Johansson has faced the unauthorized use of her image. In 2018, she spoke to the Washington Post about the creation of “deepfakes,” computer-generated videos in which women’s faces are superimposed onto explicit content. Johansson expressed her frustration, stating, “Nothing can stop someone from cutting and pasting my image or anyone else’s onto a different body and making it look as eerily realistic as desired.”
She also noted the lack of regulation on the internet, calling it a “lawless abyss” that operates largely beyond the reach of US policy.
The unauthorized use of celebrity likenesses in AI-generated content raises important questions about privacy, consent, and the ethical implications of emerging technologies. Initiatives have been undertaken to address these concerns, such as the development of deepfake detection systems and the ongoing efforts of platforms to combat misleading or harmful content.
As AI continues to advance, it is crucial to strike a balance between the immense possibilities it offers and the potential risks it poses. With legal actions like Johansson’s, celebrities are taking a stand to protect their rights and assert control over their own image in the digital realm.
In an interview with the BBC, Jonathan Taplin, the director emeritus of the Annenberg Innovation Lab at the University of Southern California, emphasized the need for strong legal frameworks in this context. He said, “We are at a stage where technology is advancing faster than the laws around it — or the ethics. I think it’s incumbent on Hollywood and entertainment unions to get in early before politicians catch up.”