Late last month, a disturbing incident shed light on the rise of deepfake pornography and sparked renewed calls for action. AI-generated, sexually explicit images of pop superstar Taylor Swift began circulating on social media, drawing outrage from her fans and the general public. This incident has once again emphasized the urgent need for measures to combat the widespread dissemination of deepfakes.
Deepfakes are highly realistic, AI-generated images or videos that can place anyone’s likeness in any imaginable scenario. While deepfakes have drawn concern for their potential to spread misinformation, research shows that a staggering 98% of deepfake videos online are pornographic, with women as the primary targets. Celebrities, particularly actresses, musicians, and social media influencers, are frequent subjects of deepfake porn, but there are also numerous instances of ordinary women and girls falling victim to this invasive and harmful practice.
Last year, a New Jersey high school became aware that some students had used artificial intelligence to create fake nude images of more than 30 of their classmates. Similar incidents have been reported at schools in the United States and abroad. While sharing real nude images without consent is illegal in most states, the laws governing AI-generated porn are much weaker, despite the harm it can cause to victims. Only about 10 states have statutes specifically banning deepfake porn, and there is no federal law addressing the issue. Although most social media platforms prohibit AI porn, lax moderation and the sheer scale of the problem allow it to permeate their platforms. For instance, a post featuring Swift deepfakes remained live on X (formerly Twitter) for 17 hours and garnered over 45 million views before being taken down.
Efforts are underway to combat deepfake porn, both through legislation and public pressure. Several bills have been proposed in Congress to establish nationwide protections against deepfakes. These proposals aim to either introduce new legal penalties for those creating and sharing deepfake porn or grant victims the right to seek damages. Supporters of these measures argue that even if they don’t eliminate every bad actor, they would set precedents that would deter others from engaging in such activities. Beyond legislative action, tech industry observers emphasize the importance of pressuring mainstream entities, such as social media platforms, search engines, AI developers, and credit card companies, to take deepfakes more seriously. Fear of potential lawsuits from high-profile individuals like Swift could create enough financial risk for these entities to implement stricter measures against deepfakes.
However, some experts believe that the battle against deepfake porn has already been lost. They argue that the scale of the problem and the difficulty of detecting and blocking so many deepfakes make it almost impossible to find a comprehensive solution. Even the most aggressive laws or policies would likely only capture a fraction of the vast amount of fake explicit content in circulation.
As for Swift, she is reportedly considering legal action in response to the deepfakes of her. However, the limited number of laws in place may restrict her options. Despite the newfound attention to this issue, there are currently no plans for Congress to vote on any of the proposed anti-AI porn measures.
The dialogue surrounding deepfake porn extends beyond legislation and legal action. Public awareness and pressure on Big Tech companies are crucial factors in mitigating the impact of deepfakes. Swift and her fans, for instance, could advocate for federal legal changes that specifically target deepfake porn. Furthermore, raising awareness of the devastating consequences of deepfakes can help counter the dismissive view that they are a lesser harm than physical sexual assault.
In the long run, tearing down the infrastructure that supports the AI porn economy is essential. Deepfake porn creators profit from their abuse, search engines point users to the websites that host such content, internet service providers keep those sites online, and credit card and payment companies process the transactions. Advertising companies also contribute to the proliferation of deepfakes by placing their clients’ products alongside this explicit content.
The alarming rise of deepfake pornography necessitates comprehensive and decisive action. It is not only a threat to individuals' privacy and consent but also highlights the broader perils of AI if left unchecked. Stricter laws, increased public pressure, and a concerted effort to disrupt the infrastructure supporting the AI porn industry are necessary to combat this invasive and harmful phenomenon. By doing so, we can strive for a more secure future for everyone.