In today’s rapidly advancing technological landscape, artificial intelligence (AI) has become a powerful tool, capable of producing convincing fakes that can upend real lives. One troubling application of AI is the creation of deepfakes, digitally manipulated images and videos that can be difficult to distinguish from reality. The sophistication of generative AI tools has made it cheap and easy to create and circulate deepfakes, posing a growing threat to women and sexual minorities in South Asia.
In recent incidents, Bollywood stars, politicians, and everyday women have fallen victim to the malicious use of deepfakes. These manipulated images and videos, which often go viral on social media, can unleash lust and vitriol, and even lead to physical harm. Deepfakes have been used to impersonate and target Bollywood stars such as Rashmika Mandanna, Katrina Kaif, Alia Bhatt, and Deepika Padukone. The proliferation of deepfakes is particularly challenging in conservative societies where women have long been harassed online and abuse has gone largely unpunished.
The majority of deepfake videos online are pornographic, with women being the primary targets. This trend is deeply concerning, as it exacerbates online harassment and gender-based violence. Digital rights experts argue that social media firms need to take more responsibility for combating deepfakes. While Google’s YouTube and Meta Platforms (which owns Facebook, Instagram, and WhatsApp) have updated their policies to require labeling of AI-generated content, the burden often falls on victims to take action.
Rumman Chowdhury, an AI expert at Harvard University, warns that generative AI will amplify online harassment and malicious content, and that women are the most vulnerable targets. If society does not pay attention now, she cautions, the problem will eventually affect everyone. Deepfakes have already been linked to incidents of harassment, scams, and sextortion worldwide.
Regulation of deepfakes has been slow, but some progress is being made. The US and the European Union are moving to address the dangers posed by deepfakes through an executive order and a proposed AI Act, respectively. In Asia, China requires providers to use watermarks and report illegal deepfakes, while South Korea has made it illegal to distribute deepfakes that harm the public interest. India, too, is taking a tough stance, drafting new rules that would hold social media firms accountable for promptly removing deepfakes.
However, experts argue that the focus should be on preventing incidents rather than merely reacting to them. Balancing privacy protection with the prevention of abuse is crucial. It is equally important to protect vulnerable communities, such as LGBTQ+ individuals, who face heightened risk from deepfakes.
The impact of deepfakes goes beyond individual harm. In countries like Bangladesh and Pakistan, deepfake videos targeting female politicians and LGBTQ+ individuals have emerged. These videos can jeopardize careers, perpetuate gender-based violence, and discourage women from participating in politics and online spaces.
A recent report revealed that entrenched gender biases in countries like India already hinder girls and young women from fully utilizing the internet. Deepfakes targeting powerful Bollywood stars have only brought further attention to the broader risk that AI poses to all women. This heightened focus should prompt platforms, policymakers, and society at large to create a safer and more inclusive online environment.
In conclusion, as generative AI technology becomes more advanced, the specter of deepfakes looms large in South Asia. Women and sexual minorities are particularly vulnerable, facing the potential for harm, harassment, and discrimination. It is imperative that social media platforms and policymakers take proactive steps to address this issue and protect the rights and safety of individuals in the digital realm.