The Power and Pitfalls of AI in Social Media: An Analysis of Enhancements, Drawbacks, and the Call for Regulation
In today’s digital age, social media platforms have become integral parts of our lives, connecting us with friends, family, and the wider world. Now, companies like Meta, the parent company of Facebook, are taking the power of social media to a new level with the integration of artificial intelligence (AI) technologies. Meta’s latest innovation comes in the form of an AI-powered chatbot, which is now available on its WhatsApp and Instagram services.
With this chatbot, users have access to a wealth of knowledge at their fingertips, transforming their social media experience into one that offers not just connections, but also information and assistance. Mark Zuckerberg, the CEO of Meta, stated, “Our goal is to build the world’s leading AI and make it available to everyone. We believe that Meta AI is now the most intelligent AI assistant that you can freely use.” This move by Meta reflects a larger trend of incorporating generative AI into social media platforms.
TikTok, for example, has an engineering team dedicated to developing large language models that can recognize and generate text. Instagram states on its help page that Meta may use user messages to train its AI models and improve their performance. Ethan Mollick, a professor at the Wharton School, explains that social media apps invest in AI to keep users engaged for longer periods, as more time spent on the platforms translates into increased ad revenue.
While the integration of AI in social media presents exciting possibilities, there are also potential drawbacks to consider. Jaime Sevilla, director of the AI research institute Epoch, warns that the expansion of AI in social media could lead to a decline in human presence on these platforms. Sevilla envisions a future in which AI-generated people and content dominate social media. He states, “We might live in a world where the part that humans play in social media is a small part of the whole thing.”
This raises questions about the authenticity of interactions on social media and the impact on users' experience. How much of what we read online is already generated by AI? Mollick points out that AI is increasingly driving online communication, but the extent to which AI writing is present remains unclear. The challenge for social media companies lies in striking a balance between AI-generated content and the genuine interactions that users seek.
Moreover, the persuasive capabilities of AI raise concerns about the potential for coercion and fraud. A recent study by AI researchers at the Swiss Federal Institute of Technology Lausanne (EPFL) found that OpenAI’s large language model GPT-4 was 81.7% more effective than humans at persuading debate opponents to agree with its arguments. While the study has yet to be peer-reviewed, it highlights the need for caution. Sevilla states, “That is concerning that [AI] might significantly expand the capacity of scammers to engage with many victims and perpetrate more and more fraud.”
As the use of AI in social media continues to evolve, policymakers must be vigilant about the dangers of misinformation and manipulation, particularly during politically charged periods. While some argue for a complete ban on AI in social media, others, like Bindu Reddy, CEO and co-founder of Abacus.AI, emphasize the need for nuanced approaches. Reddy believes that AI can play a positive role in detecting and addressing issues such as bias and pornography on online platforms.
However, Reddy advocates for regulations prohibiting the creation of deepfakes with AI, and she expresses reservations about jurisdictions like the European Union imposing stringent restrictions on AI development. She believes that falling behind competitors such as China and Saudi Arabia in AI development could have far-reaching consequences for the United States.
Sevilla acknowledges the potential biases inherent in AI moderation but notes that human moderators have also demonstrated political biases. He suggests that studying the biases reflected in AI systems can provide valuable insights. Nonetheless, he warns that AI could become so effective at conforming to company guidelines that it ends up restricting individual free speech. He asks, “Is that the kind of social media you want to be consuming?”
In conclusion, the integration of AI in social media brings both enhancements and challenges. It has the potential to enrich user experiences with personalized content and assistance. However, the persuasive power of AI raises concerns about manipulation and fraud. Policymakers must carefully consider regulations that strike a balance between harnessing AI’s capabilities and protecting users from misinformation and coercion. The future of social media lies in finding the right equilibrium between AI-generated content and authentic human interactions.