In a potential blow to Microsoft’s Bing search engine, the European Union has raised concerns over the platform’s handling of artificial intelligence (AI) and its role in spreading deepfakes and false news. The European Commission has suggested that Bing’s AI image generator and chatbot may be violating the bloc’s content-moderation law, known as the Digital Services Act (DSA). While no formal investigation has been launched yet, the EU believes that Bing may have failed to meet the requirements to assess and mitigate the risks associated with its AI features.
Under the DSA, online platforms are expected to take responsibility for moderating the content that appears on their platforms. This includes AI-generated content, which has the potential to create fake information, spread false and misleading photos, and manipulate services.
According to a spokesperson from the European Commission, “The spread of disinformation, including deepfakes, can have damaging effects on individuals, society, and democratic processes. We need to ensure that online platforms understand their responsibilities and take the necessary actions to prevent the spread of such harmful content.”
The concerns raised by the EU are not unfounded. Deepfakes, AI-generated media (most often videos) in which one person's face is superimposed onto another's body, have become increasingly sophisticated and realistic. They pose a significant threat to public trust, as they can be used to spread misinformation, manipulate public opinion, and potentially undermine democratic processes.
Bing’s AI image generator and chatbot have faced scrutiny in the past. In recent years, there have been instances of AI tools generating false information that then spread across the internet. Automated manipulation of services, such as through chatbots, can likewise mislead and deceive users.
Microsoft, which owns and operates Bing, has been vocal about its commitment to addressing the challenges posed by AI and disinformation. In response to the EU’s concerns, a Microsoft spokesperson stated, “We take this responsibility seriously and are continuously working to improve our AI systems to detect and mitigate the risks associated with deepfakes and false information.”
The EU’s potential investigation into Bing’s AI practices is part of its broader effort to combat disinformation and hold online platforms accountable for the content they host. The outcome could have significant implications for the future regulation of AI and for the responsibilities of tech companies in combating disinformation.
As AI continues to evolve and play an increasingly prominent role in our lives, it is crucial to establish clear guidelines and regulations to prevent its misuse. The spread of deepfakes and false news can have far-reaching consequences, and it is essential for both governments and tech companies to work together to ensure the responsible development and deployment of AI technologies.
In the words of EU Commission Executive Vice-President Margrethe Vestager, “It’s important that we address the spread of disinformation in a comprehensive and coordinated way, taking into account the responsibilities of online platforms and the potential risks associated with AI tools. We need to strike a balance between the benefits of these technological advancements and the potential harms they can cause.”