Yesterday, a letter penned by Shane Jones, an artificial intelligence engineer at Microsoft, shocked the tech industry. In the letter, published on LinkedIn, Jones alleges that Microsoft’s AI image generator lacks safeguards against creating violent and sexualized images, and that his repeated attempts to raise these issues with Microsoft management went unanswered. To escalate the matter, Jones sent the letter to the Federal Trade Commission and Microsoft’s board of directors.
Jones focuses on Microsoft’s Copilot Designer, a tool powered by OpenAI’s DALL-E 3 artificial intelligence system. This tool allows users to create images based on text prompts. However, Jones argues that it has “systemic problems” with producing harmful content. The letter provides alarming examples, such as the tool generating images of women in sexually objectifying poses when given unrelated prompts.
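For readers unfamiliar with the underlying technology, here is a minimal sketch of what a text-to-image request to DALL-E 3 looks like through OpenAI’s public Python SDK. This illustrates the model that powers Copilot Designer, not Microsoft’s own integration or its safety layers, and the prompt shown is hypothetical:

```python
# Minimal sketch: generating an image from a text prompt with DALL-E 3
# via OpenAI's public Python SDK. This shows the model's basic interface,
# not Copilot Designer's internal integration or safeguards.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse at dawn",  # hypothetical prompt
    size="1024x1024",
    n=1,  # DALL-E 3 generates one image per request
)

print(response.data[0].url)  # URL of the generated image
```

Jones’s complaint concerns what happens between a prompt like this and the returned image: the safeguards, or lack thereof, that decide which requests and outputs are allowed through.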
These allegations come at a time when concerns are mounting over the use of generative AI to spread disinformation and produce offensive content. Several other generative AI image tools have faced similar backlash recently: Google paused its Gemini tool’s ability to generate images of people after it produced historically inaccurate, racially skewed depictions.
Microsoft, however, denies ignoring safety issues and says it has dedicated teams that evaluate potential problems. The company facilitated meetings between Jones and its Office of Responsible AI to address his concerns. In a statement, a Microsoft spokesperson emphasized the company’s commitment to addressing employees’ concerns and improving the safety of its technology.
Microsoft launched Copilot as an AI companion last year and promoted it heavily as a groundbreaking tool for business and creative work. Under the tagline “Anyone. Anywhere. Any device.”, the company marketed Copilot as accessible to everyone. Jones argues that promoting Copilot Designer as safe for anyone to use is irresponsible, and that Microsoft fails to disclose the associated risks.
Notably, Microsoft updated Copilot Designer in January in response to safety concerns similar to those Jones raises. The update closed loopholes that had allowed the generation of fake sexualized images, after explicit AI-generated images of Taylor Swift spread on social media.
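Microsoft has not disclosed how that update works. As a purely hypothetical illustration, guardrails of this kind often begin with a prompt-screening step that rejects a request before it ever reaches the image model; the blocklist-only sketch below is an assumption for illustration, not Microsoft’s actual implementation:

```python
# Hypothetical sketch of a prompt-level guardrail. Production systems
# typically combine trained classifiers, blocklists, and scanning of the
# generated images themselves; this blocklist-only version is illustrative
# and is NOT Microsoft's actual implementation.
BLOCKED_TERMS = {"nude", "explicit"}  # placeholder terms for illustration


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt may proceed to the image model."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


if __name__ == "__main__":
    for p in ["a lighthouse at dawn", "an explicit photo of a celebrity"]:
        print(p, "->", "allowed" if screen_prompt(p) else "blocked")
```

Simple keyword filters like this are easy to bypass with rephrasing, which is one reason critics like Jones argue that prompt-level checks alone are insufficient for a tool marketed to the general public.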
Jones also alleges that he faced pressure from Microsoft’s Corporate, External, and Legal Affairs team. According to him, the team urged him to take down a LinkedIn post in which he asked OpenAI’s board of directors to suspend the availability of DALL-E 3. Jones claims his manager directed him to delete the post without providing any justification, even after he repeatedly asked for an explanation.
These concerns surrounding Microsoft’s AI image generator highlight the ongoing challenge of ensuring AI systems behave ethically and responsibly. The incident is a reminder that deploying AI technology requires meticulous attention to prevent the creation and spread of harmful content and bias.
As the field of AI continues to advance, it is essential for companies to prioritize robust safeguards and transparency. Without such measures, the potential for AI to generate disturbing and offensive content remains a persistent threat.