Snapchat's AI Chatbot 'My AI' Under Scrutiny for Privacy Risks

Snapchat’s artificial intelligence chatbot, “My AI”, is facing scrutiny from Britain’s data watchdog over concerns about privacy risks for children. The Information Commissioner’s Office (ICO) stated that Snapchat may not have properly assessed these risks before launching the chatbot. If the company fails to address these concerns, “My AI” may be banned in the UK.

Information Commissioner John Edwards expressed concern that Snap may not have adequately identified and assessed the privacy risks. However, this preliminary finding does not mean that British data protection laws have been breached, nor does it guarantee that the ICO will ultimately issue an enforcement notice.

Snap, the parent company of Snapchat, responded that it was reviewing the ICO’s notice and remained committed to user privacy. The company said “My AI” underwent a thorough legal and privacy review before being made public, and that it was willing to work with the ICO to ensure the regulator is satisfied with its risk assessment procedures.

The ICO’s investigation focuses on how “My AI” processes the personal data of Snapchat’s 21 million UK users, particularly children aged 13-17. The chatbot is powered by OpenAI’s ChatGPT, a prominent example of generative AI. Policymakers worldwide are currently grappling with the challenge of regulating such AI systems due to concerns related to privacy and safety.

Social media platforms like Snapchat are intended for users aged 13 and older, but keeping underage users off these platforms has proven difficult. In August, Reuters reported that the ICO was gathering information to determine whether Snapchat was doing enough to remove underage users from its platform.

The ICO’s scrutiny of Snapchat’s AI chatbot underscores the need for companies to thoroughly assess and address privacy risks, particularly those to children, before launching such technologies. The investigation also gives policymakers a concrete case to consider as they weigh how to regulate generative AI in the interest of user privacy and safety.

Privacy and safety remain central considerations in the development and deployment of AI systems. As the technology advances, companies, regulators, and policymakers will need to work together to establish robust frameworks that protect users, especially children, from potential risks and breaches of privacy.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.