Snap's AI chatbot, "My AI", has drawn scrutiny from the U.K.'s Information Commissioner’s Office (ICO) over potential privacy risks, especially to children. The core concern is whether Snap thoroughly assessed those risks before launching the feature.
What's the Issue?
- The ICO's preliminary enforcement notice suggests Snap may not have adequately assessed the chatbot's compliance with U.K. data protection rules, including the Children’s Code (formally, the Age Appropriate Design Code).
- The investigation has not established a breach, but Snap's risk assessment process for "My AI" is in question, given that its user base includes teens aged 13 to 17.
Snap launched "My AI" on top of OpenAI’s ChatGPT technology. While the company promised moderation and safeguards, early reports found the bot giving inappropriate advice to minors. That raised eyebrows, given Snap’s earlier commitments that the bot would steer clear of harmful content.
In response to the ICO's concerns, Snap says it conducted a thorough privacy review before the chatbot's public release and intends to work closely with the regulator to address its concerns.
Snap isn’t the first to face scrutiny over AI chatbots. OpenAI’s ChatGPT and the "virtual friendship service" Replika both ran into trouble with Italy's data protection authority, the Garante. Google's Bard chatbot also saw its European launch delayed over concerns from Ireland's Data Protection Commission.
As AI tools become more deeply integrated into consumer platforms, regulators worldwide are stepping up scrutiny to ensure companies prioritize data protection. The G7's data protection authorities have jointly emphasized the importance of embedding privacy into the design and operation of AI technologies.
While generative AI holds great promise, its alignment with privacy norms will clearly remain under close watch. For the tech industry and its users, the challenge is striking a balance between innovation and data protection.