OpenAI Hesitates on Launching DALL-E 3 AI-Generated Image Detection Tool

OpenAI's DALL-E 3 image detector is in the spotlight, with the company debating its release due to concerns about accuracy and the definition of AI-generated content.

OpenAI, a leading AI research organization, remains undecided about releasing a tool that can detect images created by DALL-E 3, its generative AI art model. Sandhini Agarwal, an OpenAI researcher, revealed in an interview with TechCrunch that despite the tool's impressive accuracy, it hasn't yet met OpenAI's internal quality bar for release.

The tool's accuracy figures are striking. Mira Murati, OpenAI's CTO, disclosed that the classifier is 99% accurate at determining whether a photo was originally generated by DALL-E 3. Nevertheless, OpenAI has not disclosed the exact accuracy threshold it is aiming for.

Another challenge is defining what genuinely qualifies as an AI-produced image. An image created from scratch by DALL-E 3 clearly counts. But the classification becomes ambiguous for a DALL-E 3-derived image that has undergone numerous edits, been merged with other pictures, or been run through post-processing filters. Agarwal said OpenAI wants to engage artists and others who might be significantly affected by such tools to help resolve this dilemma.

Given the surge in AI deepfakes, several entities, not just OpenAI, are delving into watermarking and detection methodologies for generative media. Notable strategies include DeepMind's SynthID and watermarking tools from startups like Imatag and Steg.AI. However, the industry is still in search of a unified watermarking or detection standard.
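To give a sense of what "watermarking generative media" means at its simplest, here is a minimal sketch of one classic textbook technique: hiding a bit string in the least-significant bits of pixel values. This is a generic illustration only, not how SynthID, Imatag, Steg.AI, or OpenAI's classifier actually work; production systems use far more robust, tamper-resistant methods.

```python
def embed_watermark(pixels, bits):
    """Set the least-significant bit of each pixel to the matching watermark bit."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to `bit`
    return out

def extract_watermark(pixels, length):
    """Read the watermark back out of the LSBs."""
    return [p & 1 for p in pixels[:length]]

# Usage: hide the 4-bit tag 1011 in the first four grayscale pixels.
image = [200, 131, 57, 98, 240]
tag = [1, 0, 1, 1]
marked = embed_watermark(image, tag)
print(extract_watermark(marked, 4))  # → [1, 0, 1, 1]
```

Because each pixel changes by at most 1, the mark is visually imperceptible; the flip side, and a reason real systems go further, is that it is easily destroyed by the kind of resizing, compression, and filtering the article describes.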

When asked whether OpenAI's image classifier could detect images created with other generative tools, Agarwal was non-committal, though she suggested OpenAI would consider that path depending on how the current tool is received.