OpenAI Establishes Team to Research 'Catastrophic' AI Risks, Including Nuclear Threats

OpenAI establishes a team, led by Aleksander Madry, to identify and mitigate grave risks tied to advanced AI systems, encompassing concerns from phishing to potential nuclear threats.

OpenAI Targets Catastrophic AI Risks with New Team

OpenAI, a leader in artificial intelligence research, has raised eyebrows with its latest initiative: a dedicated team, named Preparedness, tasked with proactively analyzing and protecting against what it calls the "catastrophic risks" of AI.

Helmed by Aleksander Madry, previously of MIT's Center for Deployable Machine Learning, the Preparedness team will concentrate on evolving AI threats, including models' potential to deceive humans through attacks such as phishing, as well as more ominous risks like the generation of malicious code.

Interestingly, some of the threats OpenAI is gearing up to tackle seem to border on science fiction. The organization has highlighted AI's potential role in "chemical, biological, radiological, and nuclear" risks, lending an almost dystopian shade to the endeavor.

Sam Altman, OpenAI's CEO, has been vocal about his apprehensions surrounding the unchecked advancements of AI. While his fears about AI's role in an apocalyptic future aren’t new, the explicit emphasis on dire, dystopian risks by the organization is somewhat unexpected.

In tandem with the Preparedness team's introduction, OpenAI has rolled out a competition, seeking ideas on AI risks from the broader community. The contest offers lucrative incentives, including a hefty cash prize and potential employment opportunities with Preparedness.

One of the contest's more thought-provoking questions asks participants to imagine how a malicious actor could catastrophically misuse some of OpenAI's most advanced models.

Committed to the safety of AI advancements, the Preparedness team will also develop a "risk-informed development policy" that lays out OpenAI's approach to AI model evaluation, risk countermeasures, and governance across the model development lifecycle. This complements OpenAI's broader AI safety work, which emphasizes measures both before and after model deployment.

This significant move underscores OpenAI's belief in AI's transformative potential, both for unprecedented benefits and unforeseen risks. The company stresses the need for a robust understanding of, and infrastructure for, the safe operation of highly capable AI systems.

The announcement's timing, just ahead of a key UK government summit on AI safety, echoes OpenAI's previous statements on the need to guide and control emerging "superintelligent" AI, which the company has predicted could surpass human intelligence within the coming decade.