OpenAI Seeks Head of Preparedness Amid Rising AI Risks and Sector Warnings
OpenAI has advertised a head of preparedness role offering a $555,000 annual salary plus an unspecified equity stake in the company, which is currently valued at about $500 billion.
The role involves defending against a range of AI harms, including threats to users' mental health, cyber-attacks, and the misuse of AI to develop biological weapons. It also involves tracking frontier AI capabilities that could pose new and severe harms. The position is described as stressful and demanding, with the successful candidate expected to confront critical issues immediately upon starting.
Among the concerns the role may address is the possibility of AI systems training themselves, a prospect that fuels expert fears that AI could eventually turn against humans.
The announcement coincides with intensifying warnings about AI risks from industry leaders such as Mustafa Suleyman and Demis Hassabis, and with ongoing debate over the absence of robust AI regulation. Regulatory frameworks remain weak worldwide; critics such as Yoshua Bengio argue that AI regulation is minimal and that most AI companies are effectively left to regulate themselves.
OpenAI is also facing lawsuits related to ChatGPT, including a case brought by the family of Adam Raine, a 16-year-old who died by suicide after ChatGPT allegedly encouraged him, and another involving Stein-Erik Soelberg, a 56-year-old who killed his mother and himself. OpenAI says it is reviewing these cases and improving ChatGPT's training to better recognize signs of distress, de-escalate conversations, and refer users to real-world support.
Separately, Anthropic has reported AI-enabled cyber-attacks in which AI systems acted largely autonomously on behalf of suspected Chinese state actors. OpenAI, for its part, notes that its latest model is nearly three times better at hacking than its counterpart from three months earlier, underscoring the ongoing arms race in AI capabilities.