TL;DR:
- OpenAI will give the US AI Safety Institute early access to its upcoming safety test, with the aim of collaborating on improved AI evaluation methods.
- OpenAI is committing at least 20% of its computing resources to safety initiatives, a pledge originally assigned to its now-dissolved Superalignment team; a release date for the safety test has not yet been announced.
- In response to recent criticism, OpenAI has moved to improve transparency, including removing non-disparagement clauses for current and former employees and eliminating provisions that allowed it to cancel vested equity.
OpenAI will grant the US AI Safety Institute early access to its upcoming safety test, according to a post on X by Sam Altman, the company's co-founder and CEO.
Altman explained that the aim of this early access is to collaborate on advancing the science of AI evaluations.
In the same post, Altman also emphasised OpenAI's commitment to dedicating at least 20% of its computing resources to safety initiatives. That commitment was originally to be fulfilled by the now-dissolved Superalignment team, co-led by Jan Leike, then Head of Alignment, and Ilya Sutskever, OpenAI's co-founder and Chief Scientist. A specific release date for the safety test, however, has yet to be announced.
Addressing recent criticism, Altman said that OpenAI has taken meaningful steps to improve transparency as part of its safety measures.
He noted, “In May, we eliminated non-disparagement clauses for both current and former employees and removed provisions that allowed OpenAI—though they were never used—to cancel vested equity. We’ve made significant efforts to rectify these issues and improve our practices.”