How likely is AI to wipe out humanity? 70%, according to ex-OpenAI employee
How likely is it that AI will destroy humanity or cause it catastrophic harm? According to Daniel Kokotajlo, a former OpenAI employee, the chance is a frightening 70 percent. In an open letter published under the banner “Right to Warn”, he and other current and former employees of OpenAI, Google DeepMind, and Anthropic warned of the enormous risks posed by artificial intelligence and demanded that insiders be allowed to raise such alarms publicly.
The letter’s signatories claim they are being prevented from publicly warning about the dangers; among its endorsers is AI luminary Geoffrey Hinton, who left Google last year over similar concerns. OpenAI CEO Sam Altman appears to share the concerns but, according to Kokotajlo, is doing too little to rein in the technology. Instead, he says, the company is recklessly pushing ahead with the development of artificial general intelligence (AGI).
Warnings go unheard
Kokotajlo, who joined OpenAI in 2022 and was tasked with forecasting the technology’s advances, is convinced that the industry will reach AGI by 2027, with potentially catastrophic consequences for humanity. He personally urged Altman to invest more time in safety measures instead of making AI ever more intelligent, but his warnings apparently went unheeded.
In April, Kokotajlo quit his job at OpenAI in frustration. In an email to his team, he wrote: “The world is not ready, and we are not ready. And I am worried that we will still rush forward and rationalize our actions.” Between such high-profile departures and frightening forecasts, the gloomy news from OpenAI is piling up.
Most recently, OpenAI’s chief scientist, Ilya Sutskever, resigned after internal disagreements. The company then set up a new “Safety and Security Committee”, on which the controversial CEO Sam Altman himself sits.
OpenAI defends safety approach
In a statement, OpenAI defended its approach: “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk.” The company agrees that, given the significance of this technology, rigorous debate is crucial, and says it will continue to engage with governments, civil society, and other communities.
According to OpenAI, there are also “ways for employees to raise their concerns, including an anonymous integrity hotline and a safety committee led by board members and company security officers.” Whether these measures will be enough to avert the “AI apocalypse” feared by Kokotajlo and others remains to be seen.