AI Warning

How likely is AI to wipe out humanity? 70%, according to ex-OpenAI employee

Prompt: An AI hovers over a dark city and reaches for it. © DALL-E

How likely is it that AI will destroy or cause catastrophic harm to humanity? According to Daniel Kokotajlo, a former OpenAI employee, the chance is a frightening 70 percent. In an open letter, he and other current and former employees of OpenAI, Google DeepMind, and Anthropic warned of the enormous risks posed by artificial intelligence and demanded, under the banner "Right to Warn", that they and others be allowed to speak publicly about those risks.

The letter’s signatories, including AI luminary Geoffrey Hinton, who left Google last year over similar concerns, say they are being prevented from warning the public about the dangers. OpenAI CEO Sam Altman appears to share the concerns but, according to Kokotajlo, is doing too little to rein in the technology. Instead, the company is recklessly pushing ahead with the development of AGI (Artificial General Intelligence).

Warnings go unheard

Kokotajlo, who joined OpenAI in 2022 and was tasked with forecasting the technology’s progress, is convinced that the industry will reach AGI by 2027, with potentially catastrophic consequences for humanity. He personally urged Altman to invest more time in safety measures instead of making AI ever more intelligent. But his warnings apparently went unheeded.

In April, Kokotajlo quit his job at OpenAI in frustration. In an email to his team, he wrote: “The world is not ready, and we are not ready. And I am worried that we will still rush forward and rationalize our actions.” Between the high-profile departures and the frightening forecasts, gloomy news from OpenAI is piling up.

Most recently, OpenAI’s chief scientist, Ilya Sutskever, resigned after internal disagreements. A new “Safety and Security Committee” was then set up, chaired by the controversial OpenAI CEO Sam Altman himself.


OpenAI defends safety approach

In a statement, OpenAI defended its approach: “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk.” The company agrees that, given the significance of this technology, rigorous debate is crucial, and says it will continue to engage with governments, civil society, and other communities.

According to OpenAI, there are also “ways for employees to raise their concerns, including an anonymous integrity hotline and a safety committee led by board members and company security officers.” Whether these measures will be enough to avert the “AI apocalypse” feared by Kokotajlo and others remains to be seen.
