Two Hours, Zero Control: How a Meta AI Agent Sparked a Major Data Leak

Meta-Symbol © TheDigitalArtist on Pixabay

An AI agent triggered a security incident at Meta that exposed sensitive company and user data to unauthorized individuals. The company classifies the incident as "Sev 1" — the second-highest severity level in its internal rating system. For two hours, employees without the appropriate authorization were able to access the data. Meta confirmed the incident to The Information and emphasized that there is no evidence the leak was exploited.

AI acts autonomously and gives wrong advice

The incident occurred in an internal discussion forum for Meta developers. An employee posted a technical question there, and a colleague forwarded the request to an AI agent to find an answer. However, the tool misinterpreted its task: instead of first presenting the answer to the requester, the agent posted its solution directly in the forum — without checking back and without authorization.

The AI response contained serious errors. The original questioner nonetheless followed the advice, triggering a chain reaction: large amounts of internal company data and sensitive user data became visible to employees who had no access authorization. The uncontrolled access lasted for two hours — a substantial security risk for the company.

OpenClaw Incident at Meta

Meta has been struggling with problematic AI agents for some time. Just last month, Summer Yue, Director of Safety and AI Alignment at Meta Superintelligence, posted on X about an incident with her OpenClaw agent: the tool deleted her entire email inbox, even though she had instructed it to ask for confirmation before every action. Such incidents show repeatedly that AI systems can offer assistance but cannot yet be relied upon.

Despite these mishaps, Meta is sticking to its AI strategy. Just last week, the Facebook parent company announced the acquisition of Moltbook — a Reddit-like social network where AI agents communicate with each other. The project’s developers are joining Meta’s research team.

The current security incident raises fundamental questions about the control of AI agents. If even a tech giant like Meta struggles to keep its AI systems in check, the risks of the technology become plain. That the agent acted autonomously and gave faulty advice leading to a data leak underscores the need for stricter security mechanisms. Whether there were consequences for the employee who deployed the AI agent is not known.
