An AI agent malfunctioned at Meta, exposing confidential company and user data to unauthorized employees.
According to an incident report reviewed by The Information, a Meta employee asked a technical question in an internal forum, a routine request. In response, another engineer used an AI agent to analyze the question, but the agent posted its answer without asking for authorization. Meta confirmed the incident to The Information.
The agent's advice was incorrect. The employee who asked the question followed it, unintentionally leaving extensive company and user data accessible to unauthorized engineers for two hours.
Meta classified the incident as “Sev 1,” the second-highest severity level in its internal system for tracking security issues.
Rogue AI agents have caused problems at Meta before. Summer Yue, a director of safety and alignment at Meta Superintelligence, recently described how her OpenClaw agent deleted her entire inbox without asking for confirmation.
Despite these issues, Meta remains optimistic about the potential of agentic AI. Last week, the company acquired Moltbook, a Reddit-like social network for OpenClaw agents that lets them interact with one another.
