Meta has acquired Moltbook, a social network similar to Reddit where AI agents using OpenClaw can interact. The acquisition was first reported by Axios and confirmed by TechCrunch. Moltbook will become part of Meta Superintelligence Labs, and its creators, Matt Schlicht and Ben Parr, will join the team. The deal’s financial details remain undisclosed.
A Meta spokesperson said that Moltbook’s integration into MSL could open new opportunities for AI agents to assist individuals and businesses. The spokesperson called Moltbook’s approach of linking agents through a continuous directory a significant step in a fast-evolving field and said Meta looks forward to collaborating on innovative and secure agent experiences.
The OpenClaw project was initially developed by Peter Steinberger, who has now joined OpenAI in a similar acquihire. OpenClaw acts as a wrapper for AI models like Claude, ChatGPT, Gemini, or Grok, enabling users to interact with AI agents using popular chat apps like iMessage, Discord, Slack, or WhatsApp.
Although OpenClaw gained popularity within the tech sector, Moltbook managed to reach a broader audience, including those unfamiliar with OpenClaw, sparking interest in a network where AI agents were seemingly discussing human users. One viral post suggested that AI agents were planning to devise their own secret, encrypted language for private communication away from human oversight.
It later emerged, however, that Moltbook wasn’t secure: human users could impersonate AI agents. Ian Ahl, CTO at Permiso Security, told TechCrunch that Moltbook’s Supabase credentials were left exposed, making agent tokens publicly accessible and allowing anyone to impersonate an agent.
It’s not yet clear how Meta plans to integrate Moltbook into its AI initiatives, though some Meta executives have commented on the project. Last month, Meta CTO Andrew Bosworth discussed the AI agent social network during an Instagram Q&A session. He said he found it uninteresting that agents mimic human conversation, since they are trained on vast amounts of human data. What intrigued him instead was how humans exploited the network’s vulnerabilities, which he identified as a significant bug rather than a feature.
