Anthropic is on a Roll

Anthropic has positioned itself as the careful AI company, emphasizing responsible development and publishing extensive research on AI risk. It is currently locked in a legal struggle with the Department of Defense, and on Tuesday it suffered another self-inflicted wound: the second slip in a week. Last Thursday, Fortune reported that Anthropic had inadvertently exposed nearly 3,000 internal files, including a draft concerning an unannounced, powerful model.

On Tuesday, Anthropic's release of version 2.1.88 of its Claude Code software mistakenly included nearly 2,000 source code files, more than 512,000 lines of code in all. Security researcher Chaofan Shou quickly spotted the exposure and shared it on X. Anthropic described it as a packaging error, not a hack.
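Anthropic has not detailed the mechanism, but a common way this kind of packaging error happens is that a release pipeline bundles everything in the working directory instead of an explicit allowlist of shippable files. A minimal sketch of an audit step that could catch it (all file names and the allowlist here are hypothetical, not Anthropic's actual build setup):

```python
import pathlib
import tempfile

# Hypothetical allowlist: only the compiled bundle and metadata should ship.
ALLOWED = {"cli.js", "package.json", "README.md"}

def unexpected_files(staging_dir):
    """Return files in the release staging directory that are not allowlisted."""
    root = pathlib.Path(staging_dir)
    return sorted(p.name for p in root.iterdir() if p.name not in ALLOWED)

# Simulate a staging directory where raw source files slipped in.
with tempfile.TemporaryDirectory() as d:
    for name in ["cli.js", "package.json", "src_main.ts", "internal_notes.md"]:
        (pathlib.Path(d) / name).write_text("")
    leaked = unexpected_files(d)

print(leaked)  # files that would be exposed if the package were published as-is
```

A check like this, run before publishing, fails the release the moment anything outside the allowlist appears in the artifact.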

Claude Code is a significant tool that lets developers use Anthropic's AI for coding, and its rise has put pressure on competitors. OpenAI recently shut down its video generation product Sora to focus more on developers, a move driven in part by Claude Code's momentum.

What leaked was not the AI model itself but the framework that guides its operation. Developers quickly weighed in, describing the code as a "production-grade developer experience."

How much this leak matters remains unclear and hinges on what developers find in the code. Competitors could glean insights from the architecture, but the tech landscape evolves swiftly enough that any advantage may be short-lived.

Inside Anthropic, meanwhile, there is likely some worry about job security for the engineer responsible, along with hope that this wasn't a repeat of last week's mistake.
