Anthropic drops flagship safety pledge, says it won't halt AI development if competitors 'blaze ahead'
Anthropic scrapped the central commitment of its Responsible Scaling Policy: the 2023 promise to never train an AI system without guaranteeing adequate safety measures first. Chief Science Officer Jared Kaplan told TIME that "it wouldn't actually help anyone for us to stop training AI models" given competitors' pace.
The revised RSP commits Anthropic to matching or surpassing rivals' safety efforts, and to delaying development only if the company leads the race and judges catastrophic risks to be significant. The change leaves Anthropic far less constrained by its own policies at the very moment it has caught up with or surpassed OpenAI in capabilities.
View full digest for February 25, 2026