Anthropic Drops Its Core Safety Pledge Amid Deadline Threat From US Government

Anthropic drops its AI safety pledge to keep pace in the competitive AI market amid Pentagon demands for unrestricted military use of its technology.

Anthropic on Wednesday (Feb 25) announced it was dropping its safety pledge to stay competitive in the ongoing artificial intelligence (AI) race. The company's revised Responsible Scaling Policy, issued under CEO Dario Amodei, focuses more on keeping pace as the AI marketplace heats up.

Anthropic had stood out from its competitors for committing never to train an AI system unless it could guarantee that its safety measures were adequate. Anthropic's chief science officer, Jared Kaplan, told Time magazine that the existing policy was not helping the company keep up with competitors who weren't adhering to such “unilateral” commitments.

“We felt that it wouldn't actually help anyone for us to stop training AI models. We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments, if competitors are blazing ahead,” said Kaplan.

Anthropic said another reason for abandoning the safety pledge was that the higher theoretical levels of risk associated with AI models cannot be contained by any one company alone. Under the new policy, the company said it will publish detailed “Frontier Safety Roadmaps” outlining its planned safety milestones, along with regular “Risk Reports” assessing model capabilities and potential threats.

Source: NDTV