Pentagon’s chief tech officer says he clashed with AI company Anthropic over autonomous warfare

A top Pentagon official says a fight with Anthropic centred on how the military could someday use artificial intelligence in autonomous weapons

A top Pentagon official said Anthropic’s dispute with the government over the use of its artificial intelligence technology in fully autonomous weapons came after a debate over how AI could be used in U.S. President Donald Trump’s future Golden Dome missile defence program, which aims to put U.S. weapons in space.

U.S. Defense Undersecretary Emil Michael, the Pentagon’s chief technology officer, said he came to view the AI company’s ethical restrictions on the use of its chatbot Claude as an irrational obstacle as the U.S. military pursues giving greater autonomy to swarms of armed drones, underwater vehicles and other machines to compete with rivals like China that could do the same.

“I need a reliable, steady partner that gives me something, that’ll work with me on autonomous, because someday it’ll be real and we’re starting to see earlier versions of that,” Michael said in a podcast aired Friday. “I need someone who’s not going to wig out in the middle.”

The comments came after the Pentagon formally designated San Francisco-based Anthropic a supply chain risk, cutting off its defence work using a rule designed to prevent foreign adversaries from harming national security systems. Anthropic has vowed to sue over the designation, which affects its business partnerships with other military contractors.

Trump has also ordered federal agencies to immediately stop using Claude, though the Republican president gave the Pentagon six months to phase out a product that’s deeply embedded in classified military systems, including those used in the Iran war.

Anthropic said it only sought to restrict its technology from two high-level uses: mass surveillance of Americans and fully autonomous weapons.

Michael, a former Uber executive, revealed his side of months-long talks with Anthropic CEO Dario Amodei in a lengthy conversation with Silicon Valley venture capitalists Jason Calacanis, David Friedberg and Chamath Palihapitiya, co-hosts of the “All-In” podcast.
A fourth co-host, former PayPal executive David Sacks, is now Trump’s AI czar and was not present for the episode but has been a vocal critic of Anthropic, including for its hiring of former Biden administration officials shortly after Trump returned to the White House last year.
As talks hit an impasse last week, Michael lashed out at Amodei on social media, saying he “has a God-complex” and “wants nothing more than to try to personally control” the military. In the podcast, however, he positioned the dispute as part of a broader military shift toward using AI.

Michael said the military is developing procedures for enabling different levels of autonomy in warfare depending on the risk posed. “This is part of the debate I had with Anthropic, which is we need AI for things like Golden Dome,” Michael said, sharing a hypothetical scenario of the U.S. having only 90 seconds to respond to a Chinese hypersonic missile. A human anti-missile operator “may not be able to discriminate with their own eyes what they’re going after,” but an autonomous counterattack would be a low risk “because it’s in space and you’re just trying to hit something that’s trying to get you.”

Source: The Hindu