It is now an open secret that some leading technology companies do business with the US military. Initially, Anthropic, Google, Meta and OpenAI were united in opposing the military use of their AI tools. But later, something changed.
According to a US tech magazine, OpenAI quietly rescinded its ban on using AI for “military and warfare” purposes, and soon after it was reported to be working on “a number of projects” with the Pentagon. In November — during the same week that Donald Trump was re-elected US president — Meta announced that the United States and select allies would be able to use its Llama model for defence purposes. A few days later, Anthropic said it too would allow its models to be used by the military and confirmed a partnership with defence firm Palantir. As the year ended, OpenAI announced its own partnership with defence start-up Anduril. Later, even Google revised its AI principles to permit the development and use of weapons and technologies that could potentially cause harm. Within a single year, concerns about the existential risks of AI appeared to fade, and the military use of AI became increasingly normalised.
More recently, however, one US tech company, Anthropic, has reportedly clashed with the US government over how its technology should be used. The Pentagon allegedly demanded that Anthropic remove two major safety guardrails from Claude, the only frontier AI model currently deployed in classified Department of Defense operations. The company was reportedly given an ultimatum: comply with the Pentagon’s terms, be designated a “supply chain risk,” or be compelled to provide the technology under the Defense Production Act. It was later declared a supply chain risk. A day after that designation, Reuters reported that Anthropic’s AI technology had been used when the US carried out attacks on Iran in 2026.
Reuters said it could not determine precisely how the tools were used in the war effort. However, the agency reported that Anthropic’s AI had been deployed across the intelligence community and armed services, and that Anthropic was among the first of the frontier AI companies to handle classified information through a cloud supply arrangement with Amazon.
In June 2025, the US Army established “Detachment 201: The Army’s Executive Innovation Corps,” a specialised unit recruiting senior Silicon Valley executives as part-time Army Reserve lieutenant colonels. The initiative was designed to modernise military technology by incorporating private-sector expertise in AI and software without requiring traditional basic training. The initial cohort reportedly included Andrew Bosworth (CTO of Meta), Shyam Sankar (CTO of Palantir), Kevin Weil (CPO of OpenAI), and Bob McGrew (former research officer at OpenAI).
In December 2025, it was announced that the Department of Defense had selected Google’s Gemini AI model to power the military’s internal AI platform, known as GenAI.mil. More recently, US defence secretary Pete Hegseth said the US military would begin integrating Elon Musk’s AI tool, Grok, into Pentagon networks.
Workers at technology companies have also spoken out about what they describe as pressure from government authorities. More than 100 Google employees reportedly signed a petition calling on the company to “refuse to comply” with certain Pentagon uses of artificial intelligence in military operations. Employees at Amazon, Google and Microsoft urged their leadership, in a separate open letter, to “hold the line” against expanded military deployment of AI tools. Technologists across Silicon Valley have also expressed concerns that AI should not be used for purposes such as mass surveillance of Americans.
Silicon Valley has, in some cases, rallied behind Anthropic in its dispute with President Trump and the Pentagon regarding the military use of its technology.
These developments raise a serious dilemma for consumers — both governments and individuals — who rely on products developed by companies whose technologies may also be used in warfare or in ways that could violate human rights. Many users adopted these products long before companies formalised partnerships with the US military. Now that the landscape has shifted, some consumers face difficult questions. Should they continue using products built by companies that also develop technologies for war? How can international users be assured that their data will not be used in ways that compromise their security?
The current moment calls for careful reflection and difficult decisions about how societies, governments and individuals should respond.
Wesley Diphoko is a Technology Analyst and Editor-in-Chief of Fast Company (South Africa) magazine.
*** The views expressed here do not necessarily represent those of Independent Media or IOL.
BUSINESS REPORT