OpenAI has been awarded a $200 million contract by the U.S. Department of Defense (DoD) to supply advanced AI tools, the Pentagon confirmed in a statement on Monday, June 16, 2025. The announcement marks a significant expansion of OpenAI’s role in the public sector and has ignited a heated debate over the ethical implications of AI militarisation.
As global tensions rise and AI governance frameworks evolve, the deal underscores the U.S.’s strategic push to maintain technological dominance while raising concerns among critics about privacy, security, and international relations.
According to the details of the contract, OpenAI will provide the DoD with cutting-edge AI solutions, though specific applications remain undisclosed pending security reviews. The deal follows the White House Office of Management and Budget’s April 2025 guidance, which encouraged federal agencies to leverage a competitive American AI marketplace while exempting national security and defence systems from certain restrictions.

OpenAI, which boasted 500 million weekly active users as of March 2025, brings its expertise in natural language processing and machine learning to the table, potentially enhancing military operations ranging from logistics to intelligence analysis. The deal positions OpenAI alongside established defence contractors like Lockheed Martin and Raytheon, signalling a shift toward integrating commercial AI into national security.
OpenAI: Experts warn against AI militarisation
Meanwhile, analysts warned that booming military spending on AI, including this contract, could bolster tech giants’ influence while posing risks to democratic oversight, a concern echoed by the United Nations University’s January 27, 2025, report on the militarisation of AI.
Ethical concerns loom large. The DoD adopted ethical principles for AI use in 2020, emphasising accountability and transparency, but critics argue these guidelines may be insufficient for a company like OpenAI, which has faced scrutiny over data privacy.
The UN University report highlighted the dual-use nature of AI, which serves both civilian and military purposes and thus complicates global regulation. “The militarisation of AI has profound implications for global security,” it stated, urging adaptive regulatory frameworks.
The contract’s financial scope ($200 million) reflects the U.S.’s escalating investment in AI defence capabilities. Notably, the Pentagon’s budget has seen bipartisan support, with private contractors receiving the majority of allocated funds. OpenAI’s entry into this space could spur competition, with rivals like Palantir and Scale AI likely to accelerate their defence offerings.
Market analysts predict a surge in AI adoption across agencies, potentially creating a new moat for OpenAI in the public sector. However, this could also trigger a regulatory backlash, with lawmakers debating AI governance as the technology’s role in warfare grows.
OpenAI’s leadership has not publicly detailed the contract’s specifics, citing security protocols. The company’s pivot from a research-focused entity to a defence contractor aligns with its recent expansion, including partnerships with Microsoft, which holds a significant stake. This move could bolster OpenAI’s valuation, already estimated at over $80 billion, but it risks alienating some users and investors wary of military ties. The DoD’s statement emphasised that the contract adheres to ethical standards, yet the lack of transparency has fuelled scepticism.
Globally, the deal may prompt copycat investments. The UN University advocated for international cooperation to regulate AI weaponisation, a challenge given divergent national interests. In Africa, where AI adoption is rising, Nigeria, the continent’s powerhouse with a robust tech ecosystem, may eye similar contracts, though capacity constraints limit immediate replication. The U.S. move could also escalate tensions with rivals like China, which is advancing its own military AI programs.
As the contract’s details and implications unfold, OpenAI’s foray into defence could redefine AI’s role in global security, but it also amplifies the need for robust oversight. With public discourse heating up, the balance between innovation and ethics remains a critical question for the industry and policymakers alike.