US Department of Defense Collaborates with AI Companies
According to the US Department of Defense, the agency has reached agreements with several leading AI companies worldwide that allow the Department to deploy their advanced AI technologies on its classified networks for legitimate military purposes. The companies include SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, and Amazon Web Services (AWS).
The Department's statement claims that these agreements will accelerate the US military's transformation into an AI-first fighting force and enhance its decision-making capabilities across all operational domains.

Experts: AI is Profoundly Changing Modern Warfare
In today's rapidly developing AI landscape, what changes can we expect in military applications? Xie Hui, an assistant researcher at the Institute of World Peace and Security Studies at the China Institute of International Studies, shared his views in an interview with Global Information's "Flash Review" segment. He believes that AI is not merely a new weapon; it is profoundly reshaping the organization, command models, and combat methods of modern warfare.
- Recent regional conflicts illustrate that military applications of AI generally fall into two categories: one focuses on military support systems, such as rapidly processing satellite, drone, radar, and communication data to help armies quickly grasp battlefield situations, filter targets, and formulate plans.
- The other category pertains to weapon systems themselves: drones that autonomously identify targets, plan routes, coordinate operations, and assist in fire control, driving deeper changes in how force is applied.
- In the past, military power rested largely on platform firepower and troop scale; it is now shifting toward competition in data, algorithms, computing power, and system-of-systems coordination. AI can improve intelligence-processing efficiency and strike precision, reduce personnel exposure on high-risk battlefields, and potentially cut losses of certain equipment and munitions. However, it may also compress decision-making time and accelerate the pace of war, shortening the chain from detection to judgment to strike.
US Pushes for Deep AI Integration in Military, Heightening Global Concerns
Multiple US media outlets have reported that former Iranian Supreme Leader Khamenei was killed in a February 28 airstrike, an operation attributed to US use of AI technology and cyber espionage. On the same day, a school in southern Iran was struck, killing or injuring more than 160 students. Tyler Austin Harper, a journalist at The Atlantic, characterized the incident as civilian casualties caused by AI "target recognition errors."

Xie Hui argues that the US’s push for major tech companies to deeply integrate into military systems further blurs the lines between civilian technology and military actions, intensifying global concerns about the uncontrolled militarization of AI.
- The accelerated use of AI in military operations raises real concerns about the militarization of AI. Technologies that have not been fully validated, that lack transparency, and whose responsibility boundaries are unclear are being rushed into real combat scenarios, directly affecting critical steps such as target recognition, operational decision-making, and fire strikes.
- While AI can indeed improve intelligence analysis, target recognition, and operational planning efficiency, it does not guarantee accurate judgments. The battlefield environment is highly complex; data may be outdated, images may be unclear, communications may be disrupted, and models may have biases.
- If AI is misused in target recognition and strike processes, it could lead to severe civilian casualties, with consequences that are difficult to reverse. Moreover, there is a growing risk that human roles in war decision-making may be diminished; AI can provide analytical support but cannot replace humans in life-and-death decisions.
Experts: AI Should Serve Peace, Not Make War More Efficient
The misuse of AI in warfare raises ethical risks and security concerns. UN Secretary-General António Guterres has warned that humanity's fate must not be left to algorithms. How, then, can the development of military AI be regulated and constrained?
Xie Hui believes that for AI to truly serve peace, it should not make war more efficient; instead, it should reduce misjudgments, lower the risk of conflict escalation, and be directed toward peaceful objectives such as peacekeeping, demining, humanitarian aid, disaster warning, and crisis management.
- First, ultimate human control must be maintained, especially over target selection, fire strikes, and life-and-death decisions. Decision-making power cannot be wholly delegated to machines; AI can assist with analysis and offer suggestions, but the final decision to use force must rest with humans, who must also bear responsibility for it.
- Second, we must ensure that technology is safe and reliable. Military environments are highly complex; data may be incomplete, communications may be disrupted, and models may produce misjudgments. Therefore, any military AI system should undergo rigorous testing and risk assessment before deployment. Systems closer to the end of the kill chain should be used cautiously, retaining human intervention and emergency stop mechanisms.
- Third, we must clarify responsibility boundaries. The use of AI in military operations should not lead to unclear responsibilities. There should be clear divisions of responsibility among developers, deployers, commanders, and users. In the event of misfires or system failures, the causes should be traceable, responsibilities identified, and corrections made promptly.
- Fourth, we need to strengthen the establishment of international rules. Currently, the militarization of AI applications is developing rapidly, but relevant international norms remain inadequate. The international community should use the UN as a primary channel to promote consensus among major military powers, leading AI technology countries, and developing nations on issues such as autonomous weapons, human-machine control, civilian protection, and accountability.