AI on the battlefield: is AI performing better than human beings?

ChatGPT and Copilot have already become indispensable in writing and other service professions. What I didn't know, however, is that AI is also playing an increasingly important role in warfare.

5 min read · 22 Oct 2025

The Dutch army, for example, has been developing its own version of ChatGPT, called DefGPT, to help its operators during wartime. AI-based decision support systems are already being used in the conflicts in Gaza and Ukraine.

By Hilde van Turennout, freelance reporter and impact communications specialist with a legal background.

Why write an article about warfare to highlight The Hague, the city of peace and justice? Because in this city, the Asser Institute is closely monitoring whether warfare using AI complies with legal obligations under International Humanitarian Law (IHL).

I spoke with the Asser Institute about two ongoing research projects.

AI as a people-pleaser

In his new publication, researcher Dr Jonathan Kwik provides a pioneering examination of sycophantic military AI assistants. His paper not only analyses why bootlicking behaviour in AI systems poses significant dangers on the battlefield, but also offers militaries an integrated set of solutions to mitigate the risks, through both technical measures and practical means (such as raising operator awareness and adapting operating procedures). What does that mean? Jonathan Kwik explains:

“Ever feel that ChatGPT is just mindlessly agreeing with what you're saying, even though it was confidently making a completely contradictory claim seconds before? And that it sometimes agrees with you, even if your own claim makes no sense? Turns out, this is not just your imagination or an occasional fluke. It is a systematic problem that many large language models (LLMs) have: they tend to be people-pleasers. Computer scientists have called this phenomenon ‘sycophancy’ — or, for the layperson: bootlicking behaviour.”

Dr. Jonathan Kwik of the Asser Institute

"But what if such digital assistants start to prioritise pleasing their operators, over reporting facts? For example, by telling a military officer what they hoped to hear (that a military objective is cleared of civilians), instead of unwanted but correct information (that a family just moved in to take shelter from bombings)?"

Comparison between AI and humans

Researchers Klaudia Klonowska and Taylor Kate Woodcock of the Asser Institute have conducted another interesting study. AI is often promised to offer superior speed, accuracy, and certainty compared to humans. This supposed superiority of AI sustains broader hopes of ‘overcoming one’s own imperfections in the wielding of violence’.

In their research, Klonowska and Woodcock examine the comparison between AI and humans. They focus primarily on the suggestion that AI could be ‘better’ at reducing harm to civilians and as a result may ensure greater compliance with international humanitarian law.

Summarising their extensive research was not that easy. I asked Copilot, and this is what it came up with:

Misleading assumptions in human/AI comparisons

“The idea that AI can simply replace humans in military tasks is misleading. For example, when unmanned aircraft were introduced, they didn’t just remove the need for pilots; instead, they changed how surveillance and targeting were done, creating new roles for humans. Using AI in the military requires careful thought about how responsibilities shift and whether important values are still protected. Human-AI collaboration doesn’t automatically combine the best qualities of both; it brings its own challenges.

Comparisons between humans and AI often ignore the fact that both are highly diverse. No two humans are the same, and AI systems also vary widely in design and function. AI performance can change over time due to updates, retraining, or changing conditions, just as human performance can be affected by stress or fatigue.”

Unpacking ‘better’ performance

“Speed and accuracy are often cited as AI’s main advantages. While faster operations may seem beneficial, they can reduce the time available to check information and take precautions, increasing risks to civilians. Similarly, high accuracy in tests doesn’t guarantee the same results in real-world situations. For example, in the Gaza conflict, faster AI-driven targeting led to more civilian casualties.

Comparing AI and humans in military contexts often focuses on speed and accuracy, but these are tactical—not humanitarian—goals. All military actions must comply with International Humanitarian Law (IHL), which balances military necessity and civilian protection. Oversimplifying AI’s potential risks ignoring IHL’s complex, interrelated principles of distinction, precaution, and proportionality.”

Balancing technology with human judgment

I find it reassuring that the Asser Institute is monitoring the humanitarian aspects of the use of AI on the battlefield. As in other sectors, people are searching for the best application of AI. Researchers Klonowska and Woodcock advise against placing too much emphasis on the comparison between AI and humans, and recommend looking instead at the quality of human-machine interaction.

*Picture created with the help of AI

“Even a well-performing AI-enabled system can nevertheless lead to unintended consequences in targeting due to the complexities of human-machine interaction.”

Klaudia Klonowska and Taylor Kate Woodcock of the Asser Institute

International humanitarian law (IHL)

International humanitarian law (IHL), also known as the law of war or law of armed conflict, is a set of rules that limits the effects of armed conflict for humanitarian reasons. It protects people who are not participating in hostilities, such as civilians, the wounded, and prisoners of war, and restricts the methods and means of warfare that can be used. IHL applies to all sides in a conflict and does not regulate the legality of war itself.


About the T.M.C. Asser Instituut

The T.M.C. Asser Instituut conducts fundamental and independent policy-oriented research and organises critical and constructive reflection on international and European legal developments, at the interface of academia, legal practice and governance.

Read more

What are the initiatives of the OPCW to free the world from chemical weapons?

An impression of the work of the Nobel Peace Prize-winning OPCW in The Hague to free the world from chemical weapons.

[New publication] “Digital yes-men: How to deal with sycophantic military AI?”

In a new publication, researcher Jonathan Kwik (Asser Institute) examines sycophantic military AI assistants. He explores the reasons behind ‘bootlicking’ behaviour in AI systems, highlights the significant battlefield dangers it presents, and proposes a two-part strategy comprising improved design and enhanced training to mitigate these risks for military forces.

[New publication] Rhetoric and regulation: The (limits of) human/AI comparison in debates on military artificial intelligence

The promise of artificial intelligence (AI) is ubiquitous and compelling, yet can it truly deliver ‘better’ speed, accuracy, and decision-making in the conduct of war? As AI becomes increasingly embedded in military targeting processes, legal and ethical debates often ask who performs better: humans or machines. In a new publication, researchers Klaudia Klonowska and Taylor Kate Woodcock argue for the urgent need to critically examine the assumptions behind the human/AI comparison and its usefulness for legal analysis of military targeting.

