The Dutch army, for example, has been developing DefGPT, its own version of ChatGPT, to help its operators during wartime. AI-based decision support systems are being used in the conflicts in Gaza and Ukraine.
Why write an article about warfare to highlight The Hague, the city of peace and justice? Because in this city, the Asser Institute is closely monitoring whether warfare using AI complies with legal obligations under International Humanitarian Law (IHL).
I spoke with the Asser Institute about two ongoing research projects.
AI as a people-pleaser
In his new publication, researcher Dr Jonathan Kwik provides a pioneering examination of sycophantic military AI assistants. His paper not only offers a comprehensive analysis of why ‘bootlicking’ behaviour in AI systems poses significant dangers on the battlefield, but also gives militaries an integrated list of solutions to mitigate the risks, through both technical measures and practical means (such as operator awareness training and adapted operating procedures). What does that mean? Jonathan Kwik explains:
"But what if such digital assistants start to prioritise pleasing their operators, over reporting facts? For example, by telling a military officer what they hoped to hear (that a military objective is cleared of civilians), instead of unwanted but correct information (that a family just moved in to take shelter from bombings)?"
Comparison between AI and humans
Researchers Klaudia Klonowska and Taylor Kate Woodcock of the Asser Institute have conducted another interesting study. Compared to humans, AI is often claimed to offer superior speed, accuracy, and certainty. This supposed superiority of AI sustains broader hopes for ‘overcoming one’s own imperfections in the wielding of violence’.
In their research, Klonowska and Woodcock examine the comparison between AI and humans. They focus primarily on the suggestion that AI could be ‘better’ at reducing harm to civilians and, as a result, may ensure greater compliance with international humanitarian law.
Summarising their extensive research was not that easy. I asked Copilot, and this is what it came up with:
Misleading assumptions in human/AI comparisons
“The idea that AI can simply replace humans in military tasks is misleading. For example, when unmanned aircraft were introduced, they didn’t just remove the need for pilots; instead, they changed how surveillance and targeting were done, creating new roles for humans. Using AI in the military requires careful thought about how responsibilities shift and whether important values are still protected. Human-AI collaboration doesn’t automatically combine the best qualities of both; it brings its own challenges.
Comparisons between humans and AI often ignore the fact that both are highly diverse. No two humans are the same, and AI systems also vary widely in design and function. AI performance can change over time due to updates, retraining, or changing conditions, just as human performance can be affected by stress or fatigue.”
Unpacking ‘better’ performance
“Speed and accuracy are often cited as AI’s main advantages. While faster operations may seem beneficial, they can reduce the time available to check information and take precautions, increasing risks to civilians. Similarly, high accuracy in tests doesn’t guarantee the same results in real-world situations. For example, in the Gaza conflict, faster AI-driven targeting led to more civilian casualties.
Comparing AI and humans in military contexts often focuses on speed and accuracy, but these are tactical—not humanitarian—goals. All military actions must comply with International Humanitarian Law (IHL), which balances military necessity and civilian protection. Oversimplifying AI’s potential risks ignoring IHL’s complex, interrelated principles of distinction, precaution, and proportionality.”
Balancing technology with human judgment
I find it reassuring that the Asser Institute is monitoring the humanitarian aspects of the use of AI on the battlefield. As in other sectors, people are searching for the best application of AI. Researchers Klonowska and Woodcock advise against placing too much emphasis on the comparison between AI and humans, suggesting instead that we look at the quality of human-machine interaction.
*Picture created with the help of AI
International humanitarian law (IHL), also known as the law of war or law of armed conflict, is a set of rules that limits the effects of armed conflict for humanitarian reasons. It protects people who are not participating in hostilities, such as civilians, the wounded, and prisoners of war, and restricts the methods and means of warfare that can be used. IHL applies to all sides in a conflict and does not regulate the legality of war itself.
About the Asser Instituut
The T.M.C. Asser Instituut conducts fundamental and independent policy-oriented research and organises critical and constructive reflection on international and European legal developments, at the interface of academia, legal practice and governance.
Read more
What are the initiatives of OPCW to free the world from chemical weapons?
An impression of the work of the Nobel Peace Prize-winning OPCW in The Hague to free the world from chemical weapons.
[New publication] “Digital yes-men: How to deal with sycophantic military AI?”
In a new publication, researcher Jonathan Kwik (Asser Institute) examines sycophantic military AI assistants. He explores the reasons behind ‘bootlicking’ behaviour in AI systems, highlights the significant battlefield dangers it presents, and proposes a two-part strategy comprising improved design and enhanced training to mitigate these risks for military forces.
[New publication] Rhetoric and regulation: The (limits of) human/AI comparison in debates on military artificial intelligence
The promise of artificial intelligence (AI) is ubiquitous and compelling, yet can it truly deliver ‘better’ speed, accuracy, and decision making in the conduct of war? As AI becomes increasingly embedded in military targeting processes, legal and ethical debates often ask who performs better: humans or machines. In a new publication, researchers Klaudia Klonowska and Taylor Kate Woodcock argue for the urgent need to critically examine the assumptions behind the human/AI comparison and its usefulness for legal analysis of military targeting.