Artificial intelligence is rapidly transforming modern warfare, introducing unprecedented capabilities and challenges. At the University of Amsterdam, researcher Taylor Kate Woodcock has conducted groundbreaking research examining how AI is not only changing military tactics but also fundamentally reshaping international law. Her thesis on AI, warfare, and international law reveals critical insights about the intersection of technology, military decision-making, and legal frameworks.
The Bias Problem in AI Military Systems
One striking example from Woodcock’s research highlights a concerning bias: a middle-aged man walking down a road is more likely to be identified as a target by an AI system than a woman walking the same road. This demonstrates how AI systems can perpetuate and even amplify existing human biases in warfare scenarios. Should militaries accept these limitations, and what are the legal consequences?
Woodcock explains that while existing targeting practices are already subject to the messy realities of warfare, AI introduces systematic errors that are harder to contest. “AI not only causes mistakes but also reproduces them systematically and in a way that is hard to contest,” she notes. “In that sense, AI can amplify existing human biases and mistakes in military decision-making.”
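To make that mechanism concrete, here is a minimal sketch in Python (using NumPy and scikit-learn, which are not mentioned in the research) of how a classifier trained on demographically skewed labels reproduces that skew systematically. All features, labels, and numbers are hypothetical; this illustrates the statistical mechanism Woodcock describes, not any actual military system.

```python
# Illustrative sketch only: a toy dataset in which the training labels are
# skewed by a demographic attribute, so the fitted model reproduces that
# skew on every prediction. Everything here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical inputs: a demographic attribute and a behavioral feature
# that is what we would actually want the classification to depend on.
demographic = rng.integers(0, 2, n)
behavior = rng.normal(size=n)

# Biased historical labels: the positive label correlates with the
# demographic attribute, not only with behavior.
labels = ((behavior + 1.5 * demographic
           + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([demographic, behavior])
model = LogisticRegression().fit(X, labels)

# For identical behavior, the model assigns a markedly higher score to one
# demographic group than the other, based on the attribute alone.
same_behavior = np.array([[1, 0.0], [0, 0.0]])
print(model.predict_proba(same_behavior)[:, 1])
```

Because the skew lives in the learned weights, it recurs on every single prediction, which is what makes it systematic rather than an occasional human lapse.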
How AI is Used in Modern Military Operations
Military commanders must distinguish between civilians and lawful targets such as combatants, ensure that incidental civilian harm is not “excessive” relative to the anticipated military advantage, and take feasible precautions to minimize civilian casualties. These requirements remain unchanged, but AI fundamentally alters how these decisions are made.
“Machine learning models are used to make predictions based on patterns,” Woodcock explains. “Contemporary military organizations use drone footage, open-source information, and human intelligence. There is an overload of information that people cannot process without AI. That is the key use of AI in militaries: it can process huge amounts of data at high speed and scale.”
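A rough sense of that scale can be conveyed with a short, purely illustrative sketch: a linear model can score a million fused feature vectors in a single matrix operation, far beyond what human analysts could review item by item. The shapes, weights, and source names below are all invented.

```python
# Purely illustrative: batch-scoring fused intelligence features with a
# linear model. None of this reflects a real military system.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature vectors fused from drone footage, open-source
# information, and human intelligence (16 features per item).
features = rng.normal(size=(1_000_000, 16))
weights = rng.normal(size=16)  # stand-in for a trained model's weights

scores = features @ weights             # one matrix-vector product scores everything
top_items = np.argsort(scores)[-100:]   # only the highest-scoring items reach a human
```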
Understanding AI’s Different Decision-Making Process
AI operates fundamentally differently from human cognition. For instance, when tasked with distinguishing between photos of dogs and wolves, an AI system might latch onto a spuriously correlated feature like snow in the background rather than the animals themselves. This “black box” nature of AI makes it difficult for humans to intervene or understand why certain decisions are made.
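The dog/wolf case is widely cited in the machine learning literature on spurious correlations. Here is a toy numeric version in Python (synthetic data, scikit-learn assumed) showing how a model can score well while ignoring the animal entirely.

```python
# Toy version of the dog/wolf failure mode: the label correlates almost
# perfectly with a background feature ("snow"), so the model learns the
# shortcut instead of the animal. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2_000

animal_features = rng.normal(size=(n, 8))       # what we want the model to use
is_wolf = rng.integers(0, 2, n)                 # ground-truth labels
snow = is_wolf + rng.normal(scale=0.1, size=n)  # spurious background shortcut

X = np.column_stack([animal_features, snow])
model = LogisticRegression(max_iter=1000).fit(X, is_wolf)

# The learned weight on the snow feature dwarfs the animal features:
# the model has learned the background, not the animal.
print(model.coef_[0])
```

A model like this performs well on snowy wolf photos and fails the moment a wolf appears on grass, and nothing in its output reveals why.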
“AI is often viewed as objective, leading people to uncritically rely on these systems,” Woodcock warns. “Imagine AI identifying targets in wars in life-or-death situations, when there is still a lot of uncertainty.”
The Legal Implications of AI in Warfare
Woodcock views international law as a practice shaped by human interpretation and daily engagement. Legal standards like “feasible precautions” and “excessive” civilian harm turn on concepts of reasonableness that gain meaning through interpretation. AI can fundamentally alter this discretion and reshape what is considered reasonable.
“AI is making predictions about targets that are not 100 percent accurate, with margins of error being a feature of how these systems work,” she explains. “Those errors can become reasonable because the commander cannot foresee system errors in specific instances due to AI’s lack of transparency and predictability.”
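A simple base-rate calculation shows why these margins of error are structural rather than occasional. The rates below are invented for illustration; they are not drawn from Woodcock’s research or any real system.

```python
# Hypothetical numbers: even a classifier that is right 99% of the time,
# applied at scale to a rare positive class, produces a steady stream of
# false positives.
sensitivity = 0.99          # share of true positives the system catches
false_positive_rate = 0.01  # share of negatives wrongly flagged
prevalence = 0.001          # the positive class is rare
population = 1_000_000

true_positives = population * prevalence * sensitivity                  # ~990
false_positives = population * (1 - prevalence) * false_positive_rate  # ~9,990

precision = true_positives / (true_positives + false_positives)
print(f"precision: {precision:.2%}")  # roughly 9%: most flags are wrong
```

Each individual error is unforeseeable in the way Woodcock describes, yet the overall error rate is a predictable property of the system.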
Accountability and Transparency Challenges
This shift raises serious concerns about accountability and transparency in AI-assisted warfare. There is a real danger of causing systematic harm, underscoring the need for checks and balances so that commanders understand the risks of the systems they are working with.
“It is important to preserve space to hesitate in the face of difficult decisions,” Woodcock emphasizes. “Decisions about what is reasonable or not are more than just a mathematical formula. Commanders must weigh the harm to civilians against the military advantages. These legal assessments are not straightforward; they are highly complex.”
Preserving Human Judgment in AI-Assisted Warfare
The speed and efficiency of AI can inadvertently close the space for hesitation that is crucial in military decision-making. Woodcock stresses that legal assessments require room for knowledge, experience, intuition, and values that AI cannot replicate.
“It is important to be able to hesitate with all these factors in mind,” she says. “Unfortunately, the use and speed of AI can close the space for hesitation.”
The Future of AI and International Law
Woodcock hopes her research will shed light on AI for military organizations and contribute to ongoing debates about how AI in warfare is reshaping international law and military decision-making. She notes that industry is increasingly engaging with these issues, recognizing the need to understand where the risks lie.
Her research, conducted at the Asser Institute and the Amsterdam Law School as part of the NWO-funded DILEMA project, represents a crucial step in understanding the complex relationship between emerging technologies and established legal frameworks.
Key Takeaways for Military Organizations
Military organizations must recognize that AI systems, while powerful, introduce systematic errors and biases that can become normalized through repeated use. Commanders need to maintain critical thinking and hesitation in decision-making processes, even when AI provides seemingly objective recommendations.
The legal framework of international humanitarian law must evolve to address the unique challenges posed by AI, including issues of accountability, transparency, and the preservation of human judgment in life-or-death situations.
As AI continues to reshape warfare, the research from the University of Amsterdam provides essential guidance for navigating this complex intersection of technology and international law. The challenge moving forward will be ensuring that AI enhances rather than undermines the fundamental principles of humanitarian law and human rights in military operations.