Artificial Intelligence in Defense: The Drop That Wears Away the Rock of International Law

A few days ago, Ukraine announced its intention to withdraw from the Ottawa Convention, which prohibits the use of antipersonnel mines. Poland and the Baltic states had already done the same; the United States, Israel, Russia, and China, for their part, never signed it. The decision is far from marginal: antipersonnel mines, by definition, violate international humanitarian law, striking combatants and civilians indiscriminately even many years after the end of hostilities. Post-war Italian generations know something about this. Ukrainian generations will likely know it too: it is estimated that approximately a quarter of the country's territory is now mined.
But the significance of the decision goes beyond the contingency of the war. It sends a worrying political message: when war escalates, the constraints of humanitarian law become negotiable, surmountable.
It is a message coming from many quarters. In recent years, international politics has shown signs of regression: so-called gunboat diplomacy has reemerged, the idea that a state's rights extend only as far as its military capabilities. The Russian invasion of Ukraine, the ongoing conflict in Gaza, and even Donald Trump's off-the-cuff statements about a possible annexation of Canada or Greenland are contemporary variations on a theme thought to have been consigned to history.
This logic challenges the very foundation of modern international law: the prohibition of territorial conquest by force. The abandonment of the right of conquest marked the shift from a coercive management of relations between states to one governed by shared norms and supranational institutions. In this context, international humanitarian law is not legal pedantry but the expression of a principle: even in war, there are limits. This is what distinguishes conflict from barbarism.
The return of gunboat diplomacy represents a not-so-veiled attempt to reverse that transition and to move beyond international law altogether.
AI applied to defense could prove to be the decisive tool for making that effort succeed. It allows a gradual, silent erosion of international law, potentially reducing it to an ineffective formal structure. AI in defense could spell the end of international law not through an open revolution, but through a series of skillfully distributed tactical violations.
The war in Ukraine marked a watershed in the adoption of AI in defense, an acceleration unaccompanied by regulation. Not even the EU's AI Act regulates the use of AI in defense. Yet in Ukraine, both sides are using lethal, potentially autonomous weapons before there is even a consensus on their legality. Israel has used AI to identify targets in Gaza without any agreed-upon rules on acceptable error thresholds or minimum levels of human control. Meanwhile, the representatives of the member states in the UN group that has been working on autonomous weapons since 2013 have not even reached consensus on a definition of these weapons systems.
It would be naïve to consider this regulatory vacuum a temporary stalemate, the product of the age-old dilemma between regulation that comes too early to be effective and regulation that comes too late to matter. It is something more worrying: the creation of a regulatory limbo that serves the interests of both liberal and authoritarian states. Both converge in fueling the vacuum so as not to limit the potential of AI in defense, even when this may violate fundamental principles such as the distinction between combatants and non-combatants.
We have already seen the same approach to regulating state conduct in the case of cyber attacks between states. There, the regulatory vacuum encouraged aggressive postures, creating damage and risks for our countries' digital infrastructures. Not exactly the most enlightened of choices. It would be best not to repeat it where cyber operations escalate into conventional warfare.
The overlap between the muscular rhetoric of gunboat diplomacy and the discreet pervasiveness of AI is disturbing. While public attention legitimately focuses on the visible effects of the former, the latter operates in the shadows, slowly eroding the stability of international law.
The solution is not to abandon AI in defense, but to clearly reconfigure the regulatory framework within which it can operate. An updated interpretation of the principles of international humanitarian law is needed, extending their validity to emerging technologies. Technological progress must not coincide with, or be the instrument of, legal regression or, worse, moral regression.
*Full Professor of Digital Ethics and Defense Technologies, Oxford Internet Institute, University of Oxford. Author of The Ethics of Artificial Intelligence in Defense, Oxford University Press, which will be published in Italy by Raffaello Cortina.
La Repubblica