GIGA Focus Global
Number 4 | 2026 | ISSN: 1862-3581

Autonomous weapon systems (AWS) operate in various crises today, yet no binding international law governs their use. This regulatory vacuum allows machines to make life-or-death decisions while accountability is murky, spread across an array of programmers, commanders, and manufacturers, threatening the core principles of international humanitarian law.
AWS deployed in Ukraine, Gaza, and Sudan cannot reliably distinguish civilians from combatants in complex urban environments, contravening Articles 48 and 51 of Additional Protocol I to the Geneva Conventions, which prohibit indiscriminate attacks.
Dual-use technology renders traditional arms control obsolete. Unlike nuclear weapons, AWS emerge from civilian AI frameworks, making proliferation impossible to monitor. Iran’s drone programme exemplifies how actors exploit commercial technology to bypass export controls.
Machine-learning systems evolve unpredictably during deployment, rendering pre-deployment legal reviews meaningless. Article 36 of Additional Protocol I requires states to assess whether new weapons can be used lawfully; autonomously evolving systems cannot fulfil this obligation.
Distributed responsibility across commanders, operators, programmers, and manufacturers creates an accountability void. When AWS commit unlawful acts, international humanitarian law frameworks cannot assign responsibility, as no single actor sufficiently understands or controls the system’s decision-making to bear legal culpability.
States must negotiate binding prohibitions on unpredictable AWS and establish technical standards for explainability and human control. Existing voluntary measures, including the US-led Political Declaration on responsible military AI, are clearly insufficient. Without treaty obligations and verification mechanisms, proliferating AWS will entrench systems incapable of complying with international humanitarian law.
The integration of artificial intelligence (AI) into military operations marks a critical juncture in warfare. Major powers and regional actors are heavily investing in AWS (Bazoobandi 2025). Current AI technology cannot yet fully replicate the professional judgement of trained soldiers, but this may soon change. These developments have brought what scholars once termed “killer robots” (Folly 2021) closer to operational reality than ever before. Humanity stands at a watershed moment where decisions over life and death may soon rest with pre-programmed algorithms (Kmentt 2025).
The debate over AWS reveals deeply divided opinions. Military planners view these systems as revolutionary tools capable of operating in environments beyond human capacity, potentially reducing casualties and operational costs while enhancing battlefield efficiency. Proponents also argue that AWS could minimise atrocities by removing from combat decisions the stress-induced cognitive impairment that compromises human judgement under fire, all while saving soldiers’ lives through superior performance. By contrast, critics identify insurmountable challenges, particularly regarding compliance with international humanitarian law (IHL) principles of distinction, proportionality, and precaution. Indeed, the adoption of AI in defence technologies fundamentally threatens the role that predictability, reliability, and accountability traditionally play in weapons assessment and public acceptance (Roff 2014).
The global debate on AWS has predominantly focused on operational principles (i.e. proportionality, distinction, precaution, and military necessity) while neglecting critical questions surrounding design, manufacturing, and commercialisation. Technology companies and international investors are playing an increasingly decisive role in advancing autonomous weapons capabilities, yet their responsibilities and potential liabilities remain largely unaddressed in regulatory discussions.
Recent global conflicts in Ukraine, Gaza, Yemen, Iran, and Sudan have already witnessed heavy AWS deployment. Unlike unconventional weapons such as nuclear and biological arms, whose accountability frameworks are clearly established under international law, AWS present an unprecedented challenge: defining responsibility when machines make lethal decisions (Panneerselvam 2024). This absence of human decision-makers raises profound ethical questions about the dehumanisation of warfare.
AWS struggle to reliably distinguish combatants from civilians in complex battlefield environments. At the same time, AWS operators, who are physically and psychologically removed from combat’s realities, become desensitised to committing acts of violence. This disengagement also risks dehumanising the enemy and fundamentally altering the nature of armed conflict for the worse. Experiences of US drone warfare offer an instructive parallel: operators based thousands of miles from the battlefield, with little local knowledge or situational awareness, make lethal targeting decisions in real time. Remote killing creates a paradoxical intimacy with violence that erodes traditional ethical constraints on the use of force (Gusterson 2017). In fact, the physical removal of operators from the battlefield in drone warfare has already generated profound accountability gaps, as responsibility becomes diffuse across chains of command far removed from the consequences of their decisions (Strawser 2013).

Because AWS are developed across multiple actors (i.e. programmers, manufacturers, and military commanders), such responsibility becomes further fragmented and difficult to assign: programmers cannot predict how learning algorithms will evolve in deployment; military commanders and operators lack understanding of the system’s internal decision-making processes; and manufacturers may be disconnected from operational contexts (Taylor 2021). IHL (summarised in Table 1) holds that accountability lies with the commander who authorises deployment. Yet no single commander can capture the complexity of responsibility distributed across all actors involved in AWS (Hellman 2024). This accountability gap is exacerbated by the absence of meaningful human control in real-time targeting decisions by AWS. This is a challenge that drone warfare has already exposed, and one that fully autonomous systems will intensify.
Table 1. Core IHL Principles and the Challenges Posed by AWS
IHL Principle | Legal Basis | Core Requirement | Challenge Posed by AWS |
Prohibition of Unnecessary Suffering | Article 35, Additional Protocol I | Weapons must not inflict suffering beyond what is deemed “necessary” to achieve legitimate military objectives, nor cause widespread or long-term environmental damage. | AWS cannot calibrate suffering or environmental impact against military necessity; their targeting logic operates without the moral reasoning required to apply this constraint. |
Principle of Distinction | Article 48, Additional Protocol I | Parties must always distinguish between civilians and combatants, and between civilian objects and military objectives. Indiscriminate attacks are prohibited. | AWS cannot reliably distinguish combatants from civilians, particularly in complex urban environments, making compliance with this principle structurally uncertain. |
Principle of Military Necessity | Customary IHL / Additional Protocol I | Force may be used only where indispensable to achieving a “legitimate” military objective. This requires context-dependent moral judgement. | Although AWS can execute high-risk tactical operations, they are devoid of the moral reasoning required to minimise civilian casualties and unnecessary suffering. |
Obligation of Legal Review | Articles 36 and 82, Additional Protocol I | All planned means and methods of warfare must be subject to legal review before deployment. Legal advisers must be available to military commanders. | Commanders cannot seek meaningful legal advice on algorithmic targeting processes they do not understand, while legal advisers lack the technical expertise to assess AWS compliance. |
These accountability gaps are symptomatic of a broader failure: the international community has yet to develop regulatory frameworks capable of keeping pace with the realities of autonomous warfare, and international humanitarian standards continue to lag behind the advancement of defence technology.
Recognising the urgent need for coordinated international governance of military AI applications, the Dutch government launched the Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM) during the 2023 REAIM Summit in The Hague, which was co-hosted by the South Korean government. The Commission serves as a bridge between diverse stakeholders (including governments, military institutions, technology developers, civil society organisations, and academic communities) working on AI governance issues in the defence sector.
The REAIM 2023 Call to Action, endorsed by 57 countries at The Hague summit, acknowledges that rapid AI adoption in the military domain carries significant risks (including unreliability, unclear liability, and the potential for unintended escalation) that states do not fully understand. It affirms that humans must remain responsible and accountable for AI-assisted decisions. Further, it stresses the need for appropriate safeguards and oversight. Rather than proposing binding rules, the Call to Action is explicitly non-binding and encourages action rather than obligating it. It calls on states, industry, civil society, and academia to share knowledge, develop national frameworks, and continue the multilateral dialogue on responsible military AI. The United States Department of State and Department of Defense (now also referred to as the Department of War) have similarly taken the initiative to advance military AI governance, convening in March 2024 the inaugural meeting of states endorsing the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which was launched at REAIM.
These initiatives represent important developments, yet they also point to a persistent challenge. Voluntary commitments of this kind are insufficient: without binding treaty obligations and verification mechanisms, states face no enforceable consequences for non-compliance. A further challenge lies in the absence of an internationally accepted definition of autonomous weapon systems. None of the initiatives discussed above establishes consensus on the point along the autonomy spectrum at which a system becomes legally impermissible.
Autonomy is not binary but exists along a spectrum, ranging from systems requiring constant human input to those that operate almost entirely independently. Without agreement on the definition of these weapons and their autonomy threshold, any regulatory effort risks either over-inclusion (i.e. restricting systems with meaningful human oversight) or under-inclusion (i.e. leaving the most dangerous systems unregulated).
The US Department of Defense has proposed defining an AWS as a weapon system that, once activated, can select and engage targets without further intervention by a human operator (United States Department of Defense 2011). This definition turns on the system’s capacity to make targeting decisions independently after initial human activation.
Based on the above-mentioned proposed definition, the legal and ethical risks posed by AWS are determined by two interacting factors: a) their level of autonomy and b) the operational environment in which they are deployed. Table 2 provides some examples of existing AWS.
Table 2. Examples of Existing AWS
System Name | Developer/Country | Deployment Zone | Autonomy Level | Description |
SGR-A1 Sentry Robot | Samsung Techwin / South Korea | Korean Demilitarised Zone | Unclear | Stationary security robot |
Switchblade | United States | Ukraine | Human in the loop | Loitering munition (kamikaze drone) |
Shahed-136 | Iran | Ukraine | Human in the loop | Loitering munition (kamikaze drone) |
HARPY | Israel | Multiple locations | Human out of the loop | Loitering munition (for suppression of enemy air defence) |
Sensor-Fuzed Weapon | United States | Battlefield: multiple locations | Human out of the loop | Cluster bomb: multiple submunitions equipped with infrared sensor detecting vehicle heat signatures |
Human involvement in AWS operations is a matter of degree. In 2011 the US Department of Defense published a roadmap outlining a progression from human-operated systems through human-delegated and human-supervised configurations to fully autonomous platforms. Thus, AWS can include systems requiring continuous human authorisation before engagement, those operating under human supervision with intervention capacity, and those functioning entirely independently once activated. In configurations where operators must actively authorise each engagement phase while setting objectives and validating system decisions, meaningful human control remains theoretically intact. Systems operating under supervisory monitoring – where humans establish mission parameters and retain override capability, but the system executes targeting decisions autonomously – occupy an ambiguous middle ground. In such systems, human control exists in principle but may prove illusory given the speed of autonomous operations: supervisors are expected to monitor, understand, and, if necessary, override in real time, straining the limits of human attention. At the furthest end of the spectrum, fully autonomous systems pursuing predetermined objectives without human oversight or intervention capacity represent the most legally and ethically problematic category, as they eliminate the possibility of meaningful human judgement in life-or-death decisions.
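As a purely illustrative sketch (all names are invented; no real command-and-control interface is represented), the spectrum just described can be modelled as a configuration setting combined with a single engagement gate. The sketch makes visible where the human veto sits along the spectrum, and where it disappears:

```python
# Illustrative sketch only: the human-control spectrum as a configuration
# enum plus one engagement gate. All names are invented for this example.
from enum import Enum

class ControlMode(Enum):
    HUMAN_OPERATED = "human authorises every engagement"   # in the loop
    HUMAN_SUPERVISED = "system acts; human may override"   # on the loop
    FULLY_AUTONOMOUS = "system acts without oversight"     # out of the loop

def engagement_permitted(mode: ControlMode, human_approved: bool,
                         human_override: bool) -> bool:
    """Return whether the system may proceed under the given control mode."""
    if mode is ControlMode.HUMAN_OPERATED:
        return human_approved        # nothing happens without explicit approval
    if mode is ControlMode.HUMAN_SUPERVISED:
        return not human_override    # proceeds unless a human vetoes in time
    return True                      # fully autonomous: no human gate at all

# The supervised mode captures the "ambiguous middle ground": the veto exists
# in principle, but only takes effect if the human notices and reacts in time.
print(engagement_permitted(ControlMode.HUMAN_SUPERVISED,
                           human_approved=False, human_override=False))  # True
```

The toy gate shows why supervisory control is the contested category: in the supervised mode the default answer is "proceed," and human judgement enters only as an exception.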
The existing initiatives on military AI governance have failed to establish concrete thresholds that distinguish between acceptable and unacceptable levels of autonomy in weapon systems. This failure is as much political as it is technical. Major powers (most notably the United States and China) have shown a fundamental lack of interest in binding military AI control frameworks, an attitude driven by intensifying geopolitical competition and accelerating arms races in autonomous systems. Where military technological supremacy is at stake, international norms have consistently been subordinated to strategic advantage.
The operational environment also shapes the compliance of AWS with international law. Three operational contexts illustrate the range of difficulty. The first is demilitarised zones: buffer areas where both military forces and civilian movement are restricted. These might present relatively straightforward conditions for autonomous systems, because the absence of civilians should simplify target identification. The second is frontline combat zones: fluid, fast-moving environments where opposing forces are actively engaged. These present a greater challenge to AWS because they require split-second judgements about who is a combatant and what constitutes a legitimate target. The third is urban warfare, the most legally and ethically critical context for AWS deployment. In cities and towns, combatants operate among dense civilian populations, using civilian infrastructure as cover. Such conditions are already complex for human soldiers, who must exercise contextual judgement. For AWS, the challenge is far greater: without genuine contextual understanding, autonomous systems cannot reliably distinguish a fighter from a civilian.
This regulatory vacuum is further deepened by the widespread disregard for IHL that characterises contemporary conflicts. In Gaza, Ukraine, and Sudan, the rules of armed conflict have been systematically violated, making the prospect of negotiating meaningful constraints on AWS even more remote. Until the international community grapples with the question of whether fully autonomous lethal decision-making is ever permissible under IHL, and until major powers demonstrate genuine commitment to arms control over strategic competition, autonomous systems will remain a largely ungoverned feature of modern warfare.
Another critical dimension of the AWS regulatory challenge lies in the dual-use nature of the underlying technologies. Unlike conventional weapons, which require specialised facilities, materials, and expertise, AWS rely primarily on software and widely available commercial technologies, such as open-source algorithms and computational techniques originally developed for civilian purposes. The same technologies that enable lethal autonomous weapons also power beneficial civilian innovations. For instance, the computer vision algorithms that allow an autonomous drone to identify and track human targets are identical to those enabling medical delivery drones. The machine-learning systems that process battlefield sensor data to classify threats employ the same network architectures used in autonomous vehicles. Commercial drones, 3D printing technology, and AI development platforms, all readily available on global markets, can be repurposed for military applications with relatively minimal modification.
Dual-use technology creates an unprecedented regulatory void with no effective global oversight mechanisms. Traditional arms control regimes (such as the Nuclear Non-Proliferation Treaty or the Chemical Weapons Convention) function by controlling access to specialised materials, monitoring production facilities, and restricting exports of weapons-specific components. These approaches are inapplicable to AWS, where the critical enabling technology is software that can be written anywhere, transmitted instantaneously across borders, and implemented on hardware indistinguishable from civilian equipment.
Iran’s drone programme illustrates precisely how dual-use technologies render traditional non-proliferation measures obsolete. Despite decades of international sanctions and export controls designed to restrict access to advanced military technologies, the Islamic Republic has successfully acquired critical capabilities through systematic exploitation of the civilian technology sector. Many Iranian scientists have gained expertise in Western universities under the guise of civilian research programmes, then transferred this knowledge back to military applications at home. Iranian operatives, posing as businesspeople and technology entrepreneurs, have been able to access dual-use components on international markets.
The implications extend beyond state actors. Because dual-use technologies are affordable and widely available, non-state actors (i.e. terrorist organisations, insurgent groups, and private military contractors) can now acquire such capabilities as well. This undermines the state-centric assumptions underlying international humanitarian law and arms control frameworks. Compounding this challenge, restrictions on the civilian development of AI would impact beneficial innovation across health care, transportation, and many other sectors. Moreover, the technology companies and states developing AI have strong commercial and strategic incentives to resist regulation: in competitive global markets, where falling behind a rival risks both economic and military disadvantage, any restriction imposed in one jurisdiction simply relocates development to another. This dynamic makes unilateral or even regional regulatory approaches extremely challenging.
The principle of meaningful human control, a pillar of IHL, depends on clear accountability across the hierarchy of military command levels (Ekelhof 2019). Military organisations operate through three distinct command levels: 1) strategic command, which translates political objectives into broad military goals; 2) operational command, which converts these into concrete missions and tasks; and 3) tactical command, which directs the specific employment of weapon systems in direct contact with adversaries and civilian populations (Ekelhof and Persi Paoli 2020). At the tactical level, although life-or-death decisions may be authorised by commanders several levels up the chain of command, autonomous weapons engage targets without a human making the final decision.
Further complicating this chain are civilian software engineers, AI researchers, and data scientists working for private companies or academic institutions who design the machine-learning algorithms that determine how AWS behave in the field. They operate entirely outside military command structures, lack training in IHL, and may never know how their technology will ultimately be used in combat.
The problem is not that AI is insufficiently advanced; it is that the very features that make AWS militarily effective also render existing legal frameworks inadequate to govern them. For AWS to function most effectively in battlefield environments, they must adapt continuously to changing conditions, recognise and respond to human combatants and civilians, and interpret behavioural cues. These capabilities require advanced machine learning that evolves beyond the original design.
Machine-learning approaches fall into two categories with fundamentally different implications for human control. The first is offline learning, in which systems are trained on fixed datasets before deployment; their performance characteristics remain constant during operations because the system does not continue learning after training concludes. The second is online learning, in which systems update continuously during operations, fundamentally altering their own behaviour over time in ways developers cannot predict or control. Such systems may learn incorrect or unintended patterns and develop new biases. Offline learning, in principle, allows for meaningful pre-deployment legal review. Online learning, however, can autonomously evolve behaviours that violate IHL principles – without human operators recognising the change.
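To make this distinction concrete, the following is a minimal, purely illustrative Python sketch: the classes, the toy perceptron-style update rule, and the random data are invented for this example and represent no real weapon system. It contrasts an offline-trained model, whose decision rule is frozen and therefore inspectable before deployment, with an online learner whose decision boundary drifts with every field observation:

```python
# Illustrative sketch: offline (frozen) vs online (continually updating)
# classifiers. All names and the toy update rule are invented.
import numpy as np

rng = np.random.default_rng(0)

class OfflineClassifier:
    """Weights fixed at deployment; behaviour is reviewable beforehand."""
    def __init__(self, weights):
        self.w = np.asarray(weights, dtype=float)

    def predict(self, x):
        return 1 if self.w @ x > 0 else 0  # decision rule never changes

class OnlineClassifier(OfflineClassifier):
    """Keeps updating from field data; diverges from what was reviewed."""
    def observe(self, x, label, lr=0.1):
        # Perceptron-style update: each observation shifts the weights, so the
        # post-deployment decision boundary no longer matches the one assessed
        # in any pre-deployment review.
        error = label - self.predict(x)
        self.w += lr * error * np.asarray(x, dtype=float)

w0 = np.array([1.0, -0.5])
offline = OfflineClassifier(w0.copy())
online = OnlineClassifier(w0.copy())

for _ in range(200):  # noisy, possibly mislabelled observations from the field
    x = rng.normal(size=2)
    online.observe(x, label=int(rng.random() < 0.5))

x_test = np.array([0.8, 0.3])
print("offline prediction:", offline.predict(x_test), "| weights:", offline.w)
print("online prediction: ", online.predict(x_test), "| weights:", online.w)
```

After deployment, the offline model’s weights are still exactly those a pre-deployment review could have examined, while the online learner’s weights have moved away from the reviewed configuration; this is why Article 36-style reviews lose their purchase on online-learning systems.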
This matters directly for legal compliance. States deploying AWS with online-learning capabilities cannot demonstrate compliance with Article 36 of Additional Protocol I, which requires weapons reviews to ensure new weapons can be used in accordance with international law. If the weapon’s behaviour evolves unpredictably during deployment, pre-deployment review is rendered meaningless. The distributed command structure compounds this: strategic commanders lack technical understanding of machine-learning systems, operational commanders cannot predict how learning algorithms will respond to novel situations, and tactical commanders cannot determine how AI models have evolved since training.
The accountability problems created by distributed command structures and unpredictable machine learning converge on two technical requirements that AWS must meet if they are to operate lawfully: predictability and explainability. Predictability refers to the capacity of a system to behave consistently and within anticipated parameters across different operational contexts, allowing operators to anticipate how it will respond before and during deployment. Explainability refers to the capacity of a system to make its decision-making logic transparent and comprehensible to human operators, so that the basis for any given targeting decision can be understood and scrutinised.
Predictability alone proves insufficient when systems fail unexpectedly or operate outside anticipated parameters. Meaningful human control requires that operators be able to intervene when AWS malfunction, which demands both technical training and genuine conceptual understanding of the system’s decision-making logic. When an operator cannot determine whether an autonomous system is misidentifying targets, rapid corrective action is impossible and violations of IHL become inevitable.
Explainable systems allow operators to exercise more effective oversight, particularly when the system behaves unexpectedly and the operator must rapidly determine whether intervention is required. However, what constitutes a useful explanation varies significantly across application domains, specific tasks, and operator expertise levels (Martinez and Rodriguez 2025). Current AWS, particularly those employing deep-learning architectures, are inherently opaque. Neither developers nor operators can interpret their decision-making processes in real time.
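What an “explanation” can look like in practice is easiest to see in a deliberately simple case. The sketch below, with invented feature names, weights, and threshold, uses a linear scorer whose output decomposes into per-feature contributions an operator can inspect; a deep-learning model offers no such additive decomposition, which is precisely the opacity described above:

```python
# Illustrative sketch of additive feature attribution for a linear scorer.
# Feature names, weights, and threshold are hypothetical.
import numpy as np

FEATURES = ["thermal_signature", "speed", "radar_cross_section"]  # invented
weights = np.array([0.9, 0.2, 0.7])  # fixed, human-reviewable parameters
threshold = 1.0

def classify_with_explanation(x):
    """Classify and print an additive, per-feature explanation of the score."""
    contributions = weights * x  # each feature's contribution to the score
    score = contributions.sum()
    decision = bool(score > threshold)
    # An operator can see *why* the score crossed (or missed) the threshold:
    for name, c in zip(FEATURES, contributions):
        print(f"  {name}: {c:+.2f}")
    print(f"  total {score:.2f} vs threshold {threshold} -> flagged: {decision}")
    return decision

classify_with_explanation(np.array([0.8, 1.5, 0.4]))
```

The point of the toy example is the decomposition itself: every decision can be traced back to named, human-auditable factors, which is what current deep-learning AWS architectures cannot provide in real time.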
Effective governance of military AI demands progress on three fronts. First, a legally binding treaty must prohibit AWS with online-learning capabilities and systems lacking explainability mechanisms. International initiatives such as the voluntary Political Declaration (initiated by the United States) and the REAIM 2023 Call to Action (initiated by the Netherlands and South Korea) are positive though inadequate steps; only treaty obligations with verification mechanisms would meaningfully prevent the proliferation of AWS. The prospects for such a treaty are, however, remote. Neither of the two leading powers on military AI, the United States and China, has signalled genuine appetite for binding negotiations. Washington’s position has been consistent across administrations, though the Trump administration has more explicitly rejected multilateral governance frameworks altogether, reflecting the United States’ prioritisation of strategic competition over collective governance. Meanwhile, the pace of AI advancement continues to outstrip the pace of diplomatic progress, widening the regulatory gap. Binding frameworks are most achievable before technologies become militarily entrenched, and that window is closing rapidly.

Second, any treaty must establish technical standards defining meaningful human control at the operational level, requiring operators to possess real-time intervention authority, not merely nominal oversight. Third, the framework must extend accountability beyond military commanders to include technology developers and manufacturers throughout the system lifecycle. Without binding legal obligations established before autonomous weapon systems become further entrenched, international humanitarian law risks becoming progressively unenforceable as machines replace human judgement in warfare’s most consequential decisions.
The author would like to thank her dear friend Mehdi Mahdavi, whose thoughtful conversation sparked the initial reflection that led to this paper. That exchange was the catalyst for exploring the topic.
Bazoobandi, Sara (2025), Emerging Defence Technologies in the Middle East: Strategic Implications and Regional Security Dynamics, Digital Cooperation with Global Partners – Policy Study, 8, accessed 8 January 2026.
Ekelhof, Merel (2019), Moving beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation, in: Global Policy, 10, 3, 343–348, accessed 8 January 2026.
Ekelhof, Merel, and Giacomo Persi Paoli (2020), The Human Element in Decisions about the Use of Force, UNIDIR, 31 March, accessed 8 January 2026.
Folly, Maiara (2021), ‘Killer Robots’: The Danger of Lethal Autonomous Weapons Systems, Southern Voice (blog), 29 November, accessed 8 January 2026.
Gusterson, Hugh (2017), Drone: Remote Control Warfare, Cambridge, MA: MIT Press.
Hellman, Jacqueline (2024), The Impact of Autonomous Weapons Systems on Armed Conflicts: Are International Humanitarian Law Norms Offering an Adequate Response?, in: David Hernández Martínez and José Miguel Calvillo Cisneros (eds), International Relations and Technological Revolution 4.0, Cham: Springer, 155–172.
International Committee of the Red Cross (1977), Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I), 8 June, accessed 11 May 2026.
Kmentt, Alexander (2025), Geopolitics and the Regulation of Autonomous Weapons Systems, Arms Control Association, January/February, accessed 8 January 2026.
Martinez, Maria Vanina, and Ricardo O. Rodriguez (2025), Reflections on the Use of Artificial Intelligence for Weapons Applications, in: Alger Sans Pinillos, Vicent Costa, and Jordi Vallverdú (eds), Second Death: Experiences of Death Across Technologies, Cham: Springer, 119–135.
Panneerselvam, Prakash (2024), Autonomous Weapon System: Debating Legal–Ethical Consideration and Meaningful Human Control Challenges in the Military Environment, in: Sangeetha Menon, Saurabh Todariya, and Tilak Agerwala (eds), AI, Consciousness and the New Humanism: Fundamental Reflections on Minds and Machines, Singapore: Springer, 243–258.
Roff, Heather M. (2014), The Strategic Robot Problem: Lethal Autonomous Weapons in War, in: Journal of Military Ethics, 13, 3, 211–227, accessed 8 January 2026.
Scharre, Paul (2018), Army of None: Autonomous Weapons and the Future of War, New York: W.W. Norton & Company.
Strawser, Bradley Jay (ed.) (2013), Killing by Remote Control: The Ethics of an Unmanned Military, Oxford: Oxford University Press.
Taylor, Isaac (2021), Who is Responsible for Killer Robots? Autonomous Weapons, Group Agency, and the Military-Industrial Complex, in: Journal of Applied Philosophy, 38, 320–334, accessed 8 January 2026.
United States Department of Defense (2011), Unmanned Systems Integrated Roadmap FY2011–2036, accessed 8 January 2026.
Bazoobandi, Sara (2026), How Autonomous Weapon Systems Threaten International Humanitarian Law, GIGA Focus Global, 4, Hamburg: German Institute for Global and Area Studies (GIGA), https://doi.org/10.57671/gfgl-26042
The GIGA Focus is an Open Access publication and can be read on the Internet and downloaded free of charge at www.giga-hamburg.de/en/publications/giga-focus. According to the conditions of the Creative-Commons license Attribution-No Derivative Works 3.0, this publication may be freely duplicated, circulated, and made accessible to the public. The particular conditions include the correct indication of the initial publication as GIGA Focus and no changes in or abbreviation of texts.
The German Institute for Global and Area Studies (GIGA) – Leibniz-Institut für Globale und Regionale Studien in Hamburg publishes the Focus series on Africa, Asia, Latin America, the Middle East and global issues. The GIGA Focus is edited and published by the GIGA. The views and opinions expressed are solely those of the authors and do not necessarily reflect those of the institute. Authors alone are responsible for the content of their articles. GIGA and the authors cannot be held liable for any errors and omissions, or for any consequences arising from the use of the information provided.