Remote-controlled weapons have been used on battlefields for decades, but artificial intelligence is now on the cusp of making independent lethal decisions without direct human input.
In a scene reminiscent of a computer war game, three battle-fatigued soldiers, dressed in white snow camouflage, emerge from a war-torn alley with their hands raised above their heads. They crouch down, following orders blasted at them from a loudspeaker, fear and shock etched across their faces as they stare down the barrel of a machine gun mounted on a so-called ground robot.
This footage, released in January by Ukrainian defence company DevDroid, is said to show the moment Russian soldiers were captured by a Ukrainian robot using artificial intelligence. It is a small but striking preview of a future where machines may not only fight but also dictate the terms of surrender.
In April, Ukrainian President Volodymyr Zelenskyy announced a major milestone: for the “first time in the history of this war, an enemy position was taken exclusively by unmanned platforms: ground systems and drones.” He later wrote on X that “ground robotic systems have already carried out more than 22,000 missions on the front in just three months,” sharing images of green machines with tank tracks and mounted weapons.
But for analysts who have studied the intersection of AI and warfare, the footage reflects an expected evolution, one that will unfold far beyond the front lines in Ukraine as the world wrestles with the ethical, legal, and strategic implications of ceding control to algorithms.
UAVs, Naval Drones, and Robot Dogs
For years, militaries have used ground robots primarily for bomb disposal and reconnaissance. But in Ukraine, their role has expanded at unprecedented speed. Some brigades now report that up to 70 percent of front-line supplies are delivered by robotic systems rather than by soldiers. These machines transport ammunition, food, and medical supplies, and they evacuate wounded troops from dangerous positions, often under fire.
Yet the sight of robotic systems moving across the battlefield is only part of a much broader shift in warfare, one that has been building for decades. The modern debate about AI in warfare was largely driven by the rise of US unmanned aerial vehicle (UAV) operations in the early 2000s. In 2002, the MQ-1 Predator drone carried out one of the first targeted US air strikes in Afghanistan, marking a turning point in how wars could be fought remotely. Its use expanded rapidly throughout the 2000s, peaking in the late 2000s to mid-2010s in Pakistan, Yemen, and Somalia.
As AI has advanced, the debate has moved beyond remote-control operations. The focus has shifted toward systems that can help identify targets, prioritise strikes, and guide battlefield decisions, raising deeper questions about how much autonomy should be delegated to machines.
Analysts say the question of human control must remain central, rather than being overshadowed by rapid technological developments, however striking the sight of increasingly anthropomorphic machines on the battlefield may be.
“These technologies are here to stay,” Toby Walsh, an AI expert at the University of New South Wales, told the media. He described AI-driven military operations as “the third revolution of warfare”, following gunpowder and nuclear weapons.
The transformation is also spreading beyond land targets. Naval drones packed with explosives have already reshaped battles in the Black Sea, while autonomous underwater systems are being developed for surveillance, mine clearance, and sabotage missions by militaries worldwide. Robotic dogs, meanwhile, are already being tested for surveillance, reconnaissance, and bomb-disposal missions, with some experimental versions even fitted with weapons.
Human Involvement: The Core Dilemma
In recent years, the emergence of fully autonomous drones, so-called “killer robots”, has triggered a fierce debate, especially after a United Nations report suggested that Turkish-made Kargu-2 loitering munition drones, operating in fully autonomous mode, had identified and attacked fighters in Libya in 2020. That incident prompted intense discussions among experts, activists, and diplomats worldwide, as they grappled with the moral and ethical implications of a machine making and executing the decision to take a human life.
However, Anna Nadibaidze, a postdoctoral researcher in international politics at the Centre for War Studies, University of Southern Denmark, told Al Jazeera that more focus is needed on the regulatory debate surrounding semi-autonomous weapon systems, “where humans are still so-called in the loop.” A major concern, she said, is whether “enough time and space” is being given to the “exercise of human judgement that’s necessary in the context of warfare.”
The extent of human involvement is often something observers must take militaries at their word on, a difficult task when their actions leave trust in short supply, said Walsh. In the case of ground robotics in Ukraine, a human operator has so far remained in control, directing machines that can still be halted by obstacles such as uneven terrain or electronic warfare.
However, when AI is involved in the decision-making process, as has been reported in Israel’s operations in Gaza and the wider region, the scale of attacks, which have resulted in “huge collateral damage and civilian casualties for a small number of military targets”, challenges the rules of international humanitarian law, particularly the principle of proportionality, Walsh said.
The core problem, Nadibaidze noted, is that it is hard to enforce rules on the use of AI in warfare because it remains “a matter of each military to decide what they consider to be the appropriate role for the human, and there isn’t enough international debate on that.”
An April report by the Stockholm International Peace Research Institute (SIPRI) warned that the AI supply chain is fragmented, global, and heavily dependent on civilian technologies, further complicating efforts to govern or control military uses of AI. The Pentagon has consistently incorporated privately developed software systems into its war apparatus. In mid-2025, the US Department of Defense awarded OpenAI a $200 million contract to bring generative AI into the US military, alongside similar contracts for xAI and Anthropic.
“If we’re not careful, warfare will be much more terrible, much more deadly, a much quicker, much faster thing that humans can no longer actually really be participants in, because humans won’t have the speed, won’t have the accuracy or the ability to respond,” Walsh warned.
Ukraine as a Testing Ground
Technology and AI are not inherently harmful, experts say; it is how they are used that matters. In Ukraine, ground robotic systems have also been deployed to rescue civilians and provide logistical support in heavily mined and treacherous conditions. They have saved lives that would otherwise have been lost to snipers, artillery, or booby traps.
Yet what is unfolding on the front lines is, in many ways, a testing ground for the rest of the world. The international community will need to look ahead to how these technologies might be applied and regulated in future conflicts, from conventional wars to counterinsurgency and peacekeeping operations.
There is also room for cautious optimism. Despite what Walsh calls the “moral failure” over certain military actions in Gaza, he said there is a growing recognition in the international community that these issues must be addressed. A series of UN meetings focused on regulating Lethal Autonomous Weapons Systems (LAWS) is already underway. The United Nations Institute for Disarmament Research (UNIDIR) is set to meet in June to examine the implications of AI for international peace and security.
Walsh points out that this is not the first time new weapons technologies have threatened to upend the rules-based order. Chemical weapons, for example, were eventually brought under international agreements, imperfect though they remain. “There are a lot of actors based in the Global South that do want regulation, so there might be regional initiatives forming,” said Nadibaidze. She added that even if such efforts do not initially include major powers or leading tech developers, they could still help shape emerging norms and provide a foundation for broader treaties in the future.
For now, the robot soldier is no longer science fiction. It is already on the battlefield, watching, waiting, and in some cases, pulling the trigger. The question is no longer whether autonomous weapons will be used but who will control them, and at what cost.