Wars of the future: Can combat robots fully replace humans?

Robots are gradually entering every area of human life; we even use robotic devices such as robot vacuum cleaners at home. Naturally, the military sphere has not been left untouched. Yet military conflicts are still fought by humans, sometimes at the cost of tens of thousands of lives. Hence the question: can robots completely replace humans in the future? Could wars be fought between robots while humans merely observe, spared the terrible consequences of destroyed cities, civilian deaths, forced conscription, and deployment to the front?

Robots also lack the human element, which means they would observe the laws of war. In times of danger, robots will not scatter or desert, nor will they loot, terrorize the local population, or commit other crimes.

The capabilities of combat robots would also be well known to the military leadership, leaving less room for overestimating one's own forces and for failed operations than with human troops. The list of other advantages of combat robots is long. But how realistic is their use in place of humans, and do they pose a danger to humanity in and of themselves?

History of combat robots

The idea of using machines instead of humans in war appeared long ago. The first UAV was created in 1935 in Britain. The device was remotely controlled and could fly up to 5 kilometers at a top speed of 170 km/h. However, it was not quite a combat vehicle, as the drone was used only as a target for training.

The first drones that could carry out combat reconnaissance missions appeared in the USA in 1948. The AQM-34 performed well in tests and was put into mass production three years later.

Modern combat robots

Currently, many companies around the world are focusing on developing walking robots. A four-legged combat robot designed in 2005 could carry loads of up to 110 kg, travel at 6.5 km/h, and even climb a 35-degree slope. However, it was too noisy, which is unacceptable for the army.

Other manufacturers later took a different direction, creating a special rifle for four-legged robots. It was equipped with a thermal camera with 30x magnification, which allowed the operator to detect live targets. The gun could be loaded with 6.5 mm or 7.62 mm ammunition. But these robots could not be fully described as military.

One of the first genuinely military robots was created for reconnaissance and surveillance. It is equipped with machine guns, four grenades, and laser weaponry, and carries a loudspeaker, a fire-detection system, and a siren. The vehicle has 12 hours of autonomy and can operate in autonomous mode, though the operator makes the decisions.

Various robotic systems on tracked platforms have also been created for offensive and defensive operations, mining and de-mining of territories, surveillance and target engagement, patrolling, and cargo delivery.

Why combat robots will not soon replace humans

As we can see, combat robots are increasingly being introduced into the military, and the list of tasks they perform is constantly expanding. As noted above, new robots make some combat tasks more efficient. However, we are interested not merely in their application but in the total replacement of humans with robots.

The mass introduction of robots, however, is hindered by several problems. Industrial robots, actively deployed in production facilities, are intended to replace heavy and monotonous manual labor; they thus increase efficiency and reduce production costs.

With combat robots, the situation is the opposite: they must replace highly qualified specialists. In addition, the risk of losing a robotic foot soldier is hard to justify - high-tech equipment may end up in enemy hands, which is highly undesirable. But the danger of technology falling to the enemy is not the whole story: the main obstacle to the mass introduction of military robots is their high cost.

Rise of machines and other disadvantages of modern combat robots

Modern robots can be divided into three main types: fully remotely controlled; semi-autonomous, which can act independently while the operator makes all decisions; and fully autonomous. Given the current pace of warfare, future robots must act fully autonomously if they are to replace humans completely. To do so, they must be able to analyze their mistakes, gain experience, and self-learn.

Recently, the impetus for fully autonomous machines has come from the development of self-learning neural networks. They are, however, not yet reliable enough to operate completely autonomously. Consequently, they are only suitable for assisting people, not fighting on their own.

But even if AI makes fully autonomous machines possible, their use carries serious dangers. During the Libyan conflict in 2020, a Turkish Kargu-2 quadcopter operating in autonomous mode reportedly spotted and attacked enemy fighters - a precedent for the killing of people by AI may already exist.

This caused an adverse reaction from activists: in the case of a software error, innocent people could die. Many robotics companies and human rights organizations signed an open letter to the UN demanding a ban on the development and use of autonomous combat robots.

Therefore, according to many experts, military robots should operate exclusively under human control; their independent operation must be ruled out.

Fully controlled and semi-autonomous robots also share one significant disadvantage - the need for radio communication with the operator - which makes them vulnerable to electronic warfare. Virtually every army in the world has electronic warfare units that can simply cut the link between robot and operator. The radio signal can also be lost because of terrain.

Who is fastest

Meanwhile, experts predict that the role of robots and artificial intelligence in the military conflicts of the 21st century will grow exponentially. A new arms race, and even a reshaped geopolitical landscape, is expected to accompany the total robotization of the military technosphere. Robotization means more than saturating armies with all kinds of drones - flying, driving, or floating. Elements of autonomy and artificial intelligence will permeate every area of military confrontation. Technologies are emerging that can add the capabilities of combat robots to traditional weapons systems - nuclear complexes, satellite systems, missile defense, and so on. Soon we should expect new types of weapons and units: cyber commands, AI-based reconnaissance, autonomous vehicles, and robotic units.

As for the new arms race, it is already underway. Over the two decades following the mid-1980s, the U.S., Britain, Germany, France, China, and Israel steadily increased funding for programs to create combat robots.

The US military believes that by 2030 unmanned systems will make up 30% of its total fleet of combat vehicles, increasing the combat capabilities of units 2 to 2.5 times. About 200 prototypes of such vehicles have appeared in the US over the past 20 years.

Today, more than three dozen countries are developing UAVs of various types, and the armies of more than fifty countries have them in service. In recent decades, tens of thousands of UAVs have been involved in military conflicts. The experience of recent conflicts in the Middle East has shown that drones are available not only to regular armies but also to guerrilla units: a working kamikaze drone can be assembled almost literally from tape and sticks.

Robots learn to think: good or bad

Experts link the prospects of combat robotic systems (RTKs) to the transition from remotely controlled systems to autonomous ones capable of solving tasks with minimal or no human involvement: finding a target, identifying it, and destroying it on their own. At the same time, excluding humans from the decision-making chain carries a set of risks, known in international forums as the "problem of meaningful human control." Is it possible to delegate the right to kill to a machine? How will intelligent robots interact with humans, and can a robot give an order to a human?

The military conflict itself then turns from a confrontation between armed people into a fight, at least in part, between humans and thinking machines, or between robots themselves.

Fully autonomous weapons may fall outside the framework of international humanitarian law. There is also the problem of legal responsibility: if a drone misidentifies targets and fires on them, who is responsible for the mistake?

Partially autonomous vehicles already exist and are actively used - humans control them, but certain routines run independently. There are no fully autonomous combat systems based on artificial intelligence yet, at least not officially. Some experts, however, believe that elements of AI are used in US strike UAVs; unofficial comments even suggest that drone strikes on civilian objects may have been carried out precisely because of an AI error.

Whether artificial intelligence can compete with the human brain on the battlefield is still a matter of debate. AI specialists say yes, absolutely; the military has doubts. Deep-learning neural networks require considerable computing power and large volumes of relevant, verified data - and it is unknown how they will behave on incorrect data.

Elements for robots

Another question is how autonomous RTKs will perform in different environments - air, land, and water - and where they will be most effective. Many experts believe the ideal setting for intelligent weapons is the sea.

The rapid development of surveillance and strike drones will be a key trend for the next 30-40 years. Flotillas of unmanned underwater vehicles will be able to protect ship formations by detecting submarines, mines, and other threats. Such sensors are already being installed on remotely piloted submersibles. As AI systems mature and task protocols are programmed, remotely controlled submersibles will give way to autonomous ones.

For the marine environment, the necessary algorithms are relatively easy to specify. The air environment is far more complicated: a strike drone, for example, must independently select targets against a complex surface (relief, residential buildings) crowded with objects - some of which must be hit, while others must not be hit under any circumstances.

If a machine is programmed for a specific set of tasks, it will perform them more efficiently than a human. But in complex combined battles, with a flood of diverse information and objects, its behavior can become unpredictable. Technically this problem is solvable, including with AI for data processing: machines will gradually learn to recognize images and identify targets no worse, and possibly better, than humans. It is a difficult task, however, so remotely piloted strike vehicles will probably be around for a long time to come.

Problems and prospects for further development

The prospects for ground-based combat RTKs remain murky. Today's land robots are wheeled or tracked platforms ranging from a child's radio-controlled car to a small tank, with weapons from pistols to modules with automatic cannons, grenade launchers, and guided anti-tank missiles. Many companies worldwide - from defense giants to small private firms - are trying to build something similar. For the most part, these are experimental systems, and the range of their tasks is not very clear.

But all these developments face two big problems: control and mobility. Maintaining a communication channel on the ground is much harder than in the air - terrain and buildings get in the way. This is why the range of wheeled and tracked RTKs is many times, or even orders of magnitude, smaller than that of UAVs. As for cross-country capability, wheeled and tracked robots move poorly on rugged terrain and can barely (or not at all) cross fields of rubble or climb stairs. They cannot accompany soldiers in street combat, on rocky terrain, and so on.

The range of tasks assigned to land combat robots is therefore limited; replacing soldiers with them is entirely unrealistic for now. But they can help greatly in combat and logistical support: reconnaissance, including combat surveillance, protection, and delivery of consumables. Ground systems also handle mine clearance and, in the long term, decontamination of territories and radiation, chemical, and biological protection work.

However, the search for a suitable propulsion system for ground-based RTKs continues; most Russian and foreign experts see the future in walking systems. As for autonomy, the human would only define the conditions under which the machine identifies a target as hostile and opens fire without an order, or at least without the operator's permission.

Challenges for humanity

This raises many new challenges for humanity, from technical to political, legal, and ethical. In the traditional use of weapons - including remotely piloted drones - there are always specific people who gave the order and pulled the "trigger." But who exactly, and how, can be found guilty of a killing that an autonomous robot decided on by algorithm? Who is responsible if the program fails and machines start killing civilians, medics, peacekeepers, or their own troops? Who can say for sure whether it was a malfunction, a hack, or a malicious imitation of a mistake? And who answers if a war machine gets entirely out of control and starts killing everyone it can?

Legal and ethical issues

At the UN, the compliance of autonomous combat vehicles with international humanitarian law, human rights law, and the convention on "inhumane" weapons has been under active discussion since 2013. Proposals for a general moratorium on the development and operation of autonomous combat systems have been made repeatedly, but no decisive actions have been taken so far.

At UN-sponsored consultations in Geneva in 2018, only 26 of the 88 participating countries supported a ban on autonomous combat systems - and the leading military powers were not among them. The delegations could agree only on a dozen "potential principles" of the most general nature: that developments should comply with "humanistic principles" and that at least one human being must, in any case, bear responsibility for their use.

Neither the 2018 European Parliament resolution nor the 2019 campaign to ban autonomous combat robots led by Nobel Peace Prize laureate Jody Williams changed the situation - even though 130 civil society organizations from 60 countries and UN Secretary-General António Guterres joined in. To this day, autonomous combat systems sit in a virtually unregulated legal "gray zone" of international humanitarian and military law.

By August 2020, Human Rights Watch's Stop Killer Robots campaign had won the support of 165 organizations from 65 countries - yet only 30 of nearly 200 countries have spoken out in favor of banning combat robots.

International forums and meetings on banning autonomous combat vehicles are held regularly around the world, but there is no serious prospect of a ban: the demand for such systems in today's militaries is too great and will only grow.

Humanism and aversion to violence are steadily growing in most societies. Research shows that in a growing number of countries, fewer people are willing to go into battle with a gun in hand rather than sit behind a drone console in a safe bunker. The same goes for the willingness to harm others, even military enemies.

Moreover, an autonomous machine can have incomparably better reaction speed and accuracy than any human operator. The device does not question the ethics or legitimacy of a particular operation, especially an unofficial one. Responsibility becomes blurred - which is sometimes very convenient when no one wants to answer for the consequences.

A separate major issue will inevitably be the combination of autonomous combat systems with neural networks. Neural networks can significantly increase the flexibility, adaptability, and overall effectiveness of combat vehicles, making them self-learning and more lethal - but this also makes their behavior much less predictable, even for their developers.

All this opens up a host of complex issues that will have to be resolved one way or another in the coming decades. The intensity of the international debate over autonomous combat systems will grow in parallel with the scale of their combat use. Much will depend on practice: on their actual effectiveness and usefulness on the battlefield, and on the number of high-profile tragedies associated with their service. Perhaps future wars will turn into clashes between armies of robots with almost no human participation. Or perhaps public pressure will be so intense, and the benefits so questionable, that the leading military powers themselves will eventually see fit to impose a ban and monitor its implementation. Even in that case, however, the course of progress means that quite soon an autonomous combat vehicle, including one with a neural network, could be assembled "in the garage" - bringing with it another mountain of military, social, and political problems.


One thing is clear: a battle of machines rather than people is out of the question in the near future. However, the mass use of robots for military purposes is only a matter of time.