AI threats: how not to become a victim of robots

Artificial intelligence is changing the world. Some people are frightened by this prospect; others invest in the new technologies; still others, wary of robots, propose a new ethics for problem-solving, emotion, and free will in a world of machine intelligence. What threat do robots with advanced artificial intelligence pose to humanity, and how can it be prevented without trying to stop progress?

How can intelligent machines be dangerous: the problems of robotization and the development of AI

Artificial intelligence may fundamentally change human society, and the scale of the changes from its introduction is comparable to the consequences of the Industrial Revolution. According to the international consulting firm Frost & Sullivan, global investment in AI development, primarily from IT giants such as Alibaba, Amazon, Baidu, Google, Facebook, and others, will reach approximately $52.5 billion in 2022, almost four times more than in 2017. Ninety percent of the funds go to internal R&D. The most significant industrial players also fund research, primarily in machine learning and robotics: ABB, Bosch, GE, and Siemens, as well as automakers such as BMW, Tesla, and Toyota. Many researchers and futurologists believe that artificial intelligence may equal human intelligence in the 21st century. In the next 10-20 years, there will be significant breakthroughs in speech recognition and in the control of robots (including autonomous cars). AI will also learn to simulate emotions convincingly.

Won't this be disastrous for humanity?

The technology has enormous potential for misuse. The biggest fear is the loss of human control over a superintelligence. There are also ethical difficulties. The list of ethical concerns about artificial intelligence includes, in particular, the question of how the technology will affect human behavior and interaction. The McKinsey Global Institute points to another problem in its research: human biases - racism, sexism, and so on - can be introduced into artificial intelligence. It is not yet clear what morals will be instilled in AI, but by internalizing human biases, it will carry them into its decisions about loans, hiring, and more. Artificial intelligence can recognize not only apparent gender and race but also sexual orientation. How can we avoid having goods and services imposed on us by artificial intelligence? Won't AI-powered apps manipulate people into spending? Won't artificial intelligence try to limit people's free emotional choices altogether? There are also concerns about wrong recommendations a computer might give a doctor or a judge. Who will be responsible for AI errors - developers or users? Nick Bostrom, the well-known Swedish philosopher, writes in "The Ethics of Artificial Intelligence" that people may forget, or stop understanding, why AI advises one course of action over another. Nevertheless, the technology keeps evolving, leading to ever closer interaction and cooperation between humans and machines in production and everyday life. Inevitably, the first and most dangerous question arises: how can we make sure that machines do not accidentally or intentionally physically harm humans?

Real victims of robots

Even within the lifetime of the science fiction writer and futurologist Isaac Asimov, who formulated three laws of peaceful coexistence between robots and humans, a machine managed to break them.

The laws formulated by Asimov

  1. A robot may not harm a human or, through inaction, allow a human to come to harm.
  2. A robot must obey all orders given by a human, except where such orders would contradict the First Law.
  3. A robot must protect its own safety to the extent that this does not contradict the First or Second Law.

On January 25, 1979, in Flat Rock, Michigan, USA, Robert Williams, a 25-year-old worker at a Ford factory warehouse, was tasked with retrieving parts stored on a massive five-level rack that was also served by a manipulator machine, which moved blanks around the warehouse. Part of the machine consisted of one-ton transport vehicles: rubber-wheeled carts equipped with mechanical manipulators for moving the blanks. Williams was sent to do a job the robot couldn't handle - some parts had been left out of its reach. The worker climbed to the third level of the rack and began the task. In the meantime, one of the carts arrived and, with a blow of its manipulator, instantly killed the worker, who hadn't noticed its approach. Williams' body lay on the rack for half an hour until workers discovered it; the robot went on moving billets as if nothing had happened. The worker's relatives filed a lawsuit for $10 million in damages, and Williams, never knowing it, went down in history as the first victim of a robot.

However, by legal standards, calling Robert Williams' death murder would be a great exaggeration, because the factory machine lacked the main "qualifying attribute" of premeditated murder: motive. And even in July 2016, when a police robot carrying explosives was used to eliminate a criminal in Dallas, Texas, USA, the machine's actions were still directed by a human.

Do robots often harm humans?

Although all robots except military ones are designed so that none of Asimov's rules are violated, it is impossible to avoid casualties altogether. Are there many of them? The statistics on dramatic incidents involving robots are not extensive, although injuries and even deaths involving machines occur regularly. According to a study by the US Occupational Safety and Health Administration (OSHA), industrial robots caused at least 33 workplace deaths and injuries in the United States over 30 years. There are darker data as well: in 2013, German insurance companies estimated that about 100 incidents involving industrial robots occur in the country every year. An earlier 1987 study of companies from the United States, Germany, Sweden, and Japan examined exactly how robots harmed workers: in 56 percent of cases they caused penetrating injuries, and in 44 percent they struck workers. Most accidents were caused by poor workplace organization (20 of the 32 incidents analyzed), while human error caused only 13.

There are also statistics on the unsuccessful use of robots in specific application areas, for example, medical cobots. In 2013, a team of scientists analyzed statistics from the US Food and Drug Administration (FDA) and found that from 2000 to 2013, there were 144 deaths, 1,391 injuries, and 8,000 device malfunctions during robot-assisted surgical procedures. Among these, two fatalities and 52 injuries were caused by the robot shutting down spontaneously during surgery or making an improper movement, and one death and 119 injuries were due to parts of the robot or its accessories falling on the patient.

Why have robots become more dangerous?

The fundamental, almost philosophical problem is that robots are not aware of themselves as part of the world and can be dangerous to others. And whereas 20-40 years ago a robot was an electromechanical machine with a minimal range of repetitive tasks, and access to it could be restricted so that it would not accidentally harm anyone, today the situation is changing. Two sorters - a live one and a mechanical one - can work simultaneously on the same conveyor belt. Moreover, robots are becoming multitaskers, which requires them to move around the facility. As a result, humans and machines share a single workspace, in which anything can happen.

Industrial robots present several types of hazards, depending on their origin:

  • Mechanical hazards, which result from unintentional and unexpected movements or from the robot losing its tools;
  • Electrical hazards, such as contact with live parts or connections;
  • Thermal hazards, such as those associated with hot surfaces or exposure to extreme temperatures;
  • Noise that can be harmful to hearing.

So, what triggers these hazards? OSHA has observed that many incidents involving robots occur not under normal operating conditions but in abnormal situations, such as during reprogramming, maintenance, repair, testing, setup, or adjustment. Then there are external factors beyond human control, natural or technical, such as a power failure.

There are a total of seven main reasons why robots get out of control:

  1. control errors, that is, errors in the control system or software that lead to unstable behavior or an increase in the machine's dangerous energy potential;
  2. unauthorized access, that is, entry by an untrained technician into the safeguarded zone around the machine;
  3. mechanical malfunctions, the most unpredictable and dangerous failures, which can lead to improper or unexpected operation of the robot;
  4. natural factors, a group of causes that includes everything that can affect the robot's behavior for natural reasons, in particular electromagnetic or radio-frequency interference, as well as adverse weather conditions;
  5. power system failure: faults in pneumatic, hydraulic, or electric actuators can disrupt the electrical signals in the control lines; the result is a release of energy, electric shock, and an increased risk of fire, especially when something sparks in robots that use combustible hydraulic oil;
  6. improper installation of the robot or its components, which provokes many accidents, including during attempts to correct the errors;
  7. the human factor: programming, interface, and control errors, and violations of safety rules.

Most often, abnormal situations arise from human intervention in the robot's work or from changes in the environment that the mechanical worker cannot react to. What technologies prevent this from happening?

What technologies make robots safe?

Three essential skills make a robot worker safe:

  1. controlled stopping,
  2. speed control and zone separation,
  3. power and force limitation.

Safety-rated Monitored Stop (SMS)

A safety-rated monitored stop occurs as soon as a person enters a particular area near the machine, e.g., for tool changes, adjustments, settings, or other direct work on the robot. The power supply stays on, and the robot automatically enters an inactive mode. As soon as the operator leaves the monitored area, the robot resumes operation without additional commands. Different types of distance sensors (optical, acoustic, etc.) are used to measure the distance from the robot to a person (or another obstruction); they determine the object's distance by sending a signal and receiving a response. To ensure maximum safety, the sensors have two parallel systems for transmitting and processing the signals that report an intrusion into a given area. The signals go to two modules in the robot's controller and are processed separately, by different algorithms, and then cross-checked. Accordingly, if one of the channels fails for some reason, the robot will still stop.
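
To make the dual-channel idea concrete, here is a minimal sketch of the cross-check logic in Python. The median- and average-based channels and all threshold values are assumptions for illustration; a real controller runs this logic on certified safety hardware, not in application code.

```python
# Minimal sketch of a dual-channel, cross-checked monitored stop.
# The channel algorithms and thresholds are illustrative assumptions.

INTRUSION_THRESHOLD_M = 1.5   # radius of the monitored zone (assumed)
TOLERANCE_M = 0.05            # maximum allowed disagreement between channels


def channel_a_distance(raw_samples: list[float]) -> float:
    """Channel A: estimate the distance as the median of recent samples."""
    ordered = sorted(raw_samples)
    return ordered[len(ordered) // 2]


def channel_b_distance(raw_samples: list[float]) -> float:
    """Channel B: an independent algorithm - a simple moving average."""
    return sum(raw_samples) / len(raw_samples)


def should_stop(samples_a: list[float], samples_b: list[float]) -> bool:
    """Cross-check both channels; stop on intrusion, disagreement, or failure."""
    try:
        dist_a = channel_a_distance(samples_a)
        dist_b = channel_b_distance(samples_b)
    except (ZeroDivisionError, IndexError):
        return True  # a failed channel is treated like an intrusion
    if abs(dist_a - dist_b) > TOLERANCE_M:
        return True  # the channels disagree: distrust both and stop
    return min(dist_a, dist_b) < INTRUSION_THRESHOLD_M


# A person 2.1 m away, seen consistently by both channels: keep running.
print(should_stop([2.0, 2.1, 2.2], [2.05, 2.1, 2.15]))  # False
```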

Speed and Separation Monitoring (SSM)

This more complex safety technology involves changing the robot's behavior when a human is in a specific area near the machine - for example, slowing the robot down. Technically, it works like this: the machine continuously measures the position and speed of any object in its line of sight. SSM can be implemented in both static and dynamic conditions. The key here is the technology for recognizing surrounding objects: if the robot must see what it is grabbing, why can't it learn to recognize its peers, including people? The recognition technology is based on a pair of cameras, an RGB camera and a 3D (depth) camera. By combining the images they capture, the robot can determine an object's position in space and the direction of its movement.
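
As a rough illustration of the slow-down logic, the sketch below computes a protective separation distance, loosely inspired by the approach of ISO/TS 15066, and picks the fastest speed at which the robot can still stop in time. Every constant here is a placeholder, not a certified value.

```python
# Simplified speed-and-separation check. All constants are illustrative
# placeholders, loosely modeled on the protective-separation idea of
# ISO/TS 15066, not values from any real robot.

HUMAN_SPEED_M_S = 1.6    # assumed worst-case speed of a person approaching
REACTION_TIME_S = 0.1    # sensing and processing delay
STOP_TIME_S = 0.5        # time for the robot to brake from full speed
SAFETY_MARGIN_M = 0.2    # buffer for measurement uncertainty


def protective_distance(robot_speed_m_s: float) -> float:
    """Minimum separation needed for the robot to stop before contact."""
    reaction_window = REACTION_TIME_S + STOP_TIME_S
    return (HUMAN_SPEED_M_S + robot_speed_m_s) * reaction_window + SAFETY_MARGIN_M


def choose_speed(measured_separation_m: float, max_speed_m_s: float) -> float:
    """Pick the fastest speed whose protective distance still fits."""
    for speed in (max_speed_m_s, max_speed_m_s / 2, max_speed_m_s / 4):
        if measured_separation_m >= protective_distance(speed):
            return speed
    return 0.0  # the person is too close at any speed: stop


# A person 2.0 m away: full speed (1.0 m/s) still leaves room to stop.
print(choose_speed(measured_separation_m=2.0, max_speed_m_s=1.0))  # 1.0
```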

Power and Force Limiting (PFL)

This is one of the most widely used technologies for avoiding human injuries when a person and a machine come into contact. It involves various techniques, including force-torque sensors, which convert the measured components of force and torque vectors into signals suitable for processing by a robot that "touches" specific objects. For example, such sensors are installed on the arms of robots that sort different types of parcels, including fragile ones. Before picking up an item, the robot "feels" it, and the sensor determines what kind of object the machine is dealing with in terms of shape, resilience, and size. Based on the data obtained, the robot selects the clamping force and the speed of handling the object.
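
A toy version of this "feel, then grip" step might look like the sketch below, where read_wrench() is a hypothetical stand-in for a force-torque sensor driver and the force thresholds are invented for illustration.

```python
# Toy sketch of force-limited grasping. read_wrench() and all thresholds
# are hypothetical stand-ins, not a real robot's API or calibration.
import math


def read_wrench() -> tuple[float, float, float]:
    """Placeholder for a force-torque sensor reading (Fx, Fy, Fz in newtons)."""
    return (0.4, -0.2, 2.8)


def contact_force() -> float:
    """Magnitude of the measured force vector while probing the object."""
    fx, fy, fz = read_wrench()
    return math.sqrt(fx * fx + fy * fy + fz * fz)


def select_grip_force(probe_force_n: float) -> float:
    """Map the force felt while 'touching' the object to a clamping force:
    the softer the object feels, the gentler the grip."""
    if probe_force_n < 1.0:
        return 2.0   # very soft or fragile: minimal grip
    if probe_force_n < 5.0:
        return 8.0   # moderately firm object
    return 20.0      # rigid object: full grip


# The probe reading above (~2.8 N) maps to a moderate 8 N grip.
print(select_grip_force(contact_force()))
```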

What technologies will make the robot safer in the future?

Industrial robots of the future will interact with a changing environment and with humans, unlike their predecessors, which were only capable of rhythmically repeating a limited range of tasks. For a robot to do this and perform atypical tasks, it must, to some extent, learn basic human skills, particularly the ability to listen. Toshiba has presented a speech recognition technology based on artificial intelligence. It is the world's first development of its kind: any robot or simple electronic device will be able to process voice commands without a connection to the Internet or cloud data; in other words, the processing is built into the machine. First, the neural network processes sound, separating voice commands from extraneous noise; then a data-expansion technique is applied in the neural network. Data expansion is a technique for learning from small amounts of information, such as short verbal utterances. People are identified by training the AI on samples of their speech, allowing it to recognize specific speakers even when only a few utterances are available. Toshiba has reduced the number of required speech samples to the point where the new technology can recognize a user from just three utterances. This technology is great for robots and cobots that don't need to be verbose. At least for now. In the future, artificial intelligence will help robots not only hear better but also see, think, and move better, which in time will eliminate the barriers between human workers and their artificial counterparts.
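
To give a flavor of how recognition from only a few utterances can work, here is a minimal sketch of few-shot speaker identification by embedding similarity. This is not Toshiba's published method; embed() is a fake, hash-based stand-in for a pretrained voice-embedding model, which in reality would operate on audio, not text.

```python
# Few-shot speaker identification via embedding similarity (a sketch).
# embed() fakes a voice-embedding model with a deterministic hash vector.
import hashlib
import math


def embed(utterance: str) -> list[float]:
    """Stand-in for a neural voice embedding (real systems embed audio)."""
    digest = hashlib.sha256(utterance.encode()).digest()
    return [b / 255.0 for b in digest[:8]]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def enroll(samples: list[str]) -> list[float]:
    """Average the embeddings of a handful of utterances (e.g., three)."""
    vecs = [embed(s) for s in samples]
    return [sum(col) / len(vecs) for col in zip(*vecs)]


# Enroll a speaker from three utterances, then score a new utterance
# against the profile; a higher score suggests the same speaker.
profile = enroll(["open the door", "status report", "stop the line"])
print(cosine(profile, embed("open the door")))
```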

Regulation and control

Thus, robotization and the improvement of AI are a threat only if they go uncontrolled, and these processes need to be controlled. For example, it has been suggested that instead of introducing machines with ever greater degrees of autonomy from humans, we create devices whose control requires more attention to the ethical side of the issue. Technology giants are already creating special divisions to advise corporate executives on the ethical issues of AI development. Amazon, Apple, Facebook, Google, IBM, Microsoft, and others are forming associations and partnerships on AI that serve as open forums for discussing robotization and artificial intelligence and their impact on people and society. Despite all the risks associated with introducing robots with AI elements, governments and companies should not delay the use of artificial intelligence; otherwise, they risk being unprepared for new global challenges and opportunities. For example, education systems should begin restructuring now to train personnel not to compete with machines but to monitor and control them. In the future, this distribution of responsibilities must be carried out within each organization, public or private, and rules should be put in place to limit the scope of artificial intelligence in operational processes. After all, artificial intelligence is not being developed for its own sake but for the new opportunities it gives humans.