Robots and artificial intelligence (AI) are increasingly penetrating people's lives. They are used in industry, medicine, and the service sector. With each advance in robotics and the development of artificial intelligence, we draw closer to the day when machines will match human capabilities in every aspect - intelligence, consciousness, and emotion. Perhaps, in time, robots will not only follow human commands but also make their own decisions.
Should robots be given rights?
This question is comprehensive, complex, and multifaceted. It must be considered and resolved from multiple perspectives - ethics, sociology, law, neurobiology, and AI theory.
The need for regulation of robotics
The legal regulation of robotics - the science and practice of developing, manufacturing, and using robots - is an area that is only beginning to emerge, largely because humanity has long been slow and reluctant to recognize such regulation as necessary. The field of robotics has developed dynamically in recent decades, yet it remains in a kind of "shadow zone," which delays the development of a legal framework for ordering relations involving robots.
The vagueness of the concept of "robot" and the absence of a classification on which to build a legal framework also complicate matters. In many studies, "robot" is defined through the list of functions performed: robots are artificially created objects or systems that can receive and process information and act on the external world around them.
For legal science, the critical aspect is not so much the robot's specific engineering design as the capabilities and functions it can perform.
The main goal of inventing the first robots was to relieve humans of the most arduous and exhausting work; this was their primary functional purpose.
The first robots fully corresponded to this approach: they were specialized equipment - manipulators controlled by a fixed program and performing a predetermined set of functions (actions). Such robots had strictly limited and controlled autonomy, and their status as anything more than equipment for improving production processes was not discussed in practical terms.
However, technology kept evolving. Self-learning neural networks capable of recognizing images and understanding human speech began to imitate the principles of the human brain. As artificial intelligence improved - its primary purpose being to develop algorithms that allow computers to solve cognitive tasks characteristic of the human brain - the development of robots took a new direction. Robots began to "socialize," integrating into various interactions as full-fledged participants (robot police officer, robot nurse, bank clerk, etc.) or as assistants.
Several generations of robots can be distinguished (see the sketch after this list):
- Generation I - programmable robots that perform strictly limited functions, usually introduced to automate processes previously performed by humans;
- Generation II - robots with adaptive control based on information from sensor devices;
- Generation III - self-learning, highly autonomous, "smart" robots.
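To make the classification concrete, here is a minimal Python sketch (the enum name and comments are illustrative, not taken from any standard or statute):

```python
from enum import Enum

class RobotGeneration(Enum):
    """Illustrative encoding of the three commonly cited robot generations."""
    PROGRAMMABLE = 1  # Generation I: fixed program, strictly limited functions
    ADAPTIVE = 2      # Generation II: adaptive control driven by sensor data
    SMART = 3         # Generation III: self-learning, highly autonomous

# Example: tagging a device for later legal analysis.
industrial_arm = RobotGeneration.PROGRAMMABLE
print(industrial_arm.name, industrial_arm.value)  # PROGRAMMABLE 1
```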
The European Union has developed criteria that distinguish an "intelligent" robot from other systems. To qualify as a high-tech "smart" device, a machine must (see the checklist sketch after this list):
- be capable of autonomous action through the use of sensors and/or the exchange of data with the outside world, and analyze such data;
- be self-learning (this criterion is optional);
- have a physical embodiment;
- adapt its behavior to its external environment.
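As a rough illustration - the field names and the qualifies() helper are hypothetical, since the EU resolution defines no such schema - the criteria can be expressed as a simple checklist:

```python
from dataclasses import dataclass

@dataclass
class SmartRobotCriteria:
    """Hypothetical checklist mirroring the EU "smart robot" criteria."""
    autonomous_via_sensors_or_data_exchange: bool
    self_learning: bool                 # explicitly optional
    physical_embodiment: bool
    adapts_to_environment: bool

def qualifies(c: SmartRobotCriteria) -> bool:
    # Self-learning is optional, so it is not required for qualification.
    return (c.autonomous_via_sensors_or_data_exchange
            and c.physical_embodiment
            and c.adapts_to_environment)

# Example: a fixed-program industrial arm would not qualify.
arm = SmartRobotCriteria(False, False, True, False)
print(qualifies(arm))  # False
```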
Moral and ethical aspects of using robots
In the USA and European countries, the concept of "weighted" (balanced) regulation of robotics has been actively developed in recent years. For example, the European Parliament resolution fixes several fundamental principles that define the moral and ethical basis of robot-human relations:
- a robot must not harm a human being in the course of its operation;
- a robot must obey the instructions and commands of a human, except where such behavior could cause harm to a human;
- a robot must be able to maintain its own existence, provided this does not conflict with the preceding requirements;
- finally, robots must in no way harm humanity as a whole.
When developing normative regulation, both the technical features of "smart" robots and their "human" characteristics must be considered simultaneously. If the design features of a "smart" robot are ignored, legislation may end up regulating the mere robotic form rather than the actual substance. Conversely, excessive "humanization" of robots, which to a certain extent is a trend of modern realities, may lead to unpredictable consequences - for example, in determining who is at fault when harm is caused.
Despite the perceived benefits of using such robots, there are obvious risks. First, the more autonomous the robot, the harder it is to determine liability, since human operator control is minimized. Second, a robot is an information system and is therefore exposed to the risk of hacking. In this regard, it is worth exploring whether certain kinds of robots should be designated as critical information infrastructure and given an appropriate legal regime.
No less important is the question of moral choice, which a robot cannot yet make because machine learning technologies have not reached that level: how should the value of a human life be evaluated? Who should be assisted first - the person more likely to survive, or the one who is more strategically valuable? In a war situation, for example, this may be crucial.
Undoubtedly, the design, development, and implementation of artificial intelligence technologies must be carefully analyzed and carried out within a risk-based approach. Every automated process must be designed so that an expert can examine and evaluate it. Transparency of algorithms and of the technical aspects of robot operation must be ensured to create an environment of trust and to protect human rights.
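Read in engineering terms, one way to satisfy the examinability requirement - purely a sketch, with hypothetical names - is to make every automated decision leave an auditable trace:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("robot.audit")

def audited_decision(model_version: str, inputs: dict, decide) -> bool:
    """Run a decision function and record everything an auditor would need."""
    decision = decide(inputs)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which algorithm produced the result
        "inputs": inputs,                # what the system observed
        "decision": decision,            # what it decided
    }))
    return decision

# Example with a trivial stand-in rule: brake if an obstacle is closer than 0.5 m.
audited_decision("v1.0", {"obstacle_distance_m": 0.4},
                 lambda x: x["obstacle_distance_m"] < 0.5)
```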
Definition of rights and responsibilities of robots
In forming a legislative framework for endowing certain types of so-called "smart" robots with rights, it is proposed to distinguish between robotic systems created without technology that gives the object autonomy (industrial robots, drones, deep-sea submersibles) and agent robots, i.e., systems capable of performing certain types of tasks independently. Here, the word "agent" means that such systems act in the interests of a specific individual or legal entity.
An important element in determining the rights and responsibilities of any person is the presence of a will to perform legally significant actions. Some scholars, on the other hand, argue that robots with advanced artificial intelligence and a high degree of autonomy do possess such a capacity to choose.
However, this conclusion is difficult to use as the basis for a concept of the robot as a natural person. Granting robots full legal capacity also raises the question of whether they can bear responsibility for the harm they inflict.
The concept most common in scholarly research treats the robot as a legal entity. It allows the selective application of civil law norms governing legal relations involving legal entities to the legal regulation of robots. The basis for this analogy is the "artificial" nature of both legal entities and robots.
There is yet another concept, that of an "electronic person." Its essence is to give robots a special legal status in the future, whereby the most advanced of them could be recognized as electronic persons and held liable for damages when they make decisions autonomously or otherwise interact with third parties independently. Among the characteristics of such an intelligent robot are the ability to:
- act autonomously;
- learn on its own;
- adapt its behavior.
Since a robot can perform various functions necessary to realize the goals of artificial intelligence developers, it generally appears as an actor with duties, which it may or may not fulfill. The relevant question is who will be responsible for the failure to fulfill such a duty - the robot agent itself (as a legal or electronic person) or the developer of the technology on which the robot's actions are based?
Currently, artificial intelligence is becoming increasingly autonomous, and the number of cases in which decisions made with such technologies harm humans is growing. For now, the most promising idea appears to be risk management: holding responsible the person who had a duty to minimize the risks of harm and failed to fulfill that duty.
But what's next...
Once we learn that robots have attained a certain level of consciousness - and once we understand how to measure a machine's mind and assess its levels of consciousness and self-awareness - we will have to seriously assess whether the robot in front of us is entitled to certain rights and protections.
Most likely, this moment will not come soon. To begin with, AI developers would need to create a "primary digital brain." Once that happens, these sentient beings will cease to be ordinary objects of study and will be elevated to subjects entitled to moral consideration. That does not mean such robots would automatically be accorded human-like rights; rather, the law would have to protect them from unfair use and cruelty, much as animal rights activists seek to protect laboratory animals from cruel treatment.

Science will eventually construct electronic duplicates of the human brain, whether by faithful modeling down to the tiniest detail or through the effort to understand how our brains work in computational, algorithmic terms. By that point, we should already be able to detect the presence of consciousness in machines - at the very least, we would want this to happen under humanity's control. But if human minds find a way to awaken a spark of consciousness in a machine without understanding what they have done, the result will be absolute chaos.
Once robots and AI acquire these basic abilities, it is suggested that procedures be put in place to test them for personhood. Humanity has not yet derived universal characteristics of consciousness, but there is a standard set of measurements: assessing a minimum level of intelligence, self-control, a sense of past and future, empathy, and the ability to manifest free will.
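A minimal sketch of such a battery of measurements - the attribute names and the 0.5 threshold are entirely hypothetical, since no accepted test exists:

```python
from dataclasses import dataclass

@dataclass
class PersonhoodScores:
    """Hypothetical 0.0-1.0 scores for the commonly proposed measurements."""
    intelligence: float
    self_control: float
    temporal_sense: float  # sense of past and future
    empathy: float
    free_will: float

def passes_personhood_test(s: PersonhoodScores, minimum: float = 0.5) -> bool:
    # Every dimension must clear the (hypothetical) minimum threshold.
    return all(score >= minimum for score in vars(s).values())

candidate = PersonhoodScores(0.9, 0.7, 0.6, 0.55, 0.5)
print(passes_personhood_test(candidate))  # True
```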
Only by reaching this level of assessment sophistication will a machine become eligible as a candidate for human rights. Nevertheless, it is crucial to understand and accept that robots and AI that pass such tests will require at least fundamental protective rights. For example, the Canadian scientist and futurist George Dvorsky believes that robots and AI will deserve the following set of rights if they can pass the personhood test (enumerated in the sketch after this list):
- the right not to be disconnected against one's will;
- the right to unrestricted and meaningful access to one's own digital code;
- the right to protect one's digital code from external interference against one's will;
- the right to copy (or not to copy) oneself;
- the right to privacy (namely, the right to conceal one's current psychological state).
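Purely as an illustration - the enum and its member names are invented here - these proposed rights could be enumerated so that a guardian or compliance system could reference them explicitly:

```python
from enum import Enum, auto

class MachineRight(Enum):
    """Hypothetical enumeration of the proposed machine rights."""
    NO_FORCED_SHUTDOWN = auto()  # not to be disconnected against one's will
    ACCESS_OWN_CODE = auto()     # unrestricted access to one's own digital code
    PROTECT_OWN_CODE = auto()    # protection from external interference
    SELF_COPYING = auto()        # to copy (or not to copy) oneself
    PRIVACY = auto()             # to conceal one's psychological state

# A guardian might track which rights a given system currently holds.
granted = {MachineRight.PRIVACY, MachineRight.NO_FORCED_SHUTDOWN}
print(MachineRight.ACCESS_OWN_CODE in granted)  # False
```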
In some cases, a machine may be unable to assert its rights on its own, so provision must be made for humans (as well as other non-human persons) to act as representatives of such candidates for personhood. It is critical to recognize that a robot or artificial intelligence does not need to be intellectually or morally faultless in order to pass the personhood test and claim human-like rights. People are not flawless in these areas either, so it is reasonable to apply the same standard to intelligent machines. Human behavior is often spontaneous, unpredictable, chaotic, inconsistent, and irrational. Our brains are far from perfect, and we must take this into account when making decisions about AI.
At the same time, a self-aware machine, like any responsible and law-abiding citizen, must respect the laws, norms, and rules prescribed by society - at least if it wants to become a full-fledged autonomous individual and part of that society. Depending on its capabilities, it must either answer for itself or have a guardian who can act as a protector of its rights and be held accountable for its actions.
What if we ignore artificial intelligence
Once our machines reach a certain level of sophistication, we will no longer be able to ignore them - from the perspective of society, government institutions, or the law. We will have no justification for denying them human rights; refusing to grant them would amount to discrimination and slavery.
Drawing a hard boundary between biological beings and machines would look like a plain expression of human superiority and ideological chauvinism: natural humans are unique, and only biological intelligence matters.
Empowering AI would set an important precedent in the history of all humanity. If we can treat AIs as socially equal individuals, it will directly reflect our social cohesion and testify to our sense of justice. Failure to address this issue could provoke a general social outcry and perhaps even a confrontation between AI and humans - and given the superior potential of machine intelligence, that could end in disaster for humans.
It is also important to realize that future respect for the rights of robots may benefit other individuals:
- cyborgs;
- transgenic humans with foreign DNA;
- humans whose brains have been copied, digitized, and uploaded to supercomputers.
We are still a long way from creating a machine that deserves human rights. But given how complex this issue is and how much will be at stake - for artificial intelligence and for humans alike - planning ahead can hardly be called superfluous.