Artificial intelligence, its integration, and its impact on society

Artificial Intelligence (AI) is often called the most impressive technology of our era, promising to transform our economy, lives, and opportunities. Some even see AI as a path to rapidly creating "intelligent machines" that will soon surpass human skills in most areas. AI has advanced dramatically over the last decade, especially through new methods of statistical information processing and machine learning that allow us to work with vast amounts of unstructured data. It has already affected almost every area of human activity: AI algorithms are now used by all major online platforms and across industries, from manufacturing and healthcare to finance, wholesale, and retail. Government agencies, such as the judiciary, customs, and immigration, have also come to rely on AI.

Concerns about AI

Nevertheless, there are worries about possible negative consequences of AI development. Some believe that intelligent computer systems could evolve into a super-intelligence and get out of control. Others are concerned about less distant prospects - for example, the possibility that classifiers used in high-stakes healthcare or criminal justice decisions may malfunction because of system errors and inaccuracies, resulting in unfair or incorrect choices. Skeptics also fear potential legal and ethical conflicts around decisions made by automated systems, difficulties in understanding the logic behind such decisions, new forms of surveillance and related threats to civil liberties, the possibility of manipulating human consciousness through AI, the potential use of AI for criminal purposes, the far-reaching effects of military applications of AI, and the prospect of reduced demand for human labor, increased unemployment, and social inequality. The proliferation of artificial intelligence systems raises a range of technical, philosophical, legal, ethical, and other problems. Some of these are addressed in a 2016 Stanford University report, Artificial Intelligence and Life in 2030, including security issues; defining the legal personality of artificial intelligence; ensuring the privacy of personal data; civil and criminal liability; certification of artificial intelligence systems; and the negative impact of artificial intelligence systems on human jobs. Some recommendations for resolving issues arising from the use of artificial intelligence systems are contained in the European Parliament Resolution of 16 February 2017 with recommendations to the Commission, "Civil Law Rules on Robotics."

Challenges and threats of AI

The Institute of Electrical and Electronics Engineers (IEEE) is conducting research in artificial intelligence ethics, which is expected to result in technical documents regulating the development of artificial intelligence systems in accordance with ethical standards. The first such document was the recommendations on ethically aligned design, released under a United States Creative Commons Attribution-NonCommercial 3.0 license (Institute of Electrical and Electronics Engineers n. d.). These recommendations list threats and problems in implementing autonomous artificial intelligence systems, including ethical ones:

  • in the area of general principles: the need to define and enshrine the principles of the supremacy of human rights, responsibility, transparency, learning, and awareness in the development of artificial intelligence systems;
  • in the sphere of integrating ethical norms and values into artificial intelligence systems: the non-universality of moral standards and their variability depending on users and tasks; the possibility of conflict between moral norms and values; possible built-in or algorithmic errors of artificial intelligence systems that can lead to violations of ethical standards with respect to specific subjects; the need to achieve a certain level of trust between people and artificial intelligence;
  • in the area of the safe use of artificial intelligence and artificial super-intelligence: the risk of unexpected behavior of artificial intelligence; the difficulty of improving the safety of future artificial intelligence systems;
  • in the realm of personal data: the ability of artificial intelligence to draw inferences about personal data from information that people share in everyday life;
  • in the realm of autonomous weapon systems: the unpredictability of such systems; the removal of human control over the battle space, which could escalate tensions and lead to unintended human rights violations; an unrestricted market of buyers of autonomous weapon systems, which would lead to their proliferation and uncontrolled use;
  • in the economic sphere: restrictions on artificial intelligence technology that could slow innovation; technological change outpacing the retraining of workers in new technology; the risk of rising unemployment; a widening socioeconomic gap between developed and developing countries;
  • in the area of law: the need for transparency in the work of artificial intelligence and respect for individual rights; problems of legal liability for harm caused by artificial intelligence systems; the need for legally mandated verification of artificial intelligence systems.

Code of ethics for robotics developers

The European Parliament Resolution of 16 February 2017 contains recommendations to the European Commission on civil law rules on robotics, and a Code of Ethics for Robotics Developers is annexed to the resolution. The main principles of this Code are:

  • "do good" - robotic activities should serve the benefit of humans;
  • "do no harm" - robots must not harm humans;
  • the principle of autonomy - human beings have the right to make a free, informed decision about the conditions of their interaction with robots;
  • the principle of justice - all benefits resulting from robot activity must be distributed fairly.

Ethical and Legal Problems of Application of AI

Considering the studies and recommendations mentioned above, the following ethical and legal problems of applying artificial intelligence systems can be distinguished.

  1. The possibility of recognizing a human-like carrier of artificial intelligence as a subject equal to a human. Admittedly, this question is not yet on the agenda, given the current level of science and technology. But in the future, the development of artificial intelligence systems could raise the question of granting humanoid carriers the status of subjects of law, with powers equal to those of people.

If a machine can think and feel like a human, should it be considered human? Some researchers believe that if a carrier of artificial intelligence possesses will and consciousness, it can be endowed with all human rights. In support of this view, they argue that human technologies such as in vitro fertilization and genetic cloning create humans with souls who are no different from those born in the "traditional" way. If humans learn to encode the human brain digitally, then artificial intelligence will become our digital version, which must likewise have a soul.

Even before any accountability measures are established, there will, in any case, be a moral question about the ethical treatment of robots that carry artificial intelligence and completely mimic humans.

Related to the legal personality of artificial intelligence is the question of the legal status of the results of intellectual activity created by artificial intelligence, and of who holds the intellectual rights to such works. For example, robot poets already compose poems using existing grammatical models, news feeds, and lexical databases, constantly improving and expanding their vocabulary, and computer musicians write music.

Scholars propose the following possible regimes of legal regulation of intellectual property rights in the results of intellectual activity produced with the participation of a carrier of artificial intelligence or directly by it:

  • granting intellectual property rights to the carrier of artificial intelligence, if it is endowed with legal personality;
  • completely refusing to endow the carrier of artificial intelligence with any intellectual property rights, with intellectual rights vesting instead in the person who created the basic concept of the result of intellectual activity, the user-operator, the producer of the computer system equipped with artificial intelligence, the owner of the carrier's essential software, or the owner of the computer system equipped with artificial intelligence;
  • transferring works created by artificial intelligence into the public domain;
  • treating the works of the carrier of artificial intelligence as works made for hire;
  • granting intellectual rights simultaneously to the carrier of artificial intelligence and to one of the persons specified in the second variant of the legal regime.

It seems that, until artificial intelligence systems are granted the status of subjects of law, intellectual rights to works created by artificial intelligence should vest in the owner of the artificial intelligence system, since the developer of such systems already derives an economic benefit from their sale. For the owner, the possibility of acquiring intellectual rights will be an incentive to develop such systems, which will increase public demand for their development and production.

  2. Artificial intelligence as an object of admiration and worship. The unlimited opportunities provided by artificial intelligence, its efficiency, productivity, and ability to help humans solve a variety of tasks, from household matters to space exploration and comprehension of the mysteries of the universe, make it not only a rewarding and fascinating object of research but also an object of admiration, which may reach the level of worship and its transformation into an idol.

American engineer Anthony Levandowski created the first-ever religion worshiping artificial intelligence, called Way of the Future. The charter documents of this religious organization state that its activities will focus on worshiping a deity based on artificial intelligence, developed with the help of computer hardware and software. Levandowski stresses that if something is a billion times smarter than the smartest human, it can be called a deity.

Of course, it is possible to treat the emergence of this new religion with a certain irony and skepticism, but we should not underestimate its prospects. Already, a significant part of the population, especially young people, has a cult of technical innovations: the latest smartphones, computer games, programs, and so on. People sometimes go to great lengths to possess new gadgets, from severe self-restraint and austerity to crime. And they are driven not only by fashion and a desire to demonstrate a certain standard of living but also by an entrenched need to inhabit a constantly updated virtual, digital space, which requires the latest electronic devices; these have become an important, meaning-defining part of life. The promising results of artificial intelligence, which offers far more possibilities than existing smartphones, have every chance of absorbing people's inner world and becoming a global object of worship.

  3. The possibility of artificial intelligence systems harming the highest values: human life and health. An incorrectly designed algorithm in an artificial intelligence system can lead to large-scale negative consequences.

Cases of artificial intelligence systems harming humans are multiplying as artificial intelligence evolves. In March 2018, a self-driving Uber car failed to detect a woman crossing the road and struck and killed her. In May of that year, IBM's digital assistant Watson recommended inappropriate and health-threatening medications to cancer patients.

Is it ethically acceptable to develop artificial intelligence systems that can lead to human death? There is no definite answer to this question. Many technical inventions, from the bicycle to the airplane to the spaceship, whether operated by a human or by a machine, can, under a confluence of certain factors, cause harm to humans. But abandoning scientific development in such technical fields would halt the advance of civilization.

Another ethical and legal question arises: who is responsible for harm caused by artificial intelligence systems? Artificial intelligence makes its own decisions and implements them. The developers of artificial intelligence systems do not set exhaustive algorithms for action and decision-making; such systems can self-learn and function autonomously. How reasonable is it to hold a pilot or a doctor who uses artificial intelligence technology responsible for the AI's errors? If responsibility is placed on the people who use artificial intelligence systems, people will avoid using them, which will hinder the development of the relevant technologies. If liability is placed on the developers of artificial intelligence systems, the consequences may be similar. And if liability is assigned to none of these parties, there is no way to repair the damage that artificial intelligence causes to humans.

Many believe that to solve this problem it is necessary to introduce compulsory civil liability insurance for harm caused by artificial intelligence systems to humans, under which insurance companies will compensate for the damage. In addition, if the insurance compensation is insufficient to cover the injury, the developer and the manufacturer should be jointly and severally liable to the person using the artificial intelligence system.
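The mechanics of such a compensation scheme can be illustrated with a toy calculation. The figures and the function below are purely illustrative, not taken from any actual insurance regulation:

```python
# Toy illustration of the proposed compensation scheme: the insurer pays first,
# up to the policy limit, and the developer and manufacturer are jointly and
# severally liable for the shortfall, i.e. the injured person may recover the
# whole remainder from either of them. All figures are illustrative.

def compensate(damage: int, insurance_limit: int) -> dict:
    insured = min(damage, insurance_limit)
    shortfall = damage - insured
    # Joint and several liability: each of the two parties is answerable
    # for the full shortfall, not merely a half-share of it.
    return {"insurer": insured, "developer_or_manufacturer": shortfall}

payout = compensate(damage=100_000, insurance_limit=60_000)
# The insurer covers 60,000; either the developer or the manufacturer
# can be required to pay the remaining 40,000 in full.
```

When the damage falls within the policy limit, the shortfall is zero and the supplementary liability never arises.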

  4. Decision-making by artificial intelligence regarding people's rights, duties, and legal responsibility may contradict fundamental legal and ethical values. If artificial intelligence systems make decisions about people's rights and obligations, or about their responsibility for violations, then, given the opacity of the algorithms behind those decisions, fundamental rights that humans have defended for centuries may be violated: the right to examine all documents affecting one's rights and freedoms; the right to a reasoned decision that gives a detailed legal qualification of the actions committed by a citizen; and the right to appeal against the decisions of authorities and officials.

Therefore, if a carrier of artificial intelligence is endowed with the right to make legally significant decisions concerning people, the following principles must be observed:

  • it must be possible to present the operations of the artificial intelligence in a human-understandable form (the initial information, how it was processed, the legal qualification, and the motives for the decision made);
  • a person must have the opportunity to appeal to a human against a decision made by artificial intelligence.
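These two principles can be made concrete in software. Below is a minimal, hypothetical sketch of a decision record that a legally significant AI system could emit; the class and field names are illustrative, not drawn from any real system or standard:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a structured record an AI decision system could emit so
# that a human reviewer can reconstruct the decision and a person can appeal it.
# All names are illustrative, not taken from any real standard.

@dataclass
class DecisionRecord:
    source_information: dict   # the initial facts the system received
    processing_steps: list     # how the information was processed, in order
    legal_qualification: str   # the legal characterization of the conduct
    motives: str               # the reasoning behind the decision
    decision: str              # the outcome affecting the person
    appeals: list = field(default_factory=list)  # appeals lodged with a human

    def explain(self) -> str:
        """Render the decision in a human-understandable form."""
        steps = "\n".join(f"  {i + 1}. {s}" for i, s in enumerate(self.processing_steps))
        return (
            f"Decision: {self.decision}\n"
            f"Qualification: {self.legal_qualification}\n"
            f"Motives: {self.motives}\n"
            f"Processing steps:\n{steps}"
        )

    def appeal(self, reviewer: str, grounds: str) -> None:
        """Record an appeal to a human reviewer, as the second principle requires."""
        self.appeals.append({"reviewer": reviewer, "grounds": grounds})


record = DecisionRecord(
    source_information={"application_id": 42},
    processing_steps=["validated input", "scored risk", "applied threshold"],
    legal_qualification="administrative refusal",
    motives="risk score exceeded the permitted threshold",
    decision="application denied",
)
record.appeal(reviewer="case officer", grounds="threshold applied incorrectly")
```

The point of the sketch is architectural: explainability and appealability are properties of the record the system is obliged to produce, not afterthoughts bolted onto an opaque output.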
  5. Artificial intelligence exacerbates stratification and inequality, creating conditions for the centralization of power. The development of artificial intelligence will lead to even greater inequality between those who possess the technology and those who do not, because it creates conditions for the centralization of power and the concentration of resources in the hands of those who control it.
  6. The introduction of artificial intelligence will create mass unemployment. The consulting company Bain estimates that the introduction of robots and artificial intelligence will add 2.5 million Americans to the unemployed every year (by comparison, in the early 20th century, during the transition to an industrial economy, the figure was half that). This would hurt all social institutions, including the family, and lead to demographic problems.
  7. The intellectual superiority of carriers of artificial intelligence over humans. Human beings have certain biological limits to their development, while artificial intelligence has none. Humans are constrained by slow natural evolution and cannot compete with artificial intelligence in speed of development. As a result, artificial intelligence, once it begins to be perceived as a threat, could be wiped off the face of the earth. Conversely, carriers of artificial intelligence may come to view humans as an obstacle to achieving their designed goals, and humans may find themselves enslaved or purposefully destroyed by artificial intelligence. The algorithms embedded in artificial intelligence should therefore provide for the unconditional possibility of deactivation by humans.
  8. Alienation of people from each other, and human loneliness. The spread of artificial intelligence systems into spheres where personal communication and interaction, and the expression of human feelings and emotions, are significant can alienate people from one another and increase loneliness.

For example, robot babysitters today can monitor children, report to parents about the child, play and converse with children, and participate in their education. The problem is that, instead of the parents, whose love, care, and affection are necessary for the child's full development, a robot is with the child, and from it the child cannot get what he or she needs most. This can affect the child's mental, physical, intellectual, and emotional development, lead to the absence of the necessary emotional connection between child and parents, and cause difficulties in upbringing later on. Similar problems arise when robotic caregivers are used to care for the sick, who also need attention and care from humans, not machines.

  9. Whether artificial intelligence can follow ethical norms in its decision-making. Artificial intelligence systems can face moral choices, particularly in non-standard situations. For example, the control system of an intelligent driverless vehicle may have to choose between hitting a pedestrian and a maneuver that threatens the lives and health of its passengers. Is it possible to build the requirement to follow ethical norms into decision-making by artificial intelligence?

The need for regulation


To summarize: the current problems of AI are those of unregulated AI and of ignoring its large-scale consequences for society. Given the widespread use of AI and big data, AI experts suggest introducing a new regulatory approach known as the precautionary regulatory principle. A system of ethical principles to be observed in the development of artificial intelligence systems should also be developed and approved at the legislative level. Addressing this challenge effectively requires international cooperation and the involvement of many experts who understand and can analyze the interplay between artificial intelligence technologies, software goals, and ethical categories.

The algorithms embedded in artificial intelligence must provide for the unconditional possibility of deactivation by humans. Moreover, the control algorithms of an artificial intelligence system should be designed so that the system cannot function if ethical norms are violated; that is, moral norms should be the foundation of the artificial intelligence system rather than an additional set of criteria the system consults when making decisions.

Compulsory civil liability insurance for harm caused by artificial intelligence systems to humans should be introduced, under which insurance companies will compensate for the damage. If the insurance indemnity is insufficient to compensate the person harmed, the developer and the manufacturer should be jointly and severally liable to the person using the artificial intelligence system.

When empowering artificial intelligence to make legally significant decisions concerning humans, the following principles should be legislated: it must be possible to present the operations of the artificial intelligence in a human-comprehensible form (the source information, how it was processed, the legal qualification, and the motives for the decision made), and a person must have the opportunity to appeal a decision made by artificial intelligence.
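The two technical requirements above, moral norms as the foundation of the system and an unconditional human kill switch, can be read as an architectural constraint: the ethics check and the deactivation check sit outside the decision logic and gate every action. A minimal, hypothetical sketch, with all names illustrative:

```python
# Hypothetical sketch of an "ethics-first" control loop: the ethics check and
# the human-controlled kill switch gate every action, rather than being one
# more criterion inside the decision logic. All names are illustrative.

class KillSwitchEngaged(Exception):
    pass

class EthicsViolation(Exception):
    pass

class GovernedAgent:
    def __init__(self, decide, ethics_check):
        self._decide = decide              # the system's own decision logic
        self._ethics_check = ethics_check  # returns True only for permissible actions
        self._killed = False

    def deactivate(self) -> None:
        """Unconditional human deactivation: no code path can override it."""
        self._killed = True

    def act(self, situation):
        if self._killed:
            raise KillSwitchEngaged("system has been deactivated by a human")
        action = self._decide(situation)
        if not self._ethics_check(situation, action):
            # The system cannot function when ethical norms would be violated.
            raise EthicsViolation(f"action {action!r} rejected by ethics check")
        return action


# Toy usage: the decision logic proposes an impermissible action under high
# risk, and the surrounding ethics check vetoes it unconditionally.
agent = GovernedAgent(
    decide=lambda s: "proceed" if s["risk"] < 0.5 else "override_human",
    ethics_check=lambda s, a: a != "override_human",
)
```

The design choice is that `act` never reaches the outside world except through both gates, which is one way of making moral norms the basis of the system rather than an extra criterion.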
Thus, the prospects for artificial intelligence primarily depend on the attitude of humanity to its introduction and the degree of its regulation.