Artificial intelligence in law

The overall economic impact of AI technologies could account for as much as 35% of economic growth between now and 2035 (Accenture, 2018). The field of technologies identified as "artificial intelligence" is vast and, depending on the definitions used, can lead to different assessments of its positive and negative impact on society. At the same time, the universality of AI is considered not only from the perspective of its technological compatibility with any production process but also from the perspective of replacing humans. Some scientists see a danger in AI becoming a "superintelligence" whose objectives might diverge from those of people. According to various estimates by members of the Association for the Advancement of Artificial Intelligence and other AI specialists, an artificial intelligence comparable in cognitive capabilities to humans could be created by 2075, and about 70% of surveyed researchers believe a "super AI" is possible. Given these predictions and concerns, more than 1,000 AI researchers have signed a joint letter proposing a ban on using fully autonomous systems as weapons. In this regard, forming a unified international framework to regulate AI on a global scale is crucial.

Problems of AI regulation

First of all, regulation in the field of AI must be comprehensive and cover several related technologies, including big data, cloud computing, machine learning, and robotics as the technical and software base for AI. In addition, AI research is extensive and interdisciplinary: AI is an object of study in both the technical sciences and the humanities. Currently, there is no holistic regulation of AI at the national or international level that considers all the issues related to the development, production, and implementation of AI systems in various fields. Only some jurisdictions, such as the U.S., Germany, South Korea, China, and the European Union, have taken a thorough approach to individual legislative problems, with the most attention paid to the regulation of unmanned vehicles. Overall, however, current rulemaking lacks coherence and a unified approach. Well-designed standards can increase a region's investment appeal, while poorly designed ones can stifle innovation. A classic example of the relationship between legislation and technological development is England's Red Flag Act of 1865, which imposed a speed limit on motor vehicles (as low as 2 miles per hour) and required a signaller to walk in front of every vehicle to warn of its movement. As a consequence, the English automobile industry declined, ceding its leading position to France and Germany. Properly calibrated regulation, by contrast, encourages technological development. At the same time, creating a set of regulations that balances the interests of all participants in AI relations is an ambitious and challenging task. Macro-factors that must be taken into account include:

  • Legislative rigidity,
  • social and economic consequences, taking into account the level of unemployment and social stratification,
  • confidentiality and protection of personal data,
  • security,
  • ethics, including human-robot relations, potential harm to humans, and so on.

Issues of AI development and regulation in international organizations

Today, the general global landscape, basic approaches, and principles in the field of the legal regulation of AI are shaped by the most authoritative international organizations and platforms.

DARPA

DARPA's scope of interest also includes AI projects, in which three areas of AI can be distinguished:

  • "manual knowledge" - is a set of rules created by engineers that represent knowledge in a well-defined domain. In this case, the structure of knowledge is determined by humans, and the machine explores the specifics of this knowledge,
  • "statistical learning" - engineers create statistical models for specific domains and train AI to work with the data. This direction is distinguished because it provides for good classification and the ability to predict specific events. A significant disadvantage of AI is its minimal ability to reason,
  • "conceptual adaptation" - this direction provides for the development of systems that generate contextual explanatory models for actual-world classes and phenomena (e.g., a generative model designed to explain how a given object could have been created. The DARPA concept is deeply and logically related to the current situation in the field of AI development; however, these technologies have not been studied in terms of functionality and definition of moral, legal, and organizational limits.

The United Nations

In 2018, the United Nations issued a report that, among other things, examined the impact of AI technologies on human rights in the information environment. The paper notes that companies should consider how to develop professional standards for AI engineers, translating human rights responsibilities into guidance for technical choices when designing and operating AI systems. It also recommends establishing grievance and remedy systems to respond promptly to user complaints and appeals. Data should be published regularly on the frequency of complaints and requests for remedies in situations where AI systems have caused harm, as well as on the types and effectiveness of the remedies available. AI issues have also begun to be addressed within the core competencies of the Economic and Social Council (ECOSOC) and the UN Conference on Trade and Development (UNCTAD). UNCTAD has taken initiatives to broaden the discussion on data transfer and the associated risks for countries lagging in the digital economy. In 2018, the Digital Economy Report was released, and one of its central issues was the transfer of data and its potential impact on blockchain, cloud, IoT, and AI development.

The United Nations Educational, Scientific and Cultural Organization (UNESCO)

UNESCO notes that the accumulation of data on human behavior and the use of computing technologies to process it raise new questions regarding human rights, freedom of information sharing, and education. UNESCO's AI policy goal is to harness the power of new information technologies to build a "knowledge society" and achieve the Sustainable Development Goals (SDGs). UNESCO recommends approaching global AI development from the perspective of the ROAM principles (human rights, openness, accessibility, and participation of all stakeholder groups) articulated earlier in the context of "universal Internet development". In addition to the ROAM principles, the UNESCO position paper notes the need to develop ethical principles for AI, to respect gender and social minorities, and to pay special attention to overcoming the digital divide, especially concerning African countries.

The European Union's experience with legal regulation of AI

On April 10, 2018, twenty-five European countries signed a declaration on cooperation in the field of AI. Member states agreed to work together on the most critical issues of the digital age, from ensuring Europe's competitiveness in AI research to addressing the social challenges of AI adoption. In addition, the states defined a "proactive approach" to AI regulation, which entails modernizing the European education and training system, including advanced training and retraining of European citizens. The declaration was followed by an equally important document, the Policy and Investment Recommendations for Robust AI. This guideline highlights the following elements in the area of legal regulation of AI:

  • Increasing socially practical knowledge about AI,
  • accelerating the adoption of AI technologies and services in all sectors of the European Union,
  • promoting and scaling AI solutions by innovating and facilitating the transformation of the technology sector,
  • developing legally compatible and ethical data management and data sharing initiatives within the European Union,
  • developing and supporting AI-centric cybersecurity infrastructures.

The guidance emphasizes the need to develop a cybersecurity regime for AI infrastructure and AI methods. In addition, the document highlights the development of user-centric AI cybersecurity policies to protect user rights on the Internet. A disadvantage of the guide is its multitasking and the low level of detail in its AI regulation. AI regulation does not end here: on February 19, 2020, the European Commission issued a White Paper that shifts the focus to creating an AI ecosystem. The White Paper highlights the following items in the area of AI regulatory activities:
  • Establishing key AI innovation and research centers and developing their policy provisions,
  • promoting the adoption of AI in the public sector. The European Commission is initiating open and transparent sector dialogues, giving priority to healthcare, etc.,
  • creating ethical principles, as well as developing recommendations on AI.

The White Paper also proclaims an "ecosystem of trust," which acts as a normative framework for AI. The guidelines are underpinned by cybersecurity principles, which should be based on technical reliability, privacy, transparency, diversity, nondiscrimination, and fairness. The document broadly outlines the direction of the regulatory framework, which implies eliminating the possible risks of AI (meaning fully autonomous AI). The most critical challenge for the EU legal framework is its applicability to AI. The White Paper emphasizes that an adequate assessment of the legal framework and its relevance is needed, and it notes a change in the concept of security: the use of AI in products and services generates many risks, creating a need for legal regulation in this area (for example, standards for new applications of AI). Based on the analysis of the White Paper, we can conclude that the problems of legal regulation of AI must be solved not only at the level of rules but also by considering the essence of AI and the prospects for its use in the markets (for example, the problem of autonomous AI systems). It is also essential to agree on the legal issues of AI regulation (based on the essence of AI), the limits of its development, and the dynamics of its introduction into other industries. After the White Paper was published, a public consultation was conducted at the European Union level, in which all EU member states and relevant stakeholders (including civil society, industry, and academia) were invited to participate. This exercise provided relevant experience for further improvement of the legislation. The European Union is also researching AI as part of the Horizon 2020 project, which supports AI research along the following vectors of development:
  • Strengthening AI research centers within the European Union,
  • supporting the development of an "AI-on-demand" platform that will provide access to relevant AI resources in the EU,
  • supporting the development of specific AI applications in key economic sectors.

The AI-on-demand platform project is also being actively developed at the European Union level. The platform aggregates AI resources and news, providing an informative basis for the formation of new AI initiatives. On September 15, the European Parliament and the EU Council approved the 2030 Path to the Digital Decade development program. The program targets that by 2030 at least 75% of all European businesses should use AI technologies, big data, and cloud computing in their work. The Path to the Digital Decade is consistent, compatible, and complementary with other EU development programs and policy documents, such as the Digital Compass Communication, the Strategy for Shaping Europe's Digital Future, the Industrial Strategy, the Cybersecurity Strategy, the Digital Europe Programme, and Horizon Europe. It is based on existing and planned legislative acts, such as the Data Governance Act, the Digital Services Act, and the Digital Markets Act. In general, the European Commission has formulated seven primary conditions for the formation of a strong base for AI:
  • Governance and oversight. Comprehensive reference is made to a kind of "fair society" to be achieved through AI,
  • reliability and security. The security of AI must be guaranteed through its system of algorithms (and at all stages of the AI lifecycle),
  • privacy and data management. Citizens should have complete control over their data, while data concerning them will not be used to harm or discriminate against them,
  • transparency. The EC statement indicates that there should be some filtering of solutions proposed by the AI itself, coordinated with the subject whose information is processed; however, the EC gives no specifics on processing,
  • AI must be multifunctional and take into account the full range of human abilities,
  • social and environmental well-being. In this block, the EC notes that AI should also be used to improve social sustainability and environmental objectives (it even mentions the concept of "environmental responsibility," though it remains unclear how this should be implemented),
  • accountability. The EC rightly informs the member states that mechanisms must be created to ensure the responsibility and accountability of bodies involved in operating AI systems.

These blocks characterize not the general attitude of the European Union toward the development of AI but point to specific problems that may hinder the development of the AI ecosystem and the legal framework in this area. Conventionally, we can identify the following issues in the enforcement of the conditions formulated by the EC:
  • Liability issues. The scheme of responsibility of the authorities for the operation of AI, and how this will be argued in terms of evidence, is not yet transparent,
  • the inconsistency of the provisions with each other. There is a hidden contradiction between the accountability block and the block on developing AI that takes into account the full range of human abilities: the latter implies creating a largely autonomous system with human-like decision-making skills, which obscures the causal link needed to assign responsibility,
  • the vagueness of the transparency system, i.e., how AI decisions will be filtered is not entirely clear.

The EC has made important legal attempts, but its documents do not contain specific institutional and legal measures (e.g., technical regulations, supplementary policies, directives). This experience of international legal regulation of AI can be perceived in two ways. On the one hand, the adopted legal documents help outline the whole range of AI issues (including cybersecurity). On the other hand, international legal regulation is not limited to these acts, and further detailed elaboration of the blocks identified by the EC is needed to eliminate their contradictions. It should be noted that, despite the early stage of development of most blocks, the EC has created an organizational unit, a dedicated expert group on AI (the AI Group). It was this group that developed the "Rules of Ethics in AI", which contain the following recommendations:
  • Design and use of AI systems in accordance with ethical principles: respect for human beings, the autonomy of AI (with permissible limits to be gradually defined and worked out), prevention of wrongdoing, and explainability of AI processes. The direction calls for eliminating contradictions between these principles,
  • attention to vulnerable groups, such as people with disabilities and children, who are most exposed to the risk factor of new technologies (AI),
  • attention to the risks of the AI system, which are difficult to predict (in this regard, attention should be paid to the legal measures that can be taken),
  • the design, deployment, and use of AI systems should meet the following requirements: human control of specific AI functions (supervision); technical reliability and safety; confidentiality; transparency; diversity, nondiscrimination, and fairness; environmental and social well-being; responsibility,
  • the recommendations of the AI Group should be implemented in the national legislation of other states.

It should be noted that the model of legal regulation chosen by the EU, as described above, has several advantages:
  • Strategies and recommendations allow for a rapid response to the dynamics of AI and its capabilities, allowing the legal framework of the EU and its member states to be continually adjusted,
  • the Declaration on cooperation in the field of AI does not impose rigid definitions and wording, which allows individual issues to be elaborated further in the documents analyzed above.

It should be emphasized that, generally, the EU documents still lack systematization.

The ways to improve the situation


Presently, there is unprecedented growth of interest worldwide in regulating social relations related to AI. This trend is due to several factors. The first is the dramatic increase in the influence of information technology on people's lives and in human dependence on technology, which has become evident since the beginning of the COVID-19 pandemic. The second, of course, is the objective advancement of information technologies, particularly those united under the term AI, to a new qualitative level, as well as the rapid growth of their mass implementation. These factors provoke anxious expectations and lead many to believe that well-known science-fiction scenarios may become reality unless urgent measures are taken to establish control over AI and those in whose hands it is held. Along with this comes the realization that information technology is inherently extraterritorial and cross-border and that regulation at the national level alone is unlikely to be effective: no jurisdiction by itself is capable of regulating and controlling AI. Regulating AI requires flexible approaches and a combination of different tools, including self-regulation, standardization, moral and ethical regulation, and technical means of regulation (labeling, use of code, metadata). It is these factors that seem to have created the prerequisites for the relevant discourse to reach the level of interaction between major international organizations (UNESCO, OECD, CoE) and a universal international organization (the UN), as well as for the beginning of the development of legally binding international documents (the leader here is traditionally the Council of Europe).