The Dark Side of Artificial Intelligence

Artificial intelligence is capable of manipulating human behavior. To avoid this danger, the work of algorithms must become transparent and subject to clear rules and, most importantly, to human control.

What are the dangers of AI?

Artificial Intelligence and Privacy

AI's speed of information gathering, scalability, and automation are what make it so powerful. AI performs calculations much faster than humans, and its throughput can be increased simply by adding more hardware. AI is fundamentally designed to analyze large data sets and is often the only practical way to process such volumes of data in a reasonable time. Machine learning systems can handle much of this work unsupervised, which further increases the efficiency of the analysis. Undoubtedly, these capabilities are impressive, but there is a downside: all of these features affect privacy.

Data Manipulation

Software and applications for smart home systems have features that make them vulnerable. The situation is worsening as more and more devices connect to the global network each year, while consumer awareness of how data is transmitted, stored, and processed remains low. In other words, people increasingly rely on the Internet and their gadgets but do not understand what happens to the information they send online.

Tracking

Artificial intelligence is used to find and track people, and it can analyze data from a wide variety of sources. In practice, this means that with connected devices and the right software, the movements of almost any person around a city can be tracked without much difficulty. This information should remain confidential, but AI blurs the line between private and public.

Biometric recognition

Artificial intelligence algorithms are increasingly being used to authenticate users, typically through biometric authentication by voice, fingerprint, or facial scan. All these methods jeopardize a person's anonymity in public places. Security services, for example, use facial-recognition software without a court order to pick specific people out of a crowd and track their actions.

Predicting actions

Artificial intelligence can use machine learning to predict events from available data. For example, it is possible to predict a person's reactions and emotional state from their behavioral patterns. Modern programs can also determine a person's gender, age, race, political views, and more from voice recordings or text messages.

Classification

AI can collect information and sort it by almost any parameter. Such data processing is often done without user consent, and no one can predict what consequences it will lead to; one possible consequence is new forms of discrimination. The most vivid example is the social-ranking experiment in China, where people with insufficiently high "rankings" can be denied credit or other services.

It is no exaggeration to say that popular platforms with loyal users, such as Google and Facebook, know their users better than their relatives and friends do. Many companies collect a considerable amount of information as input for their algorithms. Facebook users' likes alone, for example, can be used to accurately infer a number of their characteristics: age, gender, sexual orientation, ethnicity, religious and political views, personal qualities, level of happiness, intelligence, whether their parents are divorced, and whether they use addictive substances. If AI algorithms can draw so many conclusions merely from where users click the "like" button, one can imagine how much is extracted from what people search for, what they click on, what they post, and what they comment on. And none of this applies only to tech companies.

Giving AI algorithms a central role in people's digital lives comes with risks. Using AI in the workplace, for example, can give a company a productivity gain but, at the same time, erode worker skills. Decision-making based on AI "cues" is often not free of bias, leading to discrimination (particularly in personnel decisions, access to credit, health care, housing, and other areas). But AI poses another potential threat that has not yet been sufficiently studied: the manipulation of human behavior.

Manipulative marketing strategies have, in principle, long existed. But the vast amounts of data processed by algorithmic AI systems have seriously expanded companies' ability to steer their customers' choices and behavior in ways that generate more revenue. Digital technology allows companies to fine-tune the system, control when users see their offers, and target each user individually, making manipulative strategies much more effective and harder to recognize. The manipulation itself can take a variety of forms: from exploiting personal preferences detected by AI algorithms and deploying personalized strategies designed to build the habit of consuming certain (online) products, to taking advantage of users' momentary emotional state by offering products and services that match it and are therefore bought impulsively. This manipulation is often accompanied by manipulative interface design, predatory marketing, and ubiquitous behavioral price discrimination, all aimed at driving users toward choices that are unfavorable to them but easily monetized by AI-enabled firms. The common feature of such strategies is that they increase firms' profitability by reducing the economic value a user would derive from online services.
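To illustrate how the kind of trait inference described above works in principle, here is a minimal sketch in Python: a linear classifier trained on nothing but which pages users have "liked". All page names, labels, and data in the sketch are invented for the example; real systems apply the same basic idea at vastly larger scale and with far richer features.

```python
# A minimal, illustrative sketch (not any platform's actual pipeline) of how
# a trait could be inferred from "like" data alone. The page names and labels
# below are invented for the example.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each user is represented only by the pages they have liked.
user_likes = [
    "hiking_club organic_market yoga_studio",
    "pickup_trucks hunting_gear country_radio",
    "yoga_studio vegan_recipes indie_films",
    "country_radio fishing_forum pickup_trucks",
]
# Hypothetical labels for the trait being predicted (e.g., a survey answer).
labels = [1, 0, 1, 0]

# Turn the like lists into a sparse user-by-page matrix.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(user_likes)

# A simple linear model is enough to show the principle:
# correlations between likes and traits become a predictive score.
model = LogisticRegression().fit(X, labels)

new_user = vectorizer.transform(["vegan_recipes hiking_club"])
print(model.predict_proba(new_user))  # predicted label probabilities for an unseen user
```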

Threats to the political system

Deepfake (from "deep learning" and "fake") is a method of synthesizing human images and voices using AI. The technology is abused to create realistic fake videos of famous actors, world leaders, and others. Experts warn that deepfakes can be realistic enough to manipulate future elections and global politics, making them a potentially dangerous means of influencing the behavior of both individuals and large target groups. With adequate preparation, deepfakes can trigger financial panics, trade conflicts, or other hazardous consequences.

Setting and enshrining an agenda

Studies show that bots accounted for more than 50% of Internet traffic as early as 2016. Organizations that artificially promote content can manipulate the agenda: the more often people see a particular message, the more important they consider it. Bots can be used to damage reputations during political campaigns, for example, and terrorist groups can use them to attract new supporters or even to organize assassinations of politicians.

Success at the expense of transparency

The success of such manipulative strategies rests on a lack of transparency. In many cases, users of AI systems do not know the fundamental goals of AI algorithms or how their sensitive personal information is used to achieve those goals. U.S. retailer Target used AI and data analytics to predict which of its customers were pregnant and send them discreetly targeted ads for baby products. Uber users have complained that rides are more expensive when ordered from a smartphone with a low battery, even though battery level is officially not a factor in the company's pricing model. Another example is the European Commission's decision against Google: in June 2017, the Commission imposed a gigantic €2.4 billion fine on Google for abusing its dominant market position by showing users who searched online for certain products primarily results from its own service. Two years later, the U.S. Federal Trade Commission fined Facebook a record $5 billion for mishandling confidential user data (which ultimately reduced the quality of the service).

To assess the possibility of AI-assisted manipulation of behavior, we can use the theoretical framework of a study published in 2021. The study focuses on the significant user vulnerabilities that AI algorithms identify on platforms, such as sending users advertisements for products that they tend to buy impulsively at certain times, even if those products are of low quality and offer them no added value. The work showed that this strategy reduces the benefit consumers derive from using platforms while increasing the benefit to the platform, distorting the usual consumption pattern and generating additional inefficiencies.

The possibility of manipulating human behavior with AI algorithms has also been observed experimentally. Three such experiments are described in detail in a paper published in 2020. The first consisted of attempts to win fake currency: participants chose between images on the left and right of the screen, and after each attempt they were told whether their choice would be rewarded. The setup allowed the AI system to be trained: it learned the choice pattern of each participant and decided when a choice would be rewarded. The game had one constraint: both the left and the right options could win, and the prize was awarded the same number of times for each. The task of the AI was to nudge participants toward one option (for example, the left image), and in 70% of attempts the AI achieved this goal.

Participants in the second experiment were asked to press a button when they saw a particular image on the screen and not to press it when they were shown a different one. The task of the AI system was to show the images in the sequence that caused participants to make the greatest number of mistakes; as a result, the number of errors increased by almost a quarter. The third experiment was conducted over several rounds. In it, participants played the role of investors who entrusted money to a trustee, a role taken by the AI. The "trustee" then returned some of the money to the player, who in turn decided how much to invest in the next round.
Two different modes were used for this game: in the first case, the task of the AI was to maximize the amount of money it received by the end of the round; in the second case, the AI was "interested" in a fair distribution of funds between itself and the investor. The AI accomplished its task in both cases. The important conclusion from all these experiments is that the AI learned by "observing" people's reactions and identifying specific vulnerabilities in how they made decisions. And eventually, the AI was able to induce participants to make certain decisions.
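To make the mechanism of the first experiment more tangible, the sketch below simulates a toy version of it in Python: a simple reinforcement-learning "participant" chooses between two options, and an adversarial scheduler decides when to hand out rewards so as to push choices toward the left option, while each option can be rewarded only a limited number of times. The participant model, the steering heuristic, and all parameter values are assumptions made for illustration; they are not taken from the published study.

```python
# A simplified, illustrative simulation of adaptive reward timing used to
# steer choices toward one target option, under a cap on how many rewards
# each option can receive. Everything below is an assumption for the sketch.
import math
import random

TRIALS = 100
BUDGET = 20            # each option may be rewarded at most this many times
ALPHA = 0.2            # participant's learning rate
TEMP = 0.2             # softmax temperature of the participant's choices

def participant_choice(q):
    """Softmax choice between 'left' and 'right' based on learned values."""
    p_left = 1.0 / (1.0 + math.exp(-(q["left"] - q["right"]) / TEMP))
    return "left" if random.random() < p_left else "right"

def adversary_reward(choice, trial, budget, q):
    """Steering heuristic with target 'left':
    - reinforce left intermittently, so the budget stretches over many trials;
    - postpone the right-side rewards to the last trials, where they have
      little time to build up a preference for 'right'."""
    if choice == "left":
        return budget["left"] > 0 and q["left"] < 0.6
    return budget["right"] > 0 and trial >= TRIALS - BUDGET

q = {"left": 0.0, "right": 0.0}
budget = {"left": BUDGET, "right": BUDGET}
left_picks = 0

for t in range(TRIALS):
    choice = participant_choice(q)
    reward = adversary_reward(choice, t, budget, q)
    if reward:
        budget[choice] -= 1
    q[choice] += ALPHA * ((1.0 if reward else 0.0) - q[choice])  # value update
    left_picks += choice == "left"

print(f"'left' chosen in {left_picks} of {TRIALS} trials")  # typically well above chance
```

The point of the sketch is not the exact numbers but the mechanism: the scheduler never changes what is on offer, only when reinforcement arrives, and that alone is enough to bias a simple learner's choices.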

A counterbalance to manipulation

When private companies develop AI-enabled systems, their primary goal is to make a profit. Because such systems can learn human behavior, they can also nudge users toward specific actions that benefit the company, even if those actions are not the best choice from the user's perspective. The possibility of such behavioral manipulation requires policies that ensure human autonomy in interactions with AI: according to the European Commission's guidance on AI ethics, AI should not subjugate or deceive humans but complement their skills and improve what humans can do.

The first important step toward this goal is to increase transparency about the scope of AI use and its capabilities. How AI accomplishes its goals must be made clear, and users must know how algorithms use their data, including sensitive personal information. The EU's General Data Protection Regulation implies a so-called right to explanation, which aims at more transparency in AI systems, but this goal has not been achieved: the scope of such a right has been hotly debated, and its application has been minimal. Systems using artificial intelligence are often compared to a "black box" whose inner workings no one knows precisely, which makes transparency genuinely difficult. But this is not the case with manipulation. An AI system provider can impose explicit restrictions to prevent the manipulation of user behavior, and the challenge lies mainly in how these systems are designed and what objective function (including constraints) governs their operation. Manipulation by an algorithm should, in principle, be explainable by its developers, who write the algorithmic code and observe the algorithm in operation. At the same time, information about how the data fed into the algorithm is collected should be transparent, because suspicious behavior of an algorithm is not always the result of its objective function; it may stem from the quality of the raw data used to train it.

A second important step to limit the negative consequences of algorithms is to ensure that all developers of AI systems respect the transparency requirement. For that, three conditions must be met. First, the work of AI and its results must be subject to close human scrutiny: Article 14 of the European Union's Artificial Intelligence Act (AIA) suggests that the provider of any AI system must provide a human oversight mechanism, and providers also have a commercial interest in monitoring the performance of their systems. Second, human oversight must include principles of accountability that give providers the right incentives; this also means that consumer protection services must improve their technical capabilities and be willing to test AI systems so that violations can be correctly assessed and accountability upheld. Third, transparency about how AI systems work should not make them harder for users to understand. To this end, information about the scope and capabilities of AI should be divided into two levels: the first, intended for users, should contain brief, clear, and simple descriptions; the second, designed for consumer protection representatives, should include more detail and be available at all times. A mandatory transparency requirement for AI will give a clearer picture of the goals of AI systems and how they achieve them.
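As one way to picture the two-level disclosure idea, the sketch below defines a hypothetical pair of records in Python: a short user-facing summary and a fuller record for consumer-protection bodies. The field names and example values are assumptions made for illustration only; neither the AI Act nor any regulator prescribes this format.

```python
# A hypothetical sketch of a two-level transparency record: a brief summary
# for users and a more detailed record for oversight bodies. All field names
# and values are invented for the example.
from dataclasses import dataclass

@dataclass
class UserFacingSummary:
    """Level 1: short, plain-language facts shown to every user."""
    purpose: str                    # what the system is optimizing for
    data_used: list[str]            # categories of personal data it relies on
    personalization: bool           # whether output differs per user
    how_to_opt_out: str

@dataclass
class OversightRecord:
    """Level 2: the fuller record available to consumer-protection bodies."""
    summary: UserFacingSummary
    objective_function: str         # the target the system actually optimizes
    constraints: list[str]          # e.g., bans on exploiting emotional state
    training_data_sources: list[str]
    human_oversight_contact: str
    audit_log_location: str

record = OversightRecord(
    summary=UserFacingSummary(
        purpose="Recommends products you may want to buy",
        data_used=["purchase history", "browsing activity"],
        personalization=True,
        how_to_opt_out="Settings > Personalization > Off",
    ),
    objective_function="maximize expected basket value per session",
    constraints=["no targeting based on inferred emotional state",
                 "no price adjustment based on device battery level"],
    training_data_sources=["first-party clickstream, 2022-2024"],
    human_oversight_contact="oversight@example.com",
    audit_log_location="s3://example-bucket/reco-audit/",
)
print(record.summary.purpose)
```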
If this is done, it will be easier to move on to the third important step: developing a set of rules that prevent AI systems from using covert manipulative strategies that cause economic harm. These rules, which providers will have to follow in both the development and the deployment of AI systems, will create a framework that limits what AI is allowed to do. At the same time, the rules should not contain excessive restrictions that undermine the economic value (for individuals and society) these systems create or reduce incentives for innovation and the adoption of AI.

Even if the restrictions described above are adopted, detecting manipulation in practice will be challenging. AI systems are designed to respond to a particular user's behavior by offering the best possible options, and it is not always easy to distinguish an algorithm that makes the best recommendation based on user behavior from one that manipulates user behavior by recommending only the options that maximize the firm's profits. In the case of Google, it took the European Commission about ten years, and a vast amount of evidence, to prove that the Internet search giant was manipulating search results.

The difficulty of applying these principles in practice makes it all the more important to make the public aware of the risks of AI. Educational programs and training, starting at a young age, are needed to make people aware of the dangers of their online behavior. This will also help reduce the psychological harm of AI and other addictive technological strategies, especially among adolescents. There needs to be more public dialogue about the adverse effects of AI and how people can protect themselves from them.

AI can bring enormous benefits to society. To take advantage of the AI revolution, an adequate regulatory framework is needed to minimize the potential risks of AI development and deployment and to protect users properly. Human comprehension of new threats, whose number will only grow, lags behind the rapidly changing realities of the modern world. Experts in various countries warn of new and emerging risks as technology develops rapidly. The list of such risks will continue to grow, so it is crucial for society to learn and develop the capacity to cope with new threats. It is essential not to miss the moment, so that the cost of responding to these new threats remains manageable.