
Why AI needs a code of ethics

Back in 1942, science fiction author Isaac Asimov devised the Three Laws of Robotics. They first appeared in a short story and grew into a philosophical concept taken up by other science fiction writers. But technologies are developing rapidly, and artificial intelligence (AI) is becoming more sophisticated: in 2017, the robot Sophia even received official citizenship. Incidentally, she had earlier cheerfully announced that she would destroy humans, but then said it was a joke.

Credit: depositphotos.com

One way or another, technological progress provides new opportunities, poses new challenges and raises a wide range of ethical issues. For instance, what information can AI collect about the user, what advice can it offer, and where are the boundaries of acceptable interference? How can human-AI interaction be made safe, and who is in charge of final decisions and responsibility for negative consequences? Some of the answers can be found in the Russian Artificial Intelligence Code of Ethics (AICE), signed in late October. Does society need such documents, and will they affect the use of AI?

A code of ethics is the need of the hour

The AICE is the first such document in Russia, but many similar ones have been adopted by the international community. For instance, UNESCO has been working on this issue since 2019, and in 2020 it produced the First Draft Recommendation on the Ethics of Artificial Intelligence. In addition, the Global Partnership on Artificial Intelligence was established in 2020. It comprises the European Union and 14 more states and aims to develop joint solutions for the responsible use, development, theory and practice of artificial intelligence.

Many countries have their own national documents on AI. A joint study by RAEC and Microsoft found that as of December 2020, 32 countries had their own strategies for the development of artificial intelligence, and 22 more were working on them. In September 2021, the Artificial Intelligence Code appeared in China. As for Russia, the creation of such a document was only a matter of time: officials at the highest level, including the President, had repeatedly spoken about it.

Artificial intelligence and its developers need to know where the boundaries lie, because new technologies can affect us, our view of the world and the decisions we make. Sometimes this happens without malicious intent or any desire to manipulate, but most often it is in the interest of business. For instance, the information bubble issue seems harmless. This term describes the algorithms of social networks and search engines that collect data about the user and then show them only content they might be interested in. The user watches a funny puppy video or a soup recipe and is offered a range of similar videos. One might wonder: why is that bad?

However, because smart algorithms are designed to keep the user on the website and feed them recommended content, users might find themselves locked inside an information bubble. Take, for instance, posts and news about the coronavirus: those who oppose vaccines or are concerned about 5G radiation are shown numerous materials that strengthen their confidence in such theories. Similarly, those who fear COVID-19 are offered the scariest news about the aftermath of the disease. As a result, there are groups with extreme views in which no one has a full, unbiased picture.
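To make the mechanism concrete, here is a minimal Python sketch of an engagement-driven recommender. The catalog, tags, and scoring rule are purely illustrative assumptions, not any real platform's algorithm, but they show how ranking by past engagement alone steers a user deeper into one kind of content.

```python
from collections import Counter

# Hypothetical catalog: each video is described by a set of topic tags.
# All names here are made up for illustration.
CATALOG = {
    "v1": {"vaccines", "conspiracy"},
    "v2": {"vaccines", "mainstream-science"},
    "v3": {"5g", "conspiracy"},
    "v4": {"chemtrails", "conspiracy"},
    "v5": {"covid", "recovery-stories"},
}

def recommend(watch_history, k=2):
    """Rank unseen videos by how often the user engaged with their tags."""
    seen_tags = Counter()
    for video_id in watch_history:
        seen_tags.update(CATALOG[video_id])
    candidates = [v for v in CATALOG if v not in watch_history]
    # Score each candidate by the user's prior exposure to its tags:
    # the more a topic was watched, the higher similar content ranks.
    return sorted(
        candidates,
        key=lambda v: -sum(seen_tags[t] for t in CATALOG[v]),
    )[:k]

# After two conspiracy-tagged videos, more of the same outranks everything.
print(recommend(["v1", "v3"]))  # ['v4', 'v2']
```

Nothing in this loop is malicious: it simply maximizes the chance of another click, and the bubble emerges as a side effect.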

What are the ethical principles of AI?

In general, codes of ethics, including the Russian one, cover a similar range of issues, which in itself shows how relevant they are worldwide. The objective of ethical AI is to serve the interests of people. AI must not allow discrimination or unfairness; it must protect safety and privacy (including personal data) and remain accountable to humans, and its actions must be transparent. Users need to understand when they are interacting with AI, and they should be able to stop this interaction.

One of AICE’s most important provisions is that final decisions rest with humans, who bear responsibility for any negative consequences. The code also prioritizes a risk-oriented approach. In particular, Article 1.5 of the code says:

[It is necessary to] “Assess the potential risks of the use of AI, including social impact on people, society and the state, as well as the social impact of AI on human rights and freedoms, <…> conduct long-term monitoring of such risks.”

The code also urges actors to “preserve and develop human cognitive abilities and their creative potential” (Article 1.1) and “preserve human autonomy and free will” (Article 1.2). It also says that:

“In the process of creating AI systems, AI actors should foresee possible negative effects on the development of human cognitive abilities, and prevent development of AI systems that purposefully cause such effects.”

There is often a fine line between AI assistance and negative cognitive effects. For instance, we may stop memorizing information such as birthdays, work meeting dates, phone numbers and addresses, because virtual assistants like Siri or Alice will remind us of the day's schedule, call the right contact, and build a door-to-door route. Technologies are convenient, and although convenience sometimes makes us lazy, people will most likely decide for themselves what to entrust to AI. The widespread use of calculators, which are far from being AI, led many people to give up mental arithmetic, and we are unlikely to abandon calculators for the sake of mental gymnastics.

How does it work in practice?

The Russian code of ethics cannot force actors to do anything: it has no such authority and can only provide recommendations. Accession to the code is voluntary, and those who accede are obliged to follow its provisions. This is a form of so-called soft regulation; it can nevertheless become a basis for future regulations that enshrine the rights and obligations of developers and companies that use AI.

So far, the code outlines directions for the ethical development of technology and recommends that actors join forces, for instance, to verify information about strong, or general, AI. Such AI systems could improve themselves by entirely rewriting their original code, as well as make decisions, learn, and draw on past experience. Some researchers claim that strong AI will have consciousness (or its equivalent) and even self-awareness; others argue that creating such a system is impossible.

In any case, the code of ethics identifies existing or potential problems and proposes solving them jointly, which appears to be one of the document's major tasks. As information bubbles and obtrusive user data collection show, researchers and businesses often get carried away with their goals and forget about the ethical aspect.

Let’s have a look at the code's provisions on terminating interaction at a person's request and on AI identification. During a conversation with tech support in a live chat, it is not always possible to tell whether we are talking to a chatbot or a human specialist. Users also complain that they cannot reach a technical specialist for quite a while and have to spend time communicating with a bot that is unable to help; people even collect life hacks for bypassing AI. If companies actually adhere to ethical principles, we will see fewer such cases of misuse, as well as of data and privacy manipulation, as the sketch below illustrates.
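Here is a minimal Python sketch of what honoring these two provisions could look like in a support chat: the bot identifies itself up front and hands the conversation to a human the moment the user asks. The canned answers and trigger words are illustrative assumptions, not a real support system's logic.

```python
# Sketch of two AICE principles: AI self-identification and termination
# of interaction on request. All responses below are hypothetical.

CANNED_ANSWERS = {
    "refund": "Refunds are processed within 5 business days.",
    "hours": "Support is available around the clock.",
}
ESCALATION_WORDS = {"human", "operator", "specialist"}

def support_chat(messages):
    """Answer user messages; hand off to a human as soon as one is requested."""
    # Principle 1: the bot announces that it is not a human.
    replies = ["Hi! I'm an automated assistant (a bot). "
               "Type 'human' at any time to reach a live specialist."]
    for msg in messages:
        text = msg.lower()
        if any(word in text for word in ESCALATION_WORDS):
            # Principle 2: the user can stop interacting with the AI.
            replies.append("Connecting you to a human specialist now.")
            break
        answer = next((a for key, a in CANNED_ANSWERS.items() if key in text), None)
        replies.append(answer or "Sorry, I didn't get that. Type 'human' for help.")
    return replies

for reply in support_chat(["What are your hours?", "Get me a human, please"]):
    print(reply)
```

The point of the sketch is the escape hatch: a compliant bot never forces the user to keep talking to it to reach a person.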

We should note that the code is not a static document: its provisions will be revised to account for advances in AI technologies, as well as for evolving social ideas about the ethics of their use. The adoption of the code is therefore only the first step towards creating ‘ethical’ AI, and these efforts should involve developers, users, the scientific community, and the government.

Wrapping it up

Attempting to envisage different scenarios, even the most pessimistic ones and those far beyond what today's AI can do, seems to be the right strategy. It is better to set boundaries in advance and try to comply with them than to find out that Elon Musk was right when he called AI humanity's “biggest existential threat.”

Yet we should understand that, like any computer program, AI is a tool: in itself a neutral phenomenon that can both produce benefits and create problems. According to UNESCO, AI is expected to help 11 million students receive secondary education; the technology is currently contributing to efforts to combat COVID-19 and solve environmental issues, but it also risks deepening gender inequality.

AICE emphasizes human responsibility for the introduction of AI and for its effects. It is humans who should weigh the risks and make sure the technology is safe and secure, does not violate boundaries, and, most importantly, produces benefits. Only then will the advantages of the latest technologies outweigh undesirable consequences and misuse.

AICE has been adopted so that Russian developers and companies have common guidelines, including for resolving ethical issues. The government's attention to the AI industry is a testament to the document's relevance and necessity. Russia is clearly interested in the further development of AI and its wider use, as the initiatives of the past two years show: in 2019, the National Strategy for the Development of Artificial Intelligence was adopted, and 2020 saw the launch of the Artificial Intelligence federal project, within whose framework a series of AI-themed hackathons and lectures were held this year.

By Sergei Plugotarenko, Director, Russian Association of Electronic Communications; Head, project office of the Digital Breakthrough hackathon, a flagship project of the presidential platform Russia — Land of Opportunity
