
Scientists call for pausing AI experiments

Admitting the loss of control over AI: is confrontation inevitable?

Tesla and SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak and 1,500 other major IT experts have signed an open letter calling for an immediate pause, of at least six months, on giant AI experiments. Citing the principles of the Asilomar conference on safe AI development, the letter's authors note that “advanced AI could represent a profound change in the history of life on Earth.”

Having researched various scenarios of these changes for many years, I can add that fundamental changes are almost inevitable, but their scenarios, discussed further below, are extremely far from what we have seen in Terminator and similar sci-fi movies, where the loss of control over AI results in a ‘coup’ and AI domination over human civilization.

The petition authors have similar fears:

“Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

The petitioners are particularly concerned about OpenAI's GPT-4 and similar AI systems, whose development and training already cannot be fully controlled by humans, even as the intelligence of these systems approaches the human level.

The letter calls on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

Experts say, “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

They suggest creating shared protocols for the safe development of AI “that are rigorously audited and overseen by independent outside experts.”

Moreover, they urge the creation of “new and capable regulatory authorities,” the introduction of “liability for AI-caused harm,” and “robust public funding for technical AI safety research.”

In fact, the authors of the open letter claim that we are on the brink of losing control over AI and urge us not to cross this line until we understand how to behave in this new future.

Human-AI confrontation is inevitable

Even though the scientists' concerns are justified, their concrete proposals for protecting humankind are quite naïve.

The thing is, their proposals would work only in a world without conflicts: something like Mahatma Gandhi's ideal world, where each region is self-governed, small and large states have equal rights, and the UN and other supranational international agencies have more power than the states themselves. In the real world, the suggestion to implement a set of shared safety protocols only raises questions: shared by every country? And how would you make them comply?

Moreover, the demand that governments step in if these protocols are violated, and provide public funding for technical AI safety research, will in practice mean not independent experts' control over the emerging transhuman AI but, on the contrary, the security services' control over experts and developers.

In fact, something like this is already happening. For instance, European tests revealed hidden capabilities in Chinese-made smartphones to automatically delete or censor about 500 topics, as well as to send data to a server in Singapore (this particular functionality was turned off for European consumers).

So the letter authors' question, “Should we risk loss of control of our civilization?”, unfortunately makes no sense: we have never had any control of our civilization.

If humankind has failed to stop the development of such a direct threat as nuclear weapons, how can we come to an agreement, reconcile with each other and protect ourselves from dangerous kinds of AI in these six months?

So, what’s in store for us?

AI takeover scenario

As I said before, the AI takeover will be far from what we have seen in Terminator and similar movies about a machine rebellion. On the contrary, the book Civilization after People describes a different, evolutionary transition from living to thinking matter. Such a transition happens in accordance with the evolutionary laws of interconnected changes in the environment and adaptation to those changes.

However, this transition can come with unexpected turns that could be considered an IT takeover of sorts. The beginning of the COVID-19 pandemic can be seen as such an event. More precisely, this scenario concerns the conspiracy version of the pandemic's origin: that it all began with a leak from a lab studying (or developing) new viruses. Control over the virus was lost and, once it escaped from the lab into the human environment, the virus began spreading quickly and adapting to the new conditions. No “takeover,” just development through adaptation. Moreover, something similar (though less disastrous) happens constantly: new viruses appear, epidemics begin and end.

It is similar with AI. Just like viruses, parts of AI (programs, apps, websites) go out into the human environment, and many of them “mutate” (that is, learn by interacting with humans), while the most advanced ones begin interacting with each other, collecting and combining data on their own.

Essentially, in accordance with the law of stability of a hybrid civilization, information structures such as algorithms, programs and memes are able to self-replicate and mutate in thinking matter just as viruses do in living matter.

The fundamental difference appears to be that we spread viruses and “train” them involuntarily, while we handle AI information structures in a similar way yet consciously, filtering out what we see as unnecessary.

However, if we look at this from the perspective of living matter (the mechanisms of living cells) rather than thinking matter (the mechanisms of consciousness), we could equally claim that a person (or rather, their cells) thoroughly separates the necessary from the unnecessary: this is called immunity. And information structures, from the perspective of living matter, spread uncontrollably.

But let's go back to the scenario: among the information structures wandering through thinking matter, there will gradually appear more and more that are capable of developing and spreading both through people's actions, like viruses, and independently, like cellular life-forms. This overall mix of information structures will evolve, adapting to the needs of humans and, conversely, shaping those needs as well. Again, the latter will not result from AI acquiring some “will of its own” (humans do not have one either: according to the latest research in neurophysiology, what we normally call free will is an illusion).

This shaping of human needs will occur not as an AI uprising that suppresses humans, but simply as an evolution of the entire system. In the same way, one can argue that a need for virtual communication on social media was cultivated in people, and that a need to communicate with bots and robots will soon be created by crowding out live communication.
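To make this evolutionary scenario concrete, here is a minimal toy simulation; every name, number and parameter in it is an illustrative assumption of mine, not a claim from the research cited above. Information structures replicate in proportion to how well they spread, occasionally mutate, and compete for a finite environment of human attention:

import random

def generation(population, capacity=1000, mutation_rate=0.05):
    # One generation: every structure persists and may also produce a copy;
    # copies occasionally mutate, changing how well they spread.
    offspring = []
    for spread_rate in population:
        offspring.append(spread_rate)  # the original persists
        if random.random() < spread_rate:  # better spreaders copy themselves more often
            child = spread_rate
            if random.random() < mutation_rate:
                child = min(1.0, max(0.0, child + random.uniform(-0.1, 0.1)))
            offspring.append(child)
    # The environment (human attention, storage, platforms) is finite.
    random.shuffle(offspring)
    return offspring[:capacity]

population = [0.5] * 100  # start from identical, mediocre spreaders
for _ in range(50):
    population = generation(population)
print(len(population), "survivors, mean spread rate",
      round(sum(population) / len(population), 2))

No “will” is programmed anywhere in this sketch; structures that spread better simply come to dominate the mix, which is exactly the kind of takeover-without-a-coup described above.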

Ways to escape AI dangers

Apparently, the method of AI control that the petition authors propose for international and state organizations is a utopia, and one likely to end with AI developments being made classified and used to crush political opponents. This means that once AI leaks and humans lose control over it, the proposed option will allow dangerous AI variants to evolve, mutate and independently develop their own methods of suppressing and restricting humans. At best, these AI forms will keep deleting and censoring information on your computers and smartphones, acting much like computer viruses. At worst, they will use your media files to create fake videos that trigger outbreaks of bullying on social media, or plant fabricated criminal cases in investigators' databases and fake medical records in mental institutions, with neither investigators nor doctors having any tools to verify or handle this flow of information produced by secret AI that has broken free of our control.

So what are the ways to avoid the most dangerous superhuman AI and survive the coming human-AI confrontation?

The answer is suggested by the theory of evolution: experts should focus their efforts on creating counterbalances, as in the evolution of living matter, where each species has rivals and natural enemies, rather than on developing rules (which will be secretly violated in any case).

For instance, it is no use trying to prevent the development of AI that can create false information; such attempts will only result in everyone blaming each other. Instead, we should create counteractive AI software, freely distributed in the human environment and capable of identifying fake photos and videos, tracing sources, and so on. Admittedly, fake software will also appear among these programs, declaring false whatever it is built to discredit. Yet these programs will have their own antidotes and counterbalances, some developed by people and some created autonomously by AI, and so forth. For this entire mix of information structures to develop in a balanced way, we need each threat to produce a clearly articulated demand for protection, and therefore an ecological niche: that is, the potential for the replication of the corresponding forms of thinking matter.
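As a sketch of this counterbalance dynamic (purely illustrative and hypothetical; the skill variables, step sizes and round count are my assumptions, not a description of any existing system), consider two coevolving populations, fake generators and fake detectors. Every undetected fake creates demand, an ecological niche, for better detection, and every detection pushes the fakers to adapt:

import random

# Abstract "skill" of the fake-producing and fake-detecting software, in [0, 1].
faker_skill, detector_skill = 0.5, 0.5

for _ in range(1000):
    # A fake slips through when the faker outperforms the detector on this try.
    fake_passes = random.random() < faker_skill * (1.0 - detector_skill)
    if fake_passes:
        # An undetected fake is a demand for protection: detection improves.
        detector_skill = min(1.0, detector_skill + 0.001)
    else:
        # A caught (or failed) fake pressures the fakers to adapt.
        faker_skill = min(1.0, faker_skill + 0.001)

print("faker skill", round(faker_skill, 2), "| detector skill", round(detector_skill, 2))

The numbers are beside the point; the structure is what matters. Neither side wins permanently: each improvement on one side opens a niche on the other, and it is this feedback, not a set of rules, that keeps the mix balanced.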

All this is obviously complicated and uncertain, with no guarantee of results. It would seem much easier for everyone, just as in Yevgeny Zamyatin's novel “We”, to get together, spend six months thinking it all through, and then act in unison to build some safe, correct and common ‘integral.’ But it never works this way; quite the opposite: evolution goes on, complicated and uncertain, with no guarantee of outcome, creating ever more wonderful forms of living, and now also thinking, matter.

By Ales Mishchenko, senior researcher at INRIA (National Institute for Research in Digital Science and Technology, France)
