
Hackers learned how to crack neural networks

Hackers are successfully attacking neural networks, tampering with their image recognition capabilities. Yury Vizilter, Head of the Department for Machine Vision and Data Mining at the State Research Institute of Aviation Systems (GosNIIAS), shared his insights during a roundtable titled “Artificial Intelligence in Recognition: Digital Fascism or a New Step to Freedom,” organized by the Invest Foresight business magazine on November 8 as part of the Engineering the Future Club project.

According to the expert, attackers can target image recognition systems directly.

“A recognition network recognizes a dog when there is a dog in the image, but after an attack, it starts making mistakes. For example, it can identify the face of a famous politician as the face of a protest leader,” said the expert.
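The misclassification the expert describes is typically produced with adversarial perturbations: tiny, targeted changes to the input that flip the model's decision. A minimal sketch of the idea, using a toy linear classifier and a fast-gradient-sign-style perturbation (the real attacks target deep networks, and the weights and inputs below are made up for illustration):

```python
import numpy as np

# Toy linear "recognizer": positive score -> "dog", otherwise "not dog".
# Weights and input are hypothetical illustration values.
w = np.array([1.0, -1.0, 0.5])
x = np.array([0.4, 0.1, 0.2])  # clean input, correctly classified

def predict(v):
    return "dog" if w @ v > 0 else "not dog"

# Fast-gradient-sign-style attack: step against the gradient of the score
# (for a linear model, the gradient w.r.t. the input is just w).
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x))      # clean input: "dog"
print(predict(x_adv))  # perturbed input: "not dog"
```

The perturbation is bounded by `eps` in each component, which is why, against real image networks, such changes can be small enough to be invisible to a human while still flipping the prediction.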

Attacks can also target the optical flow estimation used in autonomous driving, he warned.

“This means unmanned vehicles can be attacked. Suppose the cars moving ahead of an autonomous vehicle disappear from its sensors – this can cause an accident. Such attacks can be mounted from the physical world,” said Yury Vizilter. “Scientists are debating whether natural human neural networks can be attacked in a similar way. In fact, it is already happening, only people do not see it as an attack. Take news filtering through aggregators: we may think the news feed is being adapted to us, but perhaps it is being done to control us.”

In conclusion, Yury Vizilter said that artificial intelligence is dangerous, and that the state is unlikely to be able to maintain a monopoly on the technology in the future. On the other hand, these technologies can greatly improve the quality of life, so they should still be developed further.
