
Machine Learning, AI, and neural networks: the most important trends of 2017 and 2018

Neural Information Processing Systems (NIPS) is one of the leading forums on machine learning and applied neural networks. It showcases specific achievements in artificial intelligence (AI) and deep learning, as well as the field's main directions. The NIPS 2017 conference and the reports included in the program for NIPS 2018 (December 2018) outline the four most popular areas of discussion in the ML and AI community.

Objectivity in Machine Learning

Objectivity, or fairness, is one of the topical problems in analytics and machine learning. In the earlier stages of ML development, it was expected that models, or the robots of the future, unlike people, would produce objective and unbiased results. Now, however, the ML community is asking the following question:

“Who can guarantee that the models and systems under development are free of the biases and prejudices of their developers?”

Bias in ML may result from several factors, including incorrect or incomplete input data, errors in model design, and misinterpretation of results. Moreover, because the underlying algorithms are complex, it is often difficult to get an explanation from a modern ML system of how a particular solution was obtained (the black-box problem). As a result, biased (subjective) model outputs lead to erroneous business decisions. For businesses, this currently matters most in applications such as credit risk assessment, fraud prevention, and labor market analytics. The importance of fairness in ML is underscored by the fact that companies such as Google, Facebook, Amazon, Microsoft, and IBM are actively participating in the discussion and starting to offer the first solutions.
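To make "measuring bias" concrete, here is a minimal sketch computing two widely used fairness metrics, demographic parity difference and equal opportunity difference, for a hypothetical credit-scoring model. The data, the group split, and the decisions are all invented for the illustration; real audits use far larger samples and several complementary metrics.

    import numpy as np

    # Hypothetical credit-scoring outputs: 1 = loan approved, 0 = denied.
    # 'group' splits applicants into two demographic groups (A = 0, B = 1).
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1])  # actual repayment
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1, 0, 0])  # model decision
    group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

    def demographic_parity_diff(y_pred, group):
        # Difference in approval rates between the two groups.
        return y_pred[group == 0].mean() - y_pred[group == 1].mean()

    def equal_opportunity_diff(y_true, y_pred, group):
        # Difference in true-positive rates: among applicants who would
        # actually repay, how often does each group get approved?
        tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
        tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
        return tpr_a - tpr_b

    print("Demographic parity difference:", demographic_parity_diff(y_pred, group))
    print("Equal opportunity difference:", equal_opportunity_diff(y_true, y_pred, group))

A nonzero value on either metric signals that the model treats the groups differently, which is exactly the kind of disparity the fairness discussion at NIPS aims to detect and correct.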

Games with incomplete information

“When will the machine finally be able to consistently beat people at intellectual games?” – this question has been asked since the early days of machine learning.

In games where competitors have full information, the machine has already defeated man: algorithms have been found that guarantee a win or at least a draw (draughts, for example, was solved in 2007), and chess engines have reliably outplayed the strongest human players since the late 1990s. Games with incomplete information, however, remained on man's side until recently. One such game is poker, a popular card game in which players do not see each other's cards and may bluff by betting on an inferior hand. A person can try to sense whether an opponent is bluffing using intuition, an ability the machine theoretically does not possess. At NIPS 2017, the Libratus model was presented, which was able to beat man at poker: earlier that year, Libratus played a tournament against four of the world's best poker players and won. In games with incomplete information, it is impossible to prove that an algorithm is optimal, so man may yet defeat machine in a poker rematch. But the mere fact of a machine beating the world's best poker players speaks volumes.
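Libratus is reported to build on counterfactual regret minimization (CFR). The sketch below is not Libratus's code; it shows only regret matching, the core loop behind CFR, applied to rock-paper-scissors against a fixed opponent strategy that is invented for the demo.

    import numpy as np

    # Regret matching, the building block of counterfactual regret
    # minimization (the algorithm family reported to power Libratus).
    ACTIONS = 3                       # 0 = rock, 1 = paper, 2 = scissors
    payoff = np.array([[ 0, -1,  1],  # payoff[mine, theirs] for the row player
                       [ 1,  0, -1],
                       [-1,  1,  0]])

    def strategy_from_regrets(regrets):
        # Play actions in proportion to their positive accumulated regret;
        # fall back to uniform play when nothing has positive regret.
        positive = np.maximum(regrets, 0)
        total = positive.sum()
        return positive / total if total > 0 else np.ones(ACTIONS) / ACTIONS

    rng = np.random.default_rng(0)
    regrets = np.zeros(ACTIONS)
    strategy_sum = np.zeros(ACTIONS)

    for _ in range(50_000):
        strategy = strategy_from_regrets(regrets)
        strategy_sum += strategy
        mine = rng.choice(ACTIONS, p=strategy)
        theirs = rng.choice(ACTIONS, p=[0.4, 0.3, 0.3])  # opponent over-plays rock
        # Regret: what each alternative would have earned minus what we earned.
        regrets += payoff[:, theirs] - payoff[mine, theirs]

    # The *average* strategy is what converges; against this opponent it
    # should concentrate on paper (action 1), which beats the over-played rock.
    print(strategy_sum / strategy_sum.sum())

The key property is that the algorithm never needs to see the opponent's reasoning, only its own accumulated regrets, which is what makes the approach workable under incomplete information.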

Artificial intelligence and self-learning

Artificial intelligence has advanced greatly in areas such as automatic recognition of complex objects (faces, for example) and the processing and execution of voice commands. However, human intellect also includes other abilities – in particular, the ability to learn on one's own. One of today's hottest topics in AI is modeling human intellectual abilities and using them to make machines think the way humans do, including the ability to self-learn. One approach here applies learning methods built on advanced gaming platforms and probability theory. In particular, algorithms running on graphics and video game platforms model how humans perceive 3D structures and, in the course of play, learn to handle objects based on their physical characteristics (round, soft, large, hazardous, useful, and so on). These algorithms are used in self-learning models, with artificial intelligence replicating the process by which humans learn video games. This approach is already used in interactive data analysis, robotics, and the modeling of scientific discovery, with AI acting as a researcher or scientist.
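The article does not name a specific algorithm, but tabular Q-learning is the textbook example of an agent teaching itself through trial and error in a game-like environment. The sketch below uses an invented five-cell "corridor" game; all names, sizes, and hyperparameters are illustrative.

    import numpy as np

    # Tabular Q-learning on a toy "corridor" game: five cells, start in cell 0,
    # reward +1 for reaching cell 4. Actions: 0 = step left, 1 = step right.
    N_STATES, N_ACTIONS = 5, 2
    alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration
    Q = np.zeros((N_STATES, N_ACTIONS))
    rng = np.random.default_rng(0)

    def step(state, action):
        # Environment dynamics: move left or right; reward only at the goal.
        next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
        done = next_state == N_STATES - 1
        return next_state, (1.0 if done else 0.0), done

    for episode in range(500):
        state, done = 0, False
        while not done:
            # Epsilon-greedy with random tie-breaking: explore occasionally,
            # otherwise pick one of the best-known actions.
            if rng.random() < epsilon:
                action = int(rng.integers(N_ACTIONS))
            else:
                action = int(rng.choice(np.flatnonzero(Q[state] == Q[state].max())))
            next_state, reward, done = step(state, action)
            # Core update: nudge Q toward reward + discounted best future value.
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

    print(Q)  # after training, "right" (column 1) dominates in every non-goal state

No one tells the agent the rules of the game; it infers which actions are useful purely from the rewards its own play produces, which is the essence of the self-learning idea described above.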

Deep learning for forecasting the future

People want to know what awaits them in the future, be it the stock market, the economy, business planning, or traffic congestion. Currently, the most common tools for predicting future processes are time series models. For all their advantages, their main shortcoming is the assumption that the predicted process (such as financial quotations, the economic situation, or an exchange rate) will inherit the basic characteristics of the changes that have already taken place. This makes it hard to accurately predict events that are rare or have never occurred before, such as crises triggered by terrorist attacks or wars, sudden economic and market contractions, unusually prolonged economic growth, and so on.
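To make this extrapolation assumption concrete, the sketch below fits a classical autoregressive AR(2) model to a synthetic series: every forecast is, by construction, a weighted blend of past values, so a shock unlike anything in the history cannot be anticipated. The data and the lag order are invented for the illustration.

    import numpy as np

    # Fit a simple AR(2) model by least squares: x[t] ≈ c + a1*x[t-1] + a2*x[t-2].
    rng = np.random.default_rng(1)
    t = np.arange(300)
    series = 10 + 0.02 * t + np.sin(t / 8) + rng.normal(0, 0.2, t.size)  # synthetic data

    X = np.column_stack([np.ones(series.size - 2), series[1:-1], series[:-2]])
    y = series[2:]
    c, a1, a2 = np.linalg.lstsq(X, y, rcond=None)[0]

    # Roll the model forward 20 steps from the end of the observed history.
    history = list(series[-2:])
    forecast = []
    for _ in range(20):
        nxt = c + a1 * history[-1] + a2 * history[-2]
        forecast.append(nxt)
        history.append(nxt)

    print(np.round(forecast, 2))  # a smooth continuation of past patterns

The forecast simply continues the trend and cycle it has already seen; a war, a crash, or any other unprecedented event is invisible to it, which is precisely the limitation discussed above.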

The projects featured at NIPS 2017 showed attempts to get around this restriction. The discussions concerned, in particular, the application of deep learning to forecasting. The main advantage of deep learning is its ability to synthesize a host of factors that affect the predicted process. It was noted that deep learning is taking its first steps in the time series area; given the rate at which machine learning is developing, specific results should soon be ready for discussion.
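As a rough illustration of "synthesizing many factors", the sketch below trains a tiny one-hidden-layer network in plain numpy to predict a target from several input factors at once. Everything here – the factors, the network size, the learning rate – is invented for the example; a real forecasting system would use a deep learning framework, sequential inputs, and far more data.

    import numpy as np

    # A tiny fully connected network predicting a target value from several
    # simultaneous input factors (a nonlinear blend the model must discover).
    rng = np.random.default_rng(2)
    n, n_factors, hidden = 1000, 3, 16
    X = rng.normal(size=(n, n_factors))
    y = 0.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] ** 2

    W1 = rng.normal(0, 0.5, (n_factors, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));         b2 = np.zeros(1)

    lr = 0.05
    for epoch in range(2000):
        # Forward pass with tanh activation.
        h = np.tanh(X @ W1 + b1)
        pred = (h @ W2 + b2).ravel()
        err = pred - y
        # Backward pass: gradients of the mean squared error.
        grad_pred = (2 * err / n)[:, None]
        gW2 = h.T @ grad_pred;          gb2 = grad_pred.sum(0)
        gh = grad_pred @ W2.T * (1 - h ** 2)
        gW1 = X.T @ gh;                 gb1 = gh.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    print("final MSE:", np.mean(err ** 2))  # should fall well below the initial error

Unlike the AR(2) model above, the network is not restricted to blending past values of a single series: it can, in principle, fold in any number of driving factors, which is the advantage the NIPS discussions highlighted.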
