
When algorithms meet justice: Can AI develop a bill?

The question is no longer science fiction: governments around the world are actively testing artificial intelligence (AI) in legislative processes. Can a machine write a fair law? Igor Gunyashov and Egor Tykov, co-founders of the startup Plevako.ai, weigh in.

The early results of using AI in lawmaking are striking. In April 2025, the UAE made a breakthrough by creating the Regulatory Intelligence Office, a government agency in which AI becomes a full participant in drafting laws. The goal is ambitious: to speed up lawmaking by 70% through automated processing of large volumes of data.

The UAE is not even the first. Back in October 2023, Ramiro Rosário, a city councilman in Porto Alegre, Brazil, introduced an ordinance written entirely by ChatGPT; the council adopted it unanimously. Costa Rica, in turn, has used neural networks to draft documents on AI regulation.

Modern AI is genuinely becoming a powerful analytical tool in lawmaking. There are three key areas where it is already used effectively:

Legal database analysis — AI processes thousands of pages of statutes in minutes, identifying gaps and contradictions. This is especially valuable when regulating new fields: cryptocurrencies, biotechnology, and AI itself.

Draft generation — AI produces first versions of bills and amendments. According to the Law360 Pulse AI Survey, the share of American lawyers using generative AI has grown from one third to more than one half over the past year.

Monitoring of foreign law — AI tracks changes in other countries' legislation and helps adapt national norms accordingly.
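The first of these applications, finding contradictions across legal texts, can be illustrated with a toy sketch. Everything here is hypothetical: the definition pattern, the sample statutes, and the helper name are invented for illustration; real legal-analysis systems rely on far richer parsing and models.

```python
import re
from collections import defaultdict

def find_conflicting_definitions(statutes):
    """Flag terms defined differently across statute texts.

    `statutes` maps a statute name to its text. For this toy sketch,
    definitions are assumed to follow the pattern '"Term" means <body>.'
    """
    definitions = defaultdict(set)
    pattern = re.compile(r'"([^"]+)" means ([^.]+)\.')
    for name, text in statutes.items():
        for term, body in pattern.findall(text):
            definitions[term.lower()].add((name, body.strip()))
    # A term with two or more distinct definition bodies is a conflict candidate
    return {
        term: sorted(sources)
        for term, sources in definitions.items()
        if len({body for _, body in sources}) > 1
    }

# Invented sample texts: two codes defining the same term inconsistently
statutes = {
    "Tax Code": '"Small business" means a firm with fewer than 50 employees.',
    "Labor Code": '"Small business" means a firm with fewer than 100 employees.',
}
conflicts = find_conflicting_definitions(statutes)
# 'small business' is flagged because the two codes define it differently
```

Scaled up with proper legal NLP instead of a regular expression, this is the kind of cross-referencing that lets AI sweep thousands of pages in minutes.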

Where algorithms fail

Lawmaking is not only logic; it is also ethics, politics, and social justice. The main problem is therefore that AI does not understand context.

When analyzing the effectiveness of education spending, it may suggest anything: for example, closing neighboring schools to save money. From the standpoint of the numbers, this is logical; from the human standpoint, it is the destruction of entire communities.

Besides, AI has no moral judgment. In 2022, the U.S. Equal Employment Opportunity Commission sued the tutoring company iTutorGroup, whose automated hiring system screened out older applicants. The algorithm had "learned" to discriminate, and the company ultimately paid $365,000 to settle the case.

The risk of "hallucinations" is another problem. In 2023, two American lawyers were fined $5,000 for submitting a brief that cited non-existent court decisions, all generated by ChatGPT.

Laws are not formulas. They are born not in laboratory conditions but in disputes, negotiations, and concessions. In parliament, deputies from different regions, with different views and different electorates, fight for the interests of their constituents. They are not looking for the "most effective" option; they are looking for what works for the majority. AI does not know how to negotiate. It does not feel the tension and does not understand that behind the numbers stand people's fates. It just optimizes.

For example, suppose city authorities use AI to decide what to do with an abandoned park: build a playground or a small shopping center. Based on the data, the algorithm proposes retail space: the investment pays off in three years, and there are already playgrounds within a kilometer, which looks sufficient. Residents are outraged: the old park is their only green space, the "two playgrounds" sit in crowded courtyards, and reaching them takes a 20-minute walk. The AI failed to account for context: habits, accessibility, expectations. A human in its place would have offered a compromise: improve the park, save by scaling down the plans, and coordinate with residents. The work of a legislature is not just optimization but dialogue.

The future belongs to the hybrid model

The most effective option is for AI to handle the routine while humans make the decisions. Machines analyze, search for gaps, and generate drafts; people remain responsible for final decisions, especially in ethically sensitive matters.
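The division of labor described above can be sketched as a simple review gate. This is a minimal illustration under invented assumptions: `generate` stands in for any drafting model, and `is_sensitive` is a hypothetical policy check that routes ethically loaded topics to a human reviewer.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DraftReview:
    """Human-in-the-loop gate: AI drafts everything, but sensitive
    topics are held for mandatory human review before they proceed."""
    generate: Callable[[str], str]      # any drafting model (stand-in here)
    is_sensitive: Callable[[str], bool]  # policy check (stand-in here)
    log: list = field(default_factory=list)

    def propose(self, topic: str) -> dict:
        draft = self.generate(topic)
        status = "pending_human_review" if self.is_sensitive(topic) else "auto_drafted"
        record = {"topic": topic, "draft": draft, "status": status}
        self.log.append(record)  # every draft is logged for accountability
        return record

# Toy stand-ins: a template "model" and a keyword-based sensitivity check
reviewer = DraftReview(
    generate=lambda t: f"Draft bill on {t}: [AI-generated text for human editing]",
    is_sensitive=lambda t: any(w in t.lower() for w in ("school", "health", "housing")),
)

routine = reviewer.propose("road sign standards")      # routed straight through
sensitive = reviewer.propose("school funding reform")  # held for a human
```

The point of the design is that the machine never has the last word on ethically sensitive questions: the gate forces a human sign-off exactly where judgment, not optimization, is required.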

Ahead lies the development of "explainable" AI, able to justify its decisions with references to precedents and norms. This will, however, require new supervisory bodies, quality standards, and mandatory verification of AI-generated texts for compliance with existing legislation.

Today, we need to train a new generation of lawyers who understand not only the law but also the principles of AI. AI is already entering boardrooms around the world. It will not replace humans, but it will become a powerful analytical ally. The main thing to remember is that laws are created for people, and only a human can decide what is fair.
