Operating systems have been on the market for decades, in a wide variety to suit every taste and need. Microsoft Windows has long been the most popular desktop operating system, the de facto worldwide standard for desktop client infrastructure. Despite occasional news of yet another 'killer' of conventional systems, the situation has not changed much: Windows still controls about 56.4% of the market. Apple's macOS, meanwhile, is noticeably eating into the Microsoft product's share. Linux accounts for a share as well, and it is no longer used exclusively by geeks: import substitution trends in a number of industries have made Linux-based systems a viable choice for user workstations. The situation is different in the server segment, however. Although Linux has not yet ousted other server operating systems, it is clearly heading toward market dominance in the near future.
These developments were predicted years in advance. In the 2000s and 2010s, the server market was dominated by commercial Unix systems such as Solaris, AIX, and HP-UX, which ran on industrial-grade servers. As more cost-effective x86 platforms spread, and as virtual infrastructures and cloud services gained popularity, the market began shifting towards open systems. Many large businesses still run monolithic core applications on Unix, because heavy databases and large ERP and OLTP systems require immense computing power and are optimized for it. Maintaining these systems, however, is expensive, and they lack the flexibility to add new features quickly. Increasingly, developers build new services and products with Linux and open-source components in mind.
The new approach offers the clear benefit of scaling out to add capacity. While scaling up, the practice used in classic mainframe systems, is limited by the available hardware, a scale-out infrastructure adds capacity by distributing the load across many small, inexpensive systems. Scale-out infrastructures are often geographically distributed across several buildings or even regions. Social media, and the web industry in general, work on exactly this principle: data is distributed and stored across a huge network of computing nodes deployed in different regions rather than in one giant data center. As a result, the required level of performance is achieved very cost-effectively, and it can be increased almost indefinitely.
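The load-distribution idea behind scale-out can be sketched in a few lines: each record's key is hashed to pick one of many small nodes, so capacity grows simply by adding nodes. This is a minimal illustrative sketch, not any platform's actual implementation; the node names and the simple modulo scheme are hypothetical.

```python
import hashlib

# Hypothetical fleet of small, inexpensive nodes in different regions.
NODES = ["node-eu-1", "node-eu-2", "node-us-1", "node-us-2"]

def pick_node(key: str, nodes: list[str]) -> str:
    """Map a record key to one node by hashing (simple modulo sharding)."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

# Records spread across the whole fleet; growing NODES grows capacity.
placement = {user: pick_node(user, NODES) for user in ["alice", "bob", "carol"]}
```

Real systems use more elaborate schemes (consistent hashing, replication) so that adding a node moves only a fraction of the data, but the principle is the same: many small machines instead of one large one.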
Another advantage is that Linux is open and free, which prevents vendor lock-in, a situation in which customers depend on a single manufacturer. There are, of course, commercial distributions such as Red Hat and SUSE for customers who run mission- and business-critical applications and want these essential services backed by the OS vendor. But there are also completely free distributions such as Debian, CentOS, and Ubuntu. And if a company lacks the required in-house expertise, it can always engage third-party professionals for an expert review.
The share of Linux users on the global OS market doubled over the first six months of 2020 alone. The dynamic may well continue over the next few years, driven by the wider availability of cloud technology and growing interest in container-based virtualization. Docker can, in principle, run on Windows as well, but Linux is the native environment for containers and for orchestrators like Kubernetes. The broader business digitalization trend is also driving the popularity of Linux infrastructure.
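To illustrate what orchestration means in practice, a minimal Kubernetes Deployment manifest asks the cluster to keep several identical container replicas running across its (typically Linux) nodes. This is a generic sketch; the names `web` and the `nginx:1.25` image are placeholders, not part of any specific deployment.

```yaml
# Illustrative Deployment: Kubernetes keeps three replicas of one
# container running, rescheduling them if a node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Raising `replicas` is the scale-out idea from above applied to stateless services: more capacity comes from more small, identical instances.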
Companies often look for ready-made platforms and components that can easily be adapted to specific tasks. CROC's cloud platform is one example: it is based on Linux KVM and combines integrated open-source and proprietary products and services. Linux is the best option for creating unique products of this kind.
By Alexander Dubsky, Head of Infrastructure Design and Maintenance Group, CROC Cloud Services