By The Open Group
One significant example of “fake news” in 2016 was the announcement that Dennis Ritchie, one of the original authors of the UNIX® Operating System, had passed away. In fact, he died in 2011, a week after the death of Steve Jobs. 2016 marked the fifth anniversary of his passing, but one in which the extent of his contribution to the world was not overshadowed by others, and could be properly acknowledged.
A lot of the central UNIX philosophy that he engineered alongside Bell Labs colleagues Ken Thompson and Brian Kernighan lives on to this day: building systems from a range of modular and reusable software components, so that while many UNIX programs do quite trivial things in isolation, they combine with other programs to become general and useful tools. The envisioned ability to design and build systems quickly, and to reuse tried and trusted software components, remains a cultural norm in environments that employ Agile and DevOps techniques some 45 years later.
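That philosophy is easiest to see in a pipeline. The sketch below chains standard utilities into an ad-hoc word-frequency counter; no single tool knows about the others, yet together they form something genuinely useful:

```shell
# Each stage does one trivial job: tr splits words onto separate
# lines, sort groups identical words together, uniq -c counts each
# group, and a final sort -rn ranks them by frequency.
printf 'to be or not to be\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c \
  | sort -rn
```

Swapping any stage (say, feeding the pipeline from a file, or piping the result into `head`) changes the tool without touching the others, which is precisely the reusability the original authors designed for.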
Their foresight was such that the same tools and user interface norms were replicated by the GNU project atop the Linux kernel. With the advent of the Internet, and with interconnect standards agreed by the IETF and latterly the W3C consortium, the same philosophy extended to very low-cost industry standard servers. This, followed by the substitution of vendor-specific buses with ever faster Ethernet and IP-based connections, allowed processors, storage, and software components to be distributed in a scale-out fashion. The very nature of these industry standards meant that the geography over which these system components could be distributed extended well beyond a single datacentre – in some cases, cognizant of latency and reliability concerns, to span the world. The end result is that while traditional UNIX systems embody reliability and n+1 scaling, there is another approach based on the same core components that can scale out. With that, an operation as simple as a search on Google can involve the participation of over 1,000 geographically dispersed CPUs, and typically returns results to the end user in under 200 milliseconds. However, building such systems – which are architected assuming individual device and communication path failures – tends to follow a very different set of design patterns.
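One of those failure-assuming patterns can be sketched even in the classic shell idiom. The `retry` function below is a hypothetical minimal example (the name and parameters are our own, not from any particular system) of the retry-with-exponential-backoff technique that scale-out architectures lean on when an individual component or network path may fail:

```shell
# retry MAX_ATTEMPTS COMMAND [ARGS...]
# Re-runs COMMAND until it succeeds, doubling the delay between
# attempts (1s, 2s, 4s, ...); gives up after MAX_ATTEMPTS failures.
retry() {
  max=$1; shift
  delay=1
  attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    [ "$attempt" -ge "$max" ] && return 1
    sleep "$delay"
    delay=$((delay * 2))
  done
}

# Example invocation: succeeds immediately, so prints once.
retry 3 true && echo "succeeded"
```

In a real deployment the wrapped command would be a network call rather than `true`, and production systems usually add jitter to the delay so that many failing clients do not retry in lock-step.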
The economics of using cloud-based Linux infrastructure are often perceived as attractive, though we’re just past the “war” stage in which each cloud vendor’s stack is inherently proprietary. There are some laudable efforts to abstract code so that it can run on multiple cloud providers; one is FOG in the Ruby ecosystem. Another is Cloud Foundry, which is executing particularly well in Enterprises with large investments in Java code. Emergent serverless platforms (event-driven, auto-scalable function-as-a-service, where the whole supporting infrastructure is abstracted away) are probably the most extreme examples of chaotic evolution – and very vendor-specific – at the time of writing.
The antithesis of open platforms is this effort to make full use of unique features in each cloud vendor’s offerings – a traditional lock-in strategy intended to stop their services becoming a price-led commodity. It is the sort of thing that the UNIX community solved together many years ago by agreeing effective, vendor-independent standards, where certification engendered an assurance of compatibility and trust, freeing the industry to focus on higher-end services to delight their end customers without fear of unnecessary lock-in.
Given the use of software designed to functionally mirror that of UNIX systems, one very valid question is: “What would it take for Linux vendors to have their distributions certified against recognized, compelling industry standards – such as UNIX 03?” That way, customers could ascribe the same level of vendor-independent assurance and trust to the “scale-out” sibling as is enjoyed by the largest Enterprise UNIX system vendors.
Given the licensing conditions on the Linux kernel and associated open source components, both Huawei and Inspur have achieved certification of their Red Hat derived operating systems: Huawei EulerOS 2.0 and Inspur K-UX 3.0. This is no mean feat, and an indication that their customers have the most Enterprise-ready Linux OS available on Intel architecture server platforms today.
This is a level of certification that we don’t think will go unnoticed in large emerging markets of the world. That said, we’d welcome any other Linux vendor to prove their compliance to the same standard. In the interim, well done Huawei, and well done Inspur – proving it can be done.
Version 3 of the Single UNIX® Specification: the UNIX 03 Product Standard:
Huawei Technology achieve UNIX® v03 Conformance of Huawei EulerOS 2.0 Operating System:
Inspur achieve UNIX® 03 Conformance of Inspur K-UX 3.0:
UNIX® Philosophy: https://en.wikipedia.org/wiki/Unix_philosophy