By The Open Group
High availability, in its simplest definition, is “a characteristic of a system, which aims to ensure an agreed level of operational performance for a higher than normal period.”[1] For computer systems, high availability generally focuses on technologies that maximize system uptime by building in fault tolerance to overcome application, operating system, and hardware failures. Uptime is often measured by the “number of 9s” of availability percentage: 99% (two nines), for example, means a system is down for 3.65 days a year across planned and unplanned downtime. According to the 2015-2016 ITIC Reliability Report, 99% was a common expectation in the mid-1990s, but 71% of the companies surveyed now expect 99.99% availability (four 9s, or 52.56 minutes of downtime per year) and 25% expect 99.999% (five 9s, or 5.26 minutes of downtime).[2] For mission critical environments, 99.999% is a representative high availability operational performance expectation.
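The downtime figures above follow directly from the availability percentage. As a minimal sketch (not part of the ITIC report), the conversion for a non-leap year can be computed like this:

```python
def downtime_per_year(availability_pct):
    """Minutes of downtime per non-leap year at a given availability percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a year
    return minutes_per_year * (1 - availability_pct / 100)

for pct in (99.0, 99.99, 99.999):
    print(f"{pct}% -> {downtime_per_year(pct):.2f} minutes/year")
# 99.0%   -> 5256.00 minutes/year (3.65 days)
# 99.99%  -> 52.56 minutes/year
# 99.999% -> 5.26 minutes/year
```

Each additional “9” reduces the allowable downtime by a factor of ten, which is why the jump from four nines to five nines is so demanding in practice.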
The ITIC Report enumerates numerous reasons why high availability is not achieved (Exhibit 2 in the report), with human error being a major contributor (49%) and several others revolving around aspects of the operating system and software (security flaws, OS bugs, integration/interoperability issues, lack of documentation, lack of training, etc.).
The UNIX Standard is intended to address many of the issues impacting system reliability and uptime by providing a robust foundation: standard APIs that make it easier to build reliable and interoperable software, common utilities and commands that make systems easier to learn and administer, robust documentation (1,700+ manual pages), and a fair degree of quality assurance backed by more than 45,000 functional tests. Collectively, these features and their technical documentation create a strong foundation within a UNIX operating system, which can then complement the software and hardware solutions focused on improving high availability.
The UNIX standard and its compliant operating systems are only one piece of the high availability story: they are part of the broader ecosystem that companies across the globe rely on for high availability in key vertical industries such as telecom, banking, stock trading, pharmaceuticals/medical, and infrastructure. Five-nines availability can support millions of dollars in revenue, save lives, deliver astronauts safely into space, provide a robust foundation for global defense, and much more.
“A solid foundation is built upon standards, because standards provide assurance. Hewlett Packard Enterprise UNIX standards develop and deliver consistency. As we look at this, we talk about consistent APIs, consistent command line, and consistent integration between users and applications. A deterministic behavior is critical to high availability, because when it’s non-deterministic, things go wrong,” said Jeff Kyle, Director, Mission Critical Solutions, HPE. The UNIX standard is evolving to nurture the ecosystem and deliver what the market demands in these mission critical environments. When continuous computing for workloads is vital to the enterprise, the UNIX operating system is the best solution. The result is a proven infrastructure that accelerates business value and lowers your risk for the “always on” mission critical environments.[3]
Watch the UNIX: Journey of Innovation video to learn more about the UNIX value and importance to the market.
© The Open Group 2016
UNIX is a registered trademark of The Open Group. HP-UX is a registered trademark of HPE.
[1] Wikipedia, “High availability” — https://en.wikipedia.org/wiki/High_availability
[2] ITIC 2015 – 2016 Global Server Hardware, Server OS Reliability Report — http://www.idgconnect.com/view_abstract/33194/itic-2015-2016-global-server-hardware-server-os-reliability-report
[3] http://h17007.www1.hp.com/za/en/business-critical/operating-environments/hpux11i/index.aspx#.VvsPpT9ElhM