Tag Archives: IBM

The Enviable Pedigree of UNIX® and POSIX®

By Andrew Josey, VP, Standards and Certification, The Open Group

Technology can be a fickle thing. Spurred by perpetual innovation, the one constant in the tech industry is change. As such, we can expect that whatever is the hottest thing in the industry today—Cloud, Big Data, Mobile, Social, what have you—will be yesterday’s news within a few years’ time. That is how the industry moves and sustains itself, with constant development and creativity—all of which is only getting faster and faster.

But today’s breakthroughs would be nowhere and would not have been possible without what came before them—a fact we sometimes forget. Mainframes led to personal computers, which gave way to laptops, then tablets and smartphones, and now the Internet of Things. Today much of the interoperability we enjoy between our devices and systems—whether at home, the office or across the globe—owes itself to efforts in the 1980s and 1990s to make an interoperable operating system (OS) that could be used across diverse computing environments—the UNIX operating system.

Created at AT&T Bell Laboratories in the early 1970s, the UNIX operating system was developed as a self-contained system that could be easily adapted and run on commodity hardware. By the 1980s, UNIX workstations were widely used in academia and commercially, with a large number of system suppliers, such as HP, IBM, and Sun Microsystems (now Oracle), developing their own flavors of the OS.

At the same time, a number of organizations began standardization efforts around the system. By the late 1980s, three separate organizations were publishing different standards for the UNIX operating system, including IEEE, ISO/IEC JTC1 and X/Open (which eventually became The Open Group).

As part of the standardization efforts undertaken by IEEE, it developed a small set of application programming interfaces (APIs). This effort was known as POSIX, or Portable Operating System Interface. Published in 1988, the POSIX.1 standard was the first attempt outside the work at AT&T and BSD (the UNIX derivative developed at the University of California at Berkeley) to create common APIs for UNIX systems. In parallel, X/Open (an industry consortium consisting at that time of over twenty UNIX suppliers) began developing a set of standards aligned with POSIX that consisted of a superset of the POSIX APIs. The X/Open standard was known as the X/Open Portability Guide and had an emphasis on usability. ISO also got involved in the efforts, by taking the POSIX standard and internationalizing it.

In 1995, the Single UNIX Specification was created to represent the core of the UNIX brand. Born of a superset of POSIX APIs, the specification provided a richer set of requirements than POSIX for functionality, scalability, reliability and portability for multiuser computing systems. At the same time, the UNIX trademark was transferred to X/Open (now The Open Group). Today, The Open Group holds the trademark in trust for the industry, and suppliers that develop UNIX systems undergo certification, which includes over 40,000 tests, to assure their compatibility and conformance to the standard.

These trifurcated efforts by separate standards organizations continued through most of the 1990s, with the people involved in developing the standards constantly bouncing between organizations and separate meetings. In late 1997, a number of vendors, tired of keeping track of three separate parallel efforts, suggested that all three organizations come together to work on one standard.

In 1998, The Open Group, which had formed through the merger of X/Open and the Open Software Foundation, met with the ISO/IEC JTC 1 and IEEE technical experts for an inaugural meeting at IBM’s offices in Austin, Texas. At this meeting, it was agreed that they would work together on a single set of standards that each organization could approve and publish. Since then the approach to specification development has been “write once, adopt everywhere,” with the deliverables being a set of specifications that carry the IEEE POSIX designation, The Open Group Technical Standard designation, and the ISO/IEC designation. Known as the Austin Group, the three bodies still work together today to progress the joint standard. The new standard not only streamlined the documentation needed to work with the APIs but also consolidated what was available to the market under one common standard.

A constant evolution

As an operating system that forms the foundational underpinnings of many prominent computing systems, the UNIX OS has always had a number of advantages over other operating systems. One of those advantages is that the standard APIs make it possible to write code that conforms to the standard and runs on multiple systems made by different vendors. If you write your code to the UNIX standard, it will run on systems made by IBM, HP, Oracle and Apple, since they all follow the UNIX standard and have submitted their operating systems for formal certification. Free OSs such as Linux and BSD also support the majority of the UNIX and POSIX APIs, so those systems are compatible with the others as well. That level of portability is key for the industry and for users, enabling application portability across a wide range of systems.
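
To make that portability concrete, here is a minimal sketch (our illustration, not part of the standard or its certification suites) of a C program written purely to standard interfaces. Because it calls only POSIX functions such as gethostname(), getpid() and sysconf(), the same source should compile and run unmodified on certified UNIX systems such as AIX, HP-UX, Solaris and OS X, as well as on Linux and BSD.

    /* Portability sketch: only standard POSIX interfaces are used below,
     * so no vendor-specific headers or calls are required. */
    #include <stdio.h>
    #include <unistd.h>   /* POSIX: gethostname(), getpid(), sysconf() */

    int main(void)
    {
        char host[256];

        if (gethostname(host, sizeof(host)) != 0)
            host[0] = '\0';  /* fall back to an empty name on error */

        printf("host: %s  pid: %ld  max open files per process: %ld\n",
               host, (long)getpid(), sysconf(_SC_OPEN_MAX));
        return 0;
    }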

In addition, UNIX is known for its stability and reliability, even at great scale. Apple claims over 80 million Mac OS X systems in use today, all of them UNIX certified. The UNIX OS also forms the basis for many “big iron” systems. The operating system’s high throughput and processing power have made it an ideal OS for everything from supercomputing to systems used by the government and financial sectors, all of which require high reliability, scale and fast data processing.

The standard has also been developed so that users can “slice and dice” portions of it when they do not require the full functionality of the system, since one size does not fit all. Known as “profiles,” these subsets of the standard API sets can be used for any number of applications or devices. So although they are not full UNIX systems, a great many devices out there carry the standard APIs inside them, notably set-top boxes, home routers, in-flight entertainment systems and many smartphones.
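
The formal profile definitions are more involved than this, but the short sketch below (again our illustration) shows the same idea from an application’s point of view: the standard defines optional functionality, and a program can ask, using the standard feature-test macros and sysconf() names from <unistd.h>, which options a particular system actually provides.

    /* Option-discovery sketch: every name used here is defined by POSIX;
     * a system that omits an option simply reports it as unavailable. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("POSIX version reported by sysconf: %ld\n",
               sysconf(_SC_VERSION));

    #if defined(_POSIX_THREADS) && _POSIX_THREADS > 0
        printf("threads option: supported\n");
    #else
        printf("threads option: not advertised at compile time\n");
    #endif

    #if defined(_POSIX_ASYNCHRONOUS_IO) && _POSIX_ASYNCHRONOUS_IO > 0
        printf("asynchronous I/O option: supported\n");
    #else
        printf("asynchronous I/O option: not advertised at compile time\n");
    #endif
        return 0;
    }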

Although the UNIX and POSIX standards tend to be hidden, deeply embedded in the technologies and devices they enable today, they have been responsible for a great many advances across industries from science to entertainment. Consider the following:

  • Apple’s Mac OS X, the second most widely used desktop operating system today, is a certified UNIX system
  • The first web server for the World Wide Web, developed by Tim Berners-Lee, was developed on a UNIX system
  • The establishment of the World Wide Web was driven by the availability of connected UNIX systems
  • IBM’s Deep Blue supercomputer, a UNIX system, was the first computer to defeat reigning World Chess Champion Garry Kasparov in a match, in 1997
  • Both DNA and RNA were sequenced using a UNIX system
  • For eight consecutive years (1995-2002), each film nominated for an Academy Award for Distinguished Achievement in Visual Effects was created on Silicon Graphics computers running the UNIX OS.

Despite what one might think, both the UNIX and POSIX standards are still under continual development today. The community behind each is very active, meeting more than 40 times a year to continue developing the specifications.

Technology is always changing, so there are always new areas of functionality to standardize. The standard is also large, so there is a great deal of maintenance work and ongoing scope to improve clarity and portability across systems.

Although it might seem that once a technology becomes standardized it becomes static, standardization usually has the opposite effect: once there is a standard, the market tends to grow even more, because organizations know that the technology is trusted and stable enough to build upon. Once the platform is there, you can add capabilities to it and build applications on top of it. We have about 2,000 application interfaces in UNIX today.

And as internetworked devices continue to proliferate in today’s connected world, chances are that many of the systems needing big processing power, high reliability and huge scale are going to have a piece of the UNIX standard behind them, even if it’s deep beneath the covers.

Andrew Josey is VP, Standards and Certification at The Open Group, overseeing all certification and testing programs. He also manages the standards process for The Open Group.

Since joining the company in 1996, Andrew has been closely involved with the standards development, certification and testing activities of The Open Group. He has led many standards development projects including specification and certification development for the ArchiMate®, TOGAF®, POSIX® and UNIX® programs.

He is a member of the IEEE, USENIX, UKUUG, and the Association of Enterprise Architects (AEA).  He holds an MSc in Computer Science from University College London.


The Cloud: What’s UNIX® Got to Do With It?

By The Open Group

Cloud computing has come of age and is the solution of choice for CIOs looking to maximize use of resources while minimizing capital spend.[1] Cloud solutions, whether infrastructure, platform or software as a service, have the appeal of business agility[2] without requiring an understanding of what is “under the hood”. However, what’s under the hood becomes even more important in a Cloud environment, because multiple services can be running, with potential impact on numerous customers and the services provided to them. For software as a service (SaaS) and platform as a service (PaaS), the hosting operating system is a critical component of the Cloud environment, as it directly impacts the performance of the Cloud solution. For infrastructure as a service (IaaS), the operating system is a critical choice made by the customer.

The CIO View

The CIO loves the idea of being able to rapidly provide on-demand, ubiquitous computing resources to their company without the management overhead and integration challenges. The hardware infrastructure, network infrastructure, storage, hypervisor and OS must have the high availability, scalability and performance needed to meet the “5-nines” reliability expected (SCIT Report), with the operating system being an especially critical component in that stack.[3]

UNIX, a Robust Platform for the Cloud

The Cloud needs to be highly available, scalable, secure and robust for high-demand computing. A certified UNIX® OS can provide this and enables companies to innovate in the Cloud. A CIO can look at each element of the stack with a high degree of assurance that the Cloud solution has been well tested and has proven system and application interoperability, which also simplifies solution integration. The UNIX OS reinforces this simplicity, delivering peace of mind for IT directors and above.

Who Is Choosing a UNIX Cloud?

Cloud solution and hosting providers look to a UNIX Cloud infrastructure to serve financial institutions that need to support highly transactional environments such as the online and mobile banking marketplace.[4] Moreover, a UNIX Cloud infrastructure provides a cost-effective, secure, and redundant environment.[5]

“Verizon serves both customers and employees with a UNIX Cloud infrastructure that implements enhanced agility, superior performance, easy maintainability, and effective cost control,” said Chris Riggin, Enterprise Architect at Verizon.[6]

HPE, IBM, and Oracle have expanded their service offerings to deliver UNIX mission-critical cloud and enterprise infrastructure, including their branded systems. These UNIX Cloud solutions help their customers continue to scale while delivering business continuity and a low total cost of ownership.[7]

By The Open Group

Get more tools and resources on UNIX innovation at www.opengroup.org/UNIX, or review the resources referenced in the notes below.


© 2016 The Open Group

UNIX® is a Registered Trademark of The Open Group. Oracle® Solaris is a registered trademark of Oracle Corporation. IBM AIX is a trademark of IBM Corporation. HP-UX is a registered trademark of HPE.

 

[1] Harvard Business Review, Cloud Computing Comes of Age, Page 3, 2015, https://www.oracle.com/webfolder/s/delivery_production/docs/FY15h1/doc16/HBR-Oracle-Report-webview.pdf

[2] Harvard Business Review, Cloud Computing Comes of Age, Page 3, 5, 6; 2015, https://www.oracle.com/webfolder/s/delivery_production/docs/FY15h1/doc16/HBR-Oracle-Report-webview.pdf

[3] UNIX: The “Always On” OS, 2016, https://blog.opengroup.org/2016/04/18/unix-the-always-on-os/

[4] Connectria / Sybase Customer Success Story:  http://www.connectria.com/content/case_studies/connectria_flyer_sybase_case_study.pdf

[5] Connectria AIX Hosting: http://www.connectria.com/technologies/aix_hosting.php

[6] UNIX Based Cloud, Harry Foxwell, Principal Consultant, Oracle, February 2016, https://blog.opengroup.org/2016/02/03/the-unix-based-cloud/

[7] a. http://www8.hp.com/us/en/business-solutions/solution.html?compURI=1167850#.VyfQzD9EmPB
    b. https://www.openstack.org/marketplace/distros/distribution/oracle/oracle-openstack-for-oracle-solaris
    c. http://www-03.ibm.com/systems/power/solutions/cloud/

 


UNIX®: Lowering the Total Cost of Ownership

By The Open Group

The value of UNIX, as a technology and as a standard, has clearly been significant over its 45-year history as a technology and its 20 years as an open standard, leading to tremendous innovation across numerous industries. Developers, integrators and customers have benefited from its evolution from an open development platform into an open standard. Recent blog articles have showcased how UNIX makes software development easier[1] and is highly available[2], more secure[3] and scalable. Total cost of ownership (TCO) is another area that has benefited from the UNIX standard and the operating systems that are UNIX certified. For this article, TCO is primarily defined as the cost of acquiring, maintaining, and updating a solution.

UNIX, an Open Group standard, gives customers choice in the building blocks for their desired solution. The choices come from the numerous UNIX certified operating systems on the market today: IBM AIX, HPE HP-UX, Inspur K-UX and Oracle Solaris to name a few. The acquisition cost, as a part of the total cost of ownership, is also lower because of the compatibility and interoperability benefits of the UNIX standard. IT organizations do not have to spend time fighting the integration, interoperability and incompatibility issues often found with non-certified operating systems. The bottom line is that there is greater choice with less integration overhead, leading to a lower cost of acquisition.

The UNIX standard benefits the maintenance component of TCO by ensuring compatibility and interoperability at the level of the operating system (OS) and the software that depends on that OS. A UNIX certified OS also provides assurance of a level of quality, with more than 45,000 functional tests having been passed to achieve certification. Of course, another benefit of the UNIX standard is that it provides consistent system commands regardless of which UNIX OS is running in your data center, so you don’t need to train administrators on multiple operating systems or keep different administrators for different operating systems. An estimated 49% of system downtime is caused by human error, which is mitigated by having consistent ways to manage systems. UNIX provides greater determinism, which helps reduce the maintenance component of TCO.[4]

The UNIX standard also lowers the cost of system updates. While most OS vendors have their own method of performing system updates, there is greater confidence with a UNIX compliant OS that, regardless of how the update occurs, the software and the overall solution can rely on the continued assurance of consistent APIs, behavior, and so on. This matters because, as solutions get bigger and more complex, the need to ensure continuity becomes particularly critical. Having standards in place helps ensure that continuity in an ever-changing solution.

TCO is greatly reduced because a UNIX certified operating system lowers acquisition, maintenance and updating costs. The benefits of UNIX mentioned above also point to reduced administrative, training and operational costs, which further lower the total cost of ownership and should also be considered when evaluating solution cost. IT decision makers should consider how choosing an operating system that is UNIX certified will benefit the TCO profile of their solutions. This is especially true because making standards a requirement during acquisition costs so little, yet can have substantial benefits to TCO, enabling accelerated innovation and demonstrating good IT governance.


Get more information on UNIX with new tools and resources available at www.opengroup.org/UNIX or review some selected resources below:

[1] https://blog.opengroup.org/2016/03/11/unix-allowing-engineers-to-engineer

[2] https://blog.opengroup.org/2016/04/18/unix-the-always-on-os/

[3] https://blog.opengroup.org/2016/03/24/o-armor-unix-armor/

[4] https://blog.opengroup.org/2016/04/18/unix-the-always-on-os/


The UNIX® Evolution: An Innovative History

By The Open Group

The history of computing would not be complete without Ken Thompson[1] and the late Dennis Ritchie[2], visionaries during the early days of computing. Neither man could have anticipated the impact that their (and others’) contribution of the UNIX system (initially dubbed UNICS[3]) would have on the world, starting in 1969. Ken Thompson, Dennis Ritchie, and others created a collaborative programming environment[4] that would promote what is now commonly called “open development”. In 1975, that vision became far more collaborative with the release of Version 6 of the Bell Labs UNIX operating system, the first version made widely available outside of Bell Labs, and ultimately the basis for the University of California, Berkeley’s BSD UNIX[5]. The UNIX operating system is “now considered one of the most inspiring and influential pieces of software ever written.”[6]

What started out as a communal programming environment, or even an early word processor[7], turned out to be a more durable technology than Thompson and Ritchie could have imagined. UNIX is not only a durable operating system, but an adaptable, reliable, flexible, portable and scalable one. Ultimately, the UNIX OS would come to be supported across multiple systems, architectures and platform vendors, and would spawn a number of look-alike compatibles. Lastly, UNIX technology would be the engine that drove innovation beyond programming and data processing, into markets and technologies beyond the realm of computer science.

The academic and commercial take-up of UNIX systems helped germinate the growth of many existing and new technologies. One example of that innovation is bioinformatics, which was critical to advances in genetic engineering, including the Human Genome Project. Investigations of the physical world, whether in high-energy physics, modeling proteins, designing Callaway’s Big Bertha club, or simulating car crashes to improve passenger safety, were all part of the overall innovation enabled by UNIX. Moreover, UNIX systems contributed to more ethereal innovation, being a driving force in the growth of ARPANET (which evolved into today’s Internet) and hosting the first World Wide Web server[8]. Examples of where science and business have been touched by UNIX innovation include assisting high-energy physics laboratories in creating standards to improve collaboration via HEPiX[9], NASA’s Solutions for Enterprise-Wide Procurement (SEWP) to maximize value while reducing cost[10], and the modern UNIX standard, which has helped vendors, developers and customers maximize their investment[11]. UNIX innovation has even touched the world of entertainment, in which computer-generated visual effects have become ubiquitous[12]. There are few technologies and industries in which UNIX systems have not had an impact.

The UNIX legacy of Thompson and Ritchie is far from over, with numerous UNIX systems being critical to both personal computing and enterprise computing. Apple, a truly iconic company, embraces UNIX technology as the core of the Mac OS X operating system, which is certified against the Single UNIX Specification[13]. Major vendors such as HPE, IBM, Inspur, and Oracle offer UNIX products that are also certified against the Single UNIX Specification; today’s UNIX systems provide solutions to most industries, including driving current innovations around cloud computing, mobility, and virtualization. Most customers have come to depend on the enterprise-grade, highly reliable, scalable, and secure UNIX systems that drive their daily business continuity, and on the innovative solutions that help them scale their businesses to the next level.

Companies like Audi AG use certified UNIX systems as a robust, flexible, and high-performance platform for managing business operations, with IBM AIX running a private cloud infrastructure[14]. Another example is Best Western, the hotel chain, which uses certified UNIX systems from HPE to deliver processing-intensive services that give its customers real-time, 24x7 responsiveness[15]. Lastly, Toshiba has used certified UNIX systems from Oracle to reduce operational and maintenance costs by 50% by creating a private cloud using virtualization technologies[16].

From the humble roots of Thompson’s and Ritchie’s original UNIX system to the current branded versions of the commercial UNIX systems, this OS continues to be at the core of the modern computing world driving innovation.

By The Open Group

Highlights from the Evolution of UNIX® (infographic, available as a PDF from The Open Group)

For more information, please visit http://www.opengroup.org/unix

[1] https://en.wikipedia.org/wiki/Ken_Thompson

[2] https://en.wikipedia.org/wiki/Dennis_Ritchie

[3] https://en.wikipedia.org/wiki/History_of_Unix

[4] Dennis M. Ritchie, The Evolution of the Unix Time-Sharing System. 1979.

[5] https://www.albion.com/security/intro-2.html

[6] http://spectrum.ieee.org/computing/software/the-strange-birth-and-long-life-of-unix/

[7] http://www.catb.org/esr/writings/taoup/html/ch02s01.html

[8] http://webfoundation.org/about/vision/history-of-the-web/

[9] http://cds.cern.ch/record/1732257/files/vol34-issue2-p018-e.pdf

[10] “The NASA SEWP (Solutions for Enterprise-Wide Procurement) began as a means for a NASA scientist to easily obtain his computer in 1992 and has grown to be one of the premier vehicles for the entire US Government to purchase Information Technology. In the formative years of the SEWP program, UNIX, and in particular the UNIX brand as trademarked and certified by The Open Group, was a keystone to ensuring a standardized set of solutions that met the needs of Government scientists and engineers.” – Joanne Woytek, NASA SEWP Program Manager, January 14, 2016

[11] http://www.unix.org/market_information/buscase.html

[12] http://www.sfgate.com/business/article/Special-Effects-ILM-SGI-on-Top-3033788.php

[13] https://blog.opengroup.org/2015/10/02/mac-os-x-el-capitan-achieves-unix-certification/

[14] http://ibmdatamanagement.co/tag/audi

[15] http://h41361.www4.hp.com/best_western_success.pdf

[16] http://www.oracle.com/us/corporate/customers/customersearch/toshiba-7-sparc-ss-2283278.html

 

 


Flexibility, Agility and Open Standards

By Jose M. Sanchez Knaack, IBM

Flexibility and agility are terms used almost interchangeably these days as attributes of IT architectures designed to cope with rapidly changing business requirements. Did you ever wonder whether they are actually the same? Don’t you have the feeling that these terms remain abstract, with no concrete link to the design of an IT architecture?

This post seeks to provide clear definitions for both flexibility and agility, and to explain how both relate to the design of IT architectures that exploit open standards. A ‘real-life’ example will help make these concepts concrete and relevant to the Enterprise Architect’s daily job.

First, here is some context on why flexibility and agility are increasingly important for businesses. Today, the average smartphone has more computing power than the computers that flew the original Apollo missions to the moon. We live in times of exponential change; the next technological revolution always seems to be around the corner, and it is safe to state that the trend will continue, as nicely visualized in this infographic by TIME Magazine.

The average lifetime of a company in the S&P 500 has fallen by 80 percent since 1937. In other words, companies need to adapt fast to capitalize on business opportunities created by new technologies, or risk losing their leadership position.

Thus, flexibility and agility have become ever-present business goals that need to be supported by the underlying IT architecture. But what is the precise meaning of these two terms? The online Merriam-Webster dictionary offers the following definitions:

Flexible: characterized by a ready capability to adapt to new, different, or changing requirements.

Agile: marked by ready ability to move with quick easy grace.

To understand how these terms relate to IT architecture, let us explore an example based on an Enterprise Service Bus (ESB) scenario.

An ESB can be seen as the foundation for a flexible IT architecture allowing companies to integrate applications (processes) written in different programming languages and running on different platforms within and outside the corporate firewall.

ESB products are normally equipped with a set of pre-built adapters that allow 70-80 percent of applications to be integrated ‘out-of-the-box’, without additional programming effort. For the remaining 20-30 percent of integration requirements, it is possible to develop custom adapters so that any application can be integrated with any other if required.
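
As a rough sketch of the adapter idea, and only that: the fragment below uses C purely for illustration, all names are invented, and it is not the API of any real ESB product (commercial buses typically expose adapters through their own, usually Java-based, SDKs). The point is that every application, pre-built or custom, sits behind the same small interface, so the bus can route messages without caring how each adapter talks to its application.

    /* Adapter-pattern sketch (hypothetical names throughout): the bus sees
     * only the esb_adapter interface, never the application's own protocol. */
    #include <stdio.h>

    typedef struct {
        const char *name;
        int  (*connect)(void);
        int  (*send)(const char *payload);
        void (*disconnect)(void);
    } esb_adapter;

    /* A custom adapter for a legacy application that speaks no standard
     * protocol; a pre-built adapter would fill in the same three slots. */
    static int  legacy_connect(void)             { puts("legacy: connected"); return 0; }
    static int  legacy_send(const char *payload) { printf("legacy <- %s\n", payload); return 0; }
    static void legacy_disconnect(void)          { puts("legacy: disconnected"); }

    static const esb_adapter legacy_app = {
        "legacy-manufacturing", legacy_connect, legacy_send, legacy_disconnect
    };

    /* The routing code is identical no matter which adapter it is handed. */
    static void route(const esb_adapter *a, const char *msg)
    {
        if (a->connect() == 0) {
            a->send(msg);
            a->disconnect();
        }
    }

    int main(void)
    {
        route(&legacy_app, "new-partner-order");
        return 0;
    }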

In other words, an ESB covers requirements regarding integration flexibility, that is, it can cope with changing requirements in terms of integrating additional applications via adapters, ‘out-of-the-box’ or custom built. How does this integration flexibility correlate to integration agility?

Let’s think of a scenario where the IT team has been asked to integrate an old manufacturing application with a new business partner. The integration needs to be ready within one month; otherwise the targeted business opportunity will no longer be available.

The picture below shows the underlying IT architecture for this integration scenario.

(Diagram: IT architecture for the ESB integration scenario)

Although the ESB is able to integrate the old manufacturing application, it requires a custom-developed adapter, since the application does not support any of the communication protocols covered by the pre-built adapters. Custom developing, testing and deploying an adapter in a corporate environment is likely to take longer than a month, and the business opportunity will be lost because the IT architecture was not agile enough.

This is the subtle difference between flexible and agile.

Notice that if the manufacturing application had been able to communicate via open standards, the corresponding pre-built adapter would have significantly shortened the time required to integrate it. Applications that do not support open standards still exist in corporate IT landscapes, as the above scenario illustrates. Hence the importance of incorporating open standards when road mapping your IT architecture.

The key takeaway is that your architecture principles need to favor information technology built on open standards, and for that, you can leverage The Open Group Architecture Principle 20 on Interoperability.

Name: Interoperability
Statement: Software and hardware should conform to defined standards that promote interoperability for data, applications, and technology.

In summary, the accelerating pace of change requires corporate IT architectures to support the business goals of flexibility and agility. Establishing architecture principles that favor open standards as part of your architecture governance framework is one proven approach (although not the only one) to road map your IT architecture in the pursuit of resiliency.

Jose M. Sanchez Knaack is a Senior Manager with IBM Global Business Services in Switzerland. Mr. Sanchez Knaack’s professional background covers business-aligned IT architecture strategy and complex system integration in global technology-enabled transformation initiatives.

 

 

 


Viewpoint: Technology Supply Chain Security – Becoming a Trust-Worthy Provider

By Andras Szakal, IBM

Increasingly, the critical systems of the planet — telecommunications, banking, energy and others — depend on and benefit from the intelligence and interconnectedness enabled by existing and emerging technologies. As evidence, one need only look to the increase in enterprise mobile applications and BYOD strategies to support corporate and government employees.

Whether these systems are trusted by the societies they serve depends in part on whether the technologies incorporated into them are fit for the purpose they are intended to serve. Fitness for purpose is manifested in two essential ways: first, does the product meet essential functional requirements; and second, has the product or component been produced by a trustworthy provider? Of course, the leaders or owners of these systems have to do their part to achieve security and safety (e.g., to install, use and maintain technology appropriately, and to pay attention to people and process aspects such as insider threats). Cybersecurity considerations must be addressed in a sustainable way from the get-go, by design, and across the whole ecosystem — not after the fact, or in just one sector or another, or in reaction to crisis.

In addressing the broader cybersecurity challenge, however, buyers of mission-critical technology naturally seek reassurance as to the quality and integrity of the products they procure. In our view, the fundamentals of the institutional response to that need are similar to those that have worked in prior eras and in other industries — like food.

For example:  Most of us are able to enjoy a meal of stir-fried shrimp and not give a second thought as to whether the shellfish is safe to eat.

Why is that? Because we are the beneficiaries of a system whose workings greatly increase the likelihood — in many parts of the world — that the shellfish served to end consumers is safe and uncontaminated. While tainted technology is not quite the same as tainted food, it’s a useful analogy.

Of course, a very high percentage of the seafood industry is extremely motivated to provide safe and delicious shellfish to the end consumer. So we start with the practical perspective that, much more likely than not in today’s hyper-informed and communicative world, the food supply system will provide reasonably safe and tasty products. Invisible though it may be to most of us, however, this generalized confidence rests on a worldwide system that is built on globally recognized standards and strong public-private collaboration.

This system is necessary because mistakes happen, expectations evolve and — worse — the occasional participant in the food supply chain may take a shortcut in their processing practices. Therefore, some kind of independent oversight and certification has proven useful to assure consumers that what they pay for — their desired size and quality grade and, always, safety — is what they will get. In many countries, close cooperation between industry and government results in industry-led development and implementation of food safety standards.[1]

Government’s role is limited but important. Clearly, government cannot look at and certify every piece of shellfish people buy. So its actions are focused on areas in which it can best contribute: to take action in the event of a reported issue; to help convene industry participants to create and update safety practices; to educate consumers on how to choose and prepare shellfish safely; and to recognize top performers.[2]

Is the system perfect? Of course not. But it works, and supports the most practical and affordable methods of conducting safe and global commerce.

Let’s apply this learning to another sphere: information technology. To wit:

  • We need to start with the realization that the overwhelming majority of technology suppliers are motivated to provide securely engineered products and services, and that competitive dynamics reward those who consistently perform well.
  • However, we also need to recognize that there is a gap in time between the corrective effect of the market’s Invisible Hand and the damage that can be done in any given incident. Mistakes will inevitably happen, and there are some bad actors. So some kind of oversight and governmental participation are important, to set the right incentives and expectations.
  • We need to acknowledge that third-party inspection and certification of every significant technology product at the “end of pipe” is not only impractical but also insufficient. It will not achieve trust across a wide variety of infrastructures and industries.  A much more effective approach is to gather the world’s experts and coalesce industry practices around the processes that the experts agree are best suited to produce desired end results.
  • Any proposed oversight or government involvement must not stymie innovation or endanger a provider’s intellectual capital by requiring exposure to third-party assessments or overly burdensome escrow of source code.
  • Given the global and rapid manner in which technologies are invented, produced and sold, a global and agile approach to technology assurance is required to achieve scalable results. The approach should be based on understood and transparently formulated standards that are, to the maximum extent possible, industry-led and global in their applicability. Conformance to such standards, demonstrated once, would then be recognized by multiple industries and geopolitical regions. Propagation of country- or industry-specific standards will result in economic fragmentation and slow the adoption of industry best practices.

The Open Group Trusted Technology Forum (OTTF)[3] is a promising and complementary effort in this regard. Facilitated by The Open Group, the OTTF is working with governments and industry worldwide to create vendor-neutral open standards and best practices that can be implemented by anyone. Membership continues to grow and includes representation from manufacturers world-wide.

Governments and enterprises alike will benefit from OTTF’s work. Technology purchasers can use the Open Trusted Technology Provider (OTTP) Standard and OTTP Framework best-practice recommendations to guide their strategies. And a wide range of technology vendors can use OTTF approaches to build security and integrity into their end-to-end supply chains. The first version of the OTTP Standard is focused on mitigating the risk of tainted and counterfeit technology components or products. The OTTF is currently working on a program that will accredit technology providers to the OTTP Standard. We expect to begin pilot testing of the program by the end of 2012.

Don’t misunderstand us: Market leaders like IBM have every incentive to engineer security and quality into our products and services. We continually encourage and support others to do the same.

But we realize that trusted technology — like food safety — can only be achieved if we collaborate with others in industry and in government.  That’s why IBM is pleased to be an active member of the Trusted Technology Forum, and looks forward to contributing to its continued success.

A version of this blog post was originally posted by the IBM Institute for Advanced Security.

Andras Szakal is the Chief Architect and a Senior Certified Software IT Architect for IBM’s Federal Software Sales business unit. His responsibilities include developing e-Government software architectures using IBM middleware and managing the IBM federal government software IT architect team. Szakal is a proponent of service-oriented and web services-based enterprise architectures and participates in open standards and open source product development initiatives within IBM.

 


How the Operating System Got Graphical

By Dave Lounsbury, The Open Group

The Open Group is a strong believer in open standards and our members strive to help businesses achieve objectives through open standards. In 1995, under the auspices of The Open Group, the Common Desktop Environment (CDE) was developed and licensed for use by HP, IBM, Novell and Sunsoft to make open systems desktop computers as easy to use as PCs.

CDE is a single, standard graphical user interface for managing data, files, and applications on an operating system. Both application developers and users embraced the technology and approach because it provided a simple and common way to access data and applications on the network. With a click of a mouse, users could easily navigate the operating system, similar to how we work on PCs and Macs today.

It was the first successful attempt to standardize on a desktop GUI on multiple, competing platforms. In many ways, CDE is responsible for the look, feel, and functionality of many of the popular operating systems used today, and brings distributed computing capabilities to the end user’s desktop.

The Open Group is now passing the torch to a new CDE community, led by CDE suppliers and users such as Peter Howkins and Jon Trulson.

“I am grateful that The Open Group decided to open source the CDE codebase,” said Jon Trulson. “This technology still has its fans and is very fast and lightweight compared to the prevailing UNIX desktop environments commonly in use today. I look forward to seeing it grow.”

The CDE group is also releasing OpenMotif, which is the industry standard graphical interface that standardizes application presentation on open source operating systems such as Linux. OpenMotif is also the base graphical user interface toolkit for the CDE.

The Open Group thanks these founders of the new CDE community for their dedication and contribution to carrying this technology forward. We are delighted this community is moving forward with this project and look forward to the continued growth in adoption of this important technology.

For those of you who are interested in learning more about the CDE project and would like to get involved, please see http://sourceforge.net/projects/cdesktopenv.

Dave Lounsbury is The Open Group’s Chief Technology Officer, previously VP of Collaboration Services. Dave holds three U.S. patents and is based in the U.S.
