By Mark Skilton, Capgemini
Facebook’s recent decision to publish its data center specifications as open source illustrates an emerging trend: the commoditization of compute resources.
Key features of the new facility include:
- The Oregon facility, announced to the world press in April 2011, is 150,000 sq. ft. and represents a $200 million investment. At any one time, the activity of Facebook’s entire 500-million-user base could be hosted at this one site. A second Facebook data center is scheduled to open in North Carolina in 2012, and the Palo Alto, Calif.-based company may build further data centers in Europe or elsewhere if required
- The Oregon data center enables Facebook to reduce its energy consumption per unit of computing power by 38%
- The data center has a PUE of 1.07, well below the EPA-defined best-practice benchmark of 1.5. This means roughly 93% of the energy drawn from the grid reaches the Open Compute Servers (a quick back-of-the-envelope check follows this list)
- Removed centralized chillers, eliminated traditional inline UPS systems and removed a 480V-to-208V power transformation step
- Ethernet-powered LED lighting and passive cooling infrastructure reduce energy spent on running the facility
- A new second-stage “evaporative cooling system”, a multi-layer approach to conditioning air temperature and filtering incoming air
- Launch of the “Open Compute Project” to share the data center design as open source. The aim is to encourage collaboration on data center design to improve overall energy consumption and environmental impact. Other observers also see this as a way of further reducing component sourcing costs, since most of the designs use low-cost commodity hardware
- The servers are 38% more efficient and cost 24% less
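As a rough check on the arithmetic behind the PUE figures above (a minimal sketch; only the 1.07 and 1.5 values come from the announcement, and the helper function is purely illustrative):

```python
# PUE = total facility energy / IT equipment energy,
# so the share of grid energy that actually reaches the servers is 1 / PUE.

def it_energy_share(pue: float) -> float:
    """Fraction of facility energy delivered to the IT equipment."""
    return 1.0 / pue

facebook_pue = 1.07   # Oregon facility, announced figure
epa_benchmark = 1.5   # EPA-defined best-practice benchmark

print(f"Oregon facility: {it_energy_share(facebook_pue):.0%}")  # -> 93%
print(f"EPA benchmark:   {it_energy_share(epa_benchmark):.0%}")  # -> 67%
```

In other words, at a PUE of 1.5 a third of the grid energy is lost to cooling, power conversion and other overheads; at 1.07 that overhead falls to about 7%.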
While this can be read simply as a major Cloud services company treating its data centers as a commodity that is non-core to its services business, it perhaps signals a more significant shift in the Cloud Computing industry in general.
Facebook making its data center specifications open source demonstrates that IaaS (Infrastructure as a Service) utility computing is now seen as a commodity, non-differentiating for companies like Facebook and for anyone else who wants cheap compute resources.
What becomes essential is the operational efficiency achieved in provisioning and delivering these services: that is now the key differentiator.
Furthermore, the trend is towards what you do with the IaaS storage and compute. How we architect solutions that deliver software as a service (SaaS) capabilities becomes the essential differentiator, as does how business models and consumers can maximize these benefits; this raises the importance of architecture and solutions for Cloud, and it is key to The Open Group’s vision of “Boundaryless Information Flow™”. It is the architects who design effective Cloud services on top of these commodity Cloud resources and capabilities who make the difference, and open standards and interoperability are critical to their success. How solutions and services are developed to build private, public or hybrid Clouds is the new differentiation. This does not ignore the fact that world-class data centers and infrastructure services are, of course, vital; but it is now the way they are used to create value that becomes the debate.
Mark Skilton, Director, Capgemini, is the Co-Chair of The Open Group Cloud Computing Work Group. He has been involved in advising clients and developing strategic portfolio services in Cloud Computing and business transformation. His recent contributions include widely syndicated Return on Investment models for Cloud Computing, which achieved 50,000 hits on CIO.com and appeared in the British Computer Society 2010 Annual Review. His current activities include the development of a new Cloud Computing model, along with standards and best practices on the impact of Cloud Computing on outsourcing and off-shoring models; he also contributed to the second edition of the Handbook of Global Outsourcing and Off-shoring through his involvement with Warwick Business School’s Specialist Masters Degree Program in Information Systems Management.
Agree with you, Mark!
It is a significant shift towards open architectures, which is what The Open Group stands for. I also see this event as the beginning of ‘open sourcing’ architecture specifications, and it could have an impact on Enterprise IT’s future course of action. I blogged about it a few weeks ago.
http://architectsoul.blogspot.com/2011/04/open-compute-emerging-roles-for.html
Yes, great blog too. Agreed, the “design” aspect is key in putting together differentiating and non-differentiating solutions. This applies across the IT portfolio, from hardware to applications. The difference is that hardware is now also potentially a “template” and can be standardised. I like the white-label mobile application builder example in the banking industry, as it is a great example of how constructing solutions in the cloud can pull together commodity hardware and software tools to build new services and differentiation. This composite service was the vision of SOA and Web 2.0, but we are now seeing it become a reality with cloud, and with companies like Facebook encouraging the Internet ecosystem to be open at the browser, API and data center levels.