Focus on Open Standards in Server Management: Open Compute Project

In our final post on the topic of open standards in server design and management, we will look at the Open Compute Project, originally devised and promoted by social media giant Facebook back in 2009.

To see how the Open Compute Project was established, it is important to look at the history of Facebook, since the growth of the two organizations truly goes hand in hand. In 2009, as the company experienced another phase of rapid growth, it saw an opportunity to reshape the way in which it expanded its technology infrastructure, particularly with respect to its data centers.

The goal at that point in time was to design what Facebook imagined to be “the world’s most energy efficient data center”, one that could easily and efficiently scale and accommodate new technology as it was added to the company’s technology stack. Doing so led Facebook to tremendous gains in cost and energy efficiency, and proved to the company that this model was one that should be further investigated and promoted.

Inspired by and drawing on the model and framework of the open software community, in 2011 Facebook decided to open its designs to the public and invite other major organizations dealing with the same challenges to participate in the project. This was done in the hope that it would enhance the quality of innovation and speed up the timeline of development.

Fast forward to 2017, and there are now hundreds of technology companies participating, including, of course, AMI, a participant for the last several years. As a key partner and supplier to Facebook, AMI found the decision to join the Open Compute Project an easy one.


The Open Compute Project touts its mission as enabling innovation and is self-described as “reimagining hardware, making it more efficient, flexible, and scalable.” It is a global community of several hundred technology companies “working together to break open the black box of proprietary IT infrastructure to achieve greater choice, customization, and cost savings.”

According to its website, “the Open Compute Project Foundation is a rapidly growing, global community whose mission is to design, use, and enable mainstream delivery of the most efficient designs for scalable computing”. Reflecting its commitment to the same ideals that the open software movement is founded upon, its members are devoted to “openly sharing ideas, specifications, and other intellectual property (as) the key to maximizing innovation and reducing complexity in tech components”.


The Open Compute Project focuses on a number of different areas related to data center performance and efficiency. They include Compliance and Interoperability, Data Center, Hardware Management, High Performance Computing (HPC), Networking, Rack & Power, Server, Storage, and Telco.

AMI has technology interests and products in a number of these project areas, particularly in the Hardware Management and Server spaces, as these relate to both our BIOS/UEFI firmware solutions like Aptio® V and our MegaRAC® remote and BMC management solutions.

We sincerely hope you have enjoyed these blog posts covering our involvement in open standards for server and data center design and management. We would be very happy to hear your comments or requests in the comments section below. You can also always drop us a line via social media or our Contact Us form to get in touch. As always, thanks for reading!

About AMI

AMI is Firmware Reimagined for modern computing. As a global leader in Dynamic Firmware for security, orchestration, and manageability solutions, AMI enables the world’s compute platforms from on-premises to the cloud to the edge. AMI’s industry-leading foundational technology and unwavering customer support have generated lasting partnerships and spurred innovation for some of the most prominent brands in the high-tech industry. 
