
How Can Data Centers be More Sustainable?

The keys to data center sustainability with Cloud Native Processors
Team Ampere
08 June 2023

Sustainability begins at the nucleus of the data center: the microprocessor. Legacy x86 processors—designed and developed to address the compute demands of a different era—remain the mainstay of many data centers, both in the cloud and on-premises. While upgrades and new features continue to improve their performance, they cannot deliver the efficiency and sustainability of processors built specifically for the new era of compute.


To reach carbon footprint and ESG goals for themselves and their customers, data center operators and architects must address these 5 keys to data center sustainability:

1. Adopt innovative hardware solutions
2. Reduce energy consumption
3. Increase compute capacity within existing space
4. Decrease cooling costs and water use
5. Reduce carbon emissions


Many organizations are setting aggressive ESG goals and are urgently looking to reduce their carbon footprint. In fact, according to Gartner, Inc., seventy-five percent of organizations will have implemented a data center infrastructure sustainability program driven by cost optimization and stakeholder pressures by 2027, up from less than 5% in 2022. Given the growing number of moratoriums and limits on data center power consumption and new construction, it’s no surprise that adoption of sustainable infrastructure is accelerating so rapidly.


Seventy-five percent of organizations will have implemented a data center infrastructure sustainability program driven by cost optimization and stakeholder pressures by 2027, up from less than 5% in 2022.


Source: Gartner


Of course, some progress has already been made to reduce the environmental impact of cloud data centers. For example, the largest cloud service providers have begun investing billions of dollars in renewable energy offsets for their Scope 2 GHG emissions, and some have achieved very efficient data center operations by focusing on hot/cold aisle layouts and cooling technology.


While these efforts are necessary, they fall short of the core challenge of sustainability: significantly reducing energy consumption. For data center architects and operators, the magnitude of electricity consumption needs careful consideration when deciding how best to reach their goals. Reducing power consumption can help them avoid both the expense and the complications of the data center build-out required under legacy x86 power and performance characteristics.


Green data centers begin with green compute infrastructure that’s designed to meet the growing demands for consistent performance and power efficiency. The adoption of Cloud Native Processors can provide actual and meaningful reductions in power consumption without sacrificing performance.

Innovation is at the Heart of Sustainability

Let’s start by looking at the heart of the data center: the CPU (central processing unit). After all, the processor is where everything begins and where heat generation and power consumption originate. These two elements are central to any sustainability analysis, and from them you can extrapolate data center size, water use, and other sustainability factors. Only by innovating at the most basic level of computing can data center architects and operators hope to meet the sustainability goals set by internal and external forces.


The performance and efficiency of Cloud Native Processors are achieved through an innovative chip architecture that has disrupted the CPU industry over the last five years. A key difference between these processors and legacy x86 begins at the core, literally.


It starts with adding more cores to every chip and continuing to innovate so that core counts rise without a significant increase in power consumption. Cloud Native Processors are now available with up to 192 cores, with roadmaps that address the high-performance needs of the data center while maintaining power efficiency.


Innovation doesn’t stop there. Instead of multi-threading each core—a technique designed to boost the performance of thread-constrained workloads—Cloud Native Processors use single-threaded cores. In a multi-tenant data center, multi-threading is a source of “noisy neighbor” interference and one factor that contributes to unpredictable workload behavior. By putting a single thread on each core and offering more cores than any other processor, Ampere has been able to improve performance while keeping power in check, shaping a world where compute is more sustainable.


The scalability of Cloud Native Processors allows flexibility, right-sized workload execution, and greater sustainability in every location and on every type of cloud. Scalability is critical to the modern era of compute: the same products running in large warehouse-scale data centers can also deliver high performance for low-power edge cloud applications.


The power- and area-optimized design of Cloud Native Processors streamlines the feature set, making them more efficient in both the cloud and at the edge than legacy x86 processors. Their significantly lower power per core—the result of innovations in processor architecture—provides more compute per watt and delivers more performance and more cores than ever before possible.

Reduce Energy Consumption

Built from the ground-up specifically for multi-tenant cloud-scale environments, Cloud Native Processors are disrupting the microprocessor market by lowering power consumption and helping data center operators to offset rising energy costs without sacrificing performance.


The innovative architecture of these processors provides the high-density compute required by cloud native workloads coupled with lower power consumption. In fact, recent studies by Ampere have shown that Ampere Cloud Native Processors can cut power consumption by up to 2.5x compared to legacy x86 processors*—allowing data center operators to achieve the same or better performance with far less power.
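

As a back-of-the-envelope illustration of what a claim like this means in practice, the comparison comes down to performance per watt. The figures in the sketch below are placeholder assumptions for illustration, not Ampere’s published benchmark results:

```python
# Rough performance-per-watt sketch with illustrative, made-up numbers.
# Replace these with measured throughput and wall power for real servers.

def perf_per_watt(throughput, watts):
    """Throughput delivered per watt of server power."""
    return throughput / watts

# Hypothetical single-node measurements (same workload on both servers).
cloud_native = {"throughput": 100_000, "watts": 350}  # requests/sec, wall power in W
legacy_x86 = {"throughput": 100_000, "watts": 875}    # same work at 2.5x the power

ratio = perf_per_watt(**cloud_native) / perf_per_watt(**legacy_x86)
print(f"Efficiency advantage: {ratio:.1f}x performance per watt")

# Equivalently: serving the same aggregate load draws 2.5x less power.
fleet_load = 10_000_000  # requests/sec across the fleet
for name, srv in (("cloud native", cloud_native), ("legacy x86", legacy_x86)):
    servers = fleet_load / srv["throughput"]
    kw = servers * srv["watts"] / 1000
    print(f"{name}: {servers:.0f} servers, {kw:.0f} kW")
```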


“Based on our benchmarking, the HPE ProLiant RL300 Gen11 server with Ampere processors is delivering significantly higher power efficiency. This is allowing us to both meet our improved sustainability goals and provide a lower cost of delivery to our end customers.”

Robert Jenkin, CEO, CloudSigma


One of the greatest inefficiencies data center operators face today is the limitation of power capacity at the rack level. As legacy CPUs draw more and more power, data center architects and operators are left with empty rack space that could be used to expand capacity—driving the need for more data centers. Deploying processors that consume less power is the first line of defense against data center sprawl and a significant step toward sustainability.

Increase Capacity Within Existing Space

In today’s data centers, most racks are underutilized because the energy required to fill the rack with servers exceeds the power capacity of the rack itself. By lowering the power consumption of each server, Cloud Native Processors allow operators to fill more of the space in their racks, increasing the overall compute capacity of the data center and reducing the need for new data centers. Efficiency measured in this manner is expressed as performance per rack.


Today, many processing platforms are measured and compared using single-system benchmarks focused primarily on performance. Measuring workload or service efficiency demands a look beyond commonly cited benchmarks captured on single servers. For instance, a synthetic CPU benchmark such as SPECrate Integer is designed to rigorously measure CPU performance for integer-heavy workloads but doesn’t consider the impact on data center efficiency.


The performance-per-rack metric extends the analysis to the rack level, accounting for power consumption, rack density, and overall data center footprint. Ultimately, this kind of metric yields more sustainable data center designs for the modern cloud era.


Using the performance-per-rack analysis, Cloud Native Processors have been shown to provide 2.8 times greater performance per rack than legacy x86 processors*. Utilizing the space and power already available in the data center delivers benefits beyond reducing the need for new data center builds, including:

  • Increasing compute capacity
  • Optimizing revenue opportunities
  • Lowering infrastructure cost
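

A minimal sketch of the performance-per-rack arithmetic described above, using assumed rack power budgets and per-server figures (the numbers are illustrative placeholders, not the measurements behind the 2.8x result):

```python
# Performance-per-rack sketch: how many servers fit under a fixed rack
# power budget, and how much aggregate throughput that yields.
# All figures below are illustrative assumptions.

RACK_POWER_BUDGET_KW = 12.0  # contracted power per rack (assumed)
RACK_SPACE_UNITS = 42        # rack units physically available

def perf_per_rack(server_watts, server_perf, server_units=1):
    """Servers per rack are limited by power or by physical space,
    whichever runs out first; aggregate throughput follows."""
    by_power = int(RACK_POWER_BUDGET_KW * 1000 // server_watts)
    by_space = RACK_SPACE_UNITS // server_units
    servers = min(by_power, by_space)
    return servers, servers * server_perf

# Hypothetical single-node figures for one workload.
cn_servers, cn_perf = perf_per_rack(server_watts=400, server_perf=100_000)
x86_servers, x86_perf = perf_per_rack(server_watts=900, server_perf=110_000)

print(f"cloud native: {cn_servers} servers/rack, {cn_perf:,} ops/s per rack")
print(f"legacy x86:   {x86_servers} servers/rack, {x86_perf:,} ops/s per rack")
print(f"performance-per-rack advantage: {cn_perf / x86_perf:.1f}x")
```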

Decrease Cooling Costs and Water Use

Innovations in cooling technologies can address the consequences of running hardware that generates an enormous amount of waste heat. The heat generated by these devices is directly correlated to the amount of power they consume. Lowering power consumption reduces the cooling requirements and lessens the need to implement ever more complicated and expensive cooling systems.
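

As a simplified illustration of why this matters: essentially all electrical power consumed by IT equipment becomes heat that must be removed, and cooling overhead is commonly captured by PUE (Power Usage Effectiveness, the ratio of total facility power to IT power). The values in the sketch below are assumptions, not measurements:

```python
# Cooling-overhead sketch. IT power becomes heat; facility overhead
# (cooling, power distribution, etc.) is modeled here with an assumed PUE.

def facility_power_kw(it_power_kw, pue):
    """Total facility power, including cooling and other overhead."""
    return it_power_kw * pue

it_load_before = 1000.0  # kW of IT load with higher-power servers (assumed)
it_load_after = 600.0    # kW for the same work on lower-power servers (assumed)
pue = 1.4                # assumed facility PUE

for label, it_kw in (("before", it_load_before), ("after", it_load_after)):
    total = facility_power_kw(it_kw, pue)
    overhead = total - it_kw
    print(f"{label}: IT {it_kw:.0f} kW, cooling/overhead {overhead:.0f} kW, total {total:.0f} kW")

# Every kW shaved off the IT load also removes the cooling overhead
# attached to it, so total facility savings exceed the IT savings alone.
```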


Each element of data center sustainability and compute efficiency relies on a CPU that consumes less power while delivering predictable, consistent, and linearly scalable performance.


As challenges around heat dissipation continue to increase, the solutions to meet those challenges become ever more costly and complicated. Some new cooling techniques in development are moving away from using water—a limited resource and a potential liability for the data center—and focusing on chemical solutions that could cause their own environmental hazards down the road.


By directly reducing the power consumed in the processor, data center operators avoid implementing extravagant cooling technologies—thereby adding another layer of sustainability to the data center equation.

Reduce Carbon Emissions

Reducing greenhouse gas (GHG) emissions has taken center stage in the current global climate debate. Data center and IT industry power consumption is in the spotlight as energy consumption for data centers—as well as telecommunication networks and long-haul internet infrastructure—continues to increase swiftly.


Some estimates of this runaway growth in energy consumption place the data center industry above other high-profile industries such as aviation and shipping, making it responsible for over two percent of global CO2 emissions, according to Climatiq.


Global emissions from cloud computing range from 2.5% to 3.7% of all global greenhouse gas emissions, thereby exceeding emissions from commercial flights (about 2.4%) and other existential activities that fuel our global economy.


Source: Climatiq.


Reducing carbon emissions in the data center relies heavily on our ability to save electricity through more efficient computing. A combination of green energy generation and energy-efficient servers powered by Cloud Native Processors, which reduce both energy consumption and waste heat, will help lower carbon emissions and shorten the path to sustainability.


The demand for more computing power is increasing even as energy costs rise, and governments and communities continue to push back on new data center construction. The solution lies at the nucleus of the data center—the CPU. Cloud Native Processors have proven their ability to perform under the demands of cloud data center workloads, on-premises, and at the edge. Investing now in innovations that will grow along with the demand for more compute while helping to keep our planet green is not only sustainable, it is also good business sense.


Built for sustainable cloud computing, Ampere’s Cloud Native Processors deliver predictable high performance, platform scalability, and power efficiency unprecedented in the industry.


*Footnotes

The web services study here is based on performance and power data for many typical workloads using single node performance comparisons measured and published by Ampere Computing.

Details and efficiency footnotes are available here.

 
