Cloud Native Enables Sustainable Cloud Computing in 2023

Cloud Data Centers Employ Cloud Native Processors for Efficiency and Sustainability
Jeff Wittich, Chief Product Officer
28 Dec 2022
We recently celebrated Ampere’s 5-year anniversary and we are proud of what we have accomplished in that short time. We built a new and modern cloud computing architecture that delivers the predictable high performance, maximum scalability, and extreme power efficiency that the modern cloud requires. This pace of innovation was fueled by the right team and vision. We continue our rapid cadence of new product innovation, first having delivered Ampere® Altra® and then Ampere Altra Max with 128 cores. In 2022, many major CSPs adopted our Cloud Native Processors. As we enter 2023, we firmly hold the industry leadership position at 128 cores, giving customers the scalability required to provide nearly limitless capacity to their users and achieving maximum performance for modern cloud workloads.

Modern cloud native computing is now a reality. Today, there are more than 50 platforms supporting Ampere Altra and Ampere Altra Max Cloud Native Processors available for customers, as well as cloud providers around the world offering access to and services built upon Ampere CPUs.

Why Data Center Sustainability and Efficiency Are Key

In 2022, we saw customer demand for power efficiency dramatically escalate globally. This was driven by increasing scarcity of power, expanding water usage, and the difficulty of building more and bigger data centers that require extra capacity on the grid. This need for a more sustainable approach is one of the reasons CSPs are moving to Cloud Native Processors. Our Ampere architecture consumes half as much power and therefore requires half as many data center racks to deliver the same amount of – or even more – compute as with legacy-era processors. By adopting our processors, today’s leading companies are lessening their energy footprint in their data centers.

Why Large Hyperscalers Have Adopted Ampere Cloud Native Processors

With the deployment of Ampere Cloud Native Processors by hyperscale clouds around the globe, modern cloud native computing has truly arrived and is accessible to almost anyone. As a user, the advantage is not only sustainability, but also the price-performance of Ampere CPUs for general purpose cloud workloads of all types. For ease of adoption, platforms and software are another critical element, and our hyperscale partners have helped Ampere scale the ecosystem over the past year. Nearly all of the large hyperscalers have now moved to Ampere Cloud Native Processors: Google Cloud, Microsoft Azure, Oracle Cloud, Alibaba Cloud, Baidu Cloud, Tencent Cloud, Equinix, Hetzner, and many more.

2022 highlights:
  • Google Cloud – In July 2022, Google Cloud announced a new class of Ampere Cloud Native Processor based Tau virtual machines (VMs) to address the specific needs of modern cloud applications. The Tau T2A VMs are designed to run scale-out cloud native workloads such as application servers, databases, artificial intelligence, video processing, and more in an efficient manner. The Tau T2A VMs are designed from the ground up for predictable high performance and linear scalability using single-threaded cores, eliminating a range of noisy neighbor concerns. They are available in various configurations with up to 48 vCPUs and run at lower power than legacy x86 processors. In Google Cloud, the T2A VM instances outperform current-generation x86 VMs by up to 31% and lead on price-performance by up to 65% using on-demand pricing guidance.
 
  • Microsoft Azure – In August 2022, Microsoft announced the general availability of VMs on Microsoft Azure featuring the Ampere Altra processor. The Ampere-based Azure VMs have up to 64 virtual CPU cores, 8 GB of memory per core, and 40 Gbps of networking bandwidth, as well as SSD local and attachable storage. Microsoft describes them as “engineered to efficiently run scale-out, cloud-native workloads,” including open-source databases, Java and .NET applications, and gaming, web, app, and media servers.
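For teams migrating workloads onto these Arm-based instances, the first sanity check is usually confirming which architecture a process has actually landed on. Below is a minimal sketch (ours, not from the announcements above) using only the Python standard library; the helper name `is_arm64` is our own:

```python
import platform

def is_arm64() -> bool:
    """Return True when running on an Arm64 (aarch64) machine,
    such as an Ampere Altra based cloud VM; False on x86."""
    return platform.machine().lower() in ("aarch64", "arm64")

if __name__ == "__main__":
    print(f"architecture: {platform.machine()}, arm64: {is_arm64()}")
```

On a T2A or Ampere-based Azure VM this reports `aarch64`; on a legacy x86 instance it reports `x86_64`, making it a cheap guard in build or deployment scripts.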

Enterprise and Edge Adoption with More to Come

2022 was also the year we began seeing enterprises and edge computing providers adopt Ampere Cloud Native Processors – setting the stage for more deployments in 2023. According to an article in Forbes, “Ampere Computing Leads In Cloud Deployments As The Only Arm-Based Merchant Cloud Computing Platform.” This is a watershed moment in the proliferation of Ampere processors across the compute spectrum.

Hewlett Packard Enterprise was the first global OEM to announce support, becoming the first to offer compute with optimized cloud-native silicon for service providers and enterprises that embrace modern compute architectures. The new HPE ProLiant RL300 Gen11 server is the first in the HPE ProLiant Gen11 server line and delivers next-generation compute performance with higher power efficiency using the Ampere Altra and Ampere Altra Max Cloud Native Processors. These solutions provide service providers and enterprises embracing cloud-native development with an agile, extensible, and trusted compute foundation to drive innovation. The new HPE ProLiant RL300 Gen11 servers deliver up to 128 cores per socket for scale-out compute. These servers are ideally suited for customers that offer digital services, media streaming, social platforms, e-commerce, financial or online services, and cloud-based services such as IaaS, PaaS, and SaaS.

In the edge compute space, we were thrilled to see Cruise adopt Ampere Altra Cloud Native Processors for autonomous driving. By choosing the Ampere Altra processor, Cruise was able to reduce power consumption while still achieving the required compute performance demanded of the next generation of intelligent vehicles. According to Cruise, “Our demands for the future in terms of processing speed are really, really high. And really there was nowhere else to go to. Everywhere else we looked at doesn’t have the pure processing speed and in the way that Cruise needs it to be able to manage our whole throughput of the data coming from our sensors. And so, the precise specifications that we aspire to, there was only one go-to partner. And Ampere was that partner.”

Cloud Native Workloads – Leadership Performance, New Partnerships and Use Cases

With all these new platforms now available, more and more workloads have taken advantage of the performance and power efficiency that Ampere processors can deliver. Amongst the wide spectrum of general-purpose workloads running in cloud-native environments on Ampere, we saw the highest performance and power efficiency on key workloads in Web Services, Databases, Analytics, In-Memory Caching, Video Services and AI Inferencing.

Figure 1


Below are just a few examples of new applications and use cases:
  • Plesk on Oracle Cloud Infrastructure (OCI) – As one of the world’s leading WebOps hosting platforms, Plesk implemented their fastest product ramp ever by introducing Plesk on OCI Ampere A1 featuring Ampere Altra Cloud Native Processors, launching 1,000 instances the first month as Plesk customers discovered the innate efficiency and performance of the solution. Plesk customers now enjoy an expanding array of cloud-based infrastructure marked by low power consumption, low heat generation, and exceptional resource allocation thanks to single-threaded cores that reduce the “noisy neighbor” effects of multi-threaded chip architectures.
 
  • Momento on Google Cloud - Momento is at the forefront of data center innovation with the world’s first serverless caching service. Today, the Ampere Altra Max Cloud Native Processors serve as the foundation of Momento’s serverless caching service. According to Momento, “With Ampere, the price/performance ratio often shines at high throughput and has ease of portability for Google Cloud T2A VMs. The ease of adoption also makes this a two-way door, enabling us to move across architectures on behalf of our customers, without them having to worry about which architecture is best for their workload at any given time.”
 
  • Cloud Gaming - The AICAN (Android-in-the-Cloud-with-Ampere-and-Nvidia) server platform uses Ampere Altra Cloud Native Processors and Nvidia GPUs that run Arm-compatible mobile games natively, without modification or emulation. The platform includes dual-socket Ampere Altra Max CPUs totaling an industry-leading 256 cores with up to four Nvidia A16 or up to six Nvidia T4 GPUs, capable of supporting up to 160 concurrent users per server. With Ampere Altra Max’s leading core density, a single rack of AICAN servers can stream to approximately 2,500 mobile users at once. This makes streaming premium mobile games from the cloud much more accessible to a broad consumer audience while providing extremely competitive infrastructure costs to service providers for the first time.
 
  • Rigetti Computing - Our partnership with Rigetti helped to create hybrid quantum-classical computers designed to unlock a new generation of machine learning applications over the cloud. Working together, we successfully integrated Ampere’s Cloud Native Processing platform with Rigetti’s quantum systems. Since then, Rigetti has tested increasingly complex, real-world quantum machine learning use cases, including applications in finance, with this hybrid setup and continues to see extremely promising results.
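The AICAN capacity figures quoted above (up to 160 concurrent users per server, roughly 2,500 per rack) imply about 16 servers per rack. A quick back-of-the-envelope check, where the server count is derived by us rather than stated in the post:

```python
# Capacity figures quoted for the AICAN cloud gaming platform.
# The servers-per-rack value is derived here, not given in the post.
USERS_PER_SERVER = 160   # concurrent streams per dual-socket AICAN server
USERS_PER_RACK = 2500    # approximate concurrent streams per rack

servers_needed = USERS_PER_RACK / USERS_PER_SERVER
print(f"~{servers_needed:.1f} servers per rack "
      f"for ~{USERS_PER_RACK} concurrent streams")  # ~15.6 servers
```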

Software and Hardware Co-Innovation Key to Future Cloud Breakthroughs

To realize the benefits of modern cloud computing, innovation must occur at both the software and hardware levels. Ampere’s developer program is designed to amplify the creativity of the cloud development community with more tools and information to help design sustainable data center solutions using Ampere’s world-class Cloud Native Processors. The program provides access to useful and actionable information for designing, building, deploying, and optimizing cloud native software. Some examples include:
  • Workload briefs and tuning guides on a wide variety of applications, from NGINX to Cassandra to Elasticsearch to FFmpeg
  • Ampere Ready software, with over 165 applications tested on a daily basis
  • AI Inference engine resources for improving ML performance on standard frameworks including PyTorch, TensorFlow and ONNX, enabling object detection, sentiment analysis and more

What’s Next in 2023 for Cloud Native Computing

We will be ringing in the new year celebrating the fact that modern cloud native computing has finally arrived. Through laser focus and continued innovation, we not only delivered processors specifically designed to meet the needs of the cloud, but we also drove the creation of an entire ecosystem to make it easy for customers to migrate to Cloud Native Processors.
As we head into 2023, we lead the industry with the highest core count at 128 cores, and we are excited about our future roadmap. We have announced that our newest innovation, AmpereOne™, is sampling to customers, and you will hear more details next year, as well as some exciting deployments.
In the spirit of looking ahead, here are some predictions about what we will be talking about over the next 12 months:
  • Sustainability – This continues to be front and center for Ampere and the industry. As strain on power grids mounts and energy costs skyrocket, cloud sustainability in both public and private clouds will be addressed more seriously than ever, given its growing business impact. As companies look to cut costs, they will re-evaluate their cloud infrastructure and optimize it to reduce energy use, cut water consumption, and fit more compute power into the footprint of their existing data centers. And as their compute-dependent revenue streams continue to grow, they will expand in ways that avoid costly and time-consuming data center infrastructure buildout. Quantifying the compute output produced versus the input costs will be an increasing area of focus to ensure these goals are met.
  • Cloud usage model becomes increasingly distributed – This is true both in terms of the number of clouds being utilized – from multi to hybrid cloud models – as well as in the physical location of those compute resources, with the edge becoming increasingly critical. The usage of AI and advanced analytics at the edge is creating demand for high performance and low power compute in distributed locations. The cloud is everywhere today, and choice/flexibility are imperative.
 
From all of us at Ampere, we wish you a successful and ‘cloud native’ 2023.
The modern cloud required a computing platform built to handle today’s cloud workloads - one that offers predictable high performance, platform scalability, and power efficiency. Join the rest of the industry in moving to Cloud Native Processors in 2023 and embrace the sustainable cloud.
Reach out to me or anyone on our sales team about partnerships or to get more information. You can also get trial access to Ampere systems through our Developer Access Programs.
 
Figure 1.
Processors tested were AMD EPYC 7763 (“Milan”), Intel Xeon 8380 (“Ice Lake”), and Ampere® Altra® Max M128-30.
Configurations:
  • Operating System: CentOS 8.0.1905 (kernel 4.18.0, 64k pages on Ampere® Altra® Max, 4k pages on Intel and AMD processors)
  • Memory: 8x 64 GB DIMMs, DDR4-3200
  • Networking: Mellanox ConnectX-5 100 Gb NIC
  • Storage: 1-4 NVMe drives depending on the workload across all three platforms
  • SMT: Enabled on the Intel and AMD platforms, not available on the Ampere® Altra® Max platform. 
  • For more information on configurations used and data collected, please visit the Ampere Solutions site at https://solutions.amperecomputing.com/performance-overview

Performance
  • Performance of NGINX using above configurations
  • Performance/W measured at the socket level

Predictability
  • Performance of Redis combined with a StressNG workload run intermittently on otherwise unutilized cores/threads
  • Ampere Altra Max (128 cores) vs. AMD Milan (64 cores/128 threads), running single socket

Scalability
  • Performance of x264 run as a single instance per thread or core
  • Ampere Altra Max (128 cores) vs. AMD Milan (64 cores/128 threads), running single socket

Power Efficiency
  • Power at socket measured under load using Hadoop testing configurations on Ampere Altra Max
  • Usage Power equals average power consumption of load taken over entire run of Hadoop Terasort
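The power-efficiency methodology above (usage power equals the average socket power draw over the entire run) reduces to simple arithmetic. A small sketch, with function names and sample values of our own choosing:

```python
def average_power(samples_watts):
    """Usage power per the methodology above: the mean socket power
    draw sampled over the entire run (e.g. Hadoop Terasort)."""
    return sum(samples_watts) / len(samples_watts)

def perf_per_watt(score, samples_watts):
    """Performance-per-watt: benchmark score divided by usage power."""
    return score / average_power(samples_watts)

# Hypothetical run: a 1000-point score with the socket averaging 200 W.
print(perf_per_watt(1000.0, [200.0, 220.0, 180.0]))  # 5.0
```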