Accelerating the Cloud

Part 1: Going Cloud Native

Traditionally, deploying a web application has meant running a large, monolithic application on x86-based servers in a company's enterprise datacenter. Moving applications to the cloud eliminates the need to overprovision the datacenter, since cloud resources can be allocated based on real-time demand. At the same time, the move to the cloud has been synonymous with a shift to componentized applications (a.k.a. microservices). This approach allows applications to scale out easily to hundreds of thousands, or even millions, of users.

By moving to a cloud native approach, applications can run entirely in the cloud and fully exploit its unique capabilities. For example, with a distributed architecture, developers can scale out seamlessly by creating more instances of an application component rather than running an ever-larger application, much as another application server can be added without adding another database. Many major companies (such as Netflix and Wikipedia) have carried the distributed architecture to the next level by breaking applications into individual microservices, which simplifies design, deployment, and load balancing at scale. See The Phoenix Project for more details on breaking down monolithic applications and The Twelve-Factor App for best practices when developing cloud native applications.
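
To make the scale-out idea concrete, here is a minimal Python sketch of the pattern: identical, stateless instances of one service sitting behind a round-robin load balancer. The instance addresses and the route_request helper are hypothetical, purely for illustration; in production this role is played by a real load balancer or an orchestrator such as Kubernetes.

```python
import itertools

# Hypothetical addresses of identical, stateless instances of one microservice.
# Scaling out means appending more entries here, not making any one server bigger.
INSTANCES = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

_next_instance = itertools.cycle(INSTANCES)

def route_request(request_id: int) -> str:
    """Send a request to the next instance, round-robin (a toy load balancer)."""
    target = next(_next_instance)
    return f"request {request_id} -> {target}"

if __name__ == "__main__":
    for i in range(6):
        print(route_request(i))  # requests spread evenly across instances
```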

Hyperthreading Inefficiencies

Traditional x86 servers are built on a general-purpose architecture developed primarily for personal computing, where users needed to run many different types of desktop applications at the same time on a single CPU. Because of this flexibility, the x86 architecture implements advanced capabilities and capacity that are useful for desktop applications but that many cloud applications do not need. Companies running applications on an x86-based cloud must nevertheless pay for these capabilities even when they don't use them.

To improve utilization, x86 processors employ hyperthreading, which enables one core to run two threads. While hyperthreading allows more of a core's capacity to be utilized, it also allows one thread to degrade the performance of the other when the core's resources are overcommitted. Specifically, whenever the two threads contend for the same resources, operations can incur significant and unpredictable latency. It is very difficult to optimize an application when you don't know, and can't control, which application it is going to share a core with. Hyperthreading can be thought of as trying to pay the bills and watch a sports game at the same time: the bills take longer to complete, and you don't really appreciate the game. It is better to separate and isolate the tasks, either by completing the bills first and then concentrating on the game, or by splitting the tasks between two people, one of whom is not a football fan.
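
The "separate and isolate" remedy has a direct software analog: pinning a workload to its own core so it never shares execution resources with a neighbor. Below is a minimal, Linux-only Python sketch using the standard-library call os.sched_setaffinity; the core ID chosen is an arbitrary assumption, and on a hyperthreaded x86 machine you must first check the CPU topology to know which logical CPUs share a physical core.

```python
import os

def pin_to_cpu(cpu_id: int) -> None:
    """Restrict the calling process (pid 0) to a single logical CPU (Linux only)."""
    os.sched_setaffinity(0, {cpu_id})

if __name__ == "__main__":
    # CPU 2 is a hypothetical choice. On a hyperthreaded x86 box, two logical
    # CPUs can map to one physical core, so inspect the topology (e.g., with
    # `lscpu --extended`) before assuming this buys real isolation.
    pin_to_cpu(2)
    print("now restricted to CPUs:", os.sched_getaffinity(0))
```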

Hyperthreading also expands the application’s security attack surface since the application in the other thread might be malware attempting a side channel attack. Keeping applications in different threads isolated from each other introduces overhead and additional latency at the processor level.

Cloud Native Optimization

For greater efficiency and ease of design, developers need cloud resources designed to efficiently process their specific data – not everyone else’s data. To achieve this, an efficient cloud native platform accelerates the types of operations typical of cloud native applications. To increase overall performance, instead of building bigger cores that require hyperthreading to execute increasingly complex desktop applications, cloud native processors provide more cores designed to optimize execution of microservices. This leads to more consistent and deterministic latency, enables transparent scaling, and avoids many of the security issues that arise with hyperthreading since applications are naturally isolated when they run on their own core.

To accelerate cloud native applications, Ampere has developed the Altra and Altra Max 64-bit cloud native processors. These offer unprecedented density: with up to 128 cores on a single chip, a 1U chassis with two sockets can house up to 256 cores, multiplying to thousands of cores in a single rack.
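
As a quick worked example of the density arithmetic (the 1U form factor is from the text; the 42U rack height is an assumption for illustration):

```python
# Core-density arithmetic for Ampere Altra Max servers.
cores_per_socket = 128      # Ampere Altra Max
sockets_per_chassis = 2     # dual-socket server
chassis_height_u = 1        # 1U form factor
rack_height_u = 42          # common full-height rack (assumption)

cores_per_chassis = cores_per_socket * sockets_per_chassis
cores_per_rack = cores_per_chassis * (rack_height_u // chassis_height_u)

print(f"{cores_per_chassis} cores per chassis")  # 256
print(f"{cores_per_rack} cores per rack")        # 10752
```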

Ampere Altra and Ampere Altra Max cores are designed around the Arm Instruction Set Architecture (ISA). While the x86 architecture was initially designed for general-purpose desktops, Arm grew out of a tradition of embedded applications, where deterministic behavior and power efficiency are primary concerns. Building on this foundation, Ampere processors are designed specifically for applications where power and core density are important design considerations. Overall, Ampere processors provide an extremely efficient foundation for many cloud native applications, delivering high performance and predictable, consistent responsiveness combined with higher power efficiency.

For developers, the fact that Ampere processors implement the Arm ISA means there is already an extensive ecosystem of software and tools available for development. In Part 2 of this series, we’ll cover how developers can seamlessly migrate their existing applications to Ampere cloud native platforms offered by leading CSPs to immediately begin accelerating their cloud operations.

The Cloud Native Advantage

A key advantage of running on a cloud native platform is lower latency, leading to more consistent and predictable performance. A microservices approach is fundamentally different from that of today's monolithic cloud applications, so it shouldn't be surprising that optimizing for quality of service and utilization efficiency requires a fundamentally different approach as well.

Microservices break large tasks down into smaller components. Because microservices can specialize, they can deliver greater efficiency, such as higher cache utilization between operations, compared to a generalized, monolithic application trying to complete all of the necessary tasks itself. However, even though microservices typically use fewer compute resources per component, latency requirements at each tier are much stricter than for a typical cloud application. Put another way, each microservice gets only a small share of the latency budget available to the full application.
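
A hedged sketch of the budgeting arithmetic, with an invented 200 ms end-to-end target and a hypothetical five-tier call chain:

```python
# Illustrative only: the budget and the call chain are invented numbers.
END_TO_END_BUDGET_MS = 200.0

# Hypothetical sequential call chain behind one user-facing request.
call_chain = ["api-gateway", "auth", "catalog", "pricing", "recommendations"]

per_service_budget_ms = END_TO_END_BUDGET_MS / len(call_chain)
print(f"{per_service_budget_ms:.0f} ms per microservice")  # 40 ms each
```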

From an optimization standpoint, predictable and consistent latency is critical: when the responsiveness of each microservice can vary as much as it does on a hyperthreaded x86 architecture, the worst-case latency for the full application is the sum of the worst-case latencies of every microservice in the request path. The good news is that this also means even small improvements in microservice latency can yield a significant improvement when applied across many microservices.
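
To illustrate how latency tails compound, here is a small sketch under the simplifying assumptions that the services are called sequentially and behave independently; the probabilities are illustrative, not measurements:

```python
def fraction_fully_fast(p_fast: float, n_services: int) -> float:
    """Share of requests that avoid every service's slow tail, assuming
    n_services sequential, independent calls that are each 'fast' with
    probability p_fast (e.g., under that service's p99 threshold)."""
    return p_fast ** n_services

for n in (1, 10, 50):
    print(f"{n:>2} services: {fraction_fully_fast(0.99, n):.1%} all-fast")
# 1 service: 99.0%; 10 services: 90.4%; 50 services: 60.5% -- tails compound.

# Trimming each service's tail (0.99 -> 0.999) recovers most of it:
print(f"50 services at p=0.999: {fraction_fully_fast(0.999, 50):.1%}")  # ~95.1%
```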

Figure 1 illustrates the performance benefits of running typical cloud applications on a cloud native platform like Ampere Altra Max compared to Intel Ice Lake and AMD Milan. Ampere Altra Max delivers not only higher performance but also higher performance per watt. The figure also shows Ampere Altra Max's superior latency, just 13% that of Intel Ice Lake, providing the consistent responsiveness cloud native applications need.

[Figure: Leadership Performance in the Cloud]

Figure 1: A cloud native platform like Ampere Altra Max offers superior performance, power efficiency, and latency compared to Intel Ice Lake and AMD Milan.

Sustainability

Even though CSPs are responsible for managing power consumption in their datacenters, many developers are aware that the public and company stakeholders are increasingly interested in how companies are addressing sustainability. Cloud datacenters are estimated to have accounted for 80% of total datacenter power consumption in 2022¹, and based on figures from 2019, datacenter power consumption is anticipated to double by 2030.

It is clear that sustainability is critical to long-term cloud growth and that the cloud industry must begin adopting more power-efficient technology. Reducing power consumption will also lead to operational savings. In any case, companies that lead the way by shrinking their carbon footprint today will be prepared if such measures become mandated.

[Table: Cloud Native Compute is Fundamental to Sustainability]

Table 1: Advantages of Ampere cloud native platforms compared to legacy x86 clouds.


Cloud native technologies like Ampere’s enable CSPs to continue to increase compute density in the datacenter (see Table 1). At the same time, cloud native platforms provide a compelling performance/price/power advantage, enabling developers to reduce day-to-day operating costs while accelerating performance.

In Part 2 of this series, we will take a detailed look at what it takes to redeploy existing applications to a cloud native platform and accelerate your operations.
