The Ampere® roadmap has been meticulously designed to address the evolving demands of data centers and cloud computing environments, especially with the ramp of AI inference applications in virtually every domain. Our Cloud Native Processors offer a compelling combination of performance, scalability, and power efficiency, making them ideal for modern workloads. In our 2024 Annual Roadmap Video we showed you our product plan beyond 192 cores and 8 channels of memory, and later in July we revealed even more of our roadmap to the press and shared various architectural details of AmpereOne®.
For software developers, however, innovations in CPUs can sometimes feel abstract and not directly relevant to their work. In this article, we want to show how these hardware-level features translate into concrete benefits for application developers and operators building cloud native applications. Let’s explore how some of the architectural innovations of the AmpereOne processor show up to developers in user space, and what the implications are for new and seasoned developers alike.
The Memory Tagging Extension (MTE) is an Arm architecture feature, now available on AmpereOne CPUs. It is implemented in hardware as a defense mechanism to detect both spatial (e.g., buffer overflow) and temporal (e.g., use-after-free) memory safety violations.
Benefits of Memory Tagging on Arm64
Memory tagging is designed to improve memory safety and reliability by helping detect and mitigate memory errors such as buffer overflows, use-after-free bugs, and other out-of-bounds accesses.
By tagging memory regions and checking that pointers carry matching tags, MTE allows developers to identify and resolve these bugs more effectively, helping build more robust applications.
Memory tagging is a hardware feature, and it requires your operating system and system C library to support this feature and expose it to user applications. If your system C library implementation (e.g., glibc) supports memory tagging, you can enable and use it as follows:
Ensure Hardware and Kernel Support
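A quick way to confirm support from user space is to query the ELF auxiliary vector for the MTE hardware capability bit. The sketch below is ours rather than an excerpt from Ampere documentation; HWCAP2_MTE comes from the arm64 Linux kernel headers, and you can equally look for the mte flag in /proc/cpuinfo.

```c
/* mte_check.c - minimal sketch: confirm the kernel advertises MTE to user space.
 * Build on an Arm64 Linux system: gcc -o mte_check mte_check.c
 */
#include <stdio.h>
#include <sys/auxv.h>          /* getauxval(), AT_HWCAP2 */

#ifndef HWCAP2_MTE
#define HWCAP2_MTE (1UL << 18) /* value from the arm64 kernel header <asm/hwcap.h> */
#endif

int main(void)
{
    if (getauxval(AT_HWCAP2) & HWCAP2_MTE)
        puts("MTE is available on this CPU and kernel");
    else
        puts("MTE is not available on this system");
    return 0;
}
```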
Enable Memory Tagging in glibc
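With recent glibc versions, heap tagging can be switched on at run time through the glibc.mem.tagging tunable, so malloc-based allocations need no code changes. The following is a hedged sketch of how a classic use-after-free then trips a tag-check fault; the tunable bits and the retagging behavior on free() depend on your glibc version.

```c
/* mte_uaf.c - hypothetical sketch: a use-after-free that MTE can catch.
 * Run with heap tagging enabled in glibc, for example:
 *   GLIBC_TUNABLES=glibc.mem.tagging=3 ./mte_uaf
 * (bit 0 enables tagged heap allocations, bit 1 requests synchronous
 *  tag-check faults; exact semantics depend on your glibc version)
 */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(64);     /* glibc returns a tagged pointer when MTE is on */
    strcpy(buf, "hello");       /* in-bounds access: pointer tag matches memory tag */

    free(buf);                  /* glibc typically retags the freed memory here */
    buf[0] = 'X';               /* stale pointer tag no longer matches, so the CPU
                                   raises a tag-check fault (delivered as SIGSEGV) */
    return 0;
}
```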
Debug and Test
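During testing it helps to distinguish MTE tag-check faults from ordinary segmentation faults. One hedged sketch, assuming recent kernel and glibc headers that define SEGV_MTESERR and SEGV_MTEAERR, installs a SIGSEGV handler and inspects si_code; in practice a debugger or a core dump gives you the same information.

```c
/* mte_report.c - sketch: label MTE tag-check faults while running tests. */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#ifndef SEGV_MTEAERR
#define SEGV_MTEAERR 8          /* asynchronous tag-check fault (Linux UAPI) */
#endif
#ifndef SEGV_MTESERR
#define SEGV_MTESERR 9          /* synchronous tag-check fault (Linux UAPI) */
#endif

static void handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* fprintf is not async-signal-safe; acceptable here as a debugging aid only. */
    if (info->si_code == SEGV_MTESERR || info->si_code == SEGV_MTEAERR)
        fprintf(stderr, "MTE tag-check fault at %p\n", info->si_addr);
    else
        fprintf(stderr, "other SIGSEGV at %p\n", info->si_addr);
    _exit(1);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    /* ... run the code under test here ... */
    return 0;
}
```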
Run in Production (Optional)
End-users and developers can leverage memory tagging for finding hard-to-reproduce memory corruption bugs during development and testing, and for hardening deployed services against memory-safety exploits in production.
The Memory Tagging Extension is a game-changer for application development and deployment wherever memory safety and debugging efficiency are critical. Potential applications span domains such as automotive, medical, and telecommunications.
Check out our MTE blog and explainer video.
System Level Cache (SLC) is a single pool of cache memory with higher latency than L2 cache, but lower latency than going to system RAM. Quality of service enforcement (QoS enforcement, based on the Arm Memory System Resource Partitioning and Monitoring extension, MPAM) allows a system administrator to cap the amount of that SLC a specific tenant may use.
Using this feature helps application operators and system administrators manage how different processes and applications access memory, offering fine-grained control over memory partitioning and isolation.
Benefits of QoS Enforcement on Arm64
Ensure Hardware and Kernel Support
Enable Memory Partitioning in the Kernel
If your kernel supports the feature but it is not enabled by default, you might need to enable it manually.
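As a rough illustration only: on kernels that expose this capability through the resctrl filesystem (the interface also used for Intel RDT, and the direction taken by the upstream MPAM work), partitioning is driven by creating a control group, writing a schemata entry, and assigning tasks to it. The mount point, group name, schemata syntax, and PID below are all assumptions for the sketch; consult your kernel and distribution documentation for the exact interface available on your system.

```c
/* slc_partition.c - illustrative sketch only, not a documented Ampere interface.
 * Assumes MPAM-based cache partitioning is exposed through the resctrl
 * filesystem mounted at /sys/fs/resctrl.
 */
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>

static int write_file(const char *path, const char *text)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    int rc = (fputs(text, f) >= 0) ? 0 : -1;
    fclose(f);
    return rc;
}

int main(void)
{
    /* 1. Create a resource-control group for the tenant. */
    if (mkdir("/sys/fs/resctrl/tenant_a", 0755) != 0 && errno != EEXIST) {
        perror("mkdir");
        return 1;
    }

    /* 2. Cap the cache portion this group may use. The schemata line follows
     *    the generic resctrl cache-bitmask style and is an assumption; the
     *    real format depends on the kernel's MPAM support. */
    write_file("/sys/fs/resctrl/tenant_a/schemata", "L3:0=ff\n");

    /* 3. Move a tenant process (PID 1234, purely illustrative) into the group
     *    so the cap applies to it and to tasks it forks. */
    write_file("/sys/fs/resctrl/tenant_a/tasks", "1234\n");

    return 0;
}
```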
Real-World Use Cases
QoS enforcement with memory partitioning is beneficial for cloud operators, virtualization technologies, and developers building high-performance, memory-intensive applications. It enables service providers to offer better performance guarantees and operators to manage resources more effectively, particularly in scenarios where multiple applications share the same underlying hardware.
Check out our QoS enforcement blog or the explainer video.
Nested virtualization allows a virtual machine (VM) running under a hypervisor to act as a host for additional VMs. With AmpereOne, Ampere CPUs now support the hardware feature FEAT_NV2 (introduced in Armv8.4-A), which enables nested virtualization and advanced workloads on Ampere infrastructure. This is particularly useful in several scenarios.
Benefits of Nested Virtualization on Arm64
Availability of Software Support in Linux
As with all new hardware features, there is a period between the availability of the feature in hardware and its support in software. Ampere engineers are working with ecosystem partners to ensure that nested virtualization on Ampere CPUs is available to all customers as soon as possible. The current state of support for nested virtualization in the Linux kernel is incomplete, but patches to complete the feature are in progress.
Once the feature is fully supported upstream in the Linux kernel and QEMU, future Linux distribution releases will pick up that support automatically.
Real-World Use Cases
Nested virtualization on Ampere CPUs has a variety of real-world applications across industries, particularly as Ampere and other Arm CPUs gain prominence in cloud, edge, and high-performance computing environments.
AmpereOne’s innovative design builds upon the success of the Ampere Altra® family of processors. The new features we described here collectively empower developers to build more secure, efficient, and scalable applications on Arm64, leveraging the latest advancements in hardware capabilities. As these technologies mature, they will further enhance the development and deployment of applications across various industries. Particularly in the age of AI compute, providing top-notch tools to developers is key to fueling AI adoption and data center modernization.
Check out our nested virtualization blog for a deeper dive.
Learn more about AmpereOne’s potential to shape the future of compute here.
Useful external reference material can be found here:
All data and information contained herein is for informational purposes only and Ampere reserves the right to change it without notice. This document may contain technical inaccuracies, omissions and typographical errors, and Ampere is under no obligation to update or correct this information. Ampere makes no representations or warranties of any kind, including express or implied guarantees of noninfringement, merchantability, or fitness for a particular purpose, and assumes no liability of any kind. All information is provided “AS IS.” This document is not an offer or a binding commitment by Ampere.
System configurations, components, software versions, and testing environments that differ from those used in Ampere’s tests may result in different measurements than those obtained by Ampere.
©2024 Ampere Computing LLC. All Rights Reserved. Ampere, Ampere Computing, AmpereOne and the Ampere logo are all registered trademarks or trademarks of Ampere Computing LLC or its affiliates. All other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.