Today, Ampere® provided an important roadmap update, announcing a future product in the AmpereOne® Family, AmpereOne® Aurora.
Ampere has spent the last several years developing innovative technologies for cloud workloads and infrastructure of all types. As general-purpose and AI workloads rapidly converge in the cloud into AI Compute, the platform of the future must be built for both.
When you combine all of Ampere's innovations (our custom cores, our proprietary mesh, and our die-to-die interconnect across chiplets) with Ampere's own AI acceleration, the result is the revolutionary AmpereOne Aurora.
Key Features of AmpereOne® Aurora:
- Scales across a range of AI inference and training use cases, with powerful AI Compute capabilities for workloads such as retrieval-augmented generation (RAG) and vector databases.
- Delivers leading performance per rack for AI Compute and, even more importantly, can be air-cooled. That means it can be deployed in any existing data center in the world, from public clouds to enterprises and from hyperscale data centers to the edge, which is the key to solving the global AI power crisis.
Combined with our ongoing collaborations with partners and the AI Platform Alliance, we are building hardware solutions in innovative ways to deliver the full spectrum of AI Compute in an open, interoperable environment. Users can rely on accessible, affordable technologies that they can mix and match to build their own innovative AI products and services.
This is a different approach from what others have attempted to build: it is open and inclusive, it can be deployed everywhere today, and it is efficient in both power and cost. We're taking the same novel approach you've seen in today's AmpereOne® CPUs and adding our Ampere AI IP to create an integrated product.
At Ampere, we are bringing our innovative technology portfolio to bear on the most important problem facing our industry, and AmpereOne® Aurora is the solution.