
From Collaboration to Compute: How Ampere, Uber and OCI Co-Developed A4 Instances

Jeff Wittich, Chief Product Officer at Ampere
19 February 2026

At Ampere®, some of our most important product decisions are shaped through close collaboration with customers operating at massive scale. Over several years of shared work and insight, our relationship with Uber evolved into a three-way co-innovation effort with Oracle Cloud Infrastructure (OCI), directly informing the design of AmpereOne® M and the OCI A4 instances it powers.


Uber runs one of the world’s most complex microservice environments, supporting real-time systems that power rides, deliveries, pricing and logistics across hundreds of cities. As Uber adopted earlier OCI Ampere-based instances and expanded their use over time, each deployment became part of an ongoing feedback loop between Uber, OCI and Ampere. These production deployments were not simply opportunities to validate performance, but moments of shared learning that shaped how both the compute architecture and the cloud platform evolved together.


A key area of focus emerged from Uber’s extensive use of Go-based microservices and other latency-sensitive workloads. While horizontal scaling remains foundational to Uber’s architecture, many critical services depend on predictable single-threaded execution and efficient memory behavior to meet strict latency requirements. In environments of this scale, consistency matters as much as capacity, since small variations in execution time or memory behavior can cascade across distributed systems and affect user-facing performance.


Working together across multiple generations of deployments allowed these patterns to surface clearly and repeatedly. That continuity informed how AmpereOne® M was designed. Consistent with Ampere’s long-standing design philosophy, the architecture emphasizes predictable per-core performance and choices that reduce variability across cores. This approach supports latency-sensitive services while enabling efficient scaling across large fleets of microservices, aligning closely with how Uber’s software operates in production and how OCI delivers those capabilities in the cloud.


Uber’s experience also underscored the importance of balanced system design as workloads scale. Memory behavior and cache interaction increasingly shape real-world performance in large distributed systems. Designing for these characteristics helps ensure applications behave consistently as demand grows, rather than encountering unexpected performance cliffs. At the same time, efficiency at the processor and system level becomes essential when infrastructure must scale globally without driving disproportionate increases in cost or power consumption. These considerations informed both silicon design at Ampere and instance configuration choices within OCI.


These principles carried directly into the A4 instance family. Powered by AmpereOne® M, A4 delivers up to 35% higher per-core performance and up to 20% higher clock speeds than the prior generation, strengthening performance across a wide range of workloads. Expanded memory bandwidth and larger core counts let teams scale microservice deployments without sacrificing responsiveness. Together, these characteristics deliver strong price-performance by combining consistent per-core execution with high overall efficiency. For customers operating at Uber’s scale, this balance is critical: it enables capacity growth while keeping infrastructure costs and energy usage aligned with business growth.


For Uber, this resulted in infrastructure that aligned closely with how its software already runs in production. A4 instances deliver the performance characteristics required for mission-critical services while supporting a more favorable cost profile as deployments scale across regions. The platform is well suited for global, always-on workloads where scale, responsiveness and economics must all be considered together.


From Ampere’s perspective, this collaboration illustrates how sustained, multi-party engagement drives better outcomes. Working alongside Uber and OCI over multiple years provided a grounded view into how modern cloud workloads evolve in production and how performance, efficiency and economics intersect at scale. Because this work is delivered through a broadly available cloud platform, its impact extends beyond any single deployment. The same architectural choices shaped through collaboration with Uber are accessible to organizations of all sizes through OCI, helping advance the industry by making modern, high-performance compute easier to adopt.


Related: Introducing OCI A4 Standard Instances: Delivering Next Gen Performance with AmpereOne® M
