Uber embarked on a strategic migration from on-premises data centers to Oracle Cloud Infrastructure (OCI) and Google Cloud Platform in February 2023. A key component of this migration was integrating ARM-based hosts into its predominantly x86 fleet to reduce costs, improve price-performance, and ensure hardware flexibility amid supply chain uncertainties.
The x86 and ARM architectures represent fundamentally different philosophies in processor design, and their distinctions have shaped the computing landscape for decades. While x86 (developed by Intel and AMD) follows a Complex Instruction Set Computing (CISC) approach that prioritizes backward compatibility and complex instructions executed in microcode, ARM embraces Reduced Instruction Set Computing (RISC) principles, with simpler, fixed-length instructions, most of which execute in a single cycle.
This architectural difference manifests in practical terms. x86 processors typically deliver higher peak performance for computationally intensive tasks but consume more power, making them dominant in desktops and servers where power budgets are generous. ARM processors, meanwhile, excel in energy efficiency, offering better performance-per-watt ratios that have made them the architecture of choice for mobile devices, embedded systems, and, increasingly, power-conscious data centers.
The multi-architecture integration wasn't simply about deploying new hardware. For Uber's infrastructure team, it meant rethinking fundamental systems that had been exclusively x86-based for years. The journey revealed how deeply architecture assumptions can permeate every layer of a technology stack.
At the foundation of this transition was Oracle Cloud Infrastructure's strategic embrace of Ampere Computing's ARM processors. These chips deliver remarkable energy efficiency – a trait ARM perfected in the mobile space now scaled to data center environments. For cloud providers, this translates to substantial power savings and increased compute density, reducing both energy costs and physical footprint requirements.
For Uber, these advantages align perfectly with its sustainability goals. As the company works toward zero emissions, adopting energy-efficient computing infrastructure represents a meaningful step in reducing its environmental impact while simultaneously improving its cost structure.
The transition began with host-level readiness – creating ARM-compatible images encompassing the operating system, kernel, and essential infrastructure components. Once hosts could boot, the team confronted their build pipeline, which revealed a complex web of dependencies. Uber's container system relied on Makisu, a tool optimized for x86 that couldn't cross-compile for ARM.

Figure: Build pipeline for container images
Rather than rewriting 5,000+ service build processes, the team employed a clever bootstrapping approach. They leveraged Google Bazel to build an ARM version of Makisu itself, which could then build other services natively. This seemingly straightforward task exposed circular dependencies: Makisu ran on Buildkite, which ran on Uber's Odin platform, which depended on host agents – all built with Makisu.
Breaking this circular dependency required methodically rebuilding each layer using Bazel's multi-architecture capabilities. The team started with host agents, then rebuilt Odin's components, followed by Buildkite, and finally Makisu. This foundation enabled a distributed build pipeline that could generate unified multi-architecture container images.
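Rebuilding "each layer" in the right order is, at heart, a topological sort of the build-tool dependency graph. The sketch below uses the component names from the article but simplifies the real pipeline to a linear chain; the DFS emits each component only after everything it runs on has been rebuilt.

```go
package main

import "fmt"

// deps maps each build-chain component to what it runs on. The names follow
// the article; the graph is a simplification of Uber's actual pipeline.
var deps = map[string][]string{
	"makisu":      {"buildkite"},
	"buildkite":   {"odin"},
	"odin":        {"host-agents"},
	"host-agents": {},
}

// rebuildOrder returns the components in dependency order via a depth-first
// topological sort: each component appears only after its dependencies.
func rebuildOrder() []string {
	var order []string
	visited := map[string]bool{}
	var visit func(string)
	visit = func(n string) {
		if visited[n] {
			return
		}
		visited[n] = true
		for _, d := range deps[n] {
			visit(d)
		}
		order = append(order, n)
	}
	for n := range deps {
		visit(n)
	}
	return order
}

func main() {
	fmt.Println(rebuildOrder())
}
```

Because the chain is linear, the order is the same one the team followed: host agents, then Odin, then Buildkite, and finally Makisu.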
While this approach doubled build costs (with over 400,000 weekly container builds), the economics still favored ARM adoption. The distributed build system also provided a crucial advantage: it enabled gradual, controlled migration rather than an all-or-nothing approach.
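A unified multi-architecture image is published as a manifest list: one per-platform image digest under a single tag, from which each pulling host selects the entry matching its own platform. The sketch below models that selection step; the digests are fake placeholders and the types are illustrative, not a real registry API.

```go
package main

import (
	"fmt"
	"runtime"
)

// manifestEntry is one platform-specific image inside a multi-arch manifest
// list. The digest values here are placeholders for illustration.
type manifestEntry struct {
	OS, Arch, Digest string
}

var manifestList = []manifestEntry{
	{"linux", "amd64", "sha256:amd64-digest"},
	{"linux", "arm64", "sha256:arm64-digest"},
}

// resolve picks the entry matching the pulling host's platform, mirroring
// what a container runtime does when it pulls a multi-arch tag.
func resolve(entries []manifestEntry, os, arch string) (manifestEntry, bool) {
	for _, e := range entries {
		if e.OS == os && e.Arch == arch {
			return e, true
		}
	}
	return manifestEntry{}, false
}

func main() {
	if e, ok := resolve(manifestList, "linux", runtime.GOARCH); ok {
		fmt.Println("pulling", e.Digest)
	} else {
		fmt.Println("no image for", runtime.GOARCH)
	}
}
```

This is what makes gradual migration possible: the same tag works on both fleets, so a service can move host pools without changing what it deploys.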
The deployment systems required similar enhancements. Uber implemented architecture-specific placement constraints and automatic fallback mechanisms that would revert to x86 if compatibility issues arose. These safeguards allowed the team to migrate services incrementally while maintaining production reliability.
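The placement logic with fallback can be sketched as a simple decision: schedule onto arm64 only when the service's image supports it and the service has not been flagged incompatible, otherwise fall back to x86-64. The types and function below are hypothetical illustrations, not Uber's actual scheduler API.

```go
package main

import "fmt"

// Service is a hypothetical deployable unit; ArchSupport records which
// architectures its multi-arch image manifest actually provides.
type Service struct {
	Name        string
	ArchSupport map[string]bool
}

// placeArch picks arm64 when the image supports it and no compatibility
// issue has been flagged; otherwise it reverts to the amd64 pool, so a bad
// rollout never strands a service without runnable hosts.
func placeArch(s Service, armIncompatible bool) string {
	if s.ArchSupport["arm64"] && !armIncompatible {
		return "arm64"
	}
	return "amd64"
}

func main() {
	svc := Service{
		Name:        "pricing",
		ArchSupport: map[string]bool{"amd64": true, "arm64": true},
	}
	fmt.Println(placeArch(svc, false)) // placed on arm64
	fmt.Println(placeArch(svc, true))  // automatic fallback to amd64
}
```

The incompatibility flag is the incremental-migration lever: flipping it for one service reverts only that service, leaving the rest of the fleet untouched.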
The successful deployment of their first ARM-based services marked a technical milestone, proving that multi-architecture infrastructure could work at Uber's scale. However, the journey from this initial success to migrating thousands of services would require additional strategies and tooling.
As cloud providers expand their processor architecture options, organizations like Uber and Bitmovin demonstrate both the challenges and potential benefits of incorporating diverse computing architectures into large-scale infrastructure systems. Bitmovin's complete migration of their encoding services to ARM processors, alongside Uber's experiences, offers valuable insights into how companies can navigate architectural heterogeneity at massive scale.