Case Study Content on InfoQ
-
How to Unlock Insights and Enable Discovery within Petabytes of Autonomous Driving Data
Kyra Mozley explains Perception 2.0, a shift from rigid CV pipelines to semantic embeddings. She shares how Wayve uses foundation models and vector search to find "needle in a haystack" edge cases.
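A minimal sketch of the vector-search idea behind this kind of edge-case discovery (the embeddings, clip names, and functions here are invented for illustration, not Wayve's system): driving clips and text queries are embedded into the same vector space, and a query retrieves the nearest clips by cosine similarity.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, index, top_k=2):
    """Rank indexed clips by embedding similarity to the query vector."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item["vec"]), reverse=True)
    return [item["clip"] for item in ranked[:top_k]]

# Toy index: each driving clip stored with a (made-up) 3-d embedding.
index = [
    {"clip": "rainy_roundabout", "vec": [0.9, 0.1, 0.0]},
    {"clip": "sunny_highway",    "vec": [0.1, 0.9, 0.0]},
    {"clip": "night_cyclist",    "vec": [0.8, 0.0, 0.2]},
]

# A query embedding close to the "rainy/night" region of the toy space.
print(search([1.0, 0.0, 0.1], index, top_k=2))
```

In a real system the embeddings would come from a foundation model and the index would be an approximate-nearest-neighbor structure rather than a linear scan, but the retrieval contract is the same.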
-
Lessons Learned from Building LinkedIn’s First Agent: Hiring Assistant
Karthik Ramgopal and Daniel Hewlett explain LinkedIn’s shift to agentic AI. They share how a modular supervisor-sub-agent architecture and a centralized skill registry power the new Hiring Assistant.
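The supervisor-plus-skill-registry pattern can be sketched roughly as follows (all names here — `SKILLS`, `skill`, `Supervisor` — are illustrative assumptions, not LinkedIn's actual API): sub-agent capabilities register themselves in a central registry, and a supervisor routes each task to the matching skill.

```python
# Central skill registry: skill name -> handler function.
SKILLS = {}

def skill(name):
    """Decorator that registers a sub-agent capability under a name."""
    def register(fn):
        SKILLS[name] = fn
        return fn
    return register

@skill("source_candidates")
def source_candidates(task):
    return f"sourced candidates for: {task}"

@skill("screen_resume")
def screen_resume(task):
    return f"screened resume for: {task}"

class Supervisor:
    """Routes an incoming task to the registered sub-agent skill."""
    def route(self, skill_name, task):
        handler = SKILLS.get(skill_name)
        if handler is None:
            raise KeyError(f"no registered skill: {skill_name}")
        return handler(task)

print(Supervisor().route("screen_resume", "staff engineer opening"))
# prints "screened resume for: staff engineer opening"
```

The registry keeps skills modular: new capabilities are added by registration rather than by editing the supervisor.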
-
Scaling Cloud and Distributed Applications: Lessons and Strategies from chase.com, #1 Banking Portal in the US
Durai Arasan shares how Chase.com achieved a 71% latency reduction. He explains strategies for efficient scaling, multi-region resilience, and automated "repaving" to secure large-scale systems.
-
Developing Meta's Orion AR Glasses
Jinsong Yu (Meta) discusses the extreme engineering tradeoffs and architecture highlights (world-locked rendering, distributed compute, EMG input) of the 100g Orion AR glasses.
-
Transforming Primary Care: a Case Study in Evolving from Start-Up to Scale-Up
Leander Vanderbijl discusses how Kry transformed its spaghetti architecture into a cohesive system using Domain-Driven Design.
-
One Network: Cloud-Agnostic Service and Policy-Oriented Network Architecture
Anna Berenberg reveals Google's shift to One Network, streamlining diverse infrastructures to enhance developer velocity and policy management.
-
Inflection Points in Engineering Productivity as Amazon Grew 30x
Carlos Arguelles shares Amazon's engineering growth, detailing how crises, scale, and strategic shifts drove critical investments in developer productivity and infrastructure.
-
Supporting Diverse ML Systems at Netflix
David Berg and Romain Cledat discuss Metaflow, Netflix's ML infrastructure for diverse use cases from computer vision to recommendations.
-
Scaling Large Language Model Serving Infrastructure at Meta
Ye (Charlotte) Qi explains key considerations for optimizing LLM inference, including hardware, latency, and production scaling strategies.
-
Renovate to Innovate: Fundamentals of Transforming Legacy Architecture
Rashmi Venugopal explains the fundamentals of technical renovation for scaling software, addressing tech debt and complexity.
-
Slack's Migration to a Cellular Architecture
Cooper Bethea explains the journey of converting Slack's monolithic production services to cellular, highlighting the challenges and key success factors.
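One way to picture a cellular architecture (a hypothetical sketch; the cell names and drain mechanism are invented, not Slack's implementation): users are deterministically assigned to isolated cells, and a cell can be drained so traffic shifts away from it.

```python
import hashlib

# Each cell is an isolated copy of the production stack.
CELLS = ["cell-a", "cell-b", "cell-c"]
drained = set()

def assign_cell(user_id):
    """Hash a user id to a healthy cell, skipping drained cells.

    Note: this naive modulo scheme reshuffles many users when a cell is
    drained; real systems use consistent hashing or explicit routing
    tables to limit that churn.
    """
    healthy = [c for c in CELLS if c not in drained]
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return healthy[digest % len(healthy)]

before = assign_cell("user-42")
drained.add(before)            # drain the cell this user was on
after = assign_cell("user-42")
assert after != before         # traffic has shifted off the drained cell
```

The operational payoff is blast-radius containment: a bad deploy or failure in one cell affects only the users routed to it, and draining moves them elsewhere.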
-
Optimizing Search at Uber Eats
Janani Narayanan and Karthik Ramasamy share Uber Eats' backend scaling journey for nX merchant growth, tackling latency with infrastructure and indexing optimizations.