QCon London 2024 Content on InfoQ
-
QCon London: gRPC Migration Automation at LinkedIn
At QCon London 2024, Karthik Ramgopal and Min Chen described how AI helped LinkedIn change the remote procedure call (RPC) protocol for 50,000 production endpoints from Rest.li to Google's gRPC. A planned two-to-three-year manual migration became an AI-assisted migration lasting two to three quarters, changing 20 million lines of code across 2,000 services without business interruption.
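To illustrate what the target protocol looks like, here is a minimal, hypothetical Python sketch of calling a gRPC endpoint through a generated stub; the service and message names (ProfileServiceStub, GetProfileRequest) are illustrative placeholders, not LinkedIn's actual APIs.

# Hypothetical sketch: invoking a gRPC endpoint from Python after a migration.
# profile_pb2 / profile_pb2_grpc stand in for modules generated by protoc
# from a .proto definition; all names below are illustrative only.
import grpc

from profile_pb2 import GetProfileRequest          # generated message (assumed)
from profile_pb2_grpc import ProfileServiceStub    # generated stub (assumed)

def fetch_profile(member_id: str) -> None:
    # A gRPC channel replaces the Rest.li HTTP client; the stub exposes typed methods.
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = ProfileServiceStub(channel)
        response = stub.GetProfile(GetProfileRequest(member_id=member_id))
        print(response)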
-
QCon London: Mastering Long-Running Processes in Modern Architectures
At QCon London 2024, Bernd Ruecker recommended implementing long-running tasks asynchronously on a process-orchestration platform. Such a platform supports clearer service boundaries, improves efficiency, and reduces accidental system complexity and risk. Operating the platform centrally within an organization makes it easier for application teams to adopt orchestration.
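As a rough, stand-alone illustration of the asynchronous pattern (deliberately not tied to any particular orchestration product, whose APIs differ), here is a small Python sketch in which the caller gets a job id back immediately and a background worker finishes the long-running step later; a real orchestration engine would additionally persist state and drive multi-step processes.

# Minimal sketch of handling a long-running task asynchronously.
# A queue and thread stand in for the durable state and workers that a
# process-orchestration platform would provide; names here are illustrative.
import queue
import threading
import time
import uuid

jobs: queue.Queue = queue.Queue()
results: dict = {}

def submit(payload: dict) -> str:
    job_id = str(uuid.uuid4())
    jobs.put((job_id, payload))   # caller returns immediately; work happens later
    return job_id

def worker() -> None:
    while True:
        job_id, payload = jobs.get()
        time.sleep(2)             # placeholder for the long-running step
        results[job_id] = f"processed {payload}"
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()
job_id = submit({"order": 42})
jobs.join()                       # wait here only for demonstration purposes
print(results[job_id])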
-
QCon London: The Art, Science and Psychology of Decision-Making
At QCon London 2024, Hannes Ricklefs, head of architecture at the BBC, gave a well-received talk on decision-making. Ricklefs summarised the key reasons for applying art, science and psychology to the discipline of decision-making, focusing on appropriate methodologies to use and the effects of biases on our ability to make good decisions in both personal and business contexts.
-
QCon London: How Duolingo Sent 4 Million Push Notifications in 6 Seconds During the Super Bowl Break
As part of the Super Bowl marketing campaign, Duolingo sent out 4 million mobile push notifications when the company’s five-second ad aired during the commercial break. At QCon London, Duolingo’s engineers presented the asynchronous AWS architecture responsible for broadcasting messages to millions of users across seven US cities.
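As a rough sketch of the fan-out idea (not Duolingo's actual code), the snippet below publishes a single message to an Amazon SNS topic with boto3 and leaves delivery to downstream subscribers such as SQS queues feeding push workers; the region, topic ARN and message body are placeholders.

# Hedged sketch: broadcast by publishing once to an SNS topic and letting
# subscribers (e.g. SQS queues in front of push-delivery workers) fan out.
# The region, topic ARN and message are placeholders.
import boto3

sns = boto3.client("sns", region_name="us-east-1")

def broadcast(message: str) -> None:
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:push-broadcast",  # placeholder
        Message=message,
    )

broadcast("example push notification payload")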
-
QCon London: Efficient Serverless Development
At QCon London, Yan Cui, a serverless advocate at Lumigo, shared patterns for effective local development with AWS serverless technologies. The focus areas were testing approaches, deployment practices, and application environments.
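To ground the testing theme, here is a minimal, illustrative example (not code from the talk) of the kind of AWS Lambda handler that can be exercised locally with a plain unit test, without deploying or emulating AWS; the event shape is assumed to be an API Gateway style JSON body.

# Illustrative Lambda handler plus a local test; the event shape is assumed.
import json

def handler(event, context):
    name = json.loads(event.get("body") or "{}").get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello {name}"})}

def test_handler_locally():
    # No emulator needed: call the function directly with a fake event.
    response = handler({"body": json.dumps({"name": "QCon"})}, context=None)
    assert response["statusCode"] == 200
    assert json.loads(response["body"])["message"] == "hello QCon"

test_handler_locally()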
-
Large Language Models for Code by Loubna Ben Allal at QCon London
At QCon London, Loubna Ben Allal discussed Large Language Models (LLMs) for code. She covered the lifecycle of code-completion models, which consists of pre-training on vast codebases, fine-tuning, and continuous adaptation, with a particular focus on open-source models hosted on platforms like Hugging Face.
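As a small illustration of using such open models (not code from the talk), the snippet below generates a completion with the Hugging Face transformers library; the model name is just one example of an open code model and the prompt is arbitrary.

# Sketch: code completion with an open model from the Hugging Face Hub.
# "bigcode/starcoder2-3b" is one example of an open code model; swap as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigcode/starcoder2-3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))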
-
QCon London: Meta Used Monolithic Architecture to Ship Threads in Only Five Months
At QCon London 2024, Zahan Malkani described Meta’s journey from identifying the market opportunity to shipping the Threads application only five months later. The company leveraged Instagram's existing monolithic architecture and iterated quickly to create a new text-first microblogging service in record time.
-
Efficient DevSecOps Workflows with a Little Help from AI: Q&A with Michael Friedrich
At QCon London, Michael Friedrich, senior developer advocate at GitLab, discussed how AI can help in DevSecOps workflows. His session was part of the Cloud-Native Engineering track on the first day of the conference. InfoQ interviewed Friedrich after the session.
-
Navigating LLM Deployment: Tips, Tricks and Techniques by Meryem Arik at QCon London
At QCon London, Meryem Arik discussed deploying Large Language Models (LLMs). While initial proofs of concept benefit from hosted solutions, scaling demands self-hosting to cut costs, enhance performance with tailored models, and meet privacy and security requirements. She emphasized understanding deployment limits, quantization for efficiency, and optimizing inference to fully use GPU resources.
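To make the quantization point concrete, here is a hedged sketch (not from the talk) of loading a model in 4-bit precision with the transformers and bitsandbytes libraries to reduce GPU memory; the model name is an example, and dedicated serving stacks expose their own quantization and batching options.

# Sketch: 4-bit quantized loading to cut GPU memory for self-hosted inference.
# The model name is an example; production servers (vLLM, TGI, etc.) offer
# their own quantization, batching and scheduling options.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # example open model
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",   # spread weights across available GPU memory
)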