AI, ML & Data Engineering Content on InfoQ
-
Sauce Labs Launches AI Agent to Automate Test Creation and Close the DevOps “Velocity Gap”
Sauce Labs has announced the general availability of Sauce AI for Test Authoring, an AI-driven agent designed to translate business intent directly into executable test suites, marking a shift toward what the company calls Intent-Driven Testing.
-
Mistral AI Introduces Workflows for Orchestrating Enterprise AI Processes
Mistral AI has launched Workflows, an orchestration layer for enterprise AI that is now in public preview. The release addresses a persistent challenge: as AI models and agents become more advanced, deploying them reliably in production remains difficult because infrastructure for coordination, monitoring, and recovery is often lacking.
-
QCon AI Boston 2026 Schedule: Agents in Production, Inference Cost, and AI in the SDLC
The schedule for QCon AI Boston 2026 (June 1-2) is now live. The two-day program groups sessions around context engineering, inference economics, agent reliability, and how AI is changing the software development lifecycle. Speakers include engineers from DoorDash, LinkedIn, Netflix, Apple, and Red Hat.
-
How Slack Manages Context in Long-running Multi-agent Systems
To sustain productivity in long-running agent systems, Slack engineers moved away from accumulating chat logs and started using structured memory, validation, and distilled truth to maintain coherence and accuracy.
-
Google Cloud Introduces Agents CLI to Streamline AI Agent Development Lifecycle
Google Cloud has introduced Agents CLI within its Agent Platform, aiming to streamline the development lifecycle of AI agents from local prototyping to production deployment. The release targets a common challenge in agent development, where tooling and infrastructure are often fragmented across multiple services and environments.
-
Legare Kerrison and Cedric Clyburn on LLM Performance and Evaluations
Effectively measuring the performance of applications that leverage large language models (LLMs) is critical to the adoption of AI technologies in organizations. Legare Kerrison and Cedric Clyburn from Red Hat recently spoke at the Arc of AI 2026 conference about practical methods to evaluate and optimize LLM inference.
-
QCon San Francisco 2026: 12 Tracks Announced
The 12 tracks for QCon San Francisco 2026 (November 16-20) are now live. Four tracks cover AI in production. The other eight cover the rest of what senior engineering still demands: distributed systems, architecture teardowns, resilience, platform internals, API design, and Staff+ leadership. Early bird pricing runs until May 12th.
-
Uber Migrates 75,000+ Test Classes from JUnit 4 to JUnit 5 Using Automated Code Transformation
Uber engineers migrated over 75,000 test classes from JUnit 4 to JUnit 5 using automated code transformation with OpenRewrite and internal orchestration. By enabling the JUnit Platform for dual execution with Bazel and validating changes through CI, the team modernized testing infrastructure while maintaining correctness at monorepo scale.
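OpenRewrite ships a stock recipe for exactly this JUnit 4 to JUnit 5 migration. As a minimal sketch of how a team might activate it via the Gradle plugin (an assumption for illustration only; Uber ran the transformation through Bazel and internal orchestration, and the version numbers below are placeholders):

```groovy
// build.gradle — minimal OpenRewrite setup for the JUnit 4 -> 5 recipe.
// Plugin and recipe-module versions are illustrative; pin real releases in practice.
plugins {
    id 'org.openrewrite.rewrite' version '6.x'
}

rewrite {
    // Stock migration recipe from the rewrite-testing-frameworks module
    activeRecipe 'org.openrewrite.java.testing.junit5.JUnit4to5Migration'
}

dependencies {
    rewrite 'org.openrewrite.recipe:rewrite-testing-frameworks:2.x'
}
```

Running `./gradlew rewriteRun` then applies the transformation (annotation swaps, assertion argument reordering, and so on) in place, which a CI pipeline can validate by executing both frameworks side by side, as Uber did with the JUnit Platform's dual-execution support.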
-
Microsoft's Russinovich and Hanselman Warn AI Is Hollowing Out the Junior Developer Pipeline
Microsoft's Russinovich and Hanselman argue in a CACM paper that agentic AI creates an "AI drag" on junior developers while boosting seniors, incentivizing companies to stop hiring entry-level engineers. Entry-level hiring is down 67% since 2022. They propose a preceptor model borrowed from medical education to preserve the talent pipeline.
-
Cloudflare Sandboxes Reach General Availability, Giving AI Agents Persistent Isolated Environments
Cloudflare has released Sandboxes and Containers into general availability, providing persistent isolated Linux environments for AI agent workloads. New capabilities include secure credential injection via egress proxy, PTY terminal support, persistent code interpreters, filesystem watching, and snapshot-based session recovery. Active CPU pricing charges only for used cycles.
-
Cloudflare Outlines MCP Architecture as Enterprises Confront Security and Governance Risks
Cloudflare has outlined a reference architecture for scaling Model Context Protocol (MCP) deployments across the enterprise, positioning centralized governance, remote server infrastructure, and cost controls as key requirements for production-ready agent systems.
-
Anthropic Introduces Managed Agents to Simplify AI Agent Deployment
Anthropic has introduced Managed Agents on Claude, a managed execution layer for agent-based workflows. It separates agent logic from runtime concerns like orchestration, sandboxing, state management, and credentials. The system supports long-running multi-step workflows with external tools, error recovery, and session continuity via a meta-harness architecture.
-
GitHub Acknowledges Recent Outages, Cites Scaling Challenges and Architectural Weaknesses
GitHub has publicly addressed a series of recent availability and performance issues that disrupted services across its platform, attributing the incidents to rapid growth, architectural coupling, and limitations in handling system load.
-
Designing Memory for AI Agents: Inside LinkedIn’s Cognitive Memory Agent
LinkedIn has introduced the Cognitive Memory Agent (CMA), a generative AI infrastructure layer enabling stateful, context-aware systems. It provides persistent memory across episodic, semantic, and procedural layers, supporting multi-agent coordination, retrieval, and lifecycle management. CMA addresses LLM statelessness and enables production-grade personalization and long-term context in AI applications.
-
Subagents in Gemini CLI Enable Task Delegation and Parallel Agent Workflows
Google has introduced subagents in Gemini CLI, a new capability designed to help developers delegate complex or repetitive tasks to specialized AI agents operating alongside a primary session.