Microsoft Introduces Serverless GPUs on Azure Container Apps in Public Preview
Azure Container Apps now supports serverless GPUs in public preview, providing access to NVIDIA A100 and T4 GPUs for real-time AI inferencing and machine learning without infrastructure management. The feature supports scale-to-zero and per-second billing, helping balance performance and cost, and integrates with the rest of the Azure platform.
-
Azure Boost DPU: Microsoft's New Silicon Solution for Enhanced Cloud Performance
At Ignite 2024, Microsoft unveiled the Azure Boost DPU, its first in-house data processing unit, designed for low-power, data-centric workloads. The chip offloads networking and storage tasks from host CPUs, and Microsoft claims roughly three times the power efficiency of existing CPU-based servers for such workloads. The co-designed hardware-software stack positions the company to further optimize its AI and cloud infrastructure.
-
Azure AI Agent Service in Public Preview: Automation of Routine Tasks
Unveiled at Ignite, Microsoft's Azure AI Agent Service lets developers build and scale AI agents to automate routine business workflows. The service offers secure integration, flexible use cases, support for multiple frameworks, and automation across platforms such as Teams and Excel.
-
Ai2 Launches OLMo 2, a Fully Open-Source Foundation Model
The Allen Institute for AI (Ai2) research team has introduced OLMo 2, a new family of open-source language models available in 7-billion (7B) and 13-billion (13B) parameter configurations. Trained on up to 5 trillion tokens, the models improve training stability through a staged training process and incorporate diverse datasets.
-
Microsoft Introduces Magentic-One, a Generalist Multi-Agent System
Microsoft has announced the release of Magentic-One, a new generalist multi-agent system designed to handle open-ended tasks involving web and file-based environments. This system aims to assist with complex, multi-step tasks across various domains, improving efficiency in activities such as software development, data analysis, and web navigation.
-
QCon SF 2024 - Ten Reasons Your Multi-Agent Workflows Fail
At QCon SF 2024, Victor Dibia from Microsoft Research explored the complexities of multi-agent systems powered by generative AI. Highlighting common pitfalls like inadequate prompts and poor orchestration, he shared strategies for enhancing reliability and scalability. Dibia emphasized the need for meticulous design and oversight to unlock the full potential of these innovative systems.
-
Microsoft Announces General Availability of Fabric API for GraphQL
Microsoft has launched Fabric API for GraphQL, moving the data access layer from public preview to general availability (GA). This release introduces several enhancements, including support for Azure SQL and Fabric SQL databases, saved credential authentication, detailed monitoring tools, and integration with CI/CD workflows.
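As a sketch of what consuming such an endpoint looks like: the URL, token, and the `customers` query below are hypothetical placeholders, and only the request shape (a JSON body with a `query` field POSTed with a bearer token) follows the standard GraphQL-over-HTTP convention.

```python
import json
import urllib.request

# Hypothetical endpoint and Microsoft Entra access token -- placeholders, not real values.
ENDPOINT = "https://example.fabric.microsoft.com/workspaces/<workspace-id>/graphql"
TOKEN = "<entra-access-token>"


def build_request(query: str) -> urllib.request.Request:
    """Package a GraphQL query as a standard POST request."""
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )


# Example query against a hypothetical exposed table.
QUERY = """
query {
  customers(first: 5) {
    items { customerId name }
  }
}
"""
req = build_request(QUERY)
# Sending would be: urllib.request.urlopen(req) -- requires a live endpoint.
```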
-
Microsoft Introduces Long-Awaited Local Emulator for Azure Service Bus
Microsoft has released a local emulator for Azure Service Bus, allowing developers to build and test messaging applications without cloud dependencies or associated costs. The emulator runs in an isolated environment, shortens the development cycle, simplifies debugging, and is compatible with the latest SDKs.
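A minimal sketch of pointing an application at the emulator: the `UseDevelopmentEmulator=true` flag and the well-known development key name follow the documented emulator connection-string format, while the queue name and the send helper are illustrative and assume the `azure-servicebus` package is installed and the emulator is running locally with that queue configured.

```python
def emulator_connection_string(host: str = "localhost") -> str:
    """Connection string for a local Azure Service Bus emulator instance.

    The well-known key name and the UseDevelopmentEmulator flag are part of
    the documented emulator connection-string format.
    """
    return (
        f"Endpoint=sb://{host};"
        "SharedAccessKeyName=RootManageSharedAccessKey;"
        "SharedAccessKey=SAS_KEY_VALUE;"
        "UseDevelopmentEmulator=true;"
    )


def send_test_message(queue_name: str = "demo-queue") -> None:
    """Send one message to the local emulator.

    Requires azure-servicebus and a running emulator; not executed at
    import time, hence the lazy import below.
    """
    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    with ServiceBusClient.from_connection_string(emulator_connection_string()) as client:
        with client.get_queue_sender(queue_name) as sender:
            sender.send_messages(ServiceBusMessage("hello from the emulator"))
```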
-
Native Vector Support in Azure SQL Database in Public Preview
Azure SQL Database now supports native vector storage and processing in public preview, integrating vector search directly with SQL queries. Keeping embeddings alongside relational data simplifies database management and improves performance by eliminating data movement to a separate vector store, enabling context-aware applications in areas such as e-commerce and healthcare.
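A sketch of what the feature looks like in practice, with the T-SQL composed as Python strings: the table and column names are hypothetical, while the `VECTOR` column type and the `VECTOR_DISTANCE` function are part of the public preview.

```python
# Hypothetical schema: product descriptions with 1536-dimension embeddings.
CREATE_TABLE = """
CREATE TABLE dbo.Products (
    ProductId INT PRIMARY KEY,
    Description NVARCHAR(MAX),
    Embedding VECTOR(1536)   -- native vector column (public preview)
);
"""

# Cosine similarity search directly in SQL -- no data movement to an
# external vector store.
SIMILARITY_QUERY = """
SELECT TOP (5) ProductId, Description,
       VECTOR_DISTANCE('cosine', Embedding, CAST(? AS VECTOR(1536))) AS Distance
FROM dbo.Products
ORDER BY Distance;
"""


def run_similarity_search(connection_string: str, query_vector_json: str):
    """Run the search via pyodbc, passing the query vector as a JSON-array
    string cast to VECTOR. Requires a driver and a database with the
    preview enabled; not executed at import time (lazy import)."""
    import pyodbc

    with pyodbc.connect(connection_string) as conn:
        cursor = conn.cursor()
        cursor.execute(SIMILARITY_QUERY, query_vector_json)
        return cursor.fetchall()
```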
-
Microsoft Unveils Enhanced AI Tools for Developers at GitHub Universe
At GitHub Universe, Microsoft announced deeper integration of Azure AI with GitHub and VS Code, including context-aware tools such as GitHub Copilot for Azure and AI App Templates. The additions streamline workflows, resource management, and deployment, giving developers a more efficient and secure path from experimentation to application building.
-
Central Package Management Now Available in .NET Upgrade Assistant
The .NET Upgrade Assistant team has recently introduced a significant upgrade: the Central Package Management (CPM) feature. This new capability enables .NET developers to manage dependencies more effectively, streamlining the upgrade process and maintaining consistency across various projects within a solution.
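For context, CPM itself is configured through a solution-level `Directory.Packages.props` file, which the Upgrade Assistant can now generate; the package name and version below are illustrative.

```xml
<!-- Directory.Packages.props at the solution root -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <!-- Versions are declared once here for every project in the solution. -->
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
  </ItemGroup>
</Project>
```

Individual project files then reference packages without a version, e.g. `<PackageReference Include="Newtonsoft.Json" />`, so all projects resolve the centrally declared one.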
-
Logic App Standard Hybrid Deployment Model Public Preview: More Flexibility and Control On-Premises
Microsoft's Logic Apps Hybrid Deployment Model, now in public preview, lets organizations run workflows on-premises or in private or public clouds. With local processing, regulatory compliance, and dynamic scalability, businesses can optimize their infrastructure while preserving data integrity, making the model well suited to sectors such as government, healthcare, and manufacturing.
-
Microsoft Unveils Azure Cobalt 100-Based Virtual Machines: Enhanced Performance and Sustainability
Microsoft's Azure Cobalt 100 VMs are now generally available. They deliver up to 50% improved price performance with energy-efficient Arm architecture. Tailored for diverse workloads, these VMs offer various configurations, including general-purpose and memory-optimized options. Their release supports sustainable computing, aligning with Microsoft's commitment to lower carbon footprints.
-
Microsoft Launches Azure Confidential VMs with NVIDIA Tensor Core GPUs for Enhanced Secure Workloads
Microsoft Azure has launched the NCC H100 v5 virtual machines, which pair NVIDIA H100 Tensor Core GPUs with AMD EPYC processors offering confidential-computing protections. The combination provides a trusted execution environment for sensitive high-performance workloads, making the VMs well suited to tasks such as AI model training and inferencing.
-
Microsoft and Tsinghua University Present DIFF Transformer for LLMs
Researchers from Microsoft AI and Tsinghua University have introduced a new architecture called the Differential Transformer (DIFF Transformer), aimed at improving the performance of large language models. It modifies the attention mechanism to amplify attention to relevant context while canceling out attention noise from irrelevant tokens.
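Per the paper, the core idea is to split queries and keys in two and subtract one softmax attention map from the other, scaled by a factor λ, so that common-mode attention noise cancels. A minimal single-head NumPy sketch follows; λ is fixed to a scalar here for simplicity, whereas in the paper it is reparameterized and learned, and the real architecture is multi-head with normalization.

```python
import numpy as np


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def diff_attention(X, Wq, Wk, Wv, lam=0.5):
    """Differential attention: difference of two softmax attention maps.

    X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_model).
    Queries and keys are split into two halves of width d = d_model // 2.
    """
    d = Wq.shape[1] // 2
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    Q1, Q2 = Q[:, :d], Q[:, d:]
    K1, K2 = K[:, :d], K[:, d:]
    A1 = softmax(Q1 @ K1.T / np.sqrt(d))  # primary attention map
    A2 = softmax(Q2 @ K2.T / np.sqrt(d))  # "noise" attention map
    return (A1 - lam * A2) @ V            # subtraction cancels shared noise
```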