Generative AI Content on InfoQ
-
Study Shows AI Coding Assistant Improves Developer Productivity
Researchers from Microsoft, MIT, Princeton University, and the Wharton School of the University of Pennsylvania recently published a study showing that using GitHub Copilot increased developer productivity. The team conducted three separate randomized controlled trials (RCTs) involving over 4,000 developers; those using Copilot achieved a 26% increase in productivity.
-
Stability AI Announces Integration of Top Text-to-Image Models with Amazon Bedrock
Stability AI has introduced three new text-to-image models to Amazon Bedrock: Stable Image Ultra, Stable Diffusion 3 Large, and Stable Image Core. These models focus on improving performance in multi-subject prompts, image quality, and typography. They are designed to generate high-quality visuals for various use cases in marketing, advertising, media, entertainment, retail, and more.
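As a rough sketch of how the new models are typically consumed through the Bedrock runtime API with boto3, the snippet below invokes one of them from Python; the model ID, request body, and response fields are assumptions based on common Bedrock conventions and should be checked against the Bedrock model catalog and the Stability AI request schema.
```python
# Minimal sketch: calling a Stability AI text-to-image model on Amazon Bedrock.
# The model ID and body/response shapes are assumptions, not the confirmed schema.
import base64
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

response = bedrock.invoke_model(
    modelId="stability.stable-image-ultra-v1:0",  # assumed model ID
    body=json.dumps({"prompt": "A product photo of a red espresso machine"}),
)

payload = json.loads(response["body"].read())
# Assumed response shape: a list of base64-encoded images.
image_bytes = base64.b64decode(payload["images"][0])
with open("espresso.png", "wb") as f:
    f.write(image_bytes)
```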
-
Apple Open-Sources Multimodal AI Model 4M-21
Researchers at Apple and the Swiss Federal Institute of Technology Lausanne (EPFL) have open-sourced 4M-21, a single any-to-any AI model that can handle 21 input and output modalities. 4M-21 performs well "out of the box" on several vision benchmarks and is available under the Apache 2.0 license.
-
Google Announces Game Simulation AI GameNGen
A research team from Google recently published a paper on GameNGen, a generative AI model that can simulate the video game Doom. GameNGen simulates the game at 20 frames per second (FPS), and in human evaluations its output was preferred only slightly less often than actual gameplay.
-
HelixML Announces Helix 1.0 Release
HelixML has announced that their Helix platform for generative AI is production-ready at version 1.0. Described as a "Private GenAI Stack," the platform provides an interface layer and applications that can be connected to a variety of LLMs. It can be used to prototype apps starting from a laptop, with all components version-controlled to ease subsequent deployment and scaling.
-
Alibaba Releases Two Open-Weight Language Models for Math and Voice Chat
Alibaba released two open-weight language model families: Qwen2-Math, a series of LLMs tuned for solving mathematical problems; and Qwen2-Audio, a family of multimodal LLMs that can accept voice or text input. Both families are based on Alibaba's Qwen2 LLM series, and all but the largest version of Qwen2-Math are available under the Apache 2.0 license.
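Because the weights are openly published, the smaller Qwen2-Math checkpoints can be tried locally with the Hugging Face transformers library; the sketch below assumes an instruct-tuned checkpoint under the Qwen organization and the standard chat template, both of which should be verified against the model card.
```python
# Minimal sketch: running a Qwen2-Math instruct model locally with transformers.
# The checkpoint name is an assumption -- verify it on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-Math-7B-Instruct"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "Find all real x such that x^2 - 5x + 6 = 0."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate an answer and strip the prompt tokens before decoding.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```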
-
Apple Unveils Apple Foundation Models Powering Apple Intelligence
Apple published the details of their new Apple Foundation Models (AFM), a family of large language models (LLMs) that power several features in their Apple Intelligence suite. AFM comes in two sizes: a 3B-parameter on-device version and a larger cloud-based version.
-
LLMs and Agents as Team Enablers
Eric Naiburg and Birgitta Böckeler published articles on the benefits and challenges of using AI as a multiplier in dev teams. We report on their insights for scenarios such as simplifying the germane cognitive load of a domain, automating code migrations, and coaching scrum masters on team facilitation. We also cover Böckeler's experiments with using LLMs to onboard onto a complex project.
-
MariaDB Introduces Open-Source Vector Preview, Aiming to Become Default MySQL Option
With the release of MariaDB 11.6, the MariaDB Foundation has announced a public preview of vector search for the open-source fork of the MySQL engine. Database experts and open-source advocates see vector support as an opportunity for MariaDB to lead the MySQL ecosystem, especially since Oracle reserves most new features for its enterprise editions.
-
Amazon MemoryDB Provides Fastest Vector Search on AWS
AWS recently announced the general availability of vector search for Amazon MemoryDB, the managed in-memory database with Multi-AZ availability. According to AWS, the new capability provides ultra-low latency and the fastest vector search performance at the highest recall rates among vector databases on AWS.
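MemoryDB exposes vector search through Redis-OSS-compatible search commands, so a standard Redis client can create a vector index and run a nearest-neighbor query; the endpoint, index schema, field names, and embedding dimension in the sketch below are illustrative assumptions.
```python
# Minimal sketch: creating a vector index and running a KNN query against
# Amazon MemoryDB with a Redis-compatible client. Index/field names and the
# 4-dimensional embeddings are illustrative assumptions.
import struct

import redis

r = redis.Redis(host="my-memorydb-endpoint", port=6379, ssl=True)

# HNSW index over float32 vectors stored in hash fields with the "doc:" prefix.
r.execute_command(
    "FT.CREATE", "idx:docs", "ON", "HASH", "PREFIX", "1", "doc:",
    "SCHEMA", "embedding", "VECTOR", "HNSW", "6",
    "TYPE", "FLOAT32", "DIM", "4", "DISTANCE_METRIC", "COSINE",
)

# Store one document with its embedding as packed float32 bytes.
r.hset("doc:1", mapping={"embedding": struct.pack("4f", 0.1, 0.2, 0.3, 0.4)})

# 2-nearest-neighbor query against the stored vectors.
query_vec = struct.pack("4f", 0.1, 0.2, 0.3, 0.4)
results = r.execute_command(
    "FT.SEARCH", "idx:docs", "*=>[KNN 2 @embedding $vec AS score]",
    "PARAMS", "2", "vec", query_vec, "DIALECT", "2",
)
print(results)
```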
-
Mistral AI Releases Three Open-Weight Language Models
Mistral AI released three open-weight language models: Mistral NeMo, a 12B parameter general-purpose LLM; Codestral Mamba, a 7B parameter code-generation model; and Mathstral, a 7B parameter model fine-tuned for math and reasoning. All three models are available under the Apache 2.0 license.
-
Increasing Productivity by Becoming a Dual-Purpose Stream Aligned and Platform Software Team
To manage their increased workload effectively and maintain quality and efficiency, a software team decided to become dual-purpose: stream-aligned and platform. They rewrote their main application to be API-first and implemented micro releases with their customer-facing products, to provide value to their end users quickly and maintain a steady flow of accomplishments for the team.
-
AWS Announces a Generative Artificial Intelligence-Powered Service AWS App Studio in Preview
AWS has launched AWS App Studio in preview in the US West (Oregon) Region. The new generative artificial intelligence (AI)-powered service is designed to enable technical professionals without software development skills to create enterprise-grade applications using natural language.
-
AWS Introduces Amazon Q Developer in SageMaker Studio to Streamline ML Workflows
AWS announced that Amazon SageMaker Studio now includes Amazon Q Developer as a new capability. This generative AI-powered assistant is built natively into SageMaker’s JupyterLab experience and provides recommendations for the best tools for each task, step-by-step guidance, code generation, and troubleshooting assistance.
-
Redis Improves Performance of Vector Semantic Search with Multi-Threaded Query Engine
Redis, the in-memory data structure store, has recently released its enhanced Redis Query Engine. This comes at a time when vector databases are gaining prominence due to their importance in retrieval-augmented generation (RAG) for GenAI applications. Redis announced significant improvements to its Query Engine, using multi-threading to enhance query throughput while maintaining low latency.