Large Language Models: Content on InfoQ
-
Decart and Etched Release Oasis, a New AI Model Transforming Gaming Worlds
Decart.ai and Etched.ai recently introduced Oasis, an AI-driven model that generates a fully interactive, real-time open-world experience inspired by Minecraft.
-
Grab Employs LLMs for Conversational Data Discovery with GPT-4, Glean and Slack
Grab responded to the challenge of finding valuable datasets among 200k+ tables by enhancing Hubble, its data discovery tool, with new capabilities leveraging GenAI technologies. The company shortened the data discovery process by incorporating LLMs to generate dataset documentation, and created a Slack bot to bring effective data discovery to data consumers.
-
Amazon SageMaker JumpStart Expands Portfolio with Bria AI's Text-to-Image Models
Amazon Web Services has integrated Bria AI's latest text-to-image foundation models into Amazon SageMaker JumpStart, marking a significant expansion of its enterprise-grade generative AI capabilities. The addition includes three variants: Bria 2.3, Bria 2.2 HD, and Bria 2.3 Fast, each designed to address specific enterprise needs in visual content generation.
-
GitHub and Google Cloud Collaborate to Bring Gemini 1.5 Pro to GitHub Copilot
GitHub's partnership with Google Cloud brings Gemini 1.5 Pro to GitHub Copilot, adding an AI model with a context window of up to two million tokens. The natively multimodal model excels at code generation, analysis, and optimization, helping developers work with extensive codebases in environments such as Visual Studio Code.
-
xAI Unveils a New API Service for Grok Models
Elon Musk’s xAI has launched a public beta for its API service, enabling developers to integrate xAI's large language models (LLMs) into their applications.
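As a sketch of what integrating the new API might look like, the snippet below assembles a chat-completions request against xAI's OpenAI-compatible endpoint. The endpoint URL and the `grok-beta` model name reflect the public beta announcement and may change; treat both, and the helper itself, as illustrative assumptions rather than official client code.

```python
import json
import os
import urllib.request

# Endpoint and model name as documented at the public beta launch;
# both are assumptions here and may change over time.
API_URL = "https://api.x.ai/v1/chat/completions"

def build_request(prompt, model="grok-beta"):
    """Assemble (but do not send) an HTTP request whose payload follows
    the familiar OpenAI chat-completions schema that xAI's API mirrors."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('XAI_API_KEY', '')}",
        },
    )

req = build_request("Say hello in one word.")
# Actually sending the request requires a valid API key:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(req.full_url)
```

Because the request is only assembled, the sketch runs without credentials; swapping in a real key and uncommenting the `urlopen` call performs the call.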
-
OpenAI Releases ChatGPT Search Feature
OpenAI recently released ChatGPT Search which allows ChatGPT to search the web when answering user questions. Instead of being limited to knowledge available at the time of training, ChatGPT can now incorporate current information from the web and include links to its sources.
-
Meta MobileLLM Advances LLM Design for On-Device Use Cases
With MobileLLM, Meta researchers aim to show that, for smaller models, quality is not a direct product of how many billions of parameters they have; rather, it is the result of carefully designing their architecture. To prove their point, they coupled deep and thin architectures with embedding sharing and grouped-query attention mechanisms to improve accuracy over prior state-of-the-art models.
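The grouped-query attention mechanism mentioned above can be sketched in a few lines of NumPy: several query heads share one key/value head, shrinking the KV cache that dominates memory use in on-device inference. The shapes and names below are illustrative, not Meta's implementation.

```python
import numpy as np

def grouped_query_attention(q, k, v, n_groups):
    """Grouped-query attention: query heads share a smaller set of
    key/value heads, cutting KV-cache size versus full multi-head.

    q: (n_q_heads, seq, d)   k, v: (n_kv_heads, seq, d)
    n_q_heads must be a multiple of n_kv_heads (= n_groups here)."""
    n_q_heads, seq, d = q.shape
    group_size = n_q_heads // n_groups
    # Repeat each KV head so every query head in its group attends to it
    k = np.repeat(k, group_size, axis=0)
    v = np.repeat(v, group_size, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
out = grouped_query_attention(
    rng.normal(size=(8, 4, 16)),   # 8 query heads
    rng.normal(size=(2, 4, 16)),   # only 2 shared key heads
    rng.normal(size=(2, 4, 16)),   # only 2 shared value heads
    n_groups=2,
)
print(out.shape)  # (8, 4, 16)
```

With 8 query heads sharing 2 key/value heads, the cached K/V tensors are a quarter of their multi-head size while the output shape is unchanged.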
-
Microsoft Introduces Vector Data Abstractions Library for .NET
On October 29, 2024, Microsoft released the Microsoft.Extensions.VectorData.Abstractions library for .NET in preview. It makes it easier to integrate .NET solutions with the Semantic Kernel AI SDK by using abstractions over concrete AI implementations and models.
-
Meta AI Introduces Thought Preference Optimization Enabling AI Models to Think before Responding
Researchers from Meta FAIR, the University of California, Berkeley, and New York University have introduced Thought Preference Optimization (TPO), a new method aimed at improving the response quality of instruction-fine-tuned LLMs.
-
Meta Spirit LM Integrates Speech and Text in New Multimodal GenAI Model
Presented in a recent paper, Spirit LM enables the creation of pipelines that mix spoken and written text, integrating speech and text in the same multimodal model. According to Meta, their novel approach, based on interleaving text and speech tokens, makes it possible to circumvent the inherent limitations of prior solutions that use distinct pipelines for speech and text.
-
Stable Diffusion 3.5 Improves Text Rendering, Image Quality, Consistency, and More
Stability AI has released Stable Diffusion 3.5 Large, its most powerful text-to-image generation model to date, and Stable Diffusion 3.5 Large Turbo, with special emphasis on customizability, efficiency, and flexibility. Both models come with a free licensing model for non-commercial and limited commercial use.
-
AI and ML Tracks at QCon San Francisco 2024 – a Deep Dive into GenAI & Practical Applications
At QCon San Francisco 2024, explore two AI/ML-focused tracks highlighting real-world applications and innovations. Learn from industry experts on deploying LLMs, GenAI, and recommendation systems, gaining practical strategies for integrating AI into software development.
-
Distill Your LLMs and Surpass Their Performance: spaCy's Creator at InfoQ DevSummit Munich
In her presentation at the inaugural edition of InfoQ Dev Summit Munich, Ines Montani built on the talk she gave earlier this year at QCon London. She provided the audience with practical solutions for using the latest state-of-the-art models in real-world applications, and for distilling their knowledge into smaller, faster components that you can run and maintain in-house.
-
University Researchers Publish Analysis of Chain-of-Thought Reasoning in LLMs
Researchers from Princeton University and Yale University published a case study of Chain-of-Thought (CoT) reasoning in LLMs which shows evidence of both memorization and true reasoning. They also found that CoT can work even when examples given in the prompt are incorrect.
-
Microsoft and Tsinghua University Present DIFF Transformer for LLMs
Researchers from Microsoft AI and Tsinghua University have introduced a new architecture called the Differential Transformer (DIFF Transformer), aimed at improving the performance of large language models. This model enhances attention mechanisms by refining how models handle context and minimizing distractions from irrelevant information.
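A minimal sketch of the idea, assuming the paper's formulation of differential attention as the difference of two softmax attention maps: common-mode noise present in both maps cancels, concentrating attention on relevant context. The single-head setup, shapes, and fixed lambda below are illustrative simplifications, not the authors' code (in the paper, lambda is learnable).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def diff_attention(q1, q2, k1, k2, v, lam=0.5):
    """Differential attention (simplified): subtract a second softmax
    attention map from the first so that noise common to both maps
    cancels out. lam is a fixed illustrative value here."""
    d = q1.shape[-1]
    a1 = softmax(q1 @ k1.T / np.sqrt(d))
    a2 = softmax(q2 @ k2.T / np.sqrt(d))
    return (a1 - lam * a2) @ v

rng = np.random.default_rng(1)
seq, d = 5, 8
# Two query/key projections plus values, all (seq, d)
out = diff_attention(*(rng.normal(size=(seq, d)) for _ in range(5)))
print(out.shape)  # (5, 8)
```

The extra projection roughly doubles the attention-map compute per head, which the paper offsets by halving the number of heads; the output shape matches standard attention.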