Large Language Models: Content on InfoQ
-
Mistral AI Introduces Saba: Regional Language Model for Arabic and South Indian Languages
Mistral AI has introduced Mistral Saba, a 24-billion-parameter language model designed to improve AI performance in Arabic and several Indian-origin languages, particularly South Indian languages like Tamil.
-
Hugging Face Publishes Guide on Efficient LLM Training across GPUs
Hugging Face has published the Ultra-Scale Playbook: Training LLMs on GPU Clusters, an open-source guide that provides a detailed exploration of the methodologies and technologies involved in training LLMs across GPU clusters.
-
IBM Granite 3.2 Brings New Vision Language Model, Chain-of-Thought Reasoning, Improved Time Series Models
IBM has introduced its new Granite 3.2 multi-modal and reasoning model. Granite 3.2 features experimental chain-of-thought reasoning capabilities that significantly improve on its predecessor's performance, a new vision language model (VLM) that outperforms larger models on several benchmarks, and smaller models for more efficient deployments.
-
GitHub Copilot Extensions Integrate IDEs with External Services
Now generally available, GitHub Copilot Extensions allow developers to use natural language to query documentation, generate code, retrieve data, and execute actions on external services without leaving their IDEs. Besides using public extensions from companies like Docker, MongoDB, Sentry, and many more, developers can create their own extensions to work with internal libraries or APIs.
-
Google DeepMind’s AlphaGeometry2 AI Achieves Gold-Medal Math Olympiad Performance
Google DeepMind's AlphaGeometry2 (AG2) AI model solved 84% of the geometry problems from the last 25 years of the International Mathematical Olympiad (IMO), outperforming the average human gold-medalist performance.
-
Perplexity Unveils Deep Research: AI-Powered Tool for Advanced Analysis
Perplexity has introduced Deep Research, an AI-powered tool designed for conducting in-depth analysis across various fields, including finance, marketing, and technology. The system automates the research process by performing multiple searches, analyzing extensive sources, and synthesizing findings into structured reports within minutes.
-
Google Gemini's Long-term Memory Vulnerable to a Kind of Phishing Attack
AI security researcher Johann Rehberger described a prompt injection attack against Google Gemini that can modify its long-term memories using a technique he calls delayed tool invocation. The researcher characterized the attack as a form of social engineering/phishing, triggered when the user interacts with a malicious document.
-
How a Software Architect Uses Artificial Intelligence in His Daily Work
Software architects and system architects will not be replaced anytime soon by generative artificial intelligence (AI) or large language models (LLMs), Avraham Poupko said. They will be replaced by software architects who know how to leverage generative AI and LLMs, and just as importantly, know how NOT to use generative AI.
-
Latin America Launches Latam-GPT to Improve AI Cultural Relevance
Latin America is advancing in the development of artificial intelligence with the creation of Latam-GPT, a language model designed to better represent the history, culture, and linguistic diversity of the region.
-
Meta Introduces LLM-Powered Tool for Software Testing
Meta has unveiled the Automated Compliance Hardening (ACH) tool, a mutation-guided, LLM-based test generation system. Designed to enhance software reliability and security, ACH generates faults in source code and subsequently creates tests to detect and address these issues.
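The core loop behind mutation-guided test generation can be illustrated with a minimal sketch: inject a deliberate fault (a "mutant") into a function, then check whether a test suite distinguishes the mutant from the original, i.e. "kills" it; surviving mutants signal gaps where new tests are needed. The function names and the hand-written mutant below are hypothetical illustrations, not part of Meta's ACH system, which uses LLMs to generate both the faults and the tests.

```java
import java.util.List;
import java.util.function.IntBinaryOperator;

public class MutationSketch {
    // Original function under test: addition clamped at Integer.MAX_VALUE.
    static int clampAdd(int a, int b) {
        long sum = (long) a + b;
        return sum > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) sum;
    }

    // Injected fault (mutant): the clamping check is dropped, the kind of
    // realistic bug a mutation generator might introduce.
    static int clampAddMutant(int a, int b) {
        return a + b; // overflow is no longer clamped
    }

    // A test suite "kills" the mutant if original and mutant disagree
    // on at least one test input.
    static boolean killsMutant(List<int[]> testInputs,
                               IntBinaryOperator original,
                               IntBinaryOperator mutant) {
        for (int[] in : testInputs) {
            if (original.applyAsInt(in[0], in[1]) != mutant.applyAsInt(in[0], in[1])) {
                return true; // fault detected by the suite
            }
        }
        return false; // mutant survives: the suite needs a hardening test
    }

    public static void main(String[] args) {
        // Weak suite: misses the overflow boundary, so the mutant survives.
        List<int[]> weak = List.of(new int[]{1, 2}, new int[]{-5, 5});
        // Hardened suite: adds the boundary case that kills the mutant.
        List<int[]> hardened = List.of(new int[]{1, 2}, new int[]{Integer.MAX_VALUE, 1});

        System.out.println(killsMutant(weak, MutationSketch::clampAdd,
                MutationSketch::clampAddMutant));     // prints false
        System.out.println(killsMutant(hardened, MutationSketch::clampAdd,
                MutationSketch::clampAddMutant));     // prints true
    }
}
```

In ACH's workflow, a surviving mutant (the `false` case) is exactly the signal used to prompt the LLM for an additional test that exposes the injected fault.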
-
UC Berkeley's Sky Computing Lab Introduces Model to Reduce AI Language Model Inference Costs
UC Berkeley's Sky Computing Lab has released Sky-T1-32B-Flash, an updated reasoning language model that addresses the common issue of AI overthinking. The model, developed through the NovaSky (Next-generation Open Vision and AI) initiative, "slashes inference costs on challenging questions by up to 57%" while maintaining accuracy across mathematics, coding, science, and general knowledge domains.
-
Gemini 2.0 Family Expands with Cost-Efficient Flash-Lite and Pro-Experimental Models
Announced last December, the Gemini 2.0 family of models now has a new member, Gemini 2.0 Flash-Lite, which Google says is cost-optimized for large-scale text output use cases and is now available in preview. Along with Flash-Lite, Google also announced Gemini 2.0 Pro.
-
OpenAI Releases Reasoning Model o3-mini, Faster and More Accurate Than o1
OpenAI released OpenAI o3-mini, their latest reasoning LLM. o3-mini is optimized for STEM applications and outperforms the full o1 model on science, math, and coding benchmarks, with lower response latency than o1-mini.
-
Micronaut Framework 4.7.0 Provides Integration with LangChain4j and Graal Languages
The Micronaut Foundation released Micronaut Framework 4.7.0 in December 2024, four months after the release of version 4.6.0. This version provides LangChain4j support for integrating LLMs into Java applications. Micronaut Graal Languages provides integration with Graal-based dynamic languages, such as the Micronaut GraalPy feature for interacting with Python.
-
OpenEuroLLM: Europe’s New Initiative for Open-Source AI Development
A consortium of 20 European research institutions, companies, and EuroHPC centers has launched OpenEuroLLM, an initiative to develop open-source, multilingual large language models (LLMs). Coordinated by Jan Hajič and co-led by Peter Sarlin, the project aims to provide transparent and compliant AI models for commercial and public sector applications.