News
Gemini 2.0 Family Expands with Cost-Efficient Flash-Lite and Pro-Experimental Models
Announced last December, the Gemini 2.0 family of models now has a new member, Gemini 2.0 Flash-Lite, which Google says is cost-optimized for large-scale text output use cases and is now available in preview. Alongside Flash-Lite, Google also announced an experimental release of Gemini 2.0 Pro.
-
Rhymes AI Unveils Aria: Open-Source Multimodal Model with Development Resources
Rhymes AI has introduced Aria, an open-source multimodal-native Mixture-of-Experts (MoE) model that can process text, images, video, and code. In benchmark tests, Aria has outperformed other open models and demonstrated competitive performance against proprietary models such as GPT-4o and Gemini-1.5.
-
OpenAI Releases Stable Version of .NET Library with GPT-4o Support and API Enhancements
OpenAI has released the stable version of its official .NET library, following June's beta launch. Available as a NuGet package, it supports the latest models, such as GPT-4o and GPT-4o mini, as well as the full OpenAI REST API. The release includes both sync and async APIs, streaming chat completions, and key breaking changes for improved API consistency.
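The item above concerns the .NET package, but the streaming behaviour it mentions is easiest to show in a language-neutral way. The minimal sketch below uses the separate OpenAI Python SDK (v1.x) to stream a chat completion; the model name gpt-4o-mini is only an example choice.

```python
# Sketch: streaming chat completion via the OpenAI Python SDK (v1.x).
# The news item covers the .NET library; this only illustrates the concept.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# stream=True yields partial chunks instead of one final response
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize retrieval augmented generation in one sentence."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```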
-
Microsoft Launches Open-Source Phi-3.5 Models for Advanced AI Development
Microsoft launched three new open-source AI models in its Phi-3.5 series: Phi-3.5-mini-instruct, Phi-3.5-MoE-instruct, and Phi-3.5-vision-instruct. Available under a permissive MIT license, these models offer developers powerful tools for various tasks, including reasoning, multilingual processing, and image and video analysis.
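As a rough sketch of how a developer might try one of these models, the snippet below loads the mini instruct variant with Hugging Face transformers. The model id microsoft/Phi-3.5-mini-instruct and the trust_remote_code flag are assumptions for illustration, not details from the announcement.

```python
# Sketch: loading an assumed Phi-3.5-mini-instruct checkpoint with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "microsoft/Phi-3.5-mini-instruct"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",        # requires the accelerate package
    trust_remote_code=True,   # some Phi releases ship custom model code
)

generate = pipeline("text-generation", model=model, tokenizer=tokenizer)
messages = [{"role": "user", "content": "Explain mixture-of-experts in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

output = generate(prompt, max_new_tokens=120, return_full_text=False)
print(output[0]["generated_text"])
```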
-
Azure OpenAI's “Use Your Data” Feature Now Generally Available
Microsoft has made On Your Data generally available in Azure OpenAI Service. The feature enables users to apply OpenAI models, including GPT-4, to their own data using Retrieval Augmented Generation (RAG). According to the company, all of this is backed by enterprise-grade security on Azure.
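The Azure feature handles retrieval on the service side, so no code from that API is shown here. The sketch below only illustrates the general RAG pattern it builds on, using the plain OpenAI Python SDK with a hypothetical retrieve() helper standing in for a real search index.

```python
# Generic RAG sketch -- not the Azure "On Your Data" API, which wires up retrieval for you.
from openai import OpenAI

client = OpenAI()

def retrieve(query: str, k: int = 3) -> list[str]:
    # Hypothetical retriever: in practice this would query a search index;
    # hard-coded here so the sketch runs end to end.
    return ["Azure OpenAI On Your Data grounds model answers in customer documents."][:k]

def answer_with_rag(question: str) -> str:
    # Retrieve relevant snippets, stuff them into the prompt, then ask the model.
    context = "\n\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_rag("What does the On Your Data feature do?"))
```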
-
OpenAI is Adding Memory Capabilities to ChatGPT to Improve Conversations
By letting ChatGPT remember conversations, OpenAI hopes to reduce the need for users to repeat context and to make future chats more helpful. Users will be able to tell ChatGPT explicitly what to remember and what to forget, or turn the feature off entirely.
-
OpenAI Releases New Embedding Models and Improved GPT-4 Turbo
OpenAI recently announced several updates to its models, including two new embedding models and updates to GPT-4 Turbo and GPT-3.5 Turbo. The company also announced improvements to its free text-moderation tool and its developer API management tools.
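For readers unfamiliar with the embeddings endpoint, here is a minimal sketch using the OpenAI Python SDK, assuming the text-embedding-3-small model introduced in that update; the optional dimensions parameter for shortened vectors is also from that announcement.

```python
# Sketch: requesting embeddings with one of the new models via the Python SDK.
from openai import OpenAI

client = OpenAI()

response = client.embeddings.create(
    model="text-embedding-3-small",            # one of the two new embedding models
    input=["InfoQ news summary", "GPT-4 Turbo update"],
    # dimensions=256,                          # optionally return shortened vectors
)

vectors = [item.embedding for item in response.data]
print(len(vectors), len(vectors[0]))  # 2 vectors, 1536 dimensions by default for -small
```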
-
Meta Releases Code Generation Model Code Llama 70B, Nearing GPT-3.5 Performance
Code Llama 70B is Meta's new code generation AI model. Thanks to its 70 billion parameters, it is "the largest and best-performing model in the Code Llama family", Meta says.
-
AI Researchers Improve LLM-Based Reasoning by Mimicking Learning from Mistakes
Researchers from Microsoft, Peking University, and Xi’an Jiaotong University claim to have developed a technique that improves the ability of large language models (LLMs) to solve math problems by replicating how humans learn from their own mistakes.
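The snippet below is only a rough illustration of the general idea, not the researchers' exact recipe: pair an incorrect solution with an explanation of the error and a corrected solution, and use such records as fine-tuning data so the model learns from mistakes.

```python
# Rough sketch of the general idea (not the paper's exact method): build
# mistake-correction records and use them as fine-tuning data.
import json

def make_correction_example(problem: str, wrong_solution: str,
                            error_explanation: str, corrected_solution: str) -> dict:
    """Build one chat-format fine-tuning record from a mistake and its correction."""
    return {
        "messages": [
            {"role": "user", "content": f"Problem: {problem}\n"
                                        f"Incorrect solution: {wrong_solution}\n"
                                        "Identify the mistake and give a correct solution."},
            {"role": "assistant", "content": f"Mistake: {error_explanation}\n"
                                             f"Correct solution: {corrected_solution}"},
        ]
    }

example = make_correction_example(
    problem="A train travels 120 km in 1.5 hours. What is its average speed?",
    wrong_solution="120 * 1.5 = 180 km/h",
    error_explanation="Speed is distance divided by time, not multiplied.",
    corrected_solution="120 / 1.5 = 80 km/h",
)
print(json.dumps(example, indent=2))
```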