
Mistral AI Releases Three Open-Weight Language Models

Mistral AI released three open-weight language models: Mistral NeMo, a 12B parameter general-purpose LLM; Codestral Mamba, a 7B parameter code-generation model; and Mathstral, a 7B parameter model fine-tuned for math and reasoning. All three models are available under the Apache 2.0 license.

Mistral AI calls NeMo its "new best small model." The model has a 128k token context window and is available in both a base and an instruct-tuned version. Mistral NeMo supports multiple languages, with "strong" performance on 11 languages including Chinese, Japanese, Arabic, and Hindi. Mistral developed a new tokenizer for the model, called Tekken, which compresses source code and natural language more efficiently than Mistral's previous tokenizers. On LLM benchmarks such as MMLU and Winogrande, Mistral NeMo outperforms similarly-sized models, including Gemma 2 9B and Llama 3 8B.

Codestral Mamba is based on the Mamba architecture, an alternative to the Transformer architecture that most LLMs are built on. Mamba models offer faster inference than Transformers and theoretically unlimited context length. Mistral touts the model's ability to provide users with "quick responses, irrespective of the input length" and performance "on par" with larger Transformer-based models such as CodeLlama 34B.

Mathstral was developed in collaboration with Project Numina, a non-profit organization dedicated to fostering AI for mathematics. It is based on the Mistral 7B model and is fine-tuned for performance in STEM subjects. According to Mistral AI, Mathstral "achieves state-of-the-art reasoning capacities in its size category" on several benchmarks, including 63.47% on MMLU and 56.6% on MATH.

In a Hacker News discussion of Mistral NeMo, one user commented:

[The model features] an improvement at just about everything, right? Large context, permissive license, should have good perf. The one thing I can't tell is how big 12B is going to be (read: how much VRAM/RAM is this thing going to need). Annoyingly and rather confusingly for a model under Apache 2.0, [Huggingface] refuses to show me files unless I login and "You need to agree to share your contact information to access this model"...though if it's actually as good as it looks, I give it hours before it's reposted without that restriction, which Apache 2.0 allows.

Other users pointed out that, at release time, the model was not supported by the popular Ollama framework because of its new tokenizer. However, the Ollama developers added support for Mistral NeMo in less than a week.
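For readers considering the local-hosting route mentioned in the discussion, the following is a minimal sketch of querying Mistral NeMo through Ollama from Python. The ollama client package is real, but the "mistral-nemo" model tag and the prompt are assumptions for illustration, not details from the announcement.

# Minimal sketch: chatting with Mistral NeMo through a locally running Ollama server.
# Assumes Ollama is installed and the model has already been pulled,
# e.g. with `ollama pull mistral-nemo` (tag name assumed).
import ollama

response = ollama.chat(
    model="mistral-nemo",
    messages=[{"role": "user", "content": "Summarize the Mamba architecture in two sentences."}],
)
print(response["message"]["content"])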

Hacker News users also discussed Codestral Mamba, speculating whether it would be a good solution for an "offline" or locally hosted coding assistant. One user wrote:

I don't have a gut feel for how much difference the Mamba arch makes to inference speed, nor how much quantisation is likely to ruin things, but as a rough comparison Mistral-7B at 4 bits per param is very usable on CPU. The issue with using any local models for code generation comes up with doing so in a professional context: you lose any infrastructure the provider might have for avoiding regurgitation of copyright code, so there's a legal risk there. That might not be a barrier in your context, but in my day-to-day it certainly is.

The new models are available for download on Huggingface or via Mistral's mistral-inference SDK. Mistral NeMo and Codestral Mamba are available via Mistral AI's la Plateforme API. Mistral NeMo is also available via NVIDIA's NIM inference microservice, and Codestral Mamba can be deployed using TensorRT-LLM.
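As a rough illustration of the Huggingface route, the sketch below loads the instruct-tuned Mistral NeMo checkpoint with the Hugging Face transformers library. The repository id and generation settings are assumptions for illustration rather than details confirmed by the announcement.

# Minimal sketch: generating text with the instruct-tuned Mistral NeMo weights
# via the transformers library. The repository id below is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"  # assumed Huggingface repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Build a chat-style prompt and generate a short completion.
messages = [{"role": "user", "content": "Write a one-line docstring for a function that reverses a string."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))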
