Smolagents is a library created at Hugging Face for building agents that leverage large language models (LLMs). Hugging Face says the new library aims to be simple and LLM-agnostic. It supports secure "agents that write their actions in code" and integrates with the Hugging Face Hub.
Agentic systems promise to extend what computer programs can do beyond the mere execution of pre-determined workflows conceived to solve narrow tasks. In fact, most real-life problems do not fit pre-determined workflows, say Hugging Face engineers Aymeric Roucher, Merve Noyan, and Thomas Wolf.
Agents, in Hugging Face's view, give LLMs access to the outside world. An agent-based system can be either a single multi-step agent or a multi-agent system, and it differs from other LLM-based systems in the level of agency granted to the LLMs it contains. Specifically, the defining characteristic of AI agents is that LLM outputs control the system's workflow; in other LLM-based systems, by contrast, the LLM output may have no impact on the program's flow, or only some intermediate effect.
Agentic systems achieve their workflow flexibility by having an LLM write an action, which takes the form of calls to external tools. This idea is captured in the following meta-code:
memory = [user_defined_task]
while llm_should_continue(memory):  # this loop is the multi-step part
    action = llm_get_next_action(memory)  # this is the tool-calling part
    observations = execute_action(action)
    memory += [action, observations]
This idea is not new and, as Roucher, Noyan, and Wolf remark, there is already a commonly accepted JSON format, used by Anthropic, OpenAI, and others, to describe such actions, i.e., calls to external tools. This is where smolagents takes a distinct approach, based on the realization that JSON is not the best way to express what a computer should do. Instead, the library has agents write their actions in code, because programming languages provide a superior way to describe computer behavior, granting better composability, data management, and generality. Since LLMs are already capable of producing quality code, this approach adds no major complexity.
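To illustrate the difference, the sketch below contrasts a JSON-style tool call with the kind of code action a model could emit instead. The get_weather and final_answer stubs are invented for illustration and stand in for whatever tools the agent is actually given; they are not taken from the smolagents documentation.

# Stub tools, defined here only so the snippet runs on its own.
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."

def final_answer(answer: str) -> str:
    return answer

# A JSON-style tool call allows exactly one rigid call per step:
#   {"tool": "get_weather", "arguments": {"city": "Paris"}}

# A code action lets the model compose several calls, keep intermediate
# results in variables, and use ordinary control flow in a single step:
cities = ["Paris", "Tokyo", "Lima"]
reports = {city: get_weather(city) for city in cities}
sunny = [city for city, report in reports.items() if "sunny" in report]
final_answer(f"Sunny cities: {', '.join(sunny)}")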
To build agentic systems, you need to solve a few recurring problems, such as parsing the agent's output and synthesizing the next prompt from what happened in the previous iteration. Handling these is among the key features smolagents provides, along with error logging and retry mechanisms.
Before building an agentic system, however, you should first determine whether you need one at all. Indeed, as Roucher, Noyan, and Wolf explain, agents may be overkill:
If [a] deterministic workflow fits all queries, by all means just code everything! This will give you a 100% reliable system with no risk of error introduced by letting unpredictable LLMs meddle in your workflow. For the sake of simplicity and robustness, it's advised to regularize towards not using any agentic behavior.
Once you are sure you need an agent, you need an LLM and some tools. You can use any open model through Hugging Face's HfApiModel class, or you can use LiteLLMModel to access a plethora of cloud-based LLMs. A tool is just a function the LLM can execute with some inputs.
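As a minimal sketch of how these pieces fit together, the example below defines a toy get_weather tool (a made-up stub, not part of the library) and hands it to a CodeAgent backed by HfApiModel. It follows the patterns shown in the smolagents documentation, so exact defaults may differ.

from smolagents import CodeAgent, HfApiModel, tool

# A toy tool, defined purely for illustration: any typed, documented
# Python function can be exposed to the agent this way.
@tool
def get_weather(city: str) -> str:
    """Returns a short weather report for the given city.

    Args:
        city: Name of the city to look up.
    """
    # A stub standing in for a call to a real weather API.
    return f"The weather in {city} is sunny."

model = HfApiModel()  # an open model served via the Hugging Face Inference API
agent = CodeAgent(tools=[get_weather], model=model)

agent.run("Should I pack an umbrella for a trip to Paris?")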
While creating smolagents, Hugging Face ran a series of benchmarks using some of the leading models, such as GPT-4o, Claude 3.5, Llama 3.3 70B, and others, and found that open models can rival the best closed models.
Hugging Face's smolagents is not the only tool currently available for creating agentic systems. In particular, OpenAI released Swarm, which leverages routines and handoffs to have multiple agents coordinate with one another. Additionally, Microsoft introduced Magentic-One, and AWS has its own Multi-Agent Orchestrator.