of AI agents. LLMs are not simply tools anymore. They have become active participants in our lives, boosting productivity and transforming the way we live and work.
- OpenAI recently launched Operator, an AI agent that can autonomously perform various tasks, from browsing the web to filling out forms and scheduling appointments.
- Anthropic released MCP (Model Context Protocol), a new standard for how AI assistants interact with the outside world. With over five thousand active MCP servers already, adoption is growing rapidly.
- AI agents are also changing the landscape of software engineering. Tools like GitHub Copilot's agentic mode, Claude Code, OpenAI Codex, and others are not only improving developer productivity and code quality but also democratising the field, making software development accessible to people without a technical background.
We've previously looked at different AI agent frameworks, such as LangGraph or CrewAI. In this article, I would like to discuss a new one I've been exploring recently: HuggingFace smolagents. It's an interesting framework, since it implements the concept of code agents.
In this article, we'll cover several topics:
- What code agents are (teaser: it's not related to vibe coding).
- How to use the HuggingFace smolagents framework in practice.
- Whether it's safe to give LLMs so much agency.
- The actual difference in performance between code agents and traditional tool-calling agents.
AI Agents recap
Let's start with a quick refresher: what exactly are AI agents? HuggingFace provides a clear and concise definition of what they mean by agents.
AI Agents are programs where LLM outputs control the workflow.
So, we need an agentic flow when we want a system to reason and act based on observations. Actually, agency is not a binary variable (yes or no), but a spectrum.
- At one end, we can have systems without agency at all, for example, a simple process where an LLM determines the sentiment of a text, translates it or summarises it.
- The next level is routing, where an LLM can classify an incoming question and decide which path to take, for example, calling a tool if a customer is asking about the status of their current order, and transferring the conversation to a human CS agent otherwise.
- More advanced systems can exhibit higher degrees of agency. These might include the ability to execute other LLMs (multi-agent setup) or even to create new tools on the fly.
Code agents fall into this more advanced category. They are multi-step agents that execute tool calls in the form of code, in contrast to the more traditional approach of using a JSON format with the tool name and arguments.
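To make the distinction concrete, here is a notional sketch of the two styles (get_revenue is a made-up tool used purely for illustration): a traditional agent emits one JSON tool call per step, while a code agent writes a Python action that can chain several calls at once.

# Traditional tool-calling agent: one JSON-formatted call per step
# {"name": "get_revenue", "arguments": {"country": "France"}}

# Code agent: a single action combines tools with plain Python
revenues = {country: get_revenue(country) for country in ["France", "UK", "Germany"]}
print(sum(revenues.values()))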
A few recent papers have shown that using code in agentic flows leads to better results.
It makes sense when you think about it. We have been developing programming languages for decades to solve complex problems. So, it's natural that these languages are better suited to LLMs' tasks than simple JSON configs. An additional benefit is that LLMs are already quite good at writing code in common programming languages, thanks to the vast amount of training data available.
This approach comes with several other benefits as well:
- By generating code, an LLM is not limited to a predefined set of tools and can create its own functions.
- It can combine multiple tools within a single action using conditions and loops, which helps reduce the number of steps required to complete a task.
- It also enables the model to work with a wider variety of outputs, such as generating charts, images, or other complex objects.
These benefits aren't just theoretical; we can observe them in practice. In "Executable Code Actions Elicit Better LLM Agents", the authors show that code agents outperform traditional approaches, achieving a higher success rate and completing a task in fewer steps, which in turn reduces costs.

Code agents look promising, which inspired me to try this approach in practice.
HuggingFace smolagents framework
First try
Luckily, we don't have to build code agents from scratch, as HuggingFace has released a handy library called smolagents that implements this approach.
Let's start by installing the library.
pip install smolagents[litellm]
# I've used litellm, since I'm planning to use it with an OpenAI model
Next, let's build a basic example. To initialise the agent, we need just two parameters: model and tools.
I plan to use OpenAI for the model, which is accessible via LiteLLM. However, the framework supports other options as well: you can use a local model via Ollama or TransformersModel, public models via Inference Providers, or other alternatives (you can find more details in the documentation).
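For instance, here is a sketch of swapping in a model served through Hugging Face Inference Providers instead of OpenAI (class names vary across smolagents versions; older releases call this class HfApiModel, so check your version's documentation):

from smolagents import InferenceClientModel

# A model hosted via Hugging Face Inference Providers instead of OpenAI
hf_model = InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")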
I didn't specify any tools, but used add_base_tools = True, so my agent has a default set of tools, such as a Python interpreter and DuckDuckGo search. Let's try it out with a simple question.
from smolagents import CodeAgent, LiteLLMModel

model = LiteLLMModel(model_id="openai/gpt-4o-mini",
    api_key=config['OPENAI_API_KEY'])  # config holds the API key loaded elsewhere

agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run(
    """I have 5 different balls and I randomly select 2.
    How many possible combinations of the balls can I get?""",
)
As a result, we see a really nicely formatted execution flow. It's just amazing and allows you to understand the process perfectly.

So, the agent found an answer in one step and wrote Python code to calculate the number of combinations.
The output is quite helpful, but we can go even deeper and look at the complete information related to the execution (including prompts) via agent.memory.steps. Let's look at the system prompt used by the agent.
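Here is a minimal sketch of how to inspect the memory; the exact attribute names are an assumption based on recent smolagents versions and may differ in yours.

# List the step types recorded during the run
for step in agent.memory.steps:
    print(type(step).__name__)

# The system prompt is stored in the agent's memory as well
print(agent.memory.system_prompt.system_prompt)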
You are an expert assistant who can solve any task using code blobs.
You will be given a task to solve as best you can.
To do so, you have been given access to a list of tools: these tools
are basically Python functions which you can call with code.
To solve the task, you must plan forward to proceed in a series of
steps, in a cycle of 'Thought:', 'Code:',
and 'Observation:' sequences.
At each step, in the 'Thought:' sequence, you should first explain
your reasoning towards solving the task and the tools that you want
to use.
Then in the 'Code:' sequence, you should write the code in simple
Python. The code sequence must end with the '<end_code>' sequence.
During each intermediate step, you can use 'print()' to save
whatever important information you will then need.
These print outputs will then appear in the 'Observation:' field,
which will be available as input for the next step.
In the end you have to return a final answer using
the final_answer tool.
Here are a few examples using notional tools: <...>
It's quite clear that smolagents implements the ReAct approach (introduced in the paper by Yao et al., "ReAct: Synergizing Reasoning and Acting in Language Models") and uses a few-shot prompting technique.
The smolagents library handles all the behind-the-scenes work involved in the agent workflow: assembling the system prompt with all the necessary information for the LLM (i.e. available tools), parsing the output and executing the generated code. It also provides comprehensive logging and a retry mechanism to help correct errors.
Additionally, the library offers memory management features. By default, all execution results are saved to memory, but you can customise this behaviour. For example, you can remove some intermediary results from memory to reduce the number of tokens, or execute the agent step by step. While we won't dive deep into memory management here, you can find helpful code examples in the documentation.
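As a quick illustration, one common pattern is a step callback that trims bulky observations once a step has finished. This is only a sketch: the step_callbacks parameter exists in smolagents, but the exact callback signature and step attributes may differ between versions.

def trim_observations(memory_step, agent):
    # Keep only the first 500 characters of each observation
    # so that past steps consume fewer tokens.
    if getattr(memory_step, "observations", None):
        memory_step.observations = memory_step.observations[:500]

agent = CodeAgent(
    tools=[],
    model=model,
    add_base_tools=True,
    step_callbacks=[trim_observations],
)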
Security
Now, it's time to discuss the drawbacks of the code agents' approach. Giving an LLM more agency by allowing it to execute arbitrary code introduces higher risks. Indeed, an LLM can run harmful code either by mistake (since LLMs are still far from perfect) or as a result of targeted attacks like prompt injections or compromised models.
To mitigate these risks, the local Python executor implemented in the smolagents library has a bunch of safety checks:
- By default, imports are not allowed unless the package has been explicitly added to the additional_authorized_imports list.
- Moreover, submodules are blocked by default, so you must authorise them specifically (i.e. numpy.*). This is because some packages can expose potentially harmful submodules, e.g. random._os.
- The total number of executed operations is capped, preventing infinite loops and resource bloating.
- Any operation not explicitly defined in the interpreter raises an error.
Let's test whether these safety measures actually work.
from smolagents.local_python_executor import LocalPythonExecutor

custom_executor = LocalPythonExecutor(["numpy.*", "random"])

# helper function to print exceptions nicely
def run_capture_exception(command: str):
    try:
        custom_executor(command)
    except Exception as e:
        print("ERROR:\n", e)

# Unauthorised imports are blocked
harmful_command = "import os; exit_code = os.system('')"
run_capture_exception(harmful_command)
# ERROR: Code execution failed at line 'import os' due to:
# InterpreterError: Import of os is not allowed. Authorized imports
# are: ['datetime', 'itertools', 're', 'math', 'statistics', 'time', 'queue',
# 'numpy.*', 'random', 'collections', 'unicodedata', 'stat']

# Submodules are also blocked unless authorised specifically
harmful_command = "from random import _os; exit_code = _os.system('')"
run_capture_exception(harmful_command)
# ERROR: Code execution failed at line 'exit_code = _os.system('')'
# due to: InterpreterError: Forbidden access to module: os

# The cap on the number of iterations breaks infinite loops
harmful_command = '''
while True:
    pass
'''
run_capture_exception(harmful_command)
# ERROR: Code execution failed at line 'while True: pass' due to:
# InterpreterError: Maximum number of 1000000 iterations in While loop
# exceeded

# Undefined operations don't work
harmful_command = "!echo "
run_capture_exception(harmful_command)
# ERROR: Code parsing failed on line 1 due to: SyntaxError
It seems we have some safety nets with code agents. However, despite these safeguards, risks persist when you execute code locally. For example, an LLM can recursively create threads on your computer or create too many files, leading to resource bloating. A possible solution is to execute the code in a sandboxed environment, for example using Docker or a solution like E2B. I'm willing to be adventurous and run my code locally, but if you prefer a more risk-averse approach, you can follow the sandbox set-up guidance in the documentation.
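For reference, here is roughly what the switch looks like. This is a sketch based on the executor_type parameter described in the documentation; the E2B option additionally assumes the smolagents[e2b] extra is installed and an E2B_API_KEY is set.

# Run the generated code in an isolated E2B sandbox instead of locally
sandboxed_agent = CodeAgent(
    tools=[],
    model=model,
    add_base_tools=True,
    executor_type="e2b",  # or "docker" to use a local container
)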
Code agent vs traditional tool-calling agent
It's claimed that code agents perform better compared to the traditional JSON-based approach. Let's put this to the test.
I'll use the task of metrics change analysis that I described in my previous article, "Making sense of KPI changes". We will start with a straightforward case: analysing a simple metric (revenue) split by one dimension (country).
import pandas as pd

raw_df = pd.read_csv('absolute_metrics_example.csv', sep='\t')
df = raw_df.groupby('country')[['revenue_before', 'revenue_after_scenario_2']].sum()\
    .sort_values('revenue_before', ascending=False).rename(
        columns={'revenue_after_scenario_2': 'after',
                 'revenue_before': 'before'})

The smolagents library supports two classes, which we can use to compare the two approaches:
- CodeAgent: an agent that acts by generating and executing code,
- ToolCallingAgent: a traditional JSON-based agent.
Our agents will need some tools, so let's implement them. There are several options for creating tools in smolagents: we can re-use LangChain tools, download them from the HuggingFace Hub, or simply create Python functions. We will take the most straightforward approach, writing a couple of Python functions and annotating them with @tool.
I'll create two tools: one to estimate the relative difference between metrics, and another to calculate the sum of a list. Since the LLM will be using these tools, providing detailed descriptions is crucial.
from smolagents import tool

@tool
def calculate_metric_increase(before: float, after: float) -> float:
    """
    Calculate the percentage change of the metric between before and after

    Args:
        before: value before
        after: value after
    """
    return (before - after) * 100 / before

@tool
def calculate_sum(values: list) -> float:
    """
    Calculate the sum of a list

    Args:
        values: list of numbers
    """
    return sum(values)
Teaser: I'll later realise that I should have provided more tools to the agent, but I genuinely overlooked this.
CodeAgent
Let's start with a CodeAgent. I've initialised the agent with the tools we defined earlier and authorised the use of some Python packages that might be helpful.
agent = CodeAgent(
    model=model,
    tools=[calculate_metric_increase, calculate_sum],
    max_steps=10,
    additional_authorized_imports=["pandas", "numpy", "matplotlib.*",
        "plotly.*"],
    verbosity_level=1
)

task = """
Here is a dataframe showing revenue by segment, comparing values
before and after.
Could you please help me understand the changes? Specifically:
1. Estimate how the total revenue and the revenue for each segment
have changed, both in absolute terms and as a percentage.
2. Calculate the contribution of each segment to the total
change in revenue.

Please round all floating-point numbers in the output
to two decimal places.
"""

agent.run(
    task,
    additional_args={"df": df},
)
Overall, the code agent completed the task in just two steps, using only 5,451 input and 669 output tokens. The result also looks quite plausible.
{'total_before': 1731985.21, 'total_after':
1599065.55, 'total_change': -132919.66, 'segment_changes':
{'absolute_change': {'other': 4233.09, 'UK': -4376.25, 'France':
-132847.57, 'Germany': -690.99, 'Italy': 979.15, 'Spain':
-217.09}, 'percentage_change': {'other': 0.67, 'UK': -0.91,
'France': -55.19, 'Germany': -0.43, 'Italy': 0.81, 'Spain':
-0.23}, 'contribution_to_change': {'other': -3.18, 'UK': 3.29,
'France': 99.95, 'Germany': 0.52, 'Italy': -0.74, 'Spain': 0.16}}}
Let's take a look at the execution flow. The LLM received the following prompt.
╭─────────────────────────── New run ────────────────────────────╮
│                                                                 │
│ Here is a pandas dataframe showing revenue by segment,          │
│ comparing values before and after.                              │
│ Could you please help me understand the changes?                │
│ Specifically:                                                   │
│ 1. Estimate how the total revenue and the revenue for each      │
│ segment have changed, both in absolute terms and as a           │
│ percentage.                                                     │
│ 2. Calculate the contribution of each segment to the total      │
│ change in revenue.                                              │
│                                                                 │
│ Please round all floating-point numbers in the output to two    │
│ decimal places.                                                 │
│                                                                 │
│ You have been provided with these additional arguments, that    │
│ you can access using the keys as variables in your python       │
│ code:                                                           │
│ {'df':             before      after                            │
│ country                                                         │
│ other          632767.39  637000.48                             │
│ UK             481409.27  477033.02                             │
│ France         240704.63  107857.06                             │
│ Germany        160469.75  159778.76                             │
│ Italy          120352.31  121331.46                             │
│ Spain           96281.86   96064.77}.                           │
│                                                                 │
╰─ LiteLLMModel - openai/gpt-4o-mini ────────────────────────────╯
In the first step, the LLM generated a dataframe and performed all the calculations itself. Interestingly, it chose to write all the code independently rather than using the provided tools.
Even more surprisingly, the LLM recreated the dataframe based on the input data instead of referencing it directly. This approach is not ideal (especially when working with big datasets), as it can lead to errors and higher token usage. This behaviour could potentially be improved with a more explicit system prompt. Here's the code the agent executed in the first step.
import pandas as pd

# Creating the DataFrame from the provided data
data = {
    'before': [632767.39, 481409.27, 240704.63, 160469.75,
        120352.31, 96281.86],
    'after': [637000.48, 477033.02, 107857.06, 159778.76,
        121331.46, 96064.77]
}
index = ['other', 'UK', 'France', 'Germany', 'Italy', 'Spain']
df = pd.DataFrame(data, index=index)

# Calculating total revenue before and after
total_before = df['before'].sum()
total_after = df['after'].sum()

# Calculating absolute and percentage change for each segment
df['absolute_change'] = df['after'] - df['before']
df['percentage_change'] = (df['absolute_change'] /
    df['before']) * 100

# Calculating total revenue change
total_change = total_after - total_before

# Calculating contribution of each segment to the total change
df['contribution_to_change'] = (df['absolute_change'] /
    total_change) * 100

# Rounding results
df = df.round(2)

# Printing the calculated results
print("Total revenue before:", total_before)
print("Total revenue after:", total_after)
print("Total change in revenue:", total_change)
print(df)
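As a side note, the dataframe-recreation habit mentioned above could likely be nudged away through the system prompt. Here is a minimal sketch, assuming the agent exposes its prompt templates as a dictionary (the prompt_templates attribute is documented in smolagents, but check your version before relying on it):

# Append an extra instruction to the default system prompt
agent.prompt_templates["system_prompt"] += (
    "\nWhen a dataframe is passed via additional_args, reference it "
    "directly by its variable name instead of recreating it from values."
)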
In the second step, the LLM simply constructed the final answer by referring to the variables calculated in the previous step (which is really neat).
final_answer({
    "total_before": round(total_before, 2),
    "total_after": round(total_after, 2),
    "total_change": round(total_change, 2),
    "segment_changes": df[['absolute_change',
        'percentage_change', 'contribution_to_change']].to_dict()
})
It worked really well.
ToolCallingAgent
Now, it's time to see how a traditional tool-calling agent handles this problem. We initialised it in a similar manner and ran the same task.
from smolagents import ToolCallingAgent

traditional_agent = ToolCallingAgent(
    model=model,
    tools=[calculate_metric_increase, calculate_sum],
    max_steps=30,
)

task = """
Here is a dataframe showing revenue by segment, comparing values
before and after.
Could you please help me understand the changes? Specifically:
1. Estimate how the total revenue and the revenue for each segment
have changed, both in absolute terms and as a percentage.
2. Calculate the contribution of each segment to the total
change in revenue.

Please round all floating-point numbers in the output
to two decimal places.
"""

traditional_agent.run(
    task,
    additional_args={"df": df},
)
The results are far from ideal: only the relative changes are correct, while the rest of the numbers are pure hallucinations. I have to admit, the core issue was the lack of appropriate tools (specifically, tools to calculate differences and to estimate shares). Still, the agent should have flagged the missing tools rather than producing random numbers.
Total revenue change: -7319.66 (-7.67%). Revenue changes by segment:
- Other: +232.09 (-0.67%)
- UK: -4376.25 (0.91%)
- France: -132847.57 (55.19%)
- Germany: -690.99 (0.43%)
- Italy: +979.15 (-0.81%)
- Spain: -217.09 (0.23%)
Contribution to total change:
- Other: 0.03%
- UK: -59.88%
- France: -181.77%
- Germany: -9.43%
- Italy: +13.38%
- Spain: -0.03%
In terms of resource usage, the tool-calling agent performed significantly worse: 12 steps, with 29,201 input and 1,695 output tokens. So, code agents clearly offer cost savings when it comes to agent execution.
Let's dig a bit deeper to understand what the agent actually did. First, it took four iterations just to figure out how to call calculate_sum correctly and estimate the total revenue before and after.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭──────────────────────────────────────────────────────────────╮
│ Calling tool: 'calculate_sum' with arguments:                 │
│ {'values': {'before': [632767.39, 481409.27, 240704.63,       │
│ 160469.75, 120352.31, 96281.86],                              │
│ 'after': [637000.48, 477033.02, 107857.06, 159778.76,         │
│ 121331.46, 96064.77]}}                                        │
╰──────────────────────────────────────────────────────────────╯
Invalid call to tool 'calculate_sum' with arguments
{"values": {"before": [632767.39, 481409.27, 240704.63,
160469.75, 120352.31, 96281.86],
"after": [637000.48, 477033.02, 107857.06, 159778.76,
121331.46, 96064.77]}}:
unsupported operand type(s) for +: 'int' and 'str'
You should call this tool with correct input arguments.
Expected inputs: {"values": {"type": "object",
"description": "list of values"}}
Returns output type: number
Tool description: 'Calculate the sum of a list'
[Step 1: Duration 4.06 sec | Input tokens: 1,190 | Output: 250]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 2 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭──────────────────────────────────────────────────────────────╮
│ Calling tool: 'calculate_sum' with arguments:                 │
│ {'values': {'before': [632767.39, 481409.27, 240704.63,       │
│ 160469.75, 120352.31, 96281.86],                              │
│ 'after': [637000.48, 477033.02, 107857.06, 159778.76,         │
│ 121331.46, 96064.77]}}                                        │
╰──────────────────────────────────────────────────────────────╯
Invalid call to tool 'calculate_sum' with arguments
{"values": {"before": [632767.39, 481409.27, 240704.63,
160469.75, 120352.31, 96281.86],
"after": [637000.48, 477033.02, 107857.06, 159778.76,
121331.46, 96064.77]}}:
unsupported operand type(s) for +: 'int' and 'str'
You should call this tool with correct input arguments.
Expected inputs: {"values": {"type": "object",
"description": "list of values"}}
Returns output type: number
Tool description: 'Calculate the sum of a list'
[Step 2: Duration 8.60 sec | Input tokens: 2,815 | Output: 386]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 3 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭──────────────────────────────────────────────────────────────╮
│ Calling tool: 'calculate_sum' with arguments:                 │
│ {'values': [632767.39, 481409.27, 240704.63, 160469.75,       │
│ 120352.31, 96281.86]}                                         │
╰──────────────────────────────────────────────────────────────╯
Observations: 1731985.2100000002
[Step 3: Duration 1.23 sec | Input tokens: 4,871 | Output: 488]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 4 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭──────────────────────────────────────────────────────────────╮
│ Calling tool: 'calculate_sum' with arguments:                 │
│ {'values': [637000.48, 477033.02, 107857.06, 159778.76,       │
│ 121331.46, 96064.77]}                                         │
╰──────────────────────────────────────────────────────────────╯
Observations: 1599065.55
The next seven steps were spent calculating the relative metric changes using the calculate_metric_increase tool.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 5 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╭──────────────────────────────────────────────────────────────╮
│ Calling tool: 'calculate_metric_increase' with                │
│ arguments: {'before': 1731985.21, 'after': 1599065.55}        │
╰──────────────────────────────────────────────────────────────╯
Observations: 7.674410799385517
<-- similar tool calls for all country segments -->
In the end, the agent put together the final answer call.
So, if the LLM had had tools to calculate the absolute difference and the share of the sum, it would have taken an additional 14 iterations and even more tokens. Of course, we can prevent such inefficiencies by carefully designing the tools we provide:
- We could modify our functions to work with lists of values instead of single items, which would significantly reduce the number of steps (see the sketch after this list).
- Additionally, we could create more complex functions that calculate all the necessary metrics at once (similar to what the code agent did). That way, the LLM wouldn't have to perform calculations step by step. However, this approach might reduce the flexibility of the system.
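For illustration, here is what the first option might look like: a hypothetical list-based variant of calculate_metric_increase (not part of the original setup) that processes all segments in a single call, using the same sign convention as the original tool.

@tool
def calculate_metric_increases(before: list, after: list) -> list:
    """
    Calculate the percentage change for each pair of before/after values

    Args:
        before: list of values before
        after: list of values after
    """
    return [round((b - a) * 100 / b, 2) for b, a in zip(before, after)]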
Even though the results weren't ideal due to a poor choice of tools, I still find this example quite insightful. It's clear that code agents are more powerful, cost-efficient and flexible, as they can invent their own comprehensive tools and perform multiple actions in a single step.
You can find the complete code and execution logs on GitHub.
Summary
We've learned a lot about code agents. Now, it's time to wrap things up with a quick summary.
Code agents are LLM agents that "think" and act using Python code. Instead of calling tools via JSON, they generate and execute actual code. This makes them more flexible and cost-efficient, as they can invent their own comprehensive tools and perform multiple actions in a single step.
HuggingFace has brought this approach to life in their smolagents framework. Smolagents makes it easy to build fairly complex agents without much hassle, while also providing safety measures during code execution.
In this article, we've explored the basic functionality of the smolagents library. But there's much more to it. In the next article, we'll dive into more advanced features (like multi-agent setups and planning steps) to build an agent that can narrate KPI changes. Stay tuned!
Thank you very much for reading this article. I hope it was insightful for you.
Reference
This article is inspired by the "Building Code Agents with Hugging Face smolagents" short course by DeepLearning.AI.