Choosing Between LLM Agent Frameworks | by Aparna Dhinakaran | Sep 2024


The tradeoffs between building bespoke code-based agents and the major agent frameworks.

Image by author

Thanks to John Gilhuly for his contributions to this piece.

Agents are having a moment. With multiple new frameworks and fresh investment in the space, modern AI agents are overcoming shaky origins to rapidly supplant RAG as an implementation priority. So will 2024 finally be the year that autonomous AI systems take over writing our emails, booking flights, talking to our data, or seemingly any other task?

Maybe, but much work remains to get to that point. Any developer building an agent must choose not only foundations — which model, use case, and architecture to use — but also which framework to leverage. Do you go with the long-standing LangGraph, or the newer entrant LlamaIndex Workflows? Or do you go the traditional route and code the whole thing yourself?

This post aims to make that choice a bit easier. Over the past few weeks, I built the same agent in the major frameworks to examine some of the strengths and weaknesses of each at a technical level. All of the code for each agent is available in this repo.

Background on the Agent Used for Testing

The agent used for testing includes function calling, multiple tools or skills, connections to outside resources, and shared state or memory.

The agent has the following capabilities:

  1. Answering questions from a knowledge base
  2. Talking to data: answering questions about telemetry data of an LLM application
  3. Analyzing data: analyzing higher-level trends and patterns in retrieved telemetry data

In order to accomplish these, the agent has three starting skills: RAG with product documentation, SQL generation on a trace database, and data analysis. A simple Gradio-powered interface is used for the agent UI, with the agent itself structured as a chatbot.
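For reference, a chatbot UI like this takes only a few lines of Gradio. The sketch below is illustrative rather than the repo's actual UI code; the respond function is a stand-in for the call into the agent's router:

import gradio as gr

def respond(message, history):
    # placeholder: the real app would pass the message (plus prior turns) into the
    # agent's router and return the agent's final text response
    return f"(agent reply to: {message})"

# ChatInterface renders a chatbot-style UI around the respond function
gr.ChatInterface(fn=respond, title="Agent Chat").launch()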

Code-Based Agent (No Framework)

The first option you have when building an agent is to skip the frameworks entirely and build the agent fully yourself. When embarking on this project, this was the approach I started with.

Image created by author

Pure Code Architecture

The code-based agent below is made up of an OpenAI-powered router that uses function calling to select the right skill to use. After that skill completes, it returns back to the router to either call another skill or respond to the user.

The agent keeps an ongoing list of messages and responses that is passed fully into the router on each call to preserve context through cycles.

def router(messages):
    if not any(
        isinstance(message, dict) and message.get("role") == "system" for message in messages
    ):
        system_prompt = {"role": "system", "content": SYSTEM_PROMPT}
        messages.append(system_prompt)

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=skill_map.get_combined_function_description_for_openai(),
    )

    messages.append(response.choices[0].message)
    tool_calls = response.choices[0].message.tool_calls
    if tool_calls:
        handle_tool_calls(tool_calls, messages)
        return router(messages)
    else:
        return response.choices[0].message.content
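The router above delegates execution to a handle_tool_calls helper that isn't shown. Below is a minimal sketch of what that helper could look like, assuming it resolves each call through the SkillMap (described next) and appends a "tool" message back onto the history; the repo's actual implementation may differ:

import json

def handle_tool_calls(tool_calls, messages):
    for tool_call in tool_calls:
        # look up the callable registered under this function name in the SkillMap
        function_to_call = skill_map.get_function_callable_by_name(tool_call.function.name)
        arguments = json.loads(tool_call.function.arguments)

        # run the skill and append its output as a "tool" message so the router
        # sees the result on its next pass
        function_result = function_to_call(**arguments)
        messages.append(
            {
                "role": "tool",
                "content": str(function_result),
                "tool_call_id": tool_call.id,
            }
        )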

The skills themselves are defined in their own classes (e.g. GenerateSQLQuery) that are collectively held in a SkillMap. The router itself only interacts with the SkillMap, which it uses to load skill names, descriptions, and callable functions. This means that adding a new skill to the agent is as simple as writing that skill as its own class, then adding it to the list of skills in the SkillMap. The idea here is to make it easy to add new skills without disturbing the router code.

class SkillMap:
    def __init__(self):
        skills = [AnalyzeData(), GenerateSQLQuery()]

        self.skill_map = {}
        for skill in skills:
            self.skill_map[skill.get_function_name()] = (
                skill.get_function_dict(),
                skill.get_function_callable(),
            )

    def get_function_callable_by_name(self, skill_name) -> Callable:
        return self.skill_map[skill_name][1]

    def get_combined_function_description_for_openai(self):
        combined_dict = []
        for _, (function_dict, _) in self.skill_map.items():
            combined_dict.append(function_dict)
        return combined_dict

    def get_function_list(self):
        return list(self.skill_map.keys())

    def get_list_of_function_callables(self):
        return [skill[1] for skill in self.skill_map.values()]

    def get_function_description_by_name(self, skill_name):
        return str(self.skill_map[skill_name][0]["function"])
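To illustrate the contract, a new skill only needs to expose the three getters the SkillMap relies on. The class below is a hypothetical minimal skill (not the repo's GenerateSQLQuery), written under that assumption:

class EchoSkill:
    """A hypothetical minimal skill illustrating the SkillMap contract."""

    def get_function_name(self):
        return "echo"

    def get_function_dict(self):
        # OpenAI-style function description consumed by the router's tools parameter
        return {
            "type": "function",
            "function": {
                "name": "echo",
                "description": "Repeats the user's prompt back to them.",
                "parameters": {
                    "type": "object",
                    "properties": {"prompt": {"type": "string"}},
                    "required": ["prompt"],
                },
            },
        }

    def get_function_callable(self):
        return self.echo

    def echo(self, prompt: str) -> str:
        return f"You said: {prompt}"

Registering it would just be a matter of adding EchoSkill() to the skills list in SkillMap.__init__.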

Overall, this approach is fairly straightforward to implement, but it comes with a few challenges.

Challenges with Pure Code Agents

The main difficulty lies in structuring the router system prompt. Often, the router in the example above insisted on generating SQL itself instead of delegating that to the right skill. If you’ve ever tried to get an LLM not to do something, you know how frustrating that experience can be; finding a working prompt took many rounds of debugging. Accounting for the different output formats from each step was also tricky. Since I opted not to use structured outputs, I had to be ready for multiple different formats from each of the LLM calls in my router and skills.
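As a concrete illustration of that second point, the router ends up needing a small normalization helper. The function below is hypothetical (not from the repo), but it shows the kind of defensive handling required when responses can arrive as strings, dicts, or ChatCompletion-style objects:

def extract_text(llm_output) -> str:
    """Normalize the different shapes an unstructured LLM response can take."""
    if isinstance(llm_output, str):
        return llm_output
    if isinstance(llm_output, dict):
        return llm_output.get("content") or str(llm_output)
    # OpenAI ChatCompletion-style objects
    if hasattr(llm_output, "choices"):
        return llm_output.choices[0].message.content or ""
    return str(llm_output)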

Benefits of a Pure Code Agent

A code-based approach provides a good baseline and starting point, offering a great way to learn how agents work without relying on canned agent tutorials from prevailing frameworks. Although convincing the LLM to behave can be challenging, the code structure itself is simple enough to use and might make sense for certain use cases (more in the analysis section below).

LangGraph

LangGraph is one of the longest-standing agent frameworks, first released in January 2024. The framework is built to address the acyclic nature of existing pipelines and chains by adopting a Pregel graph structure instead. LangGraph makes it easier to define loops in your agent by adding the concepts of nodes, edges, and conditional edges to traverse a graph. LangGraph is built on top of LangChain and uses the objects and types from that framework.

Image created by author

LangGraph Architecture

The LangGraph agent looks similar to the code-based agent on paper, but the code behind it is drastically different. LangGraph still technically uses a “router,” in that it calls OpenAI with functions and uses the response to continue to a new step. However, the way the program moves between skills is controlled completely differently.

tools = [generate_and_run_sql_query, data_analyzer]
model = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)

def create_agent_graph():
    workflow = StateGraph(MessagesState)

    tool_node = ToolNode(tools)
    workflow.add_node("agent", call_model)
    workflow.add_node("tools", tool_node)

    workflow.add_edge(START, "agent")
    workflow.add_conditional_edges(
        "agent",
        should_continue,
    )
    workflow.add_edge("tools", "agent")

    checkpointer = MemorySaver()
    app = workflow.compile(checkpointer=checkpointer)
    return app

The graph defined here has a node for the initial OpenAI call, called “agent” above, and one for the tool handling step, called “tools.” LangGraph has a built-in object called ToolNode that takes a list of callable tools and triggers them based on a ChatMessage response, before returning to the “agent” node again.

def should_continue(state: MessagesState):
    messages = state["messages"]
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"
    return END

def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}

After each call of the “agent” node (put another way: the router in the code-based agent), the should_continue edge decides whether to return the response to the user or pass it on to the ToolNode to handle tool calls.

Throughout each node, the “state” stores the list of messages and responses from OpenAI, similar to the code-based agent’s approach.
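For reference, running the compiled graph then just means invoking it with an initial message and a thread ID for the MemorySaver checkpointer. The snippet below is a usage sketch rather than code from the repo; the question text and thread ID are placeholders:

from langchain_core.messages import HumanMessage

app = create_agent_graph()

# the thread_id lets the MemorySaver checkpointer carry state across turns
final_state = app.invoke(
    {"messages": [HumanMessage(content="How many traces were logged yesterday?")]},
    config={"configurable": {"thread_id": "demo-thread"}},
)
print(final_state["messages"][-1].content)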

Challenges with LangGraph

Most of the difficulties with LangGraph in this example stem from the need to use LangChain objects for things to flow nicely.

Challenge #1: Function Call Validation

In order to use the ToolNode object, I had to refactor most of my existing Skill code. The ToolNode takes a list of callable functions, which initially made me think I could use my existing functions; however, things broke down because of my function parameters.

The skills were defined as classes with a callable member function, meaning they had “self” as their first parameter. GPT-4o was smart enough not to include the “self” parameter in the generated function call, but LangGraph read this as a validation error due to a missing parameter.

This took hours to figure out, because the error message instead marked the third parameter in the function (“args” on the data analysis skill) as the missing parameter:

pydantic.v1.error_wrappers.ValidationError: 1 validation error for data_analysis_toolSchema
args field required (type=value_error.missing)

It’s worth mentioning that the error message originated from Pydantic, not from LangGraph.

I eventually bit the bullet and redefined my skills as basic methods with LangChain’s @tool decorator, and was able to get things working.

@tool
def generate_and_run_sql_query(query: str):
    """Generates and runs an SQL query based on the prompt.

    Args:
        query (str): A string containing the original user prompt.

    Returns:
        str: The result of the SQL query.
    """

Challenge #2: Debugging

As mentioned, debugging within a framework is difficult. This mainly comes down to confusing error messages and abstracted concepts that make it harder to view variables.

The abstracted concepts primarily show up when trying to debug the messages being sent around the agent. LangGraph stores these messages in state[“messages”]. Some nodes within the graph pull from these messages automatically, which can make it hard to understand the value of the messages when they are accessed by a node.
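One way to watch those messages as they evolve (a sketch assuming LangGraph's streaming API, not code from the repo) is to stream the graph's state after each node and print the latest message:

from langchain_core.messages import HumanMessage

# stream_mode="values" yields the full state after each node, which makes it easier
# to watch state["messages"] grow as nodes append to it
for state in app.stream(
    {"messages": [HumanMessage(content="Show me traces with errors")]},
    config={"configurable": {"thread_id": "debug-thread"}},
    stream_mode="values",
):
    print(state["messages"][-1])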

A sequential view of the agent’s actions (image by author)

LangGraph Benefits

One of the main benefits of LangGraph is that it is easy to work with. The graph structure code is clean and accessible. Especially if you have complex node logic, having a single view of the graph makes it easier to understand how the agent is connected together. LangGraph also makes it straightforward to convert an existing application built in LangChain.

Takeaway

If you use everything in the framework, LangGraph works cleanly; if you step outside of it, prepare for some debugging headaches.

LlamaIndex Workflows

Workflows is a newer entrant into the agent framework space, debuting earlier this summer. Like LangGraph, it aims to make looping agents easier to build. Workflows also has a particular focus on running asynchronously.

Some elements of Workflows seem to be in direct response to LangGraph, specifically its use of events instead of edges and conditional edges. Workflows use steps (analogous to nodes in LangGraph) to house logic, and emitted and received events to move between steps.

Image created by author

The structure above looks similar to the LangGraph structure, save for one addition: I added a setup step to the Workflow to prepare the agent context (more on this below). Despite the similar structure, there is very different code powering it.

Workflows Architecture

The code below defines the Workflow structure. Similar to LangGraph, this is where I prepared the state and attached the skills to the LLM object.

class AgentFlow(Workflow):
    def __init__(self, llm, timeout=300):
        super().__init__(timeout=timeout)
        self.llm = llm
        self.memory = ChatMemoryBuffer(token_limit=1000).from_defaults(llm=llm)
        self.tools = []
        for func in skill_map.get_function_list():
            self.tools.append(
                FunctionTool(
                    skill_map.get_function_callable_by_name(func),
                    metadata=ToolMetadata(
                        name=func, description=skill_map.get_function_description_by_name(func)
                    ),
                )
            )

    @step
    async def prepare_agent(self, ev: StartEvent) -> RouterInputEvent:
        user_input = ev.input
        user_msg = ChatMessage(role="user", content=user_input)
        self.memory.put(user_msg)

        chat_history = self.memory.get()
        return RouterInputEvent(input=chat_history)

This is also where I define an extra step, “prepare_agent”. This step creates a ChatMessage from the user input and adds it to the workflow memory. Splitting this out as a separate step means we do not return to it as the agent loops through steps, which avoids repeatedly adding the user message to the memory.

In the LangGraph case, I accomplished the same thing with a run_agent method that lived outside the graph. This difference is mostly stylistic; however, it’s cleaner in my opinion to house this logic with the Workflow and graph as we’ve done here.

With the Workflow set up, I then defined the routing code:

@step
async def router(self, ev: RouterInputEvent) -> ToolCallEvent | StopEvent:
    messages = ev.input

    if not any(
        isinstance(message, dict) and message.get("role") == "system" for message in messages
    ):
        system_prompt = ChatMessage(role="system", content=SYSTEM_PROMPT)
        messages.insert(0, system_prompt)

    with using_prompt_template(template=SYSTEM_PROMPT, version="v0.1"):
        response = await self.llm.achat_with_tools(
            model="gpt-4o",
            messages=messages,
            tools=self.tools,
        )

    self.memory.put(response.message)

    tool_calls = self.llm.get_tool_calls_from_response(response, error_on_no_tool_call=False)
    if tool_calls:
        return ToolCallEvent(tool_calls=tool_calls)
    else:
        return StopEvent(result=response.message.content)

And the tool call handling code:

@step
async def tool_call_handler(self, ev: ToolCallEvent) -> RouterInputEvent:
    tool_calls = ev.tool_calls

    for tool_call in tool_calls:
        function_name = tool_call.tool_name
        arguments = tool_call.tool_kwargs
        if "input" in arguments:
            arguments["prompt"] = arguments.pop("input")

        try:
            function_callable = skill_map.get_function_callable_by_name(function_name)
        except KeyError:
            function_result = "Error: Unknown function call"
        else:
            function_result = function_callable(arguments)

        message = ChatMessage(
            role="tool",
            content=function_result,
            additional_kwargs={"tool_call_id": tool_call.tool_id},
        )

        self.memory.put(message)

    return RouterInputEvent(input=self.memory.get())

Both of these look more like the code-based agent than the LangGraph agent. This is mainly because Workflows keeps the conditional routing logic in the steps as opposed to in conditional edges — lines 18–24 were a conditional edge in LangGraph, whereas now they are just part of the routing step — and because LangGraph has a ToolNode object that does nearly everything in the tool_call_handler method automatically.

Moving past the routing step, one thing I was very happy to see is that I could use my SkillMap and existing skills from my code-based agent with Workflows. These required no changes to work with Workflows, which made my life much easier.

Challenges with Workflows

Challenge #1: Sync vs. Async

While asynchronous execution is preferable for a live agent, debugging a synchronous agent is much easier. Workflows is designed to work asynchronously, and trying to force synchronous execution was very difficult.

I initially thought I would just be able to remove the “async” method designations and switch from “achat_with_tools” to “chat_with_tools”. However, since the underlying methods within the Workflow class were also marked as asynchronous, it was necessary to redefine those in order to run synchronously. I ended up sticking with an asynchronous approach, but this didn’t make debugging any harder.
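Sticking with async meant driving the Workflow from an event loop. Below is a minimal usage sketch, assuming the AgentFlow class above and an already-constructed llm client (the question text is a placeholder):

import asyncio

async def main():
    agent = AgentFlow(llm=llm, timeout=300)
    # keyword arguments to run() become attributes on the StartEvent,
    # so prepare_agent reads them via ev.input
    result = await agent.run(input="Which spans had the highest latency last week?")
    print(result)

asyncio.run(main())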

A sequential view of the agent’s actions (image by author)

Challenge #2: Pydantic Validation Errors

In a repeat of the woes with LangGraph, similar problems emerged around confusing Pydantic validation errors on skills. Fortunately, these were easier to address this time, since Workflows was able to handle member functions just fine. I ultimately just ended up having to be more prescriptive in creating LlamaIndex FunctionTool objects for my skills:

for func in skill_map.get_function_list():
    self.tools.append(FunctionTool(
        skill_map.get_function_callable_by_name(func),
        metadata=ToolMetadata(name=func, description=skill_map.get_function_description_by_name(func))))

Excerpt from AgentFlow.__init__ that builds FunctionTools

Benefits of Workflows

I had a much easier time building the Workflows agent than I did the LangGraph agent, mainly because Workflows still required me to write the routing logic and tool handling code myself instead of providing built-in functions. This also meant that my Workflows agent looked extremely similar to my code-based agent.

The biggest difference came in the use of events. I used two custom events to move between steps in my agent:

class ToolCallEvent(Event):
    tool_calls: list[ToolSelection]

class RouterInputEvent(Event):
    input: list[ChatMessage]

The emitter-receiver, event-based architecture took the place of directly calling some of the methods in my agent, like the tool call handler.

When you’ve got extra complicated methods with a number of steps which are triggering asynchronously and may emit a number of occasions, this structure turns into very useful to handle that cleanly.

Other benefits of Workflows include the fact that it is very lightweight and doesn’t force much structure on you (aside from the use of certain LlamaIndex objects), and that its event-based architecture provides a helpful alternative to direct function calling — particularly for complex, asynchronous applications.

Comparing Frameworks

Looking across the three approaches, each one has its benefits.

The no-framework approach is the simplest to implement. Because any abstractions are defined by the developer (i.e. the SkillMap object in the example above), keeping various types and objects straight is easy. The readability and accessibility of the code comes down entirely to the individual developer, however, and it’s easy to see how increasingly complex agents could get messy without some enforced structure.

LangGraph provides quite a bit of structure, which makes the agent very clearly defined. If a broader team is collaborating on an agent, this structure would provide a helpful way of enforcing an architecture. LangGraph also might provide a good starting point with agents for those not as familiar with the structure. There is a tradeoff, however — since LangGraph does quite a bit for you, it can lead to headaches if you don’t fully buy into the framework; the code may be very clean, but you may pay for it with more debugging.

Workflows falls somewhere in the middle. Its event-based architecture might be extremely helpful for some projects, and the fact that less is required in terms of using LlamaIndex types provides greater flexibility for those who aren’t fully using the framework across their application.

Image created by author

Ultimately, the core question may come down to “are you already using LlamaIndex or LangChain to orchestrate your application?” LangGraph and Workflows are both so entwined with their respective underlying frameworks that the additional benefits of each agent-specific framework might not cause you to switch on merit alone.

The pure code approach will likely always be an attractive option. If you have the rigor to document and enforce any abstractions created, then ensuring nothing in an external framework slows you down is easy.

Of course, “it depends” is never a satisfying answer. These three questions should help you decide which framework to use in your next agent project.

Are you already using LlamaIndex or LangChain for significant pieces of your project?

If yes, explore that option first.

Are you familiar with common agent structures, or do you want something to tell you how you should structure your agent?

If you fall into the latter group, try Workflows. If you really fall into the latter group, try LangGraph.

Has your agent been built before?

One of the benefits of the frameworks is that there are many tutorials and examples built with each. There are far fewer examples of pure code agents to build from.

Image created by author

Choosing an agent framework is just one choice among many that will impact outcomes in production for generative AI systems. As always, it pays to have robust guardrails and LLM tracing in place — and to stay agile as new agent frameworks, research, and models upend established techniques.
