Hello Learners…
Welcome to the blog…
Table Of Contents
- Introduction
- What is AutoGen?
- How To Create Multi-Agent Conversation Using AutoGen
- Summary
- References
Introduction
In this post, we will learn how to create a multi-agent conversation using AutoGen. AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks.
What is AutoGen?
AutoGen is a multi-agent conversational framework that enables us to quickly create multiple agents with different roles, personas, tasks, and capabilities to implement complex AI applications using different agentic design patterns.
AutoGen enables building next-gen LLM applications based on multi-agent conversations with minimal effort. It simplifies the orchestration, automation, and optimization of a complex LLM workflow. It maximizes the performance of LLM models and overcomes their weaknesses.
How To Create Multi-Agent Conversation Using AutoGen
Let’s try to understand why we would use AutoGen with an example:
- If we want to analyse financial data, the task may require writing code to collect and analyse the data, then synthesizing the findings into a report. This might take a person days of research, coding, and writing.
- A multi-agent system can streamline this process by letting us create agents that work for us as a researcher, data collector, co-writer, and executor.
- Our agents can also iteratively review and improve the results until they meet our standards.
- This is one simple example, but there are many practical applications of a multi-agent framework.
Let’s start learning the concept of an agent.
In AutoGen, an agent is an entity that can act on behalf of human intent, send messages, receive messages, perform actions, generate replies and interact with other agents.
AutoGen has a built-in agent class called ConversableAgent. It unifies different types of agents under the same programming abstraction and comes with a lot of built-in functionality.
For example, we can use a list of LLM configurations to generate replies, execute code, or call functions and tools. It also provides components for keeping a human in the loop and for checking whether to stop the conversation.
We can switch each component on and off and customize it to suit the needs of our application.
Using these different capabilities, we can create agents with different roles using the same interface.
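As a small illustration of the "checking whether to stop" component, here is a sketch of the kind of predicate a ConversableAgent accepts through its is_termination_msg parameter. The "TERMINATE" keyword here is just our own convention for this example, not a fixed AutoGen rule.

```python
# A minimal termination predicate: ConversableAgent accepts a callable
# like this via its is_termination_msg parameter. The "TERMINATE"
# keyword is our own convention for this sketch.
def is_termination_msg(msg: dict) -> bool:
    content = msg.get("content") or ""
    return "TERMINATE" in content.upper()
```

An agent created with is_termination_msg=is_termination_msg would stop replying once a received message contains the keyword.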
Let’s Create Multi-Agent Conversation With AutoGen
To implement the multi-agent conversational system using AutoGen, you can clone the below repo.
Then install all the requirements needed to run the code using the below command,
pip install -r requirements.txt
To run this, we first need an OpenAI API key to use the LLM. For that, go to the below page and get your API key,
Note: The OpenAI API is a paid service.
Create a .env file in your current directory and put the API key in it:
OPENAI_API_KEY=""
Now open Google Colab and run the code cells one by one.
Setup OpenAI API Key
This will set up the OpenAI API key for the code.
# Add your utilities or helper functions to this file.
import os
from dotenv import load_dotenv, find_dotenv

# These expect to find a .env file at the directory above the lesson.
# The format for that file is (without the comment):
# API_KEYNAME=AStringThatIsTheLongAPIKeyFromSomeService

def load_env():
    _ = load_dotenv(find_dotenv())

def get_openai_api_key():
    load_env()
    openai_api_key = os.getenv("OPENAI_API_KEY")
    return openai_api_key
After that, we configure an LLM,
OPENAI_API_KEY = get_openai_api_key()
llm_config = {"model": "gpt-3.5-turbo"}
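The one-line llm_config above relies on the OPENAI_API_KEY environment variable being picked up by the OpenAI client. A slightly fuller variant, sketched using AutoGen's config_list convention, passes the key explicitly (the temperature value is purely illustrative, not from the original setup):

```python
import os

# Hedged sketch: pass the key explicitly via AutoGen's config_list
# convention instead of relying on the default environment lookup.
llm_config = {
    "config_list": [
        {"model": "gpt-3.5-turbo",
         "api_key": os.getenv("OPENAI_API_KEY", "")},
    ],
    "temperature": 0.7,  # illustrative value, not from the original post
}
```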
Define an AutoGen Agent
from autogen import ConversableAgent

agent = ConversableAgent(
    name="chatbot",
    llm_config=llm_config,
    human_input_mode="NEVER",
)
Then generate a reply,
reply = agent.generate_reply(
    messages=[{"content": "Tell me a joke.", "role": "user"}]
)
print(reply)
Here you can see that we can ask questions and generate replies. This is a simple agent that we can use to generate replies to a user's queries.
Now, if we ask the agent to repeat the joke, let's see what it generates,
reply = agent.generate_reply(
    messages=[{"content": "Repeat the joke.", "role": "user"}]
)
print(reply)
Here we expect the agent to repeat the joke, but it is not able to, because generate_reply does not store any past memory; it does not know what we asked previously.
Each call generates a reply from scratch. To solve this, we need a different approach to handle a continuous chat.
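One way to see the fix: the caller keeps the history and sends the full list back on every call. Below is a plain-Python sketch of that pattern; echo_agent is a stand-in for agent.generate_reply, used only so the example runs without an LLM.

```python
# The caller accumulates the message history and passes the whole list
# on every call, so the "agent" can see earlier turns.
def make_chat(reply_fn):
    messages = []

    def ask(user_text):
        messages.append({"content": user_text, "role": "user"})
        reply = reply_fn(messages)  # full history, not just the last turn
        messages.append({"content": reply, "role": "assistant"})
        return reply

    return ask, messages

# Stand-in for agent.generate_reply: reports how many user turns it saw.
def echo_agent(msgs):
    user_turns = sum(1 for m in msgs if m["role"] == "user")
    return f"I can see {user_turns} user message(s)."

ask, history = make_chat(echo_agent)
ask("Tell me a joke.")
print(ask("Repeat the joke."))  # prints "I can see 2 user message(s)."
```

Because the second call receives both user turns, a real LLM behind reply_fn would have the context needed to repeat the joke.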
Create a Conversation Between Two Agents
Let's set up a conversation between two agents, Hasmukh and Amit, where the memory of their interactions is retained.
hasmukh = ConversableAgent(
    name="hasmukh",
    system_message="Your name is Hasmukh and you are a stand-up comedian.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

amit = ConversableAgent(
    name="amit",
    system_message="Your name is Amit and you are a stand-up comedian. "
    "Start the next joke from the punchline of the previous joke.",
    llm_config=llm_config,
    human_input_mode="NEVER",
)

chat_result = hasmukh.initiate_chat(
    recipient=amit,
    message="I'm hasmukh. amit, let's keep the jokes rolling.",
    max_turns=2,
)
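Conceptually, initiate_chat alternates turns between the sender and the recipient until the turn limit is reached. A toy, LLM-free sketch of that loop (the reply text is fabricated, and the exact max_turns accounting here is our assumption, not AutoGen's definition):

```python
# Toy sketch of two agents alternating turns, no LLM involved.
def toy_chat(sender, recipient, opening, max_turns):
    # The opening message counts as the sender's first turn; agents
    # then alternate until each has spoken max_turns times.
    history = [{"name": sender, "content": opening}]
    speaker = recipient
    while len(history) < max_turns * 2:
        last = history[-1]
        history.append({
            "name": speaker,
            "content": f"{speaker} replying to {last['name']}",
        })
        speaker = sender if speaker == recipient else recipient
    return history

history = toy_chat("hasmukh", "amit", "let's keep the jokes rolling", 2)
for msg in history:
    print(f"{msg['name']}: {msg['content']}")
```

With max_turns=2 this produces four messages, two from each agent, mirroring the conversation the real agents hold above.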
We can also print the whole conversation using pprint,
import pprint
pprint.pprint(chat_result.chat_history)
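Each entry in chat_result.chat_history is a message dict with at least content and role keys (and usually a name). A small self-contained helper to render such a history more compactly; the sample data here is made up for illustration:

```python
# Render a chat history (a list of message dicts) one line per turn.
def format_history(history):
    lines = []
    for msg in history:
        who = msg.get("name") or msg.get("role")
        lines.append(f"{who}: {msg['content'][:60]}")
    return lines

# Made-up sample shaped like chat_result.chat_history.
sample = [
    {"name": "hasmukh", "role": "assistant",
     "content": "I'm hasmukh. amit, let's keep the jokes rolling."},
    {"name": "amit", "role": "user",
     "content": "Why did the comedian... (punchline here)"},
]
for line in format_history(sample):
    print(line)
```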
So this is a simple example of a multi-agent conversation using AutoGen.
To check the cost of the conversation,
pprint.pprint(chat_result.cost)
Summary
Each component of the ConversableAgent can be tailored to suit specific application requirements, with the ability to enable or disable functionalities as needed.
This adaptability enables the creation of agents fulfilling diverse roles while adhering to a unified interface.
By leveraging AutoGen’s capabilities, developers can construct agents tailored to their application’s needs, enhancing efficiency and effectiveness across a wide range of tasks.