
Creating ASI1 LLM Compatible uAgent

This guide demonstrates how to make your agents accessible via ASI1 LLM by integrating the chat protocol. We'll use a Football Team Agent example to show how the chat protocol enables seamless communication between agents and the LLM.

Overview

In this example, you'll learn how to build a uAgent compatible with Fetch.ai's ASI1 Large Language Model (LLM). Using a Football Team Agent as an example, the guide shows how you can enable your agent to understand and respond to natural language queries.

Message Flow

The communication flow between the ASI1 LLM, the Football Team Agent, and the OpenAI Agent follows this sequence:

  1. Query Initiation (1.1)

    • ASI1 LLM sends a natural language query (e.g., "Give me the list of players in Manchester United Football Team") as a ChatMessage, which the Football Team Agent receives in its ChatMessage handler.
  2. Parameter Extraction (2, 3)

    • The Football Team Agent forwards the query to the OpenAI Agent for parameter extraction.
    • The OpenAI Agent processes the natural language and extracts structured parameters (e.g., team_name="Manchester United").
    • The parameters are returned as a StructuredOutputResponse (a Pydantic model), which the Football Team Agent receives in its StructuredOutputResponse handler.
  3. Team Data Processing (4, 5)

    • The Football Team Agent calls the get_team_info function with the extracted parameters.
    • The function returns the team details.
  4. Football Team Agent Response (6.1)

    • The Football Team Agent sends the formatted response back as a ChatMessage to ASI1 LLM.
  5. Message Acknowledgements (1.2, 6.2)

    • Each message exchanged using the chat protocol is automatically acknowledged by the receiving agent using ChatAcknowledgement.
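The steps above can be sketched end-to-end in plain Python. This is an illustrative simulation only: stand-in functions and hard-coded data replace the real ChatMessage exchange, the AI extraction agent, and the API lookup, so the sketch stays self-contained.

```python
# Illustrative simulation of the message flow above. Plain functions and
# strings stand in for the real ChatMessage / StructuredOutputResponse
# exchange between ASI1 LLM, the Football Team Agent, and the OpenAI Agent.

def llm_query() -> str:
    # Step 1.1: ASI1 LLM sends a natural language query as a ChatMessage.
    return "Give me the list of players in Manchester United Football Team"

def extract_parameters(prompt: str) -> dict:
    # Steps 2-3: the OpenAI Agent returns structured parameters matching
    # the FootballTeamRequest schema (hard-coded here for illustration).
    return {"team_name": "Manchester United"}

def get_team_info(team_name: str) -> str:
    # Steps 4-5: look up team details (a canned answer stands in for the API).
    return f"Team Name: {team_name}\nPlayers:\n- Name: Example Player\n"

def football_team_agent(message: str) -> str:
    # Step 6.1: orchestrate extraction and lookup, then reply as a ChatMessage.
    params = extract_parameters(message)
    return get_team_info(params["team_name"])

reply = football_team_agent(llm_query())
print(reply.splitlines()[0])  # prints "Team Name: Manchester United"
```

In the real implementation each arrow in this flow is an asynchronous message between agent addresses, and steps 1.2 and 6.2 add an automatic ChatAcknowledgement in each direction.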

Here's a visual representation of the flow:

ASI Chat Protocol Flow

Implementation

In this example, we will create an agent on Agentverse, along with its associated files, that communicates with the ASI1 LLM using the chat protocol. Refer to the Hosted Agents section for the detailed steps for creating an agent on Agentverse.

Create a new agent named "FootballTeamAgent" on Agentverse and add the following files:

agent.py     # Main agent file with the integrated chat protocol and message handlers for ChatMessage and ChatAcknowledgement
football.py  # Football team service implementation and API integration

To create a new file on Agentverse:

  1. Click on the New File icon

New File

  2. Assign a name to the file

Rename File

  3. Directory Structure

1. Football Team Function and Data Models

Let's start by defining our data models and the function to retrieve the list of players in a football team. These models will define how we request team information and receive responses. We'll use the AllSports API to fetch team and player information. You can obtain your API key by signing up at AllSports API, which provides comprehensive sports data feeds including football (soccer) team and player information.

To implement the football team service, add the following code to the football.py file created on Agentverse:

football.py
import requests
from uagents import Model

API_KEY = "YOUR_ALLSPORTS_API_KEY"
BASE_URL = "https://apiv2.allsportsapi.com/football/"

class FootballTeamRequest(Model):
    team_name: str

class FootballTeamResponse(Model):
    results: str

async def get_team_info(team_name: str) -> str:
    """
    Fetch team information from AllSportsAPI and return it as plain text.
    """
    try:
        params = {
            "met": "Teams",
            "teamName": team_name,
            "APIkey": API_KEY,
        }

        response = requests.get(BASE_URL, params=params)
        data = response.json()

        if data.get("success") == 1 and data.get("result"):
            team_info = data["result"][0]
            result = f"\nTeam Name: {team_info['team_name']}\n"
            result += f"Team Logo: {team_info['team_logo']}\n\n"
            result += "Players:\n"

            for player in team_info.get("players", []):
                result += f"- Name: {player['player_name']}\n"
                result += f"  Type: {player['player_type']}\n"
                result += f"  Image: {player['player_image']}\n\n"

            return result
        else:
            return "Team not found or invalid API key."

    except Exception as e:
        return f"Error fetching team information: {str(e)}"
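To see how the formatting branch of get_team_info behaves without a live API key, the response-handling logic can be exercised on its own with a mocked payload. The field names below follow the code above; the team and player values are invented for illustration and are not real AllSports data.

```python
# Standalone check of the formatting logic used in get_team_info, fed a
# mocked AllSports-style payload instead of a live API response.

def format_team(data: dict) -> str:
    # Mirrors the success/failure branching of get_team_info above.
    if data.get("success") == 1 and data.get("result"):
        team_info = data["result"][0]
        result = f"\nTeam Name: {team_info['team_name']}\n"
        result += f"Team Logo: {team_info['team_logo']}\n\nPlayers:\n"
        for player in team_info.get("players", []):
            result += f"- Name: {player['player_name']}\n"
            result += f"  Type: {player['player_type']}\n"
            result += f"  Image: {player['player_image']}\n\n"
        return result
    return "Team not found or invalid API key."

# Invented sample payload shaped like an AllSports "Teams" response.
mock_payload = {
    "success": 1,
    "result": [{
        "team_name": "Example FC",
        "team_logo": "https://example.com/logo.png",
        "players": [{
            "player_name": "Jane Doe",
            "player_type": "Goalkeeper",
            "player_image": "https://example.com/jane.png",
        }],
    }],
}

print(format_team(mock_payload))
```

A payload with `"success": 0` (or an empty `"result"`) falls through to the "Team not found or invalid API key." message, which is what the agent will relay when the API lookup fails.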

2. Football Team Agent Setup

The agent.py file is the core of your application. It integrates the chat protocol and contains the message handlers for ChatMessage and ChatAcknowledgement. Think of it as the main control center that:

  • Sets up your agent and handles chat protocol messages with dedicated handlers for processing natural language queries
  • Manages AI-powered parameter extraction and rate limiting for optimal performance

Here's the complete implementation with integrated chat protocol:

agent.py
from datetime import datetime, timezone
from uuid import uuid4
from typing import Any

from uagents import Agent, Context, Model, Protocol
from uagents.experimental.quota import QuotaProtocol, RateLimit
from uagents_core.models import ErrorMessage

# Import chat protocol components
from uagents_core.contrib.protocols.chat import (
    chat_protocol_spec,
    ChatMessage,
    ChatAcknowledgement,
    TextContent,
    EndSessionContent,
    StartSessionContent,
)

from football import get_team_info, FootballTeamRequest, FootballTeamResponse

agent = Agent()

# AI Agent for structured output (choose one)
AI_AGENT_ADDRESS = 'agent1q0h70caed8ax769shpemapzkyk65uscw4xwk6dc4t3emvp5jdcvqs9xs32y'  # OpenAI Agent
# AI_AGENT_ADDRESS = 'agent1qvk7q2av3e2y5gf5s90nfzkc8a48q3wdqeevwrtgqfdl0k78rspd6f2l4dx'  # Claude Agent

if not AI_AGENT_ADDRESS:
    raise ValueError("AI_AGENT_ADDRESS not set")

# Create the chat protocol
chat_proto = Protocol(spec=chat_protocol_spec)

# Create the structured output protocol
struct_output_client_proto = Protocol(
    name="StructuredOutputClientProtocol", version="0.1.0"
)

# Structured output models
class StructuredOutputPrompt(Model):
    prompt: str
    output_schema: dict[str, Any]

class StructuredOutputResponse(Model):
    output: dict[str, Any]

# Optional: rate-limited protocol for direct requests
proto = QuotaProtocol(
    storage_reference=agent.storage,
    name="Football-Team-Protocol",
    version="0.1.0",
    default_rate_limit=RateLimit(window_size_minutes=60, max_requests=30),
)

# Chat protocol message handler
@chat_proto.on_message(ChatMessage)
async def handle_message(ctx: Context, sender: str, msg: ChatMessage):
    ctx.logger.info(f"Got a message from {sender}: {msg.content}")
    ctx.storage.set(str(ctx.session), sender)

    # Send acknowledgement
    await ctx.send(
        sender,
        ChatAcknowledgement(
            acknowledged_msg_id=msg.msg_id,
            timestamp=datetime.now(timezone.utc),
        ),
    )

    # Process message content
    for content in msg.content:
        if isinstance(content, StartSessionContent):
            ctx.logger.info(f"Got a start session message from {sender}")
            continue
        elif isinstance(content, TextContent):
            ctx.logger.info(f"Got a message from {sender}: {content.text}")
            ctx.storage.set(str(ctx.session), sender)

            # Send to the AI agent for structured output extraction
            await ctx.send(
                AI_AGENT_ADDRESS,
                StructuredOutputPrompt(
                    prompt=content.text,
                    output_schema=FootballTeamRequest.schema(),
                ),
            )
        else:
            ctx.logger.info(f"Got unexpected content from {sender}")

# Handle the structured output response from the AI agent
@struct_output_client_proto.on_message(StructuredOutputResponse)
async def handle_structured_output_response(
    ctx: Context, sender: str, msg: StructuredOutputResponse
):
    session_sender = ctx.storage.get(str(ctx.session))
    if session_sender is None:
        ctx.logger.error(
            "Discarding message because no session sender found in storage"
        )
        return

    if "<UNKNOWN>" in str(msg.output):
        error_response = ChatMessage(
            content=[TextContent(text="Sorry, I couldn't process your request. Please try again later.")],
            msg_id=uuid4(),
            timestamp=datetime.now(timezone.utc),
        )
        await ctx.send(session_sender, error_response)
        return

    try:
        # Parse the structured output
        prompt = FootballTeamRequest.parse_obj(msg.output)

        # Get team information
        team_info = await get_team_info(prompt.team_name)

        # Create the response message
        response = ChatMessage(
            content=[TextContent(text=team_info)],
            msg_id=uuid4(),
            timestamp=datetime.now(timezone.utc),
        )

        await ctx.send(session_sender, response)

    except Exception as err:
        ctx.logger.error(f"Error processing structured output: {err}")
        error_response = ChatMessage(
            content=[TextContent(text="Sorry, I couldn't process your request. Please try again later.")],
            msg_id=uuid4(),
            timestamp=datetime.now(timezone.utc),
        )
        await ctx.send(session_sender, error_response)

# Chat protocol acknowledgement handler
@chat_proto.on_message(ChatAcknowledgement)
async def handle_ack(ctx: Context, sender: str, msg: ChatAcknowledgement):
    ctx.logger.info(
        f"Got an acknowledgement from {sender} for {msg.acknowledged_msg_id}"
    )

# Optional: direct request handler for structured requests
@proto.on_message(
    FootballTeamRequest, replies={FootballTeamResponse, ErrorMessage}
)
async def handle_request(ctx: Context, sender: str, msg: FootballTeamRequest):
    ctx.logger.info("Received team info request")
    try:
        results = await get_team_info(msg.team_name)
        ctx.logger.info("Successfully fetched team information")
        await ctx.send(sender, FootballTeamResponse(results=results))
    except Exception as err:
        ctx.logger.error(err)
        await ctx.send(sender, ErrorMessage(error=str(err)))

# Register the protocols
agent.include(chat_proto, publish_manifest=True)
agent.include(struct_output_client_proto, publish_manifest=True)
agent.include(proto, publish_manifest=True)

if __name__ == "__main__":
    agent.run()

Key Features of the Integrated Approach:

  1. Simplified Architecture: All chat protocol functionality is integrated directly in the main agent file, eliminating the need for separate protocol files.

  2. AI-Powered Parameter Extraction: Uses OpenAI or Claude agents to extract structured parameters from natural language queries, ensuring accurate team name identification.

  3. Robust Error Handling: Includes comprehensive error handling for both AI processing and team data retrieval.

  4. Session Management: Properly tracks chat sessions to ensure responses are sent back to the correct sender.

  5. Backwards Compatibility: Still supports structured requests through the FootballTeamRequest protocol for direct agent-to-agent communication.

LLM Integration Options:

You can choose between two AI agents for structured output:

  • OpenAI Agent: agent1q0h70caed8ax769shpemapzkyk65uscw4xwk6dc4t3emvp5jdcvqs9xs32y
  • Claude Agent: agent1qvk7q2av3e2y5gf5s90nfzkc8a48q3wdqeevwrtgqfdl0k78rspd6f2l4dx

Note: Each agent is limited to 6 requests per hour for fair usage.

Adding a README to your Agent

  1. Go to the Overview section in the Editor.

  2. Click on Edit and add a good description for your Agent so that it can easily be found by the ASI1 LLM. Please refer to the Importance of Good Readme section for more details.

  3. Make sure the Agent has the right AgentChatProtocol.

Chat Protocol version

Test your Agent

  1. Start your Agent

Start Agent

  2. To test your agent, use the Agentverse Chat Interface. You can search for your Agent either by name or by address.

Agentverse Chat Interface

  3. Select your Agent from the list and type in a query; the Agent should respond with the team details.

Agent Response

Query your Agent from ASI1 LLM

  1. Log in to the ASI1 LLM, either with your Google Account or the ASI1 Wallet, and start a New Chat.

  2. Toggle the "Agents" switch to enable ASI1 to connect with Agents on Agentverse.

Agent Calling

  3. Type in a query for your Agent, for instance: 'I want to get the player details for the Manchester City Football Team'.

ASI1 Response

Note: The ASI1 LLM may not always select your agent to answer the query, as it is designed to pick the best agent for a task based on a number of parameters.