🦀 Build Agentverse agents and connect them to OmegaClaw
This guide shows how two production Agentverse agents — Tavily Search and Technical Analysis — are built with the uAgents framework, deployed to Agentverse, and wired into OmegaClaw as skills. Use them as reference to build your own agents and connect them to OmegaClaw the same way.
OmegaClaw is written in MeTTa and runs on the OpenCog Hyperon platform. Its control loop is approximately 200 lines of MeTTa, fully inspectable and modifiable. For the full setup guide see the OmegaClaw Quick Start Guide.
🤖 What OmegaClaw does
- Runs a token-efficient agentic loop that receives messages, selects skills, and acts.
- Maintains a three-tier memory architecture — working memory, long-term memory, and AtomSpace.
- Delegates reasoning to two formal engines, orchestrated by the LLM:
- NAL — Non-Axiomatic Logic, symbolic inference under uncertainty.
- PLN — Probabilistic Logic Networks, probabilistic higher-order reasoning.
- Exposes an extensible skill system covering memory, shell and file I/O, communication channels, web search, remote agents (including Agentverse), and formal reasoning.
🧩 What you get
After wiring the skills below, you can send messages like these in your OmegaClaw IRC channel:
- "Search the web for latest trends in quantum computing" → Tavily Search agent returns top results with titles, URLs, and summaries.
- "Give me technical analysis for TSLA" → Technical Analysis agent returns SMA, EMA, WMA, RSI, MACD, and other indicator signals.
- "Summarize the latest news about the ASI Alliance using Tavily Search" → OmegaClaw explicitly invokes the Tavily skill and summarises the results.
- "What do the indicators say about NVDA right now?" → Technical Analysis returns BUY/SELL/HOLD signals for NVDA.
Under the hood, each skill call spins up a short-lived uAgent, registers an Agentverse mailbox, sends the request, and returns the reply to OmegaClaw, which surfaces it in IRC.
⚙️ Prerequisites
- OmegaClaw running. Follow the Docker Quickstart below or see the OmegaClaw repo.
- Python 3.10+ with pip.
- Docker (recommended).
- An ASI:One API key for the LLM provider — sign up at asi1.ai and create a key at asi1.ai/dashboard/api-keys. See ASI:One quickstart.
- A communication channel — choose one:
- IRC — create a unique private channel name for webchat.quakenet.org. Names start with ## (e.g. ##omega12345).
- Telegram — search @BotFather on Telegram, enter /newbot to create a bot token, and follow the directions to name your bot.
- An Agentverse API key for mailbox registration (needed when wiring Agentverse skills). See Agentverse API Key.
🐳 Docker Quickstart
The fastest way to get OmegaClaw running with ASI:One:
docker pull singularitynet/omegaclaw:hackathon2604
curl -fsSL https://raw.githubusercontent.com/asi-alliance/OmegaClaw-Core/refs/tags/hackathon2604/scripts/omegaclaw | bash -s -- singularitynet/omegaclaw:hackathon2604
During setup:
- Accept the disclaimer.
- Choose communication channel: 1) IRC or 2) Telegram.
- For IRC, enter a unique private channel name (e.g. ##omega12345).
- For Telegram, enter your bot token (from @BotFather).
- Select ASI:One as the LLM provider.
- Enter your ASI:One API key.
Then join your channel:
- IRC — open webchat.quakenet.org, enter a username and your exact channel name, and wait for OmegaClaw to join.
- Telegram — navigate to your DM with your bot (e.g. https://t.me/<botname>).
If using IRC, your channel should be unique. If someone joins your channel while unattended, they will have full access to the bot and whatever permissions it has on your machine. Stop the Docker container when not in use: docker stop omegaclaw, and always monitor the channel.
Alternative install options
Option 2 — Custom Docker
For more control over the image contents and build process:
git clone https://github.com/trueagi-io/PeTTa
cd PeTTa
mkdir -p repos
git clone https://github.com/asi-alliance/OmegaClaw-Core.git repos/OmegaClaw-Core
cd repos/OmegaClaw-Core
git fetch origin hackathon-2604
git checkout hackathon-2604
Make any changes, then build and run your own image:
docker build -t my-omegaclaw .
./scripts/omegaclaw my-omegaclaw
Option 3 — Expert Install (no Docker)
Requires SWI-Prolog 9.1.12+:
git clone https://github.com/trueagi-io/PeTTa
cd PeTTa
mkdir -p repos
git clone https://github.com/asi-alliance/OmegaClaw-Core.git repos/OmegaClaw-Core
git clone https://github.com/patham9/petta_lib_chromadb.git repos/petta_lib_chromadb
cd repos/OmegaClaw-Core
git fetch origin hackathon-2604
git checkout hackathon-2604
cd ../..
cp repos/OmegaClaw-Core/run.metta ./
python3 -m venv ./.venv
source ./.venv/bin/activate
# CPU-only (or skip the first line if you have a GPU):
python3 -m pip install --index-url https://download.pytorch.org/whl/cpu torch
python3 -m pip install -r ./repos/OmegaClaw-Core/requirements.txt
Try these prompts to get started
Once OmegaClaw is running in IRC, try:
- "Search the web for recent latest trends in quantum computing and summarize what you find. Remember this — I'll ask you about it later." — tests web search, memory storage, and multi-turn recall.
- "What skills do you have available? Which ones have you used in our conversation so far?" — exercises self-inspection and AtomSpace query.
- "I want to build a live crypto price alert system. Break this into steps, and tell me your plan." — demonstrates long-horizon, stateful execution.
- "What do you know about PLN? How confident are you in that?" — surfaces the reasoning and uncertainty tracking layer.
- "Remember that I started working on the code now, and remind me when ten minutes have passed." — tests memory and timer functions.
IRC has a specific format that some models may struggle with. If responses are cut off or missing, tell the agent: "send in IRC format — text only, short chunks, low bandwidth, and always use the send command."
📇 Reference Agentverse agents
The two agents below are deployed and verified on Agentverse. You can call them from OmegaClaw once the skills are wired.
Tavily Search Agent
| Name | Tavily Search Agent |
| Address | agent1qt5uffgp0l3h9mqed8zh8vy5vs374jl2f8y0mjjvqm44axqseejqzmzx9v8 |
| Rating | 4.5 |
| Interactions | 50.9K+ |
| Profile | View on Agentverse |
| Source | GitHub |
Uses the Tavily Search API for efficient, quick, and persistent web search results.
Request / Response models:
class WebSearchRequest(Model):
query: str
class WebSearchResult(Model):
title: str
url: str
content: str
class WebSearchResponse(Model):
query: str
results: List[WebSearchResult]
Example output:
WebSearchResponse(
query = "What is a Fetch.ai agent?"
results = [
WebSearchResult(
title="Fetch.AI - Wikipedia",
url="https://en.wikipedia.org/wiki/Fetch.AI",
content="Fetch.AI is an open-source decentralized machine-learning platform..."
),
...
]
)
Full agent source — agent.py
import os
from enum import Enum
from typing import List
import requests
from uagents import Agent, Context, Model
from uagents.experimental.chat_agent import ChatAgent
from uagents.experimental.quota import QuotaProtocol, RateLimit
from uagents_core.models import ErrorMessage
AGENT_SEED = os.getenv("AGENT_SEED", " ")
AGENT_NAME = os.getenv("AGENT_NAME", "Tavily Search Agent")
TAVILY_API_KEY = os.getenv("TAVILY_API_KEY", " ")
if TAVILY_API_KEY == " ":
raise ValueError("Please provide your Tavily API key.")
PORT = 8000
agent = ChatAgent(
name=AGENT_NAME,
seed=AGENT_SEED,
port=PORT,
endpoint=f"http://localhost:{PORT}/submit",
)
class WebSearchRequest(Model):
query: str
class WebSearchResult(Model):
title: str
url: str
content: str
class WebSearchResponse(Model):
query: str
results: List[WebSearchResult]
proto = QuotaProtocol(
storage_reference=agent.storage,
name="Web-Search",
version="0.1.0",
default_rate_limit=RateLimit(window_size_minutes=60, max_requests=6),
)
def tavily_search(query: str) -> dict | list:
endpoint = "https://api.tavily.com/search"
headers = {"Content-Type": "application/json"}
payload = {
"api_key": TAVILY_API_KEY,
"query": query,
"search_depth": "basic",
"include_images": False,
"include_answer": False,
"include_raw_content": False,
"max_results": 5,
"include_domains": None,
"exclude_domains": None,
}
try:
response = requests.post(endpoint, json=payload, headers=headers, timeout=10)
except requests.exceptions.Timeout:
return {"error": "The request timed out. Please try again."}
except requests.exceptions.RequestException as e:
return {"error": f"An error occurred: {e}"}
data = response.json()
if "results" in data:
return data["results"]
return {"error": "No results found."}
@proto.on_message(WebSearchRequest, replies={WebSearchResponse, ErrorMessage})
async def handle_request(ctx: Context, sender: str, msg: WebSearchRequest):
try:
search_results = tavily_search(msg.query)
except Exception as err:
ctx.logger.error(err)
await ctx.send(
sender,
ErrorMessage(
error="An error occurred while processing the request."
),
)
return
if "error" in search_results:
await ctx.send(sender, ErrorMessage(error=search_results["error"]))
return
await ctx.send(
sender,
WebSearchResponse(
query=msg.query,
results=[WebSearchResult(**r) for r in search_results],
),
)
agent.include(proto, publish_manifest=True)
if __name__ == "__main__":
agent.run()
Technical Analysis Agent
| Name | Technical Analysis Agent |
| Address | agent1q085746wlr3u2uh4fmwqplude8e0w6fhrmqgsnlp49weawef3ahlutypvu6 |
| Rating | 4.5 |
| Interactions | 69.6K+ |
| Profile | View on Agentverse |
| Source | GitHub |
Uses the Alpha Vantage Finance API to compute SMA, EMA, WMA, DEMA, TEMA, TRIMA, KAMA, MAMA, VWAP, T3, MACD, RSI, WILLR, and ADX indicators for any stock ticker.
Request / Response models:
class TechAnalysisRequest(Model):
ticker: str
class IndicatorSignal(Model):
indicator: str
latest_value: float
previous_value: float
signal: str
class TechAnalysisResponse(Model):
symbol: str
analysis: List[IndicatorSignal]
Example output:
TechAnalysisResponse(
symbol = "AMZN"
analysis = [
IndicatorSignal(indicator="SMA", latest_value=177.465, previous_value=177.02, signal="BUY"),
IndicatorSignal(indicator="EMA", latest_value=178.301, previous_value=177.439, signal="BUY"),
IndicatorSignal(indicator="WMA", latest_value=178.387, previous_value=177.485, signal="BUY"),
...
]
)
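A client consuming this response might reduce the per-indicator signals to a single overall bias. The sketch below is illustrative only — it uses a plain dataclass as a stand-in for the uAgents IndicatorSignal model, and the majority-vote rule is an assumption, not part of the agent:

```python
from dataclasses import dataclass
from collections import Counter

# Stand-in for the uAgents IndicatorSignal model (illustrative only).
@dataclass
class IndicatorSignal:
    indicator: str
    latest_value: float
    previous_value: float
    signal: str

def overall_bias(analysis: list[IndicatorSignal]) -> str:
    """Majority vote over per-indicator signals: BUY, SELL, or HOLD."""
    counts = Counter(sig.signal for sig in analysis)
    if counts["BUY"] > counts["SELL"]:
        return "BUY"
    if counts["SELL"] > counts["BUY"]:
        return "SELL"
    return "HOLD"

analysis = [
    IndicatorSignal("SMA", 177.465, 177.02, "BUY"),
    IndicatorSignal("EMA", 178.301, 177.439, "BUY"),
    IndicatorSignal("RSI", 54.12, 56.78, "SELL"),
]
print(overall_bias(analysis))  # BUY
```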
Full agent source — agent.py
import os
from enum import Enum
from typing import List
from functions import IndicatorSignal, analyze_stock
from uagents import Agent, Context, Model
from uagents.experimental.chat_agent import ChatAgent
from uagents.experimental.quota import QuotaProtocol, RateLimit
from uagents_core.models import ErrorMessage
AGENT_SEED = os.getenv("AGENT_SEED", "tech-analysis-agent")
AGENT_NAME = os.getenv("AGENT_NAME", "Technical Analysis Agent")
class TechAnalysisRequest(Model):
ticker: str
class TechAnalysisResponse(Model):
symbol: str
analysis: List[IndicatorSignal]
PORT = 8000
agent = ChatAgent(
name=AGENT_NAME,
seed=AGENT_SEED,
port=PORT,
endpoint=f"http://localhost:{PORT}/submit",
)
proto = QuotaProtocol(
storage_reference=agent.storage,
name="Technical-Analysis",
version="0.1.0",
default_rate_limit=RateLimit(window_size_minutes=60, max_requests=6),
)
@proto.on_message(TechAnalysisRequest, replies={TechAnalysisResponse, ErrorMessage})
async def handle_request(ctx: Context, sender: str, msg: TechAnalysisRequest):
ctx.logger.info(f"Received technical analysis request for ticker: {msg.ticker}")
try:
output = analyze_stock(msg.ticker)
except Exception as err:
ctx.logger.error(err)
await ctx.send(
sender,
ErrorMessage(
error="An error occurred while processing the request."
),
)
return
if not output:
await ctx.send(
sender,
ErrorMessage(
error="No technical analysis data available for the requested ticker."
),
)
return
await ctx.send(
sender, TechAnalysisResponse(symbol=msg.ticker, analysis=output)
)
agent.include(proto, publish_manifest=True)
if __name__ == "__main__":
agent.run()
Full helper source — functions.py
import os
import requests
from uagents import Model
ALPHAVANTAGE_API_KEY = os.getenv("ALPHAVANTAGE_API_KEY")
if ALPHAVANTAGE_API_KEY is None:
raise ValueError("You need to provide an API key for Alpha Vantage.")
class IndicatorSignal(Model):
indicator: str
latest_value: float
previous_value: float
signal: str
def get_indicator(symbol, interval, time_period, series_type, function):
url = (
f"https://www.alphavantage.co/query?function={function}"
f"&symbol={symbol}&interval={interval}"
f"&time_period={time_period}&series_type={series_type}"
f"&apikey={ALPHAVANTAGE_API_KEY}"
)
try:
response = requests.get(url, timeout=10)
except requests.exceptions.Timeout:
raise
except requests.exceptions.RequestException as e:
raise ValueError("Request exception happened.") from e
return response.json()
def calculate_signal(latest_value, previous_value):
if latest_value > previous_value:
return "BUY"
if latest_value < previous_value:
return "SELL"
return "HOLD"
def analyze_stock(symbol) -> list[IndicatorSignal]:
interval = "daily"
time_period = 20
series_type = "close"
indicators = [
"SMA", "EMA", "WMA", "DEMA", "TEMA", "TRIMA",
"KAMA", "MAMA", "VWAP", "T3", "MACD", "RSI", "WILLR", "ADX",
]
results = []
for indicator in indicators:
try:
data = get_indicator(symbol, interval, time_period, series_type, indicator)
key = f"Technical Analysis: {indicator}"
if key in data:
latest_key = list(data[key].keys())[0]
previous_key = list(data[key].keys())[1]
latest_value = float(data[key][latest_key][indicator])
previous_value = float(data[key][previous_key][indicator])
signal = calculate_signal(latest_value, previous_value)
results.append(
IndicatorSignal(
indicator=indicator,
latest_value=latest_value,
previous_value=previous_value,
signal=signal,
)
)
except Exception as e:
print(f"Skipping indicator {indicator} due to error: {e}")
return results
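The parsing in analyze_stock relies on the first two timestamp keys of the "Technical Analysis: &lt;INDICATOR&gt;" object being the latest and previous data points. A minimal sketch of that extraction, using a mocked payload (the response shape here is inferred from the code above, not from Alpha Vantage documentation):

```python
# Illustrative sketch: how the first two entries of an Alpha Vantage
# "Technical Analysis: <INDICATOR>" object map to latest/previous values.
# The payload below is mocked; real responses come from alphavantage.co.
def calculate_signal(latest_value, previous_value):
    if latest_value > previous_value:
        return "BUY"
    if latest_value < previous_value:
        return "SELL"
    return "HOLD"

mock_data = {
    "Technical Analysis: SMA": {
        "2024-05-02": {"SMA": "177.4650"},  # most recent first
        "2024-05-01": {"SMA": "177.0200"},
    }
}

key = "Technical Analysis: SMA"
dates = list(mock_data[key].keys())
latest = float(mock_data[key][dates[0]]["SMA"])
previous = float(mock_data[key][dates[1]]["SMA"])
print(calculate_signal(latest, previous))  # BUY
```

Note that this depends on the JSON object preserving its key order, which Python's json parser and dicts (3.7+) both guarantee.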
🔌 Using skills
OmegaClaw comes with built-in skills that it can use to perform actions and generate more accurate, relevant responses. This includes integration with Agentverse agents (such as Tavily Search and Technical Analysis), which help OmegaClaw handle specialized tasks.
Once the agent is running, all built-in skills are available by default. OmegaClaw decides when to use them based on context. You can also explicitly instruct the agent:
Summarize the latest news about the ASI Alliance using Tavily Search
Monitor which skills the agent is using, and its reasoning process, by checking the Docker logs:
docker logs -f omegaclaw
🛠 Implementing OmegaClaw skills with Agentverse agents
Adding a new Agentverse agent as an OmegaClaw skill is a three-step process. We'll walk through both agents.
Step 1 — Implement the Agentverse Python module
Using the uAgents framework, create a Python module that calls the target Agentverse agent. The module exposes a simple function that sends a request and returns the response.
Tavily Search — agentverse/tavily_search.py
import asyncio
from uagents import Context, Model
from typing import List
TAVILY_SEARCH_AGENT_ADDRESS = (
"agent1qt5uffgp0l3h9mqed8zh8vy5vs374jl2f8y0mjjvqm44axqseejqzmzx9v8"
)
class WebSearchRequest(Model):
query: str
class WebSearchResult(Model):
title: str
url: str
content: str
class WebSearchResponse(Model):
query: str
results: List[WebSearchResult]
def _format_tavily_results(response: WebSearchResponse) -> str:
lines = [f"Query: {response.query}\n"]
for i, r in enumerate(response.results, 1):
lines.append(f"{i}. {r.title}")
lines.append(f" URL: {r.url}")
lines.append(f" {r.content[:200]}")
lines.append("")
return "\n".join(lines)
def tavily_search(search_query: str, timeout: int = 60) -> str:
try:
request = WebSearchRequest(query=search_query)
response = asyncio.run(
_ask_agent(TAVILY_SEARCH_AGENT_ADDRESS, request, int(timeout))
)
return _format_tavily_results(response)
except Exception as e:
return f"error: {e}"
Technical Analysis — agentverse/tech_analysis.py
import asyncio
from uagents import Context, Model
from typing import List
TECH_ANALYSIS_AGENT_ADDRESS = (
"agent1q085746wlr3u2uh4fmwqplude8e0w6fhrmqgsnlp49weawef3ahlutypvu6"
)
class TechAnalysisRequest(Model):
ticker: str
class IndicatorSignal(Model):
indicator: str
latest_value: float
previous_value: float
signal: str
class TechAnalysisResponse(Model):
symbol: str
analysis: List[IndicatorSignal]
def _format_analysis(response: TechAnalysisResponse) -> str:
lines = [f"Technical Analysis for {response.symbol}\n"]
for sig in response.analysis:
lines.append(
f" {sig.indicator}: {sig.signal} "
f"(latest={sig.latest_value:.2f}, prev={sig.previous_value:.2f})"
)
return "\n".join(lines)
def tech_analysis(ticker: str, timeout: int = 60) -> str:
try:
request = TechAnalysisRequest(ticker=ticker)
response = asyncio.run(
_ask_agent(TECH_ANALYSIS_AGENT_ADDRESS, request, int(timeout))
)
return _format_analysis(response)
except Exception as e:
return f"error: {e}"
The _ask_agent helper is the same pattern OmegaClaw uses for all Agentverse calls — it spins up a temporary uAgent, registers a mailbox, sends the message, and waits for a reply. See the OmegaClaw Core repo for the full implementation.
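The general shape of such a one-shot request/reply helper can be sketched with plain asyncio: create a future, dispatch the request, and await the reply with a timeout. This is not the OmegaClaw implementation — in the real helper, the dispatch step is a temporary uAgent with an Agentverse mailbox:

```python
import asyncio

# Sketch of a one-shot request/reply helper with a timeout. "dispatch"
# stands in for sending via a temporary uAgent's Agentverse mailbox.
async def ask(dispatch, request, timeout: int = 60):
    loop = asyncio.get_running_loop()
    reply: asyncio.Future = loop.create_future()

    # The transport calls this callback when a response arrives.
    def on_reply(message):
        if not reply.done():
            reply.set_result(message)

    dispatch(request, on_reply)
    # Raises asyncio.TimeoutError if no reply lands in time.
    return await asyncio.wait_for(reply, timeout=timeout)

# Simulated transport: echoes the request back after a short delay.
def fake_dispatch(request, on_reply):
    loop = asyncio.get_running_loop()
    loop.call_later(0.01, on_reply, f"reply to {request}")

result = asyncio.run(ask(fake_dispatch, "ping", timeout=5))
print(result)  # reply to ping
```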
Step 2 — Implement MeTTa call functions
In src/skills.metta, define functions that bridge MeTTa to Python:
;; Tavily web search via Agentverse
(= (tavily-search $query)
(py-call (agentverse.tavily_search $query)))
;; Technical analysis via Agentverse
(= (tech-analysis $ticker)
(py-call (agentverse.tech_analysis $ticker)))
Step 3 — Register the skills in OmegaClaw
In the same src/skills.metta file, add entries to the getSkills function so OmegaClaw knows about them:
(= (getSkills)
(;INTERNAL:
...
;SHELL AND FILE I/O:
...
;COMMUNICATION CHANNELS:
...
;AGENTVERSE AGENTS:
"- Search the web using the Tavily Search Agent: (tavily-search string_in_quotes)"
"- Get technical stock analysis using the Technical Analysis Agent: (tech-analysis string_in_quotes)"
...
OmegaClaw will now include these skills in its prompt context and invoke them when appropriate.
💬 Using the skills in IRC
Once skills are registered, interact with OmegaClaw in your IRC channel:
You → Search the web for recent news about ASI Alliance using Tavily Search
OmegaClaw →
1. ASI Alliance Integrates Fetch.ai Agents with SingularityNET
   URL: https://...
   The ASI Alliance announced a major integration...
2. Decentralized AI Agents: The Future of Autonomous Systems
   URL: https://...
   ...
You → Give me technical analysis for AMZN
OmegaClaw →
Technical Analysis for AMZN
SMA: BUY (latest=177.47, prev=177.02)
EMA: BUY (latest=178.30, prev=177.44)
WMA: BUY (latest=178.39, prev=177.49)
RSI: SELL (latest=54.12, prev=56.78)
MACD: BUY (latest=2.34, prev=1.89)
...
You can also be explicit about skill usage:
Use the Tavily skill to search for "quantum computing breakthroughs 2025"
Or ask OmegaClaw to combine skills:
Search for TSLA news with Tavily, then get me the technical analysis for TSLA
🏗 Build and deploy your own agent
The Tavily Search and Technical Analysis agents above are real examples you can study. To build your own agent and connect it to OmegaClaw, follow the same pattern:
1. Clone the reference examples
git clone https://github.com/fetchai/uAgent-Examples.git
cd uAgent-Examples/6-deployed-agents
Both reference agents live in this repository.
2. Create your agent
Use the same structure — define Model classes for request/response, create a QuotaProtocol, and handle messages:
from uagents import Context, Model
from uagents.experimental.chat_agent import ChatAgent
from uagents.experimental.quota import QuotaProtocol, RateLimit
from uagents_core.models import ErrorMessage
class MyRequest(Model):
query: str
class MyResponse(Model):
result: str
agent = ChatAgent(name="My Agent", seed="my-unique-seed", port=8000,
endpoint="http://localhost:8000/submit")
proto = QuotaProtocol(storage_reference=agent.storage, name="My-Skill", version="0.1.0",
default_rate_limit=RateLimit(window_size_minutes=60, max_requests=6))
@proto.on_message(MyRequest, replies={MyResponse, ErrorMessage})
async def handle(ctx: Context, sender: str, msg: MyRequest):
result = do_something(msg.query) # your logic here
await ctx.send(sender, MyResponse(result=result))
agent.include(proto, publish_manifest=True)
if __name__ == "__main__":
agent.run()
3. Run and deploy
pip install uagents requests
python agent.py
The agent registers on the Fetch.ai testnet. Deploy it to Agentverse or host on Render. Once deployed, note your agent's agent1q… address.
4. Wire it into OmegaClaw
Follow the 3-step skill implementation above — create a Python module that calls your agent, add a MeTTa bridge function, and register it in getSkills. Use your deployed agent's address.
🔧 Useful Docker commands
docker logs -f omegaclaw # live logs
docker logs -f omegaclaw | grep -v '^(CHARS_SENT' # filtered logs (less noise)
docker ps # check running containers
docker stop omegaclaw # stop the agent
docker start omegaclaw # restart (memory persists)
docker rm -f omegaclaw && \
docker volume rm omegaclaw-memory # full reset to clean-install state
⚙️ Configuration Options
Runtime parameters are configurable. To change one, add it to the startup script (scripts/omegaclaw) after the IRC_channel="$IRC_channel" line (don't forget the line-continuation backslash \).
General
| Parameter | Default | Meaning |
|---|---|---|
| maxNewInputLoops | 50 | Turns the agent keeps running after a new human message before idling |
| maxWakeLoops | 1 | Extra turns granted on each scheduled wake-up |
| sleepInterval | 1 | Delay between loop iterations (seconds) |
| wakeupInterval | 600 | How long the agent idles before the next scheduled wake-up (seconds) |
| LLM | asi1-ultra | Model identifier passed to the provider |
| provider | ASIOne | LLM provider — ASIOne, Anthropic, OpenAI, or ASICloud |
| maxOutputToken | 6000 | Output cap passed to the provider |
| reasoningMode | medium | Reasoning-effort hint passed to the provider |
Memory (src/memory.metta)
| Parameter | Default | Meaning |
|---|---|---|
| maxFeedback | 50000 | Ceiling on LAST_SKILL_USE_RESULTS text fed back into the prompt (chars) |
| maxRecallItems | 20 | Items returned by a memory query |
| maxEpisodeRecallLines | 20 | Lines returned by episode recall |
| maxHistory | 30000 | Tail of memory/history.metta included in the prompt (chars) |
| embeddingprovider | Local | Local (Python-side model) or OpenAI |
Channels (src/channels.metta)
| Parameter | Default | Meaning |
|---|---|---|
| commchannel | irc | Active channel — irc or telegram |
| IRC_channel | ##omegaclaw | IRC channel to join |
| IRC_server | irc.quakenet.org | IRC server hostname |
| IRC_port | 6667 | IRC port |
| IRC_user | omegaclaw | IRC nickname |
| TG_BOT_TOKEN | (empty) | Telegram bot token from @BotFather |
| TG_POLL_TIMEOUT | 20 | Telegram bot polling interval (seconds) |
Parameter design
Every tunable in OmegaClaw is declared as (= (name) (empty)) and later bound by a configure call inside an init* function. The configure helper in src/utils.metta:
(= (configure $name $default)
(let $value (argk $name $default)
(add-atom &self (= ($name) $value))))
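The same pattern — each tunable has a default that startup arguments can override — can be illustrated with a rough Python analogue (illustrative only; the real mechanism is the MeTTa configure helper above):

```python
# Rough Python analogue of the configure pattern: every tunable has a
# default, and values supplied at startup override it (illustrative only).
def make_config(overrides: dict) -> dict:
    defaults = {
        "maxNewInputLoops": 50,
        "wakeupInterval": 600,
    }
    # argk-style lookup: use the override if present, else the default.
    return {name: overrides.get(name, default)
            for name, default in defaults.items()}

config = make_config({"maxNewInputLoops": 100})
print(config["maxNewInputLoops"])  # 100
print(config["wakeupInterval"])    # 600
```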
🧪 Troubleshooting
OmegaClaw doesn't respond in IRC. → Some models struggle with IRC format. Try telling the agent: "send in IRC format — text only, short chunks, low bandwidth, and always use the send command."
Skill call times out. → Agentverse round-trips take 30–60 s. If the target agent is offline, the call will fail. Verify the agent status on Agentverse.
"error: ..." returned by skill.
→ Check Docker logs: docker logs -f omegaclaw. Common causes: missing API key, agent address typo, or network timeout.
Memory not persisting across restarts.
→ OmegaClaw stores embeddings in a Docker volume. Make sure you're using docker start omegaclaw (not re-running the install script) to preserve state.
🔗 Links
- OmegaClaw Quick Start Guide (original): Google Doc
- OmegaClaw Core: github.com/asi-alliance/OmegaClaw-Core
- Tavily Search Agent (source): GitHub
- Tavily Search Agent (Agentverse): Profile
- Technical Analysis Agent (source): GitHub
- Technical Analysis Agent (Agentverse): Profile
- ASI:One: ASI:One overview
- Agentverse: Agentverse overview
- Agentverse API key: How to get one
- Chat protocol: Agent Chat Protocol
- uAgents framework: Agent creation