OpenAI Compatibility
Use ASI:One's API with OpenAI's client libraries for seamless integration.
Overview
ASI:One's API is fully compatible with OpenAI's Chat Completions API format. This means you can use existing OpenAI client libraries and simply change the base URL to start using ASI:One's agentic models with Agentverse marketplace integration.
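Because the API is wire-compatible with OpenAI's Chat Completions format, even a raw HTTP request works. A minimal sketch using only the Python standard library, assuming the endpoint path `/v1/chat/completions` mirrors OpenAI's (the SDK-based examples below are the recommended approach):

```python
# Minimal sketch of the OpenAI-compatible wire format using only the
# standard library. Replace YOUR_ASI_ONE_API_KEY with a real key.
import json
import urllib.request

payload = {
    "model": "asi1-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    "https://api.asi1.ai/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_ASI_ONE_API_KEY",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send the request (requires a valid key).
```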
API Compatibility
Standard OpenAI Parameters
These parameters work exactly as they do in OpenAI's API:
- model - Model name (use ASI:One model names)
- messages - Chat messages array
- temperature - Sampling temperature (0-2)
- max_tokens - Maximum tokens in the response
- top_p - Nucleus sampling parameter
- frequency_penalty - Frequency penalty (-2.0 to 2.0)
- presence_penalty - Presence penalty (-2.0 to 2.0)
- stream - Enable streaming responses
ASI:One-Specific Parameters
These ASI:One-specific parameters are also supported:
- web_search - Enable web search capabilities
- x-session-id - Session ID for agentic model persistence (passed as a header)
- Tool calling parameters for Agentverse marketplace agent integration
Examples with OpenAI SDK
- Python
- JavaScript
Install the OpenAI library: pip install openai
# Complete Request & Response
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ASI_ONE_API_KEY",
    base_url="https://api.asi1.ai/v1"
)

response = client.chat.completions.create(
    model="asi1-mini",
    messages=[
        {"role": "system", "content": "Be precise and concise."},
        {"role": "user", "content": "What is agentic AI and how does it work?"}
    ],
    temperature=0.2,
    top_p=0.9,
    max_tokens=1000,
    presence_penalty=0,
    frequency_penalty=0,
    stream=False,
    extra_body={
        "web_search": False
    }
)

print(response.choices[0].message.content)
print(f"Usage: {response.usage}")
# Agentic Model with Session - Working Example
import uuid
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ASI_ONE_API_KEY",
    base_url="https://api.asi1.ai/v1"
)

# Generate a session ID for agentic models
session_id = str(uuid.uuid4())
print(f"🆔 Session ID: {session_id}")

print("🔄 Making request to asi1-agentic...")
response = client.chat.completions.create(
    model="asi1-agentic",
    messages=[
        {"role": "user", "content": "Check the latest flight arrival status at Delhi airport."}
    ],
    extra_headers={
        "x-session-id": session_id
    },
    temperature=0.7,
    stream=True
)

print("📡 Response received, streaming content:\n")

# Handle the streaming response safely: some chunks may have no choices
# or an empty delta, so guard before printing
for chunk in response:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

print("\n\n🏁 Stream completed!")
# Web Search Integration
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ASI_ONE_API_KEY",
    base_url="https://api.asi1.ai/v1"
)

response = client.chat.completions.create(
    model="asi1-extended",
    messages=[
        {"role": "user", "content": "Latest developments in AI research"}
    ],
    extra_body={
        "web_search": True
    }
)

print(response.choices[0].message.content)
Install the OpenAI library: npm install openai
// Complete Request & Response
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: "YOUR_ASI_ONE_API_KEY",
  baseURL: "https://api.asi1.ai/v1"
});

const response = await client.chat.completions.create({
  model: "asi1-mini",
  messages: [
    { role: "system", content: "Be precise and concise." },
    { role: "user", content: "What is agentic AI and how does it work?" }
  ],
  temperature: 0.2,
  top_p: 0.9,
  max_tokens: 1000,
  presence_penalty: 0,
  frequency_penalty: 0,
  stream: false,
  web_search: false
});

console.log(response.choices[0].message.content);
console.log(`Usage: ${JSON.stringify(response.usage)}`);
// Agentic Model with Session
import OpenAI from 'openai';
import { v4 as uuidv4 } from 'uuid';

const client = new OpenAI({
  apiKey: "YOUR_ASI_ONE_API_KEY",
  baseURL: "https://api.asi1.ai/v1"
});

// Generate a session ID for agentic models
const sessionId = uuidv4();

const response = await client.chat.completions.create({
  model: "asi1-agentic",
  messages: [
    { role: "user", content: "Help me book a restaurant for dinner tonight" }
  ],
  temperature: 0.7,
  stream: true
}, {
  headers: {
    "x-session-id": sessionId
  }
});

// Handle the streaming response
for await (const chunk of response) {
  if (chunk.choices[0]?.delta?.content) {
    process.stdout.write(chunk.choices[0].delta.content);
  }
}
// Web Search Integration
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: "YOUR_ASI_ONE_API_KEY",
  baseURL: "https://api.asi1.ai/v1"
});

const response = await client.chat.completions.create({
  model: "asi1-extended",
  messages: [
    { role: "user", content: "Latest developments in AI research" }
  ],
  web_search: true
});

console.log(response.choices[0].message.content);
Understanding the Response Structure
After making a request, your response object includes both standard OpenAI fields and ASI:One-specific fields:
- choices[0].message.content: The main model response
- model: The model used
- usage: Token usage details
- executable_data: (ASI:One) Agent manifests and tool calls from the Agentverse marketplace
- intermediate_steps: (ASI:One) Multi-step reasoning traces
- thought: (ASI:One) Model reasoning process
- Python
- JavaScript
# Accessing response fields
print(response.choices[0].message.content)  # Main answer
print(response.model)                       # Model name
print(response.usage)                       # Token usage

# ASI:One-specific fields
if hasattr(response, 'executable_data'):
    print(response.executable_data)      # Agent calls
if hasattr(response, 'intermediate_steps'):
    print(response.intermediate_steps)   # Reasoning steps
if hasattr(response, 'thought'):
    print(response.thought)              # Model thinking
// Accessing response fields
console.log(response.choices[0].message.content); // Main answer
console.log(response.model);                      // Model name
console.log(response.usage);                      // Token usage

// ASI:One-specific fields
if (response.executable_data) {
  console.log(response.executable_data); // Agent calls
}
if (response.intermediate_steps) {
  console.log(response.intermediate_steps); // Reasoning steps
}
if (response.thought) {
  console.log(response.thought); // Model thinking
}
Model Selection for OpenAI SDK
Choose the right ASI:One model based on your use case:
| Model | Best For | OpenAI SDK Usage |
|---|---|---|
| asi1-mini | Fast responses, general chat | Standard OpenAI parameters |
| asi1-fast | Ultra-low latency | Standard OpenAI parameters |
| asi1-extended | Complex reasoning | Standard OpenAI parameters |
| asi1-agentic | Agent orchestration | Requires x-session-id header |
| asi1-fast-agentic | Real-time agents | Requires x-session-id header |
| asi1-extended-agentic | Complex workflows | Requires x-session-id header |
| asi1-graph | Data visualization | Standard OpenAI parameters |
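The table above can be encoded in a small helper so request code doesn't hard-code which models need the session header. A hedged sketch: the model names come from this page, but the helper functions (`needs_session_header`, `build_headers`) are illustrative, not part of any SDK.

```python
# Illustrative helper encoding the model table above. The set of agentic
# model names is taken from this page; the functions are hypothetical.
AGENTIC_MODELS = {"asi1-agentic", "asi1-fast-agentic", "asi1-extended-agentic"}

def needs_session_header(model: str) -> bool:
    """Agentic models require an x-session-id header; others do not."""
    return model in AGENTIC_MODELS

def build_headers(model: str, session_id: str) -> dict:
    """Return the extra headers to pass for a request to the given model."""
    return {"x-session-id": session_id} if needs_session_header(model) else {}
```

The returned dict can be passed straight to the SDK's `extra_headers` argument (Python) or the per-request `headers` option (JavaScript), as shown in the examples above.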
Next Steps
Ready to get started with ASI:One's OpenAI-compatible API? Here's what to do next:
- Get your API key - Sign up and create your ASI:One API key
- Try the quickstart - Make your first API call in minutes
- Explore agentic models - Discover the power of Agentverse marketplace integration
- Learn about tool calling - Extend your applications with custom tools
Need help? Check out our Model Selection guide to choose the right model for your use case.