# Agent API
The Agent API provides AI-powered chat with graph-based reasoning and autonomous research capabilities.
## Chat with Reasoning
Send messages to the AI agent with different reasoning engines.
### Endpoint

`POST /agent/interactions`

### Request

```bash
curl -X POST https://exograph.ai/agent/interactions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "snap",
    "reasoning_engine": "graph",
    "target_graph_id": "graph_123",
    "messages": [
      {
        "role": "user",
        "content": "What are the key relationships in this document?"
      }
    ]
  }'
```

### Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | No | "snap" or "ponder". Default: "snap" |
| reasoning_engine | string | No | "rag", "graph", or "web". Default: "rag" |
| target_graph_id | string | No | Specific graph to query |
| target_doc_id | string | No | Specific document to query |
| messages | array | Yes | Conversation history |
### Reasoning Engines
| Engine | Description | Best For |
|---|---|---|
| rag | Vector similarity search | Fast, general queries |
| graph | Knowledge graph reasoning | Structured, relationship-based queries |
| web | Autonomous web research | Comprehensive, current information |
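The same question can be routed through different engines by changing `reasoning_engine`, and scoped with `target_graph_id` or `target_doc_id`. Below is a minimal sketch of both options, assuming the `requests` library; the `ask` helper and the `doc_123` ID are illustrative, not part of the API:

```python
import requests

API_KEY = "YOUR_API_KEY"

def ask(question, engine, **scope):
    # scope may carry target_graph_id or target_doc_id to restrict the query
    response = requests.post(
        "https://exograph.ai/agent/interactions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "snap",
            "reasoning_engine": engine,
            "messages": [{"role": "user", "content": question}],
            **scope,
        },
    )
    return response.json()["messages"][-1]["content"]

# Fast vector lookup scoped to one document vs. relationship-aware graph reasoning
print(ask("Summarize the main argument", "rag", target_doc_id="doc_123"))
print(ask("How are the key entities connected?", "graph", target_graph_id="graph_123"))
```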
### Response

```json
{
  "messages": [
    {
      "role": "user",
      "content": "What are the key relationships?"
    },
    {
      "role": "assistant",
      "content": "Based on the knowledge graph, I found..."
    }
  ],
  "usage": {
    "tokens_consumed": 3,
    "balance_remaining": 97,
    "operation": "chat_query",
    "model": "snap"
  }
}
```

### Token Costs
| Model | Base Cost | Additional |
|---|---|---|
| Snap | 3 tokens | +0.5 per node accessed (beyond 5) |
| Ponder | 10 tokens | +0.5 per node accessed (beyond 5) |
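For example, a Snap query that accesses 12 graph nodes costs 3 + 0.5 × (12 − 5) = 6.5 tokens, while the same query with Ponder costs 10 + 3.5 = 13.5 tokens.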
## Autonomous Research
Start an autonomous research job that creates a comprehensive report.
### Endpoint

`POST /agent/research`

### Request

```bash
curl -X POST https://exograph.ai/agent/research \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "topic": "Quantum error correction methods",
    "model": "ponder",
    "max_results": 8
  }'
```

### Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| topic | string | Yes | Research topic |
| model | string | No | Model to use: "snap" (fast) or "ponder" (deep thinking). Default: "ponder" |
| max_results | integer | No | Number of web sources. Default: 8 |
### Response

```json
{
  "doc_id": "doc_research_abc",
  "graph_id": "graph_research_xyz",
  "sources": [
    {
      "title": "Quantum Error Correction Basics",
      "url": "https://example.com/article"
    }
  ],
  "usage": {
    "tokens_consumed": 100,
    "balance_remaining": 0,
    "operation": "research",
    "model": "ponder"
  }
}
```

### Token Costs
| Model | Cost |
|---|---|
| Snap | 30 tokens |
| Ponder | 75-100 tokens (varies by complexity) |
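Because the response includes a `graph_id`, a finished research job can be queried afterwards through the chat endpoint with the `graph` reasoning engine. A sketch of that flow, assuming the `requests` library; the follow-up question and variable names are illustrative:

```python
import requests

API_KEY = "YOUR_API_KEY"
BASE = "https://exograph.ai"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# 1. Run the research job (consumes research tokens)
research = requests.post(
    f"{BASE}/agent/research",
    headers=HEADERS,
    json={"topic": "Quantum error correction methods", "model": "ponder"},
).json()

# 2. Ask follow-up questions against the graph the job produced
chat = requests.post(
    f"{BASE}/agent/interactions",
    headers=HEADERS,
    json={
        "model": "snap",
        "reasoning_engine": "graph",
        "target_graph_id": research["graph_id"],
        "messages": [{"role": "user", "content": "Which sources disagree, and why?"}],
    },
).json()

print(chat["messages"][-1]["content"])
```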
## List Models
Get available AI models.
### Endpoint

`GET /agent/models`

### Request

```bash
curl https://exograph.ai/agent/models \
  -H "Authorization: Bearer YOUR_API_KEY"
```

### Response

```json
{
  "models": [
    {
      "id": "snap",
      "name": "Snap",
      "description": "Fast responses for quick conversations"
    },
    {
      "id": "ponder",
      "name": "Ponder",
      "description": "Deep thinking for complex analysis"
    }
  ],
  "default": "snap"
}
```

Token Cost: Free
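Since this endpoint is free, a client can call it at startup to discover the available model IDs instead of hard-coding them. A minimal sketch, assuming the `requests` library:

```python
import requests

models = requests.get(
    "https://exograph.ai/agent/models",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
).json()

# Prefer the server-side default, falling back to the first listed model
model_id = models.get("default") or models["models"][0]["id"]
print(f"Using model: {model_id}")
```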
## Examples

### Multi-turn Conversation

```python
import requests

# Maintain conversation history across turns
messages = []

def chat(question):
    messages.append({"role": "user", "content": question})
    response = requests.post(
        "https://exograph.ai/agent/interactions",
        headers={
            "Authorization": "Bearer YOUR_API_KEY",
            "Content-Type": "application/json"
        },
        json={
            "model": "snap",
            "reasoning_engine": "graph",
            "messages": messages
        }
    )
    result = response.json()
    # Append only the messages we don't have yet (e.g. the assistant reply)
    messages.extend(result['messages'][len(messages):])
    return result['messages'][-1]['content']

# Have a conversation
print(chat("What topics are covered?"))
print(chat("Tell me more about the first topic"))
print(chat("How does it relate to the others?"))
```

### Research Pipeline

```python
import requests

API_KEY = "YOUR_API_KEY"

def research_pipeline(topics):
    results = []
    for topic in topics:
        # Start an autonomous research job for each topic
        research = requests.post(
            "https://exograph.ai/agent/research",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"topic": topic, "model": "ponder"}
        ).json()
        results.append(research)
    return results

# Use it
topics = [
    "Quantum error correction",
    "Topological quantum computing",
    "Fault-tolerant quantum gates"
]
results = research_pipeline(topics)
```
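### Error Handling

Agent calls can come back with the error payloads documented in the next section, for example when the token balance is too low. Below is a sketch of checking for them before reading a reply; the `safe_chat` helper and the raise-on-error behaviour are illustrative, not prescribed by the API:

```python
import requests

API_KEY = "YOUR_API_KEY"

def safe_chat(messages):
    response = requests.post(
        "https://exograph.ai/agent/interactions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "snap", "messages": messages},
    )
    body = response.json()
    if "error" in body:
        code = body["error"]["code"]
        message = body["error"]["message"]
        if code == "insufficient_tokens":
            raise RuntimeError(f"Out of tokens: {message}")
        raise RuntimeError(f"Request failed ({code}): {message}")
    return body["messages"][-1]["content"]

print(safe_chat([{"role": "user", "content": "What topics are covered?"}]))
```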
## Error Responses

### Insufficient Tokens

```json
{
  "error": {
    "code": "insufficient_tokens",
    "message": "Insufficient tokens. Need 10, have 3."
  }
}
```

### Invalid Request
```json
{
  "error": {
    "code": "invalid_request",
    "message": "Missing required parameter: messages"
  }
}
```

Next: Documents API →