| title | description | og:title | og:description | og:image | og:url |
|---|---|---|---|---|---|
| Axon Code 1 | Super-intelligent model for coding with deep reasoning | Axon Code Model By MatterAI \| MatterAI Documentation | Axon Code Model By MatterAI is a super-intelligent model for coding with deep reasoning | | |
| Specification | Value |
|---|---|
| ModelID | axon-code |
| Description | Code-generation LLM with deep reasoning |
| Region | US |
| Context Window Size | 256K tokens |
| Max Output Tokens | 32,768 |
| Input Price (<256K) | $1.0/1M tokens |
| Output Price (<256K) | $4.0/1M tokens |
| Input Modalities | Text, Image |
| Output Modalities | Text |
| Capabilities | Function Calling, Tool Calling, Reasoning |
| Parameters | 480B |
| Floating Point Precision | FP16 |
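For rough planning against the limits and prices in the table above, a minimal sketch in Python. The helper names are illustrative, "256K" is treated as 256,000 tokens for simplicity, and the cost estimate assumes the sub-256K rates apply to the whole request:

```python
# Limits and prices taken from the specification table above.
CONTEXT_WINDOW = 256_000   # treating "256K tokens" as 256,000 for illustration
MAX_OUTPUT = 32_768        # Max Output Tokens
INPUT_PRICE_PER_M = 1.0    # $1.0 per 1M input tokens (<256K tier)
OUTPUT_PRICE_PER_M = 4.0   # $4.0 per 1M output tokens (<256K tier)

def fits(prompt_tokens: int, max_tokens: int) -> bool:
    """Check a request against the output cap and the context window."""
    return max_tokens <= MAX_OUTPUT and prompt_tokens + max_tokens <= CONTEXT_WINDOW

def estimate_cost(prompt_tokens: int, output_tokens: int) -> float:
    """Rough dollar cost, assuming the sub-256K rates for the whole request."""
    return (prompt_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

print(fits(10_000, 32_768))               # True
print(fits(240_000, 32_768))              # False: exceeds the context window
print(estimate_cost(1_000_000, 100_000))  # 1.4
```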
- Production Code: Enterprise-grade, deployable code
- Security: OWASP compliance, vulnerability detection
- Organizational Learning: Adapts to codebase patterns
- Data Privacy: Client data isolation
- Platform Integration: Jira, GitHub, GitLab connectivity
- Deep Reasoner: reasoning engine with search and web-fetch tool calling
- State Machine: Manages complex workflows and transitions
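Since the model lists Function Calling and Tool Calling among its capabilities, and the endpoint below follows the OpenAI-compatible chat completions schema, a tools payload can presumably be declared in the usual OpenAI shape. A minimal sketch — the `get_weather` tool and its schema are illustrative, not part of the API:

```python
# Sketch: declaring a tool for Axon Code's function/tool calling.
# The get_weather function and its schema are hypothetical examples.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

# Request body as it would be sent to POST /v1/chat/completions.
payload = {
    "model": "axon-code",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": tools,
    "tool_choice": "auto",
}
print(payload["tools"][0]["function"]["name"])  # get_weather
```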
```bash
curl --request POST \
  --url https://api.matterai.so/v1/chat/completions \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer MATTER_API_KEY' \
  --data '{
    "model": "axon-code",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Explain Haskell paradigms"
      }
    ],
    "stream": false,
    "max_tokens": 1000,
    "reasoning": {
      "effort": "high",
      "summary": "none"
    },
    "response_format": {
      "type": "text"
    },
    "temperature": 0,
    "top_p": 1
  }'
```

```javascript
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: "MATTER_API_KEY",
  baseURL: "https://api.matterai.so/v1",
});

async function main() {
  const response = await openai.chat.completions.create({
    model: "axon-code",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "What is Rust?" },
    ],
    stream: false,
    max_tokens: 1000,
    reasoning: { effort: "high", summary: "none" },
    response_format: { type: "text" },
    temperature: 0,
    top_p: 1,
  });

  console.log(response.choices[0].message.content);
}

main();
```

```python
from openai import OpenAI

client = OpenAI(
    api_key='MATTER_API_KEY',
    base_url='https://api.matterai.so/v1'
)

response = client.chat.completions.create(
    model='axon-code',
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'What is Rust?'}
    ],
    stream=False,
    max_tokens=1000,
    reasoning={'effort': 'high', 'summary': 'none'},
    response_format={'type': 'text'},
    temperature=0,
    top_p=1
)

print(response.choices[0].message.content)
```
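The responses above follow the OpenAI-compatible chat completions schema, so the assistant's reply sits at `choices[0].message.content`. A minimal stdlib-only sketch of pulling the reply out of a raw JSON body — the sample response and all of its field values are made up for illustration:

```python
import json

# Illustrative response body in the OpenAI-compatible shape used above;
# the id, content, and usage numbers are invented for this sketch.
raw = json.dumps({
    "id": "chatcmpl-123",
    "model": "axon-code",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant",
                        "content": "Rust is a systems programming language."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 20, "completion_tokens": 8, "total_tokens": 28},
})

body = json.loads(raw)
reply = body["choices"][0]["message"]["content"]
print(reply)  # Rust is a systems programming language.
```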
