A Python SDK for decentralized model management and inference services on the OpenGradient platform. The SDK enables programmatic access to our model repository and decentralized AI infrastructure.
- Model management and versioning
- Decentralized model inference
- Support for LLM inference with various models
- Trusted Execution Environment (TEE) inference with cryptographic attestation
- Drop-in replacement for OpenAI and Anthropic LLM APIs - add verifiable and secure inference to your existing AI application or agent with minimal code changes
- Consensus-based end-to-end verified AI execution
- Command-line interface (CLI) for direct access
Browse and discover AI models on our Model Hub. The Hub provides:
- Registry of models and LLMs
- Easy model discovery and deployment
- Direct integration with the SDK
```bash
pip install opengradient
```

Note: Windows users should temporarily enable WSL when installing opengradient (fix in progress).
OpenGradient currently runs two networks:
- Testnet: The main public testnet for general use
- Alpha Testnet: For alpha features like atomic AI execution from smart contracts or scheduled ML workflow execution (see Alpha Testnet Features)
For the latest network RPCs, contract addresses, and deployment information, see the Network Deployment Documentation.
You'll need:
- Private key: An Ethereum-compatible wallet private key for OpenGradient transactions
- Test tokens: Get free test tokens from the OpenGradient Faucet to use LLM inference on the testnet
- Model Hub account (optional): Only needed for uploading models. Create one at Hub Sign Up
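The private key is best kept out of source code. As a minimal sketch (the variable name `OG_PRIVATE_KEY` matches the Python snippets below, which read it from the environment):

```bash
# Keep the wallet key in the environment rather than in source code;
# the SDK snippets below read it via os.environ.get("OG_PRIVATE_KEY")
export OG_PRIVATE_KEY="0x..."  # replace with your wallet's private key
```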
The easiest way to set up your configuration is through our wizard:
```bash
opengradient config init
```

Alternatively, create a client directly in Python:

```python
import os

import opengradient as og

client = og.Client(
    private_key=os.environ.get("OG_PRIVATE_KEY"),
    email=None,  # Optional: only needed for model uploads
    password=None,
)
```

OpenGradient supports secure, verifiable inference through TEEs for leading LLM providers. Access models from OpenAI, Anthropic, Google, and xAI with cryptographic attestation verified by the OpenGradient network:
```python
completion = client.llm.chat(
    model=og.TEE_LLM.GPT_4O,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(f"Response: {completion.chat_output['content']}")
print(f"Tx hash: {completion.transaction_hash}")
```

Use OpenGradient as a drop-in LLM for LangChain agents - every decision and reasoning step is verified through the OpenGradient network:
```python
import os

from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

import opengradient as og

llm = og.agents.langchain_adapter(
    private_key=os.environ.get("OG_PRIVATE_KEY"),
    model_cid=og.TEE_LLM.GPT_4O,
)

@tool
def get_weather(city: str) -> str:
    """Returns the current weather for a city."""
    return f"Sunny, 72°F in {city}"

agent = create_react_agent(llm, [get_weather])
result = agent.invoke({"messages": [("user", "What's the weather in San Francisco?")]})
print(result["messages"][-1].content)
```

Available TEE Models:
The SDK includes models from multiple providers accessible via the og.TEE_LLM enum:
- OpenAI: GPT-4.1, GPT-4o, o4-mini
- Anthropic: Claude 3.7 Sonnet, Claude 3.5 Haiku, Claude 4.0 Sonnet
- Google: Gemini 2.5 Flash, Gemini 2.5 Pro, Gemini 2.0 Flash
- xAI: Grok 3 Beta, Grok 3 Mini Beta, Grok 4.1 Fast
For the complete list, check the og.TEE_LLM enum in your IDE or see the API documentation.
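Since `og.TEE_LLM` is an enum, its members can be enumerated programmatically. The stand-in class below is illustrative only (member names and values are assumptions, not the SDK's actual identifiers); with the real SDK you would iterate `og.TEE_LLM` directly:

```python
from enum import Enum

# Hypothetical stand-in mirroring the shape of og.TEE_LLM
# (member names and string values here are illustrative)
class TEE_LLM(Enum):
    GPT_4O = "gpt-4o"
    CLAUDE_3_5_HAIKU = "claude-3.5-haiku"

# Listing members works the same way on the real og.TEE_LLM enum
for model in TEE_LLM:
    print(model.name, "->", model.value)
```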
The Alpha Testnet provides access to experimental features, including custom ML model inference and workflow deployment and execution. Run inference on any model hosted on the Model Hub, or deploy on-chain AI pipelines that connect models with data sources and can be scheduled for automated execution.
Note: Alpha features require connecting to the Alpha Testnet. See Network Configuration for details.
Browse models on our Model Hub or upload your own:
```python
result = client.alpha.infer(
    model_cid="your-model-cid",
    model_input={"input": [1.0, 2.0, 3.0]},
    inference_mode=og.InferenceMode.VANILLA,
)
print(f"Output: {result.model_output}")
```

```python
import opengradient as og

client = og.init(
    private_key="your-private-key",
    email="your-email",
    password="your-password",
)

# Define input query for historical price data
input_query = og.HistoricalInputQuery(
    base="ETH",
    quote="USD",
    total_candles=10,
    candle_duration_in_mins=60,
    order=og.CandleOrder.DESCENDING,
    candle_types=[og.CandleType.CLOSE],
)

# Deploy a workflow (optionally with scheduling)
contract_address = client.alpha.new_workflow(
    model_cid="your-model-cid",
    input_query=input_query,
    input_tensor_name="input",
    scheduler_params=og.SchedulerParams(frequency=3600, duration_hours=24),  # Optional
)
print(f"Workflow deployed at: {contract_address}")
```

```python
# Manually trigger workflow execution
result = client.alpha.run_workflow(contract_address)
print(f"Inference output: {result}")

# Read the latest result
latest = client.alpha.read_workflow_result(contract_address)

# Get historical results
history = client.alpha.read_workflow_history(contract_address, num_results=5)
```

See code examples under examples.
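For scheduled workflows, new results land on-chain only at each execution, so a caller may need to wait for fresh output. A small generic polling helper (illustrative, not part of the SDK) works with any reader callable, such as `lambda: client.alpha.read_workflow_result(contract_address)`:

```python
import time
from typing import Any, Callable

def poll_for_result(read_fn: Callable[[], Any],
                    attempts: int = 10,
                    delay_s: float = 5.0) -> Any:
    """Call read_fn until it returns a truthy result or attempts run out.

    Returns the last value read (possibly falsy if nothing arrived in time).
    """
    last = None
    for _ in range(attempts):
        last = read_fn()
        if last:
            return last
        time.sleep(delay_s)
    return last
```

Usage: `poll_for_result(lambda: client.alpha.read_workflow_result(contract_address))`.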
The SDK includes a command-line interface for quick operations. First, verify your configuration:
```bash
opengradient config show
```

Run a test inference:

```bash
opengradient infer -m QmbUqS93oc4JTLMHwpVxsE39mhNxy6hpf6Py3r9oANr8aZ \
    --input '{"num_input1":[1.0, 2.0, 3.0], "num_input2":10}'
```

- Off-chain Applications: Use OpenGradient as a decentralized alternative to centralized AI providers like HuggingFace and OpenAI.
- Verifiable AI Execution: Leverage TEE inference for cryptographically attested AI outputs, enabling trustless AI applications and agents.
- Model Hosting: Manage, host, and execute models on the Model Hub and integrate them directly into your development workflow.
For comprehensive documentation, API reference, and examples, visit:
If you use Claude Code, copy docs/CLAUDE_SDK_USERS.md to your project's CLAUDE.md to help Claude assist you with OpenGradient SDK development.
- Run `opengradient --help` for a CLI command reference
- Visit our documentation for detailed guides
- Join our community for support