OpenGradient Python SDK

A Python SDK for decentralized model management and inference services on the OpenGradient platform. The SDK enables programmatic access to our model repository and decentralized AI infrastructure.

Key Features

  • Model management and versioning
  • Decentralized model inference
  • Support for LLM inference with various models
  • Trusted Execution Environment (TEE) inference with cryptographic attestation
  • Drop-in replacement for OpenAI and Anthropic LLM APIs: add verifiable, secure inference to your existing AI application or agent with minimal code changes
  • Consensus-based end-to-end verified AI execution
  • Command-line interface (CLI) for direct access

Model Hub

Browse and discover AI models on our Model Hub. The Hub provides:

  • Registry of models and LLMs
  • Easy model discovery and deployment
  • Direct integration with the SDK

Installation

pip install opengradient

Note: Windows users should temporarily enable WSL when installing opengradient (fix in progress).

Network Configuration

OpenGradient currently runs two networks:

  • Testnet: The main public testnet for general use
  • Alpha Testnet: For alpha features like atomic AI execution from smart contracts or scheduled ML workflow execution (see Alpha Testnet Features)

For the latest network RPCs, contract addresses, and deployment information, see the Network Deployment Documentation.

Getting Started

1. Account Setup

You'll need:

  • Private key: An Ethereum-compatible wallet private key for OpenGradient transactions
  • Test tokens: Get free test tokens from the OpenGradient Faucet to use LLM inference on the testnet
  • Model Hub account (optional): Only needed for uploading models. Create one at Hub Sign Up

The easiest way to set up your configuration is through our wizard:

opengradient config init
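
Alternatively, you can keep your private key in an environment variable; the client example below reads OG_PRIVATE_KEY. A minimal sketch that fails fast when the variable is missing (the variable name is just the convention used in this README):

import os

private_key = os.environ.get("OG_PRIVATE_KEY")
if not private_key:
    raise RuntimeError("Set OG_PRIVATE_KEY before creating the OpenGradient client")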

2. Initialize the Client

import os
import opengradient as og

client = og.Client(
    private_key=os.environ.get("OG_PRIVATE_KEY"),
    email=None,  # Optional: only needed for model uploads
    password=None,
)

3. Basic Usage

LLM Chat secured by TEE (Trusted Execution Environment)

OpenGradient supports secure, verifiable inference through TEE for leading LLM providers. Access models from OpenAI, Anthropic, Google, and xAI with cryptographic attestation verified by the OpenGradient network:

completion = client.llm.chat(
    model=og.TEE_LLM.GPT_4O,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(f"Response: {completion.chat_output['content']}")
print(f"Tx hash: {completion.transaction_hash}")

Verifiable LangChain Agent

Use OpenGradient as a drop-in LLM for LangChain agents; every decision and reasoning step is verified through the OpenGradient network:

import os

from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent
import opengradient as og

llm = og.agents.langchain_adapter(
    private_key=os.environ.get("OG_PRIVATE_KEY"),
    model_cid=og.TEE_LLM.GPT_4O,
)

@tool
def get_weather(city: str) -> str:
    """Returns the current weather for a city."""
    return f"Sunny, 72°F in {city}"

agent = create_react_agent(llm, [get_weather])
result = agent.invoke({"messages": [("user", "What's the weather in San Francisco?")]})
print(result["messages"][-1].content)

Available TEE Models: The SDK includes models from multiple providers accessible via the og.TEE_LLM enum:

  • OpenAI: GPT-4.1, GPT-4o, o4-mini
  • Anthropic: Claude 3.7 Sonnet, Claude 3.5 Haiku, Claude 4.0 Sonnet
  • Google: Gemini 2.5 Flash, Gemini 2.5 Pro, Gemini 2.0 Flash
  • xAI: Grok 3 Beta, Grok 3 Mini Beta, Grok 4.1 Fast

For the complete list, check the og.TEE_LLM enum in your IDE or see the API documentation.
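
Assuming og.TEE_LLM is a standard Python Enum (as its name suggests), you can also list the available identifiers at runtime:

import opengradient as og

for model in og.TEE_LLM:
    print(model.name, "->", model.value)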

4. Alpha Testnet Features

The Alpha Testnet provides access to experimental features, including custom ML model inference as well as workflow deployment and execution. Run inference on any model hosted on the Model Hub, or deploy on-chain AI pipelines that connect models with data sources and can be scheduled for automated execution.

Note: Alpha features require connecting to the Alpha Testnet. See Network Configuration for details.

Custom Model Inference

Browse models on our Model Hub or upload your own:

result = client.alpha.infer(
    model_cid="your-model-cid",
    model_input={"input": [1.0, 2.0, 3.0]},
    inference_mode=og.InferenceMode.VANILLA,
)
print(f"Output: {result.model_output}")

Deploy a Workflow

import opengradient as og

client = og.init(
    private_key="your-private-key",
    email="your-email",
    password="your-password",
)

# Define input query for historical price data
input_query = og.HistoricalInputQuery(
    base="ETH",
    quote="USD",
    total_candles=10,
    candle_duration_in_mins=60,
    order=og.CandleOrder.DESCENDING,
    candle_types=[og.CandleType.CLOSE],
)

# Deploy a workflow (optionally with scheduling)
contract_address = client.alpha.new_workflow(
    model_cid="your-model-cid",
    input_query=input_query,
    input_tensor_name="input",
    scheduler_params=og.SchedulerParams(frequency=3600, duration_hours=24),  # Optional
)
print(f"Workflow deployed at: {contract_address}")

Execute and Read Results

# Manually trigger workflow execution
result = client.alpha.run_workflow(contract_address)
print(f"Inference output: {result}")

# Read the latest result
latest = client.alpha.read_workflow_result(contract_address)

# Get historical results
history = client.alpha.read_workflow_history(contract_address, num_results=5)
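
For scheduled workflows, new results accumulate on-chain over time. A minimal polling sketch built only from the calls above; the interval is arbitrary and should match your scheduler frequency:

import time

for _ in range(5):
    latest = client.alpha.read_workflow_result(contract_address)
    print(f"Latest result: {latest}")
    time.sleep(600)  # wait 10 minutes between reads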

5. Examples

See code examples under the examples directory.

CLI Usage

The SDK includes a command-line interface for quick operations. First, verify your configuration:

opengradient config show

Run a test inference:

opengradient infer -m QmbUqS93oc4JTLMHwpVxsE39mhNxy6hpf6Py3r9oANr8aZ \
    --input '{"num_input1":[1.0, 2.0, 3.0], "num_input2":10}'
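
The -m flag takes the model CID, and the --input JSON maps each of the model's input tensor names to its value, mirroring the model_input dictionary in the Python examples above.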

Use Cases

  1. Off-chain Applications: Use OpenGradient as a decentralized alternative to centralized AI providers like HuggingFace and OpenAI.

  2. Verifiable AI Execution: Leverage TEE inference for cryptographically attested AI outputs, enabling trustless AI applications and agents.

  3. Model Hosting: Manage, host, and execute models on the Model Hub and integrate them directly into your development workflow.

Documentation

For comprehensive documentation, API reference, and examples, visit the OpenGradient documentation.

Claude Code Users

If you use Claude Code, copy docs/CLAUDE_SDK_USERS.md to your project's CLAUDE.md to help Claude assist you with OpenGradient SDK development.

Support

  • Run opengradient --help for CLI command reference
  • Visit our documentation for detailed guides
  • Join our community for support
