AbstractCore

Unified LLM Interface

Write once, run everywhere

A powerful Python library for seamless interaction with all major LLM providers, featuring centralized configuration, universal media handling, and vision capabilities. Switch between OpenAI, Anthropic, Ollama, LMStudio, and more with identical code.

First-class support for:

  • sync + async
  • streaming + non-streaming
  • universal tool calling (native + prompted tool syntax)
  • structured output (Pydantic)
  • media input (vision), even for text-only LLMs (*)
  • glyph visual-text compression for long documents (**)
  • unified OpenAI-compatible endpoint for all providers and models

(*) If a model doesn't support images, AbstractCore can use a configured vision model to generate an image description and feed that to your text-only model. See Media Handling and Centralized Config.

(**) Optional visual-text compression: render long text/PDFs into images and process them with a vision model to reduce token usage. See Glyph Visual-Text Compression.
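
For example, a text-only model can still be given an image. Assuming a vision model has already been configured (e.g. via abstractcore --configure), the call looks the same as any other media request:

from abstractcore import create_llm

# Text-only model: AbstractCore describes the image with the configured
# vision model and feeds the description to this model.
llm = create_llm("ollama", model="qwen3:4b-instruct")
resp = llm.generate("What does this chart show?", media=["./chart.png"])
print(resp.content)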

Docs: Getting Started · Docs Index · https://lpalbou.github.io/AbstractCore

Install

# Minimal install (local providers + core features)
pip install abstractcore

# Cloud providers
pip install abstractcore[openai]     # OpenAI SDK
pip install abstractcore[anthropic]  # Anthropic SDK

# Full installs
pip install abstractcore[all-apple]    # macOS/Apple Silicon (includes MLX, excludes vLLM)
pip install abstractcore[all-non-mlx]  # Linux/Windows (excludes MLX; use all-gpu for vLLM)
pip install abstractcore[all-gpu]      # Linux GPU (includes vLLM)

Quickstart

from abstractcore import create_llm

llm = create_llm("openai", model="gpt-5-mini")
response = llm.generate("What is the capital of France?")
print(response.content)

Conversation state (BasicSession)

from abstractcore import create_llm, BasicSession

session = BasicSession(create_llm("anthropic", model="claude-haiku-4-5"))
print(session.generate("Give me 3 bakery name ideas.").content)
print(session.generate("Pick the best one and explain why.").content)

Streaming

from abstractcore import create_llm

llm = create_llm("ollama", model="qwen3:4b-instruct")
for chunk in llm.generate("Write a short poem about distributed systems.", stream=True):
    print(chunk.content or "", end="", flush=True)
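
To keep the full text as well as print it incrementally, collect the chunks as they arrive (same chunk interface as above):

parts = []
for chunk in llm.generate("Write a short poem about distributed systems.", stream=True):
    text = chunk.content or ""
    parts.append(text)
    print(text, end="", flush=True)

poem = "".join(parts)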

Async

import asyncio
from abstractcore import create_llm

async def main():
    llm = create_llm("openai", model="gpt-5-mini")
    resp = await llm.agenerate("Give me 5 bullet points about HTTP caching.")
    print(resp.content)

asyncio.run(main())
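
Async and streaming can be combined. The sketch below assumes agenerate accepts the same stream=True flag and that the result can be iterated with async for; this is an assumption, so check the API Reference for the exact contract:

import asyncio
from abstractcore import create_llm

async def main():
    llm = create_llm("openai", model="gpt-5-mini")
    # Assumption: stream=True makes agenerate return an async iterator of chunks.
    stream = await llm.agenerate("Explain DNS caching in two sentences.", stream=True)
    async for chunk in stream:
        print(chunk.content or "", end="", flush=True)

asyncio.run(main())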

Token budgets (unified)

from abstractcore import create_llm

llm = create_llm(
    "openai",
    model="gpt-5-mini",
    max_tokens=8000,        # total budget (input + output)
    max_output_tokens=1200, # output cap
)

Providers (common)

  • openai: OPENAI_API_KEY, optional OPENAI_BASE_URL
  • anthropic: ANTHROPIC_API_KEY, optional ANTHROPIC_BASE_URL
  • openrouter: OPENROUTER_API_KEY, optional OPENROUTER_BASE_URL (default: https://openrouter.ai/api/v1)
  • ollama: local server at OLLAMA_BASE_URL (or legacy OLLAMA_HOST)
  • lmstudio: OpenAI-compatible local server at LMSTUDIO_BASE_URL (default: http://localhost:1234/v1)
  • vllm: OpenAI-compatible server at VLLM_BASE_URL (default: http://localhost:8000/v1)
  • openai-compatible: generic OpenAI-compatible endpoints via OPENAI_COMPATIBLE_BASE_URL (default: http://localhost:1234/v1)
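
These variables can also be set from Python before the provider is created; a minimal sketch pointing the generic openai-compatible provider at a local server:

import os
from abstractcore import create_llm

# Set before create_llm so the provider picks up the base URL.
os.environ["OPENAI_COMPATIBLE_BASE_URL"] = "http://localhost:1234/v1"

llm = create_llm("openai-compatible", model="qwen/qwen3-4b-2507")
print(llm.generate("ping").content)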

You can also persist settings (including API keys) via the config CLI:

  • abstractcore --status
  • abstractcore --configure
  • abstractcore --set-api-key openai sk-...

What’s inside (quick tour)

Tool calling (passthrough by default)

By default (execute_tools=False), AbstractCore:

  • returns clean assistant text in response.content
  • returns structured tool calls in response.tool_calls (host/runtime executes them)

from abstractcore import create_llm, tool

@tool
def get_weather(city: str) -> str:
    return f"{city}: 22°C and sunny"

llm = create_llm("openai", model="gpt-5-mini")
resp = llm.generate("What's the weather in Paris? Use the tool.", tools=[get_weather])

print(resp.content)
print(resp.tool_calls)
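
Because the host executes the calls, a typical pattern is a small dispatch loop. The sketch below is illustrative only; the attribute names on each tool call (name, arguments) are assumptions, so check the API Reference for the exact shape:

# Hypothetical dispatch loop; call.name / call.arguments are assumed field names.
registry = {"get_weather": get_weather}
for call in resp.tool_calls or []:
    result = registry[call.name](**call.arguments)
    print(f"{call.name} -> {result}")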

If you need tool-call markup preserved/re-written in content for downstream parsers, pass tool_call_tags=... (e.g. "qwen3", "llama3", "xml"). See Tool Syntax Rewriting.
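
For example, to keep Qwen-style tool markup in content (parameter and values taken from the note above):

resp = llm.generate(
    "What's the weather in Paris? Use the tool.",
    tools=[get_weather],
    tool_call_tags="qwen3",  # or "llama3", "xml"
)
print(resp.content)  # tool-call markup preserved for downstream parsers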

Structured output

from pydantic import BaseModel
from abstractcore import create_llm

class Answer(BaseModel):
    title: str
    bullets: list[str]

llm = create_llm("openai", model="gpt-5-mini")
answer = llm.generate("Summarize HTTP/3 in 3 bullets.", response_model=Answer)
print(answer.bullets)

Media / vision input

from abstractcore import create_llm

llm = create_llm("anthropic", model="claude-haiku-4-5")
resp = llm.generate("Describe the image.", media=["./image.png"])
print(resp.content)

See Media Handling and Vision Capabilities.

HTTP server (OpenAI-compatible gateway)

pip install abstractcore[server]
python -m abstractcore.server.app

Use any OpenAI-compatible client, and route to any provider/model via model="provider/model":

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
resp = client.chat.completions.create(
    model="ollama/qwen3:4b-instruct",
    messages=[{"role": "user", "content": "Hello from the gateway!"}],
)
print(resp.choices[0].message.content)
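
Streaming also goes through the standard OpenAI client interface; a sketch assuming the gateway supports stream=True (standard OpenAI SDK usage):

stream = client.chat.completions.create(
    model="ollama/qwen3:4b-instruct",
    messages=[{"role": "user", "content": "Stream a haiku."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)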

See Server.

CLI (optional)

Interactive chat:

abstractcore-chat --provider openai --model gpt-5-mini
abstractcore-chat --provider lmstudio --model qwen/qwen3-4b-2507 --base-url http://localhost:1234/v1
abstractcore-chat --provider openrouter --model openai/gpt-5-mini

Token limits:

  • startup: abstractcore-chat --max-tokens 8192 --max-output-tokens 1024 ...
  • in-REPL: /max-tokens 8192 and /max-output-tokens 1024

Built-in CLI apps

AbstractCore also ships with ready-to-use CLI apps:

  • summarizer, extractor, judge, intent, deepsearch (see docs/apps/)

Documentation map

Reference and internals:

  • Architecture — system overview + event system
  • API Reference — Python API (including events)
  • Server — OpenAI-compatible gateway with tool/media support
  • CLI Guide — interactive abstractcore-chat walkthrough

License

MIT
