This directory contains comprehensive guides for integrating tinyMem with various LLM providers, clients, and IDEs.
- Claude: Integration with Claude Desktop, Claude CLI, and MCP.
- GitHub Copilot: Configuration for Copilot Chat in VS Code.
- Qwen: Setup for Qwen CLI, Ollama, and LM Studio.
- Gemini: Using Gemini via MCP or with an adapter.
- OpenAI: Using the standard OpenAI Python/Node SDKs with tinyMem.
- DeepSeek: Configuration for DeepSeek API and local R1 models.
- Aider: Configuring the Aider AI pair programmer.
- Crush: Using Charm's Crush CLI with native MCP support.
- LangChain: Integration examples for LangChain Python.
- Windsurf: Setup for Codeium's Windsurf IDE.
- Cline: Setup for the Cline VS Code agent.
- IDEs: VS Code, Cursor, Zed, and Continue configuration.
- Local LLMs: Generic configuration for backends like Ollama, LM Studio, and Llama.cpp.
- Configuration: Full reference for `.tinyMem/config.toml` options.
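As a rough sketch, a `.tinyMem/config.toml` might look like the following. The section and key names here are illustrative assumptions, not the authoritative schema; see the Configuration reference for the actual options.

```toml
# Hypothetical example -- consult the Configuration guide for the real keys.
[server]
mode = "mcp"    # assumed: "mcp" (stdio) or "proxy" (HTTP)
port = 8080     # assumed: only used in proxy mode

[memory]
max_entries = 1000   # assumed: cap on stored memory items
```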
The AGENT MD folder (legacy) contains model-specific prompt directives; these are now maintained in the root docs/agents/ directory.

tinyMem supports two integration modes:
| Mode | Best For | How it works |
|---|---|---|
| MCP | Claude Desktop, Crush, Cursor, Zed, Windsurf, Cline | tinymem runs as a stdio server, responding to tool calls. |
| Proxy | OpenAI SDK, Aider, LangChain, Copilot, DeepSeek | tinymem runs an HTTP server (:8080) that intercepts API calls and injects memory. |
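For MCP mode, a client registers tinymem as a stdio server. The snippet below shows the shape of a Claude Desktop `claude_desktop_config.json` entry; the `command` and `args` values are assumptions for illustration, not verified tinymem flags.

```json
{
  "mcpServers": {
    "tinymem": {
      "command": "tinymem",
      "args": ["serve"]
    }
  }
}
```

In proxy mode, clients instead point an OpenAI-compatible base URL at `http://localhost:8080`, and tinymem forwards the request after injecting memory.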