
Add Mistral AI backend support with abstraction layer #64

Closed

ddulic wants to merge 27 commits into allenporter:main from ddulic:main

Conversation


ddulic commented Mar 11, 2026

Summary

  • Introduces an `AIService` abstract base class (`ocr_image`, `embed_text`, `generate_json`, `provider_name`) to decouple the processing pipeline from any specific AI provider
  • Adds `MistralService` implementing `AIService` — uses the dedicated `mistral-ocr-latest` OCR API, `mistral-embed` for embeddings, and `mistral-large-latest` with JSON mode for summaries
  • Refactors `GeminiService` to inherit `AIService` and accept model names at construction time, including a separate `SUPERNOTE_GEMINI_CHAT_MODEL` setting (fixes bug where chat/summary generation was accidentally using the OCR model)
  • Replaces `GeminiOcrModule`/`GeminiEmbeddingModule` with provider-agnostic `OcrModule`/`EmbeddingModule`
  • Provider is selected at startup: Mistral is used when `SUPERNOTE_MISTRAL_API_KEY` is set, otherwise falls back to Gemini (see the sketch after this list)
  • Changes default ports from 8080/8081 to 8000/8001 for the API and file server
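
A minimal sketch of the abstraction and the startup selection described above. The class and method names (`AIService`, `ocr_image`, `embed_text`, `generate_json`, `provider_name`) come from this PR; the exact signatures and the factory-callable wiring are illustrative assumptions, not the actual implementation.

```python
import os
from abc import ABC, abstractmethod
from typing import Callable


class AIService(ABC):
    """Provider-agnostic interface consumed by the processing pipeline."""

    @property
    @abstractmethod
    def provider_name(self) -> str: ...

    @abstractmethod
    async def ocr_image(self, image_png: bytes, prompt: str) -> str: ...

    @abstractmethod
    async def embed_text(self, text: str) -> list[float]: ...

    @abstractmethod
    async def generate_json(self, prompt: str, schema: dict) -> str: ...


def select_provider(
    make_mistral: Callable[[], AIService],
    make_gemini: Callable[[], AIService],
) -> AIService:
    """Startup selection: Mistral is used when its API key is set, else Gemini."""
    if os.environ.get("SUPERNOTE_MISTRAL_API_KEY"):
        return make_mistral()
    return make_gemini()
```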

New config options

| Env var | Default | Description |
| --- | --- | --- |
| `SUPERNOTE_MISTRAL_API_KEY` | (unset) | Enables the Mistral backend when set |
| `SUPERNOTE_MISTRAL_OCR_MODEL` | `mistral-ocr-latest` | Dedicated OCR model |
| `SUPERNOTE_MISTRAL_EMBEDDING_MODEL` | `mistral-embed` | Embedding model |
| `SUPERNOTE_MISTRAL_CHAT_MODEL` | `mistral-large-latest` | Chat model for summaries |
| `SUPERNOTE_MISTRAL_MAX_CONCURRENCY` | `5` | Max concurrent API calls (minimum 1) |
| `SUPERNOTE_GEMINI_CHAT_MODEL` | `gemini-2.0-flash` | Gemini chat model for summaries |

Port change

Default ports are now 8000 (API) and 8001 (file server), changed from 8080/8081. Update any existing configurations, reverse proxies, or firewall rules accordingly.

This is mostly a personal preference: 8080 and 8081 are very common ports and were already in use on my machine.

Robustness improvements

  • Semaphore deadlock prevention: Both `GeminiService` and `MistralService` clamp `max_concurrency` to a minimum of 1; invalid env var values (non-integer or `< 1`) log a warning instead of silently misconfiguring (sketched after this list)
  • Embedding validation: `GeminiService.embed_text()` raises `ValueError` on empty/missing embedding values; `EmbeddingModule` validates before persisting to the database
  • Search zero-norm guard: `SearchService` skips candidate embeddings with zero L2 norm to prevent NaN/inf cosine similarity scores
  • Mistral OCR robustness: `ocr_image()` uses safe `getattr` access for `response.pages` and skips pages with missing markdown, returning an empty string rather than raising
  • Compact JSON schema prompts: `generate_json` serializes the schema without indentation to reduce token usage
  • PNG chunk accumulation: OCR module uses list + `b"".join()` instead of `bytes +=` to avoid O(n²) memory copies for large pages
  • Valid JSON guarantee: `MistralService.generate_json()` uses `json.dumps` (not `str()`) for non-string SDK responses to always return parseable JSON
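
Short sketches of three of these guards. The helper names (`read_max_concurrency`, `cosine_similarity`, `collect_png`) are hypothetical — the PR's real code lives inside the service and module classes — but the logic mirrors what the list above describes.

```python
import logging
import math
import os

logger = logging.getLogger(__name__)


def read_max_concurrency(env_var: str, default: int = 5) -> int:
    """Clamp to >= 1 so a bad value can never create a semaphore that blocks forever."""
    raw = os.environ.get(env_var, str(default))
    try:
        value = int(raw)
    except ValueError:
        logger.warning("%s=%r is not an integer; clamping to 1", env_var, raw)
        return 1
    if value < 1:
        logger.warning("%s=%d is below the minimum of 1; clamping to 1", env_var, value)
        return 1
    return value


def cosine_similarity(query: list[float], candidate: list[float]) -> float | None:
    """Return None for zero-norm vectors so callers can skip them (no NaN/inf scores)."""
    q_norm = math.sqrt(sum(x * x for x in query))
    c_norm = math.sqrt(sum(x * x for x in candidate))
    if q_norm == 0.0 or c_norm == 0.0:
        return None
    return sum(a * b for a, b in zip(query, candidate)) / (q_norm * c_norm)


def collect_png(chunks: list[bytes]) -> bytes:
    """Join accumulated chunks once: O(n) total, versus O(n^2) for repeated bytes +=."""
    return b"".join(chunks)
```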

Important note on provider switching

Mistral embeddings are 1024-dimensional vs Gemini's 3072-dimensional. Switching providers invalidates all stored embeddings — notes will need to be re-processed after a provider change. Mixing embeddings from different providers in the same search index is not supported.
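
One way to make that failure mode loud rather than silent — a hypothetical guard, not part of this PR:

```python
def embedding_is_compatible(stored: list[float], provider_dim: int) -> bool:
    """Reject stored vectors whose dimension doesn't match the active provider's
    output (1024 for mistral-embed, 3072 for Gemini); mismatches should be
    flagged for re-processing rather than scored."""
    return len(stored) == provider_dim
```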


Note that I don't have my Supernote device yet; it arrives on Friday. I tested with the (empty?) test note file, and everything looks to be working.

[Screenshots attached: Screenshot_20260311_134042, Screenshot_20260311_134150]

Linting and tests are passing.

Full disclosure: Claude Code was used to write this, with assistance from Copilot in reviews.

ddulic and others added 27 commits March 10, 2026 19:41
Introduces an AIService abstraction layer so the server can use either
Google Gemini or Mistral AI for OCR, embeddings, and summary generation.
The active backend is selected at startup based on which API key is
configured (Mistral takes precedence when both are set).

- Add AIService abstract base class (ocr_image, embed_text, generate_json)
- Add MistralService implementing AIService via the mistralai SDK
- Refactor GeminiService to inherit AIService; accept model names at
  construction instead of per-call; add high-level AIService methods
- Replace GeminiOcrModule/GeminiEmbeddingModule with provider-agnostic
  OcrModule/EmbeddingModule accepting AIService
- Update SummaryModule and SearchService to use AIService
- Add Mistral config fields with env var support (SUPERNOTE_MISTRAL_API_KEY,
  SUPERNOTE_MISTRAL_OCR_MODEL, SUPERNOTE_MISTRAL_EMBEDDING_MODEL,
  SUPERNOTE_MISTRAL_CHAT_MODEL, SUPERNOTE_MISTRAL_MAX_CONCURRENCY)
- Add mistralai>=1.0.0 to server dependencies
- Update all tests to use the new AIService-based mocks

Note: switching AI providers invalidates stored embeddings; all notes
will need to be re-processed after a provider change.
- Switch MistralService.ocr_image() from Pixtral chat completions to
  the dedicated mistral-ocr-latest API (client.ocr.process_async)
- Update default mistral_ocr_model config to mistral-ocr-latest
- Validate non-empty embedding before persisting in EmbeddingModule
- Accumulate PNG chunks in list then join once (avoid quadratic bytes concat)
- Guard zero-norm query embedding in SearchService to prevent NaN scores
- Add gemini_chat_model config field (SUPERNOTE_GEMINI_CHAT_MODEL, default
  gemini-2.0-flash) and fix bug where gemini_ocr_model was incorrectly used
  for summary/chat generation
- Raise ValueError in GeminiService.embed_text() when values is empty/None,
  consistent with MistralService and the EmbeddingModule guard added earlier
- Remove unused config parameter from SearchService constructor and call sites
- Document in MistralService.ocr_image() why prompt is intentionally unused
- Remove unused logger/logging import from GeminiService and MistralService
  (would fail ruff unused-variable check in CI)
- Fix test_summary_module assertion to use call_args.kwargs["prompt"] since
  generate_json is called with keyword arguments, making call_args.args empty
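
A minimal illustration of the assertion fix in that last bullet (hypothetical test code, not the PR's actual test):

```python
from unittest.mock import Mock

ai = Mock()
ai.generate_json(prompt="Summarize this page", schema={})

# Keyword-only calls leave call_args.args empty; assert via kwargs instead.
assert ai.generate_json.call_args.args == ()
assert ai.generate_json.call_args.kwargs["prompt"] == "Summarize this page"
```
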
…tent fallback

- Guard against zero-norm candidate embeddings in SearchService cosine
  similarity to prevent NaN/inf scores, mirroring the existing query guard
- Change generate_json non-string fallback from "" to str(content) so
  unexpected response shapes produce something inspectable rather than silent
  empty string
- Add unit tests for MistralService covering OCR, embeddings, JSON generation,
  error paths, and concurrency limiting via semaphore
- README: update tagline, pipeline description, and quick start to show
  both Gemini and Mistral as provider options
- server/README: update feature description, prerequisites, and expand AI
  configuration section with full env var tables for both providers,
  including the embedding dimension switching note
- note_processing_design: replace Gemini-specific references with
  provider-agnostic language
…lock

- Clamp max_concurrency to minimum 1 in GeminiService and MistralService
  constructors so a bad value can never create a blocking semaphore
- Log a warning and clamp to 1 when SUPERNOTE_GEMINI_MAX_CONCURRENCY or
  SUPERNOTE_MISTRAL_MAX_CONCURRENCY env vars are set to < 1
- Document the minimum value of 1 in server README config tables
Mirrors the earlier SearchService cleanup — config was assigned but never
read anywhere in the module after the Gemini-specific refactor. Removed from
the constructor, app.py call site, and both test fixtures.
…arnings

- Handle missing/empty pages in MistralService.ocr_image() with safe getattr
  access and per-page markdown guard, returning "" instead of raising TypeError
- Use compact json.dumps separators in generate_json() to reduce prompt token
  usage when sending the schema to the model
- Log a warning (instead of silently passing) when SUPERNOTE_GEMINI_MAX_CONCURRENCY
  or SUPERNOTE_MISTRAL_MAX_CONCURRENCY env vars contain non-integer values
- Update import from 'from mistralai import Mistral' to
  'from mistralai.client import Mistral' for mistralai>=2.0.0 compatibility
- Bump pyproject.toml constraint to mistralai>=2.0.0
- Fix test_full_processing_pipeline_with_real_file: generate_json was mocked
  to return '{}' (no segments) but assertion expected summary content from OCR
  text — fix mock to return a proper segments JSON and assert against it
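
A sketch of the corrected mock — the exact shape of the segments payload here is an assumption for illustration:

```python
import json
from unittest.mock import AsyncMock

mock_ai = AsyncMock()
# Return real segments so the pipeline produces summary content to assert on,
# instead of the '{}' (no segments) that the assertion could never match.
mock_ai.generate_json.return_value = json.dumps(
    {"segments": [{"text": "OCR text extracted from the test note"}]}
)
```
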
…alidation, and search

- Add test_gemini.py: full coverage for GeminiService (was completely untested) —
  is_configured, provider_name, ocr_image, embed_text empty/missing values, generate_json,
  max_concurrency clamping, and concurrency limit enforcement
- test_mistral.py: add edge cases for empty/missing OCR pages, pages without markdown,
  generate_json with non-string content (json.dumps path), empty choices, and
  max_concurrency clamping to 1
- test_embedding.py: verify EmbeddingModule marks task FAILED when AI returns empty vector
- test_search.py: verify zero-norm candidate embeddings are skipped and don't produce NaN scores
- CONTRIBUTING.md: show both Gemini and Mistral API key options for local dev
- note_processing_design.md: replace GeminiAPIError example with generic ValueError
- PLAN.md: update Phase 5 to reflect AIService abstraction, MistralService,
  and renamed OcrModule/EmbeddingModule (were GeminiOcrModule/GeminiEmbeddingModule)
- Remove stale type: ignore[arg-type] comments in gemini.py and mistral.py
- Use proper UserMessage type for Mistral chat messages with cast to
  satisfy mypy list invariance requirement
- Handle Optional[List[float]] from Mistral embedding response
- Add method-assign to type: ignore comments on AsyncMock assignments
  in test_gemini.py and test_mistral.py
Switch to more standard Python web server defaults (8000 for main server,
8001 for MCP server) to avoid conflicts with common port 8080/8081 usage.
- Re-raise exception in SummaryModule.process() after logging so the
  task is marked FAILED (not COMPLETED) when AI summary generation fails
- Update mistral_api_key docstring to mention summaries alongside OCR
  and embeddings
- Remove partial API key characters from log output for both Gemini and
  Mistral API keys to avoid inadvertent credential exposure
Add Mistral as an alternative AI backend
Fix Dockerfile ports, add Docker Compose, and update README
The Dockerfile already sets SUPERNOTE_PORT=8000 but the config.py default
was still 8080, causing a mismatch when running outside Docker.
Fix default port to match Dockerfile (8080 -> 8000)
ddulic (Author) commented Mar 12, 2026

Closing to reopen from a feature branch instead of main.

ddulic closed this Mar 12, 2026