Docker configuration for the Nvisy server.
Nvisy requires two external services:
PostgreSQL 18+ with the pgvector extension. PostgreSQL serves as the primary
data store for all application state — accounts, workspaces, pipelines,
connections, file metadata — and provides vector similarity search through
pgvector. The recommended image is pgvector/pgvector:pg18.
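If you provision PostgreSQL yourself instead of using the pgvector image, the extension must be enabled in the target database. A typical statement (assuming the extension binaries are already installed and you have sufficient privileges):

```sql
-- Enable pgvector in the current database
CREATE EXTENSION IF NOT EXISTS vector;
```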
NATS 2.10+ with JetStream enabled. NATS handles three concerns: pub/sub messaging for real-time events, persistent job queues for asynchronous processing, and object storage for uploaded files. JetStream must be enabled with sufficient storage allocation — the default configuration uses 1 GB of memory store and 10 GB of file store.
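For reference, a minimal dev compose file covering both services might look roughly like the following. This is a sketch: the image tags match the recommendations above, but the service names, credentials, and port mappings are assumptions — the real file ships with the repository as docker-compose.dev.yml.

```yaml
services:
  postgres:
    image: pgvector/pgvector:pg18
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"

  nats:
    image: nats:2.10
    command: ["-js", "-m", "8222"]   # enable JetStream, expose monitoring on 8222
    ports:
      - "4222:4222"
      - "8222:8222"
```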
Start PostgreSQL (with pgvector) and NATS for local development:

```shell
docker compose -f docker-compose.dev.yml up -d
```

This starts both services with development defaults (postgres:postgres credentials, JetStream enabled). Then generate configuration and run the server locally:
```shell
make generate-all   # .env, keys, migrations
cargo run --features dotenv --bin nvisy-server
```

The API documentation is available at:

- Scalar UI: http://localhost:8080/api/scalar
- OpenAPI JSON: http://localhost:8080/api/openapi.json
Build and run the complete stack:

```shell
cp .env.example .env
# Edit .env with production values
docker compose up -d --build
```

The production compose file starts all three services on a private bridge network. The server waits for PostgreSQL and NATS health checks to pass before starting.
| Service | Port(s) | Description |
|---|---|---|
| PostgreSQL | 5432 | Primary database (with pgvector) |
| NATS | 4222, 8222 | Message queue (JetStream) |
| Server | 8080 | Nvisy API |
All configuration is provided through environment variables. See
.env.example at the repository root for a complete
reference with defaults and descriptions.
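To give a feel for the shape of the configuration, a fragment might look like the following. The variable names here are illustrative assumptions, not the authoritative list — .env.example is the source of truth.

```shell
# Illustrative only — see .env.example for the actual variable names
DATABASE_URL=postgres://postgres:postgres@localhost:5432/nvisy
NATS_URL=nats://localhost:4222
```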
The server requires an Ed25519 keypair for JWT signing and a 32-byte key for connection credential encryption. Generate both with:

```shell
make generate-keys
```

This produces three files: private.pem, public.pem, and encryption.key. In production, store these securely and reference them via environment variables (see .env.example).
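If you prefer not to use the Makefile, equivalent artifacts can be produced directly with openssl. This is a sketch of what the target likely does; make generate-keys may differ in details such as output paths.

```shell
# Ed25519 keypair for JWT signing
openssl genpkey -algorithm ed25519 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem

# 32 random bytes for connection credential encryption
openssl rand -out encryption.key 32
```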
The Dockerfile uses a multi-stage build:
- Planner — generates a dependency recipe with cargo-chef
- Builder — builds dependencies from the recipe (cached), then builds the server binary and strips it
- Runtime — minimal Debian image with only the binary and runtime libraries
The runtime image runs as a non-root user (nvisy, UID 1000) and includes a
health check endpoint at /health.
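The three stages described above can be sketched as follows. Stage names, paths, and base images are assumptions for illustration — consult the actual Dockerfile for specifics.

```dockerfile
# Stage 1: planner — produce a dependency recipe with cargo-chef
FROM rust:1-bookworm AS planner
RUN cargo install cargo-chef
WORKDIR /app
COPY . .
RUN cargo chef prepare --recipe-path recipe.json

# Stage 2: builder — cook dependencies (cached layer), then build and strip the binary
FROM rust:1-bookworm AS builder
RUN cargo install cargo-chef
WORKDIR /app
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
COPY . .
RUN cargo build --release --bin nvisy-server \
    && strip target/release/nvisy-server

# Stage 3: runtime — minimal Debian image, non-root user
FROM debian:bookworm-slim AS runtime
RUN useradd --uid 1000 nvisy
COPY --from=builder /app/target/release/nvisy-server /usr/local/bin/nvisy-server
USER nvisy
EXPOSE 8080
ENTRYPOINT ["nvisy-server"]
```

Because cargo-chef cooks dependencies from the recipe before the source is copied in, the expensive dependency layer is rebuilt only when Cargo.toml or Cargo.lock change.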
The default NATS configuration (nats.conf) enables JetStream with:
- 1 GB memory store for high-throughput streams
- 10 GB file store for persistent data
- 8 MB maximum payload size
Adjust these values based on expected workload. The memory store is used for ephemeral streams; the file store is used for durable subscriptions and object storage.
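In nats-server configuration syntax, those defaults correspond roughly to the fragment below (a sketch; the shipped nats.conf may set additional options such as the store directory).

```conf
# Maximum message payload
max_payload: 8MB

jetstream {
  store_dir: /data
  max_memory_store: 1GB
  max_file_store: 10GB
}
```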
All services expose health check endpoints:

| Service | Endpoint | Method |
|---|---|---|
| Server | /health | HTTP GET |
| PostgreSQL | pg_isready | CLI |
| NATS | /healthz on port 8222 | HTTP GET |
The compose files configure health checks with 5-second intervals. The server depends on both PostgreSQL and NATS being healthy before it starts accepting requests.
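In compose terms, that wiring looks roughly like the fragment below. Service names and probe commands are assumptions (the stock nats image has no shell, so the NATS probe in particular depends on which image variant is used); the real compose files are authoritative.

```yaml
services:
  server:
    depends_on:
      postgres:
        condition: service_healthy
      nats:
        condition: service_healthy

  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s

  nats:
    healthcheck:
      # assumes an image variant with wget available (e.g. nats:alpine)
      test: ["CMD", "wget", "-qO-", "http://localhost:8222/healthz"]
      interval: 5s
```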
Migrations are embedded in the server binary and applied automatically on startup. For manual control:

```shell
make generate-migrations   # Apply and regenerate schema
make clear-migrations      # Revert all (destructive)
```

Common lifecycle commands:

```shell
# Start services
docker compose up -d

# View logs
docker compose logs -f

# Stop services
docker compose down

# Reset data (removes volumes)
docker compose down -v
```