This guide covers deploying TAG in various environments.
Prerequisites:

- Access to Tigris storage with API credentials
Quick start with Docker Compose:

```bash
# Single node (TAG + ocache)
cd docker
docker-compose up -d

# Cluster mode (2 TAG + 3 ocache)
docker-compose -f docker-compose-cluster.yml up -d
```

Create a `.env` file in the `docker/` directory:
```
# Required
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key

# Optional
TAG_LOG_LEVEL=info
```

```bash
# Start services
docker-compose -f docker/docker-compose.yml up -d
# View logs
docker-compose -f docker/docker-compose.yml logs -f tag
# Stop services
docker-compose -f docker/docker-compose.yml down
```

```bash
# Start 2 TAG nodes + 3 ocache cluster
docker-compose -f docker/docker-compose-cluster.yml up -d
# TAG endpoints are available at:
# - http://localhost:8081 (tag-1)
# - http://localhost:8082 (tag-2)
# Stop cluster
docker-compose -f docker/docker-compose-cluster.yml down -v
```

```bash
# Test with curl
curl -X GET http://localhost:8080/your-bucket/your-key \
-H "Authorization: AWS4-HMAC-SHA256 ..."
# Test with AWS CLI
aws s3 cp s3://your-bucket/your-key ./local-file \
  --endpoint-url http://localhost:8080
```

To deploy on Kubernetes you will need:

- A running Kubernetes cluster
- kubectl configured to access the cluster
```bash
# Create namespace (optional)
kubectl create namespace tag
# Create credentials secret
kubectl create secret generic tag-credentials \
--namespace tag \
--from-literal=AWS_ACCESS_KEY_ID=your_key \
--from-literal=AWS_SECRET_ACCESS_KEY=your_secret
# Apply manifests
kubectl apply -f kubernetes/ --namespace tag
```

The `kubernetes/` directory contains:
| File | Description |
|---|---|
| `deployment.yaml` | TAG Deployment with replicas |
| `ocache.yaml` | ocache StatefulSet with 3 replicas |
| `service.yaml` | Service for internal access |
| `hpa.yaml` | Horizontal Pod Autoscaler |
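Once the manifests are applied, you can verify the rollout and reach TAG from your workstation. The Service name `tag` used below is an assumption; check `service.yaml` for the name your manifests actually define.

```bash
# Confirm the TAG Deployment and ocache StatefulSet pods are up
kubectl get pods --namespace tag

# Forward the TAG Service to your workstation (Service name "tag" is assumed)
kubectl port-forward --namespace tag svc/tag 8080:8080

# In another terminal, hit the health endpoint
curl -i http://localhost:8080/health
```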
Run TAG and OCache as native processes using pre-built binaries.
```bash
# Set credentials
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key

# Start services
./native/run.sh start

# Check status
./native/run.sh status

# Stop services
./native/run.sh stop
```

| Command | Description |
|---|---|
| `start` | Download binaries (if needed) and start TAG + OCache |
| `stop` | Stop all services |
| `stop --clean` | Stop services and remove all data |
| `status` | Show running status and health of services |
| `logs [service]` | Show logs (service: `tag`, `ocache`, or `all`) |
| `help` | Show usage information |
| Variable | Default | Description |
|---|---|---|
| `AWS_ACCESS_KEY_ID` | (required) | AWS access key |
| `AWS_SECRET_ACCESS_KEY` | (required) | AWS secret key |
| `TAG_VERSION` | `v1.3.0` | TAG version to download |
| `OCACHE_VERSION` | `v1.2.2` | OCache version to download |
| `TAG_LOG_LEVEL` | `info` | Log level: `debug`, `info`, `warn`, `error` |
| `TAG_PORT` | `8080` | TAG HTTP port |
| `OCACHE_PORT` | `9000` | OCache data port |
| `OCACHE_HTTP_PORT` | `9001` | OCache HTTP port |
| `OCACHE_MAX_DISK_USAGE` | `107374182400` | Max disk usage in bytes (100 GiB) |
| `BIN_DIR` | `native/.bin` | Binary download directory |
| `DATA_DIR` | `/tmp/native-data` | Data directory for logs and cache |
```bash
# Start with debug logging
TAG_LOG_LEVEL=debug ./native/run.sh start

# Use specific versions
TAG_VERSION=v1.3.0 OCACHE_VERSION=v1.2.2 ./native/run.sh start

# View TAG logs
./native/run.sh logs tag

# Stop and clean all data
./native/run.sh stop --clean
```

High availability:

- Deploy multiple TAG replicas behind a load balancer
- Use Kubernetes Deployment with anti-affinity rules (see the sketch after this list)
- Configure health checks for automatic recovery
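As a starting point, the snippet below sketches how anti-affinity rules and health-check probes might be expressed in the TAG Deployment's pod template. The `app: tag` label, port, and probe timings are assumptions; align them with what `deployment.yaml` already defines.

```yaml
# Sketch only: spread TAG replicas across nodes and restart unhealthy pods.
# The app=tag label, port, and timing values are illustrative assumptions.
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: tag
            topologyKey: kubernetes.io/hostname
  containers:
    - name: tag
      livenessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /health
          port: 8080
        periodSeconds: 5
```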
Horizontal Scaling:
- TAG is stateless, so it can be scaled horizontally as needed
- Use HPA based on CPU or custom metrics (a sketch follows this list)
- Each replica connects to the same ocache cluster
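The repository already ships an `hpa.yaml`; if you need to tune it, a CPU-based autoscaler for the TAG Deployment might look like the sketch below. The Deployment name, replica bounds, and target utilization are illustrative assumptions.

```yaml
# Sketch only: CPU-based autoscaling for the TAG Deployment.
# Names, replica bounds, and the utilization target are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tag
  namespace: tag
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tag
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```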
Vertical Scaling:
- Increase memory for high concurrent connection counts
- Increase CPU for high request throughput (see the resource sketch below)
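In Kubernetes, vertical scaling comes down to the container's resource settings; the values below are purely illustrative, not recommendations.

```yaml
# Sketch only: raise memory for many concurrent connections and
# CPU for higher request throughput. Values are illustrative.
resources:
  requests:
    cpu: "1"
    memory: 1Gi
  limits:
    cpu: "2"
    memory: 2Gi
```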
TAG exposes a health endpoint:

```
GET /health
```

It returns `200 OK` when healthy.
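For example, against the default TAG port:

```bash
# Expect a 200 OK response when TAG is healthy
curl -i http://localhost:8080/health
```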
- Expose the `/metrics` endpoint for Prometheus scraping
- Set up alerts for the following (example rules follow this list):
  - High error rate (`tag_requests_total{status="error"}`)
  - Low cache hit ratio (`tag_cache_hits_total / (tag_cache_hits_total + tag_cache_misses_total)`)
  - High upstream latency (`tag_upstream_request_duration_seconds`)
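The rules below sketch Prometheus alerts built from the metric names above. Rule names, thresholds, and time windows are assumptions, and the latency rule assumes `tag_upstream_request_duration_seconds` is a histogram; adjust to match the labels your deployment actually exposes.

```yaml
# Sketch only: Prometheus alerting rules using the TAG metrics listed above.
# Thresholds, windows, and rule names are illustrative assumptions.
groups:
  - name: tag
    rules:
      - alert: TagHighErrorRate
        expr: |
          sum(rate(tag_requests_total{status="error"}[5m]))
            / sum(rate(tag_requests_total[5m])) > 0.05
        for: 10m
      - alert: TagLowCacheHitRatio
        expr: |
          sum(rate(tag_cache_hits_total[5m]))
            / (sum(rate(tag_cache_hits_total[5m])) + sum(rate(tag_cache_misses_total[5m]))) < 0.5
        for: 15m
      - alert: TagHighUpstreamLatency
        expr: |
          histogram_quantile(0.99,
            sum(rate(tag_upstream_request_duration_seconds_bucket[5m])) by (le)) > 1
        for: 10m
```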
TAG requires an ocache cluster for caching. The deployment manifest includes an ocache StatefulSet with 3 replicas.
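Before sending traffic through TAG, you can confirm the cache layer is up using the `app=ocache` label referenced in the troubleshooting notes below (adjust the selector if your manifests label it differently):

```bash
# Expect the ocache StatefulSet to report 3/3 ready replicas
kubectl get statefulsets --namespace tag -l app=ocache
kubectl get pods --namespace tag -l app=ocache
```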
No cache hits:
- Verify the ocache cluster is running: `kubectl get pods -l app=ocache` in the same namespace
- Check TAG logs for connection errors
Authentication failures:
- Verify credentials are set correctly
- Check clock sync between client and TAG
- Review signature calculation logs at debug level
High latency:
- Check upstream endpoint latency
- Monitor cache hit ratio
- Review ocache performance
Enable debug logging for troubleshooting:
```bash
TAG_LOG_LEVEL=debug ./tag
```

Or in Kubernetes:
```yaml
env:
  - name: TAG_LOG_LEVEL
    value: "debug"
```