# Production Deployment

Deploying the Conserver in Production

This guide covers deploying the Conserver in production environments with considerations for scalability, reliability, and security.
## Prerequisites

- Docker and Docker Compose
- Redis server (or Redis cluster for high availability)
- Storage backends configured (PostgreSQL, S3, etc.)
- Domain name and TLS certificates
- Monitoring infrastructure (optional but recommended)
## Architecture Overview

```
                    ┌─────────────────┐
                    │  Load Balancer  │
                    │   (nginx/ALB)   │
                    └────────┬────────┘
                             │
         ┌───────────────────┼───────────────────┐
         │                   │                   │
    ┌────┴────┐         ┌────┴────┐         ┌────┴────┐
    │Conserver│         │Conserver│         │Conserver│
    │   API   │         │   API   │         │   API   │
    └────┬────┘         └────┬────┘         └────┬────┘
         │                   │                   │
         └───────────────────┼───────────────────┘
                             │
                    ┌────────┴────────┐
                    │      Redis      │
                    │ (Queues/Cache)  │
                    └────────┬────────┘
                             │
         ┌───────────────────┼───────────────────┐
         │                   │                   │
   ┌─────┴─────┐       ┌─────┴─────┐       ┌─────┴─────┐
   │PostgreSQL │       │    S3     │       │  Milvus   │
   └───────────┘       └───────────┘       └───────────┘
```

## Docker Compose Production Setup
**`docker-compose.yml`**
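The exact layout depends on your environment; the following is a minimal sketch that assumes a published Conserver image name, a `config.yml` mounted into the container, and an API token delivered as a Docker secret (all names, paths, and environment variable names are placeholders to adjust):

```yaml
# Minimal production sketch -- image name, env var names, and paths are assumptions
services:
  conserver:
    image: vcon-dev/conserver:latest           # placeholder image reference
    restart: unless-stopped
    environment:
      REDIS_URL: redis://redis:6379            # assumed variable name for the Redis connection
    secrets:
      - conserver_api_token                    # token delivered as a file, not an env var
    volumes:
      - ./config.yml:/app/config.yml:ro        # chain/link configuration (path is assumed)
    depends_on:
      - redis
    deploy:
      replicas: 3                              # horizontal scaling behind the load balancer

  redis:
    image: redis:7
    restart: unless-stopped
    command: ["redis-server", "--appendonly", "yes"]   # AOF persistence (see Backup and Recovery)
    volumes:
      - redis-data:/data

  nginx:
    image: nginx:stable
    restart: unless-stopped
    ports:
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - conserver

secrets:
  conserver_api_token:
    file: ./secrets/api_token.txt

volumes:
  redis-data:
```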
**`nginx.conf`**
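A matching reverse-proxy configuration might look like the sketch below; the upstream port, server name, and certificate paths are assumptions tied to the Compose sketch above:

```nginx
# Sketch: TLS termination and reverse proxying to the Conserver API containers
events {}

http {
    upstream conserver_api {
        server conserver:8000;                 # assumed internal API port
    }

    server {
        listen 443 ssl;
        server_name conserver.example.com;     # placeholder domain

        ssl_certificate     /etc/nginx/certs/fullchain.pem;
        ssl_certificate_key /etc/nginx/certs/privkey.pem;

        location / {
            proxy_pass http://conserver_api;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}
```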
## Scaling Considerations

### Horizontal Scaling

The Conserver supports horizontal scaling because:

- All state is stored in Redis
- Multiple instances can process from the same queues
- API requests are stateless
Scale workers based on queue depth:
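With Docker Compose, scaling is a matter of running more instances of the service (the service name `conserver` comes from the sketch above):

```bash
# Run five Conserver instances against the same Redis queues
docker compose up -d --scale conserver=5
```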
### Redis Configuration
For production Redis deployments:
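The settings below are standard Redis directives; the values are illustrative starting points rather than Conserver requirements:

```
# redis.conf excerpts for a production deployment
maxmemory 4gb
maxmemory-policy noeviction     # queued vCon work must not be silently evicted
appendonly yes                  # AOF persistence (see Backup and Recovery)
appendfsync everysec
requirepass <strong-password>   # see Security Hardening
```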
### Queue Monitoring
Monitor queue lengths to detect backlogs:
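Conserver queues live in Redis; assuming they are stored as Redis lists, depth can be read with `LLEN` (the queue names below are examples, not fixed Conserver names):

```bash
# Depth of an ingress queue and its dead-letter queue
docker compose exec redis redis-cli -a "$REDIS_PASSWORD" LLEN ingress_queue
docker compose exec redis redis-cli -a "$REDIS_PASSWORD" LLEN dlq:ingress_queue
```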
## Security Hardening

### API Token Management
Use token files instead of environment variables:
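One way to do this is with Docker secrets, which mount the token as a read-only file inside the container; the secret name and file path are placeholders:

```yaml
# Compose excerpt: the token lives in a file, not in `docker inspect` output
services:
  conserver:
    secrets:
      - conserver_api_token      # mounted at /run/secrets/conserver_api_token

secrets:
  conserver_api_token:
    file: ./secrets/api_token.txt
```

Point the Conserver's token setting at the mounted path; the exact setting name depends on your version.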
Rotate tokens regularly.

Use separate tokens for different purposes:

- Internal API token for system operations
- Partner-specific tokens via `ingress_auth`
### Network Security

- Isolate Redis (see the sketch below)
- Enable Redis AUTH (see the sketch below)
- Use TLS for external connections
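A minimal sketch of the first two points in Compose terms (the network name, password variable, and `REDIS_URL` variable are placeholders):

```yaml
# Compose excerpt: Redis reachable only on an internal network, with AUTH enabled
services:
  redis:
    image: redis:7
    command: ["redis-server", "--requirepass", "${REDIS_PASSWORD}"]
    networks:
      - backend                 # no `ports:` section, so Redis is never published to the host

  conserver:
    networks:
      - backend                 # in practice also attach a non-internal network for nginx/storage access
    environment:
      REDIS_URL: redis://:${REDIS_PASSWORD}@redis:6379   # assumed variable name

networks:
  backend:
    internal: true              # containers on this network are unreachable from outside the host
```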
### Secret Management

Consider using:

- Docker secrets (as shown above)
- HashiCorp Vault
- AWS Secrets Manager
- Kubernetes secrets
## Monitoring and Observability

### Health Checks

The Conserver exposes health information via its API:
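One common pattern is a container-level health check that polls the API; the port and path below are placeholders to adjust to your deployment, and the image must include `curl`:

```yaml
# Compose excerpt: container health check against the Conserver API
services:
  conserver:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/api/health"]   # placeholder port/path
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s
```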
### Metrics Integration
The Conserver supports Datadog integration for metrics:
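Datadog is usually wired up through the standard agent/DogStatsD environment variables; the variable names below are Datadog conventions, and whether the Conserver picks them up directly depends on your version and configuration:

```yaml
# Compose excerpt: point the Conserver at a Datadog agent (DogStatsD) sidecar
services:
  conserver:
    environment:
      DD_AGENT_HOST: datadog-agent    # hostname of the Datadog agent container
      DD_DOGSTATSD_PORT: "8125"       # default DogStatsD port
      DD_ENV: production
      DD_SERVICE: conserver
```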
### Log Aggregation
Configure structured JSON logging:
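How the application emits JSON depends on the Conserver version (the `LOG_FORMAT` variable below is an assumption); the Docker logging options are standard and keep local log files bounded for the aggregator to ship:

```yaml
# Compose excerpt: JSON application logs plus bounded json-file driver output
services:
  conserver:
    labels:
      service: conserver
      env: production
    environment:
      LOG_FORMAT: json              # assumed variable name -- check your configuration
      LOG_LEVEL: INFO
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "5"
        labels: "service,env"       # forward these container labels to the log driver
```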
Logs include:

- Request IDs for tracing
- Processing times
- Error details with stack traces
- vCon UUIDs for correlation
### Alerting

Set up alerts for:

| Condition            | Severity | Action               |
| -------------------- | -------- | -------------------- |
| Queue depth > 1000   | Warning  | Scale workers        |
| DLQ depth > 100      | Critical | Investigate failures |
| API latency p99 > 5s | Warning  | Check resources      |
| Error rate > 5%      | Critical | Check logs           |
## Graceful Shutdown

The Conserver handles SIGTERM for graceful shutdown:

- Stops accepting new vCons
- Completes in-flight processing
- Returns unprocessed items to queues
- Closes connections cleanly
Configure Docker stop timeout:
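Give in-flight processing time to finish before Docker escalates from SIGTERM to SIGKILL; the 60-second value is only an illustrative starting point:

```yaml
# Compose excerpt: allow up to 60s for graceful shutdown
services:
  conserver:
    stop_grace_period: 60s
```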
## Backup and Recovery

### Redis Persistence
Enable AOF for durability:
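These are standard Redis directives, set either in `redis.conf` or via the container command line:

```
# redis.conf
appendonly yes
appendfsync everysec     # fsync once per second: a common durability/throughput trade-off
```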
### Backup Strategy
Redis RDB snapshots:
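A snapshot can be triggered and copied off-host on a schedule; the container, paths, and bucket below follow the Compose sketch and are placeholders:

```bash
# Trigger a background RDB snapshot inside the Redis container
docker compose exec redis redis-cli -a "$REDIS_PASSWORD" BGSAVE

# Copy the snapshot out of the container (check LASTSAVE before copying in a real script)
docker compose cp redis:/data/dump.rdb "./backups/dump-$(date +%F).rdb"
aws s3 cp "./backups/dump-$(date +%F).rdb" s3://my-backups/redis/   # bucket is a placeholder
```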
Storage backend backups:

- PostgreSQL: `pg_dump`
- S3: enable versioning
- Elasticsearch: Snapshot API
Configuration backup:
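Configuration is easiest to protect by keeping it in version control, with a periodic archive of anything generated at deploy time (the file names are the ones used in this guide):

```bash
# Keep deployment configuration under version control
git add config.yml docker-compose.yml nginx.conf
git commit -m "Snapshot production configuration"

# Archive deploy-time files as a secondary copy (exclude or encrypt secrets)
tar czf "conserver-config-$(date +%F).tar.gz" config.yml docker-compose.yml nginx.conf
```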
### Disaster Recovery

- Deploy Redis with persistence
- Use storage backends with replication
- Keep configuration in version control
- Document recovery procedures
## Deployment Checklist

### Pre-deployment

### Deployment

### Post-deployment
## Troubleshooting

### Common Issues

- Workers not processing
- High DLQ count
- Memory issues
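A few first-pass checks, using the service and queue names assumed earlier in this guide:

```bash
# Workers not processing: are the containers running, and is Redis reachable?
docker compose ps conserver
docker compose logs --tail=100 conserver
docker compose exec redis redis-cli -a "$REDIS_PASSWORD" PING

# High DLQ count: inspect the dead-letter queue (queue name is a placeholder)
docker compose exec redis redis-cli -a "$REDIS_PASSWORD" LLEN dlq:ingress_queue
docker compose exec redis redis-cli -a "$REDIS_PASSWORD" LRANGE dlq:ingress_queue 0 5

# Memory issues: check Redis memory use and per-container resource usage
docker compose exec redis redis-cli -a "$REDIS_PASSWORD" INFO memory
docker stats --no-stream
```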
See Troubleshooting for more detailed solutions.