Scaling

Scale the runtime container to handle more concurrent executions:

docker compose up -d --scale runtime=3

The API round-robins requests across runtime instances.

  • Each runtime instance respects MAX_CONCURRENT_EXECUTIONS
  • With 3 instances × 10 concurrent = 30 total concurrent executions
  • All instances share the same storage backend
  • No shared state between runtime instances (stateless)
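The capacity math above generalizes: total capacity is simply the replica count times the per-instance limit, since the stateless runtimes don't coordinate. A minimal sketch (the function name and default are illustrative, not part of the product's API):

```python
def total_capacity(replicas: int, max_concurrent_executions: int = 10) -> int:
    """Total concurrent executions across all stateless runtime replicas."""
    return replicas * max_concurrent_executions

print(total_capacity(3))  # 3 replicas x 10 each = 30
```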

Adjust resource limits in docker-compose.yml:

runtime:
  deploy:
    resources:
      limits:
        cpus: '8'
        memory: 16G
      reservations:
        cpus: '2'
        memory: 4G

With more resources per instance, you can also raise the per-instance concurrency limit:

MAX_CONCURRENT_EXECUTIONS=20
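MAX_CONCURRENT_EXECUTIONS is an environment variable, so one way to apply it is through the service definition in docker-compose.yml (a sketch; the exact service layout may differ in your setup):

```yaml
runtime:
  environment:
    - MAX_CONCURRENT_EXECUTIONS=20
```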

For higher database throughput, add these settings to your PostgreSQL service:

postgres:
  command:
    - "postgres"
    - "-c"
    - "max_connections=200"
    - "-c"
    - "shared_buffers=256MB"
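The values above are starting points. A common rule of thumb from the PostgreSQL documentation is to size shared_buffers at roughly 25% of the memory available to the database; a quick sketch of that calculation (the helper name is illustrative):

```python
def suggested_shared_buffers_mb(container_memory_gb: float, fraction: float = 0.25) -> int:
    """Rule-of-thumb shared_buffers sizing: ~25% of available memory, in MB."""
    return int(container_memory_gb * 1024 * fraction)

print(suggested_shared_buffers_mb(4))  # 4 GB container -> 1024 MB
```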

Alternatively, use a managed PostgreSQL service (AWS RDS, Azure Database, Cloud SQL):

# Remove the postgres service from docker-compose.yml
# Update DATABASE_URL to point to external database
DATABASE_URL=postgresql://user:pass@external-host:5432/flowlike
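Before deploying, it can help to sanity-check that the connection string parses the way the driver will read it. A small sketch using Python's standard library (the credentials and host are placeholders from the example above):

```python
from urllib.parse import urlparse

url = "postgresql://user:pass@external-host:5432/flowlike"
parts = urlparse(url)

# Confirm each piece lands where the database driver expects it.
print(parts.hostname)          # external-host
print(parts.port)              # 5432
print(parts.path.lstrip("/"))  # database name: flowlike
```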

For multiple API instances, use an external load balancer:

docker compose up -d --scale api=2

Then configure nginx or similar:

upstream flowlike-api {
    # One entry per API instance; each instance must publish a distinct
    # host port (the ports here are illustrative).
    server localhost:8080;
    server localhost:8081;
}

server {
    listen 80;

    location / {
        proxy_pass http://flowlike-api;
    }
}
Suggested sizing by workload:

| Workload    | API replicas | Runtime replicas | Runtime resources |
| ----------- | ------------ | ---------------- | ----------------- |
| Development | 1            | 1                | 2 CPU, 4GB        |
| Small team  | 1            | 2                | 4 CPU, 8GB        |
| Medium      | 2            | 4                | 4 CPU, 8GB        |
| Large       | 3+           | 6+               | 8 CPU, 16GB       |

For large production workloads, consider Kubernetes deployment for better orchestration, auto-scaling, and isolation.