# Storage Providers

Flow-Like requires object storage for workflow data, execution logs, and content. Several providers are supported natively: AWS S3 (and S3-compatible stores such as MinIO), Cloudflare R2, Azure Blob Storage, and Google Cloud Storage.
## AWS S3

```bash
STORAGE_PROVIDER=aws

# Credentials
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# Region and endpoint
AWS_REGION=us-east-1
AWS_ENDPOINT= # Leave empty for AWS S3

# Bucket names
META_BUCKET=flow-like-meta
CONTENT_BUCKET=flow-like-content
LOG_BUCKET=flow-like-logs
```

### S3 Express One Zone (Recommended for Meta Bucket)

S3 Express One Zone is a high-performance, single-AZ storage class ideal for the meta bucket:
- 10x faster than standard S3 (single-digit millisecond latency)
- 50% lower cost per request than standard S3
- Consistent performance for metadata-heavy workloads
Express One Zone bucket names end with `--<az-id>--x-s3` (e.g., `flow-like-meta--usw2-az1--x-s3`).
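The naming convention can be sketched in a few lines. The helper names here are ours, not part of Flow-Like; only the `--<az-id>--x-s3` suffix is prescribed by S3:

```python
import re

# Express One Zone (directory bucket) names end with --<az-id>--x-s3
EXPRESS_SUFFIX = re.compile(r"--[a-z0-9]+-az\d+--x-s3$")

def express_bucket_name(base: str, az_id: str) -> str:
    """Build a directory-bucket name for a given Availability Zone ID."""
    return f"{base}--{az_id}--x-s3"

def is_express_bucket(name: str) -> bool:
    """True if the bucket name carries the Express One Zone suffix."""
    return EXPRESS_SUFFIX.search(name) is not None

print(express_bucket_name("flow-like-meta", "usw2-az1"))
# flow-like-meta--usw2-az1--x-s3
```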
```bash
# Enable S3 Express for meta bucket
META_BUCKET=flow-like-meta--usw2-az1--x-s3
META_BUCKET_EXPRESS_ZONE=true

# Content bucket can also use Express if in same AZ
CONTENT_BUCKET=flow-like-content
CONTENT_BUCKET_EXPRESS_ZONE=false

# Logs bucket (standard S3 is usually sufficient)
LOG_BUCKET=flow-like-logs
LOGS_BUCKET_EXPRESS_ZONE=false
```

### IAM Permissions

The credentials need the following permissions on your buckets:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::flow-like-*",
        "arn:aws:s3:::flow-like-*/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["s3express:CreateSession"],
      "Resource": ["arn:aws:s3express:*:*:bucket/flow-like-*--*--x-s3"]
    }
  ]
}
```

### Scoped Runtime Credentials (STS AssumeRole)

Flow-Like generates scoped credentials for every execution using STS AssumeRole. This ensures users can only access their own prefix-isolated storage paths, providing strict isolation between users and apps.
```bash
# Role to assume for runtime credentials
RUNTIME_ROLE_ARN=arn:aws:iam::123456789012:role/FlowLikeRuntimeRole
```

The runtime role needs a trust policy allowing the API to assume it:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/FlowLikeApiRole"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

When `RUNTIME_ROLE_ARN` is set, each execution receives temporary credentials scoped to:
- Read/write the specific app's data (`apps/{app_id}/*`)
- Read/write the user's app data (`users/{user_id}/apps/{app_id}/*`)
- Write execution logs (`runs/{app_id}/*`)
- Access temporary storage (`tmp/user/{user_id}/apps/{app_id}/*`)
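The prefix list above can be expressed as an inline STS session policy. This is an illustrative sketch, not Flow-Like's exact policy; the bucket name and action list are assumptions based on this section:

```python
import json

def scoped_session_policy(bucket: str, app_id: str, user_id: str) -> str:
    """Build an inline session policy matching the prefixes listed above.

    Passed as the `Policy` parameter of STS AssumeRole, it further restricts
    the runtime role's permissions for a single execution.
    """
    prefixes = [
        f"apps/{app_id}/*",
        f"users/{user_id}/apps/{app_id}/*",
        f"runs/{app_id}/*",
        f"tmp/user/{user_id}/apps/{app_id}/*",
    ]
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/{p}" for p in prefixes],
        }],
    })
```

When such a string is passed via the `Policy` parameter of `sts:AssumeRole`, the effective permissions are the intersection of the runtime role's own policy and this session policy, which is what enforces the per-execution scoping.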
## Cloudflare R2

R2 is S3-compatible and supports prefix-scoped temporary credentials through Cloudflare's proprietary API:
```bash
STORAGE_PROVIDER=r2

# R2 credentials for S3 API access (from R2 API tokens)
R2_ACCESS_KEY_ID=your-r2-access-key-id
R2_SECRET_ACCESS_KEY=your-r2-secret-access-key

# R2 endpoint (replace with your account ID)
R2_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com

# R2 Temp Credentials API (required for scoped credentials)
R2_ACCOUNT_ID=your-cloudflare-account-id
R2_API_TOKEN=your-cloudflare-api-token

# Bucket names
META_BUCKET=flow-like-meta
CONTENT_BUCKET=flow-like-content
LOG_BUCKET=flow-like-logs
```

### R2 API Token Setup

The API token needs the Workers R2 Storage: Edit permission for the temp credentials API:
1. Go to your Cloudflare Dashboard → Manage Account → API Tokens
2. Create a custom token with:
   - Permissions: Account → Workers R2 Storage → Edit
   - Account Resources: Include your account
### How R2 Scoped Credentials Work

Unlike AWS STS, R2 uses Cloudflare's proprietary temp credentials API, which:
- Creates temporary S3-compatible credentials (access key, secret key, session token)
- Supports prefix scoping via the `prefixes` parameter
- Returns credentials with a configurable TTL (default: 1 hour)
Each execution receives temporary credentials scoped to access only:
- The specific app’s data prefixes
- The user’s app data prefixes
- Execution log paths
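As a sketch, a request body for Cloudflare's temp-access-credentials endpoint might be assembled like this. The field names follow our reading of Cloudflare's API reference and should be verified against the current docs before use:

```python
def r2_temp_credentials_request(bucket: str, parent_access_key_id: str,
                                prefixes: list[str], ttl_seconds: int = 3600) -> dict:
    """Request body for POST /accounts/{account_id}/r2/temp-access-credentials.

    `parent_access_key_id` is the long-lived R2 access key the temporary
    credentials are derived from; `prefixes` limits what they can touch.
    """
    return {
        "bucket": bucket,
        "parentAccessKeyId": parent_access_key_id,
        "permission": "object-read-write",  # or "object-read-only"
        "prefixes": prefixes,
        "ttlSeconds": ttl_seconds,
    }
```

The response contains a temporary access key, secret key, and session token that work against the normal R2 S3 endpoint.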
## MinIO (Self-hosted)

For local development or air-gapped environments:

```bash
STORAGE_PROVIDER=aws

# MinIO credentials
AWS_ACCESS_KEY_ID=minioadmin
AWS_SECRET_ACCESS_KEY=minioadmin

# MinIO endpoint
AWS_ENDPOINT=http://minio:9000
AWS_REGION=us-east-1
AWS_USE_PATH_STYLE=true

# Bucket names
META_BUCKET=flow-like-meta
CONTENT_BUCKET=flow-like-content
LOG_BUCKET=flow-like-logs

# STS AssumeRole for scoped credentials
# MinIO requires STS to be enabled: https://min.io/docs/minio/linux/developers/security-token-service.html
RUNTIME_ROLE_ARN=arn:minio:iam:::role/FlowLikeRuntimeRole
```

To add MinIO to your Docker Compose stack, add this service:
```yaml
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001" # Console
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - minio_data:/data
    networks:
      - flowlike

volumes:
  minio_data:
```

## Azure Blob Storage
```bash
STORAGE_PROVIDER=azure

# Azure credentials
AZURE_STORAGE_ACCOUNT_NAME=yourstorageaccount
AZURE_STORAGE_ACCOUNT_KEY=your-access-key

# Container names
AZURE_META_CONTAINER=flow-like-meta
AZURE_CONTENT_CONTAINER=flow-like-content
AZURE_LOG_CONTAINER=flow-like-logs
```

### Creating containers

```bash
az storage container create --name flow-like-meta --account-name yourstorageaccount
az storage container create --name flow-like-content --account-name yourstorageaccount
az storage container create --name flow-like-logs --account-name yourstorageaccount
```

## Google Cloud Storage

```bash
STORAGE_PROVIDER=gcp

# GCP project
GCS_PROJECT_ID=your-project-id

# Service account JSON (base64 encoded or raw)
GOOGLE_APPLICATION_CREDENTIALS_JSON={"type":"service_account","project_id":"..."}

# Bucket names
GCP_META_BUCKET=flow-like-meta
GCP_CONTENT_BUCKET=flow-like-content
GCP_LOG_BUCKET=flow-like-logs
```

### Service account permissions

The service account needs the Storage Object Admin role on your buckets.
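One way to grant the role, assuming the bucket names above and a placeholder service-account address, is the gcloud CLI:

```shell
# Grant Storage Object Admin per bucket (service account e-mail is a placeholder)
for bucket in flow-like-meta flow-like-content flow-like-logs; do
  gcloud storage buckets add-iam-policy-binding "gs://${bucket}" \
    --member="serviceAccount:flow-like@your-project-id.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"
done
```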
## Path-style URLs

Some S3-compatible providers (MinIO, R2) require path-style URLs:

```bash
AWS_USE_PATH_STYLE=true
```

This changes the request URL from virtual-hosted style to path style:
- Virtual-hosted style: `https://bucket.endpoint.com/key`
- Path style: `https://endpoint.com/bucket/key`
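The two styles can be illustrated with a small helper (hypothetical; SDKs and Flow-Like derive this from `AWS_USE_PATH_STYLE` internally):

```python
from urllib.parse import urlsplit, urlunsplit

def object_url(endpoint: str, bucket: str, key: str, path_style: bool) -> str:
    """Build the request URL for an object in either addressing style."""
    scheme, host, _, _, _ = urlsplit(endpoint)
    if path_style:
        # Bucket becomes the first path segment
        return urlunsplit((scheme, host, f"/{bucket}/{key}", "", ""))
    # Bucket becomes a subdomain of the endpoint host
    return urlunsplit((scheme, f"{bucket}.{host}", f"/{key}", "", ""))

print(object_url("http://minio:9000", "flow-like-meta", "apps/a1/meta.json", True))
# http://minio:9000/flow-like-meta/apps/a1/meta.json
```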
## Environment Variables Reference

| Variable | Description | Default |
|---|---|---|
| `STORAGE_PROVIDER` | Storage backend (`aws`, `r2`, `azure`, `gcp`) | `aws` |
| `META_BUCKET` | Bucket for app metadata and execution state | Required |
| `CONTENT_BUCKET` | Bucket for user content and workflow data | Required |
| `LOG_BUCKET` | Bucket for execution logs | Required |
| `META_BUCKET_EXPRESS_ZONE` | Enable S3 Express for meta bucket | `false` |
| `CONTENT_BUCKET_EXPRESS_ZONE` | Enable S3 Express for content bucket | `false` |
| `LOGS_BUCKET_EXPRESS_ZONE` | Enable S3 Express for logs bucket | `false` |
| `RUNTIME_ROLE_ARN` | IAM role ARN for scoped runtime credentials (AWS/MinIO) | Optional |
| `R2_ACCESS_KEY_ID` | R2 S3-compatible access key | R2 only |
| `R2_SECRET_ACCESS_KEY` | R2 S3-compatible secret key | R2 only |
| `R2_ENDPOINT` | R2 S3-compatible endpoint URL | R2 only |
| `R2_ACCOUNT_ID` | Cloudflare account ID for R2 temp credentials | R2 only |
| `R2_API_TOKEN` | Cloudflare API token for R2 temp credentials | R2 only |
| `EXECUTION_STATE_BACKEND` | State store backend (`postgres`, `redis`, `s3`) | `postgres` |
| `AWS_USE_PATH_STYLE` | Use path-style URLs (for MinIO/R2) | `false` |