It is common to put a reverse proxy in front of the S3 endpoint to handle requests. Nginx is well suited for this: it can terminate TLS and support virtual-hosted style bucket URLs (using subdomains instead of path prefixes).
For virtual-hosted style URL buckets, you'll need to add a wildcard DNS record for your S3 subdomain.
Make sure the config sets the X-Forwarded-Host header, and optionally X-Forwarded-Port if you are using a non-standard port. SeaweedFS automatically combines these headers to reconstruct the correct host information for signature verification.
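The host reconstruction can be sketched in a few lines of Python. This is illustrative only (the function name and logic are not SeaweedFS's actual implementation), but it shows how the two headers combine and why standard ports are omitted:

```python
# Minimal sketch of rebuilding the client-facing host from forwarded headers.
# Illustrative only; not SeaweedFS's actual code.

STANDARD_PORTS = {"http": "80", "https": "443"}

def reconstruct_host(forwarded_host: str, forwarded_port: str, proto: str) -> str:
    """Combine X-Forwarded-Host and X-Forwarded-Port, omitting standard ports."""
    if forwarded_port and forwarded_port != STANDARD_PORTS.get(proto):
        return f"{forwarded_host}:{forwarded_port}"
    return forwarded_host

print(reconstruct_host("s3.yourdomain.com", "443", "https"))   # s3.yourdomain.com
print(reconstruct_host("s3.yourdomain.com", "8443", "https"))  # s3.yourdomain.com:8443
```

If the reconstructed host does not match the host the client signed against, signature verification fails, which is why both headers must be forwarded faithfully.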
Reverse Proxy with URL Path Prefixes
SeaweedFS S3 API supports the X-Forwarded-Prefix header for scenarios where a reverse proxy strips URL path prefixes before forwarding requests. This is common when hosting the S3 API under a subpath like /s3/ or /api/s3/.
How X-Forwarded-Prefix Works
When a reverse proxy strips a URL prefix:
- Client request: `https://example.com/s3/my-bucket/my-object`
- Proxy strips the prefix and forwards: `https://backend:8333/my-bucket/my-object`
- Proxy adds the header: `X-Forwarded-Prefix: /s3`
SeaweedFS will:
- First attempt signature verification using the original path (`/s3/my-bucket/my-object`)
- Fall back to verification using the stripped path (`/my-bucket/my-object`) if the first attempt fails
This ensures both regular S3 requests and presigned URLs work correctly with reverse proxies that strip prefixes.
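The fallback described above can be sketched as follows. This is an illustrative model, not SeaweedFS's actual code; `signature_matches` stands in for the real signature check:

```python
# Illustrative sketch of the two-step verification with X-Forwarded-Prefix.

def verify_forwarded(stripped_path, forwarded_prefix, signature_matches):
    """Try the original client-side path first, then the stripped path."""
    if forwarded_prefix:
        original_path = forwarded_prefix + stripped_path
        if signature_matches(original_path):   # e.g. /s3/my-bucket/my-object
            return True
    return signature_matches(stripped_path)    # e.g. /my-bucket/my-object

# A fake checker that accepts only the path the client actually signed:
signed_by_client = lambda path: path == "/s3/my-bucket/my-object"
print(verify_forwarded("/my-bucket/my-object", "/s3", signed_by_client))  # True
```

The same logic also accepts clients that signed the stripped path directly, which is why both regular requests and presigned URLs keep working.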
Example Use Cases
- API Gateway: `/api/s3/bucket/object` → `/bucket/object`
- Multi-tenant setup: `/tenant1/s3/bucket/object` → `/bucket/object`
- Subpath hosting: `/storage/s3/bucket/object` → `/bucket/object`
Important Notes
- The `X-Forwarded-Prefix` header should contain the stripped prefix (e.g., `/s3`)
- `X-Forwarded-Port` is automatically combined with `X-Forwarded-Host` for non-standard ports
- Standard ports (80 for HTTP, 443 for HTTPS) are omitted from the host header automatically
- Both regular S3 authentication and presigned URLs are supported
- This feature works with all S3 operations that require signature verification
Additionally, make sure that `proxy_request_buffering` is off (the default is on). Otherwise the proxy buffers the request and sends it to the backend as a whole instead of chunked. The signature computed by the client will then fail to verify, because it took into account the `Transfer-Encoding: chunked` header that the proxy drops when it buffers.
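The effect can be illustrated with a toy signing function. This is a deliberate simplification, not real SigV4, but it shows why a signature over the client's headers no longer matches once a buffering proxy drops one of them:

```python
import hashlib
import hmac

# Toy model (not real SigV4): the signature covers the headers the client
# sent, so dropping Transfer-Encoding: chunked changes the computed value.

def toy_sign(headers: dict, secret: bytes) -> str:
    canonical = "\n".join(f"{k.lower()}:{v}" for k, v in sorted(headers.items()))
    return hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()

client_side = toy_sign(
    {"Host": "s3.yourdomain.com", "Transfer-Encoding": "chunked"}, b"key"
)
after_buffering = toy_sign({"Host": "s3.yourdomain.com"}, b"key")  # header dropped
print(client_side != after_buffering)  # True: the signatures diverge
```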
Example Nginx config
Standard Configuration (without URL prefix stripping)
upstream seaweedfs {
# Hashing on the uploadId query string keeps all requests of a multipart upload on the same server;
# this is only necessary when using the local embedded filer store (leveldb)
hash $arg_uploadId consistent;
server localhost:8333 fail_timeout=0;
keepalive 20;
}
## You can also use a unix domain socket instead for better performance:
# upstream seaweedfs { server unix:/tmp/seaweedfs-s3-8333.sock; keepalive 20;}
server {
listen 443 ssl;
# Assumes that your subdomain is s3
# The regex will support path style as well as virtual-hosted style bucket URLs
# path style: http://s3.yourdomain.com/mybucket
# virtual-hosted style: http://mybucket.s3.yourdomain.com
server_name ~^(?:(?<bucket>[^.]+)\.)?s3\.yourdomain\.com;
ignore_invalid_headers off;
# Make sure that we can upload files larger than 1MB (nginx default cutoff)
client_max_body_size 0;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_request_buffering off;
chunked_transfer_encoding off;
# If bucket subdomain is not empty,
# rewrite request to backend.
if ($bucket != "") {
rewrite (.*) /$bucket$1 last;
}
location / {
proxy_pass http://seaweedfs;
}
ssl_certificate /{path_to_ssl_cert}/cert.pem;
ssl_certificate_key /{path_to_ssl_cert}/key.pem;
}
Configuration with URL Prefix Stripping (X-Forwarded-Prefix)
For scenarios where you need to host SeaweedFS S3 API under a subpath:
upstream seaweedfs {
hash $arg_uploadId consistent;
server localhost:8333 fail_timeout=0;
keepalive 20;
}
server {
listen 443 ssl;
server_name yourdomain.com;
ignore_invalid_headers off;
# Make sure that we can upload files larger than 1MB (nginx default cutoff)
client_max_body_size 0;
proxy_buffering off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 300;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_request_buffering off;
chunked_transfer_encoding off;
# S3 API under /s3/ subpath
location /s3/ {
# Set the X-Forwarded-Prefix header to the stripped prefix
proxy_set_header X-Forwarded-Prefix /s3;
# Strip the /s3 prefix before forwarding to backend
rewrite ^/s3/(.*) /$1 break;
proxy_pass http://seaweedfs;
}
# Alternative: S3 API under /api/s3/ subpath
location /api/s3/ {
proxy_set_header X-Forwarded-Prefix /api/s3;
rewrite ^/api/s3/(.*) /$1 break;
proxy_pass http://seaweedfs;
}
ssl_certificate /{path_to_ssl_cert}/cert.pem;
ssl_certificate_key /{path_to_ssl_cert}/key.pem;
}