Table of Contents
- FAQ
- Cannot upload due to "no free volumes left"
- How to speed up bucket deletion?
- Filer Metadata Store Growth
- Setting TTL
- Does SeaweedFS support S3 object versioning?
- How does versioning affect storage usage?
- Does SeaweedFS support encrypted range requests?
- Does SeaweedFS support bucket default encryption?
- Does SeaweedFS support S3 Object Lock?
- What's the difference between Governance and Compliance modes?
- S3 authentication fails when using reverse proxy
FAQ
Cannot upload due to "no free volumes left"
The symptom is similar to https://github.com/seaweedfs/seaweedfs/issues/1631 where the logs show
Nov 20 18:49:37 s2375.j weed[31818]: E1120 18:49:37 31818 filer_server_handlers_write.go:42] failing to assign a file id: rpc error: code = Unknown desc = No free volumes left!
Nov 20 18:49:37 s2375.j weed[31818]: I1120 18:49:37 31818 common.go:53] response method:PUT URL:/buckets/dev-passport-video-recordings/02342a46-7435-b698-2437-c778db34ef59.mp4 with httpStatus:500 and JSON:{"error":"rpc error: code = Unknown desc = No free volumes left!"}
Nov 20 18:49:37 s2375.j weed[31818]: E1120 18:49:37 31818 s3api_object_handlers.go:336] upload to filer error: rpc error: code = Unknown desc = No free volumes left!
Each bucket will create one collection, with at least 7 volumes by default, and each volume is pre-allocated with a large size, usually 30GB.
There are 2 ways to fix this.
- Reduce the global volume size by adjusting the -volumeSizeLimitMB option of the weed master command.
- Reduce the number of volumes to grow when a collection runs out of volumes. You can configure the per-bucket storage this way in weed shell:
> fs.configure -locationPrefix=/buckets/ -volumeGrowthCount=1 -apply
This will add 1 physical volume when existing volumes are full. If using replication, you will need to grow more volumes at a time, so that the growth count is a multiple of the number of replicas:
fs.configure -locationPrefix=/buckets/ -replication=001 -volumeGrowthCount=2 -apply
See https://github.com/seaweedfs/seaweedfs/wiki/Path-Specific-Configuration
Alternatively, you can change the volume growth count in the master config file (generated with weed scaffold -config=master) by adjusting the copy_X values.
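For example, a sketch of both approaches (all values are illustrative): start the master with a smaller volume size limit, and lower the growth counts in master.toml:

weed master -volumeSizeLimitMB=10240

# in master.toml (from weed scaffold -config=master)
[master.volume_growth]
copy_1 = 1        # volumes to grow for collections with no replication
copy_2 = 2        # volumes to grow when each entry has 2 copies
copy_3 = 3        # volumes to grow when each entry has 3 copies
copy_other = 1    # volumes to grow for other replication settings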
How to speed up bucket deletion?
One common unexpected problem is that deletion can be slow. To delete a file, SeaweedFS needs to delete the file content on the volume servers and delete the file entry from the filer store, which is almost the same amount of work as adding a file. If there are millions of files, deletion can take a long time.
When you need to create large buckets and delete them often, you may choose leveldb3 as the filer store, or any other store that supports Fast Bucket Deletion, as listed in https://github.com/seaweedfs/seaweedfs/wiki/Filer-Stores
leveldb3 can automatically create a separate LevelDB instance for each bucket.
So bucket deletion is as simple as deleting the LevelDB instance files and the collection of volume files.
Having separate LevelDB instances, or separate SQL tables, also helps isolate the storage and improve performance.
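For example, a minimal filer.toml snippet enabling leveldb3 (generated with weed scaffold -config=filer; the directory path is illustrative):

[leveldb3]
enabled = true
dir = "./filerldb3"    # each bucket gets its own LevelDB instance under this directory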
Filer Metadata Store Growth
Due to the semantics of the S3 API, empty directories (aka prefixes) aren't shown. However, an entry is still stored in the filer metadata store. When workload access patterns create many unique directories and then remove all the objects inside those directories, the filer metadata store can grow unbounded with orphaned directories. These directories are visible in the filer metadata store itself, but not using the S3 API.
If the filer argument -s3.allowEmptyFolder=false is set, the orphaned directories are cleaned up during list requests for non bucket-level directories. Normally this works well, but if the workload never performs a list operation, the orphaned directories may never be cleaned up. To force cleanup, simply list an existing, non bucket-level directory.
Example using rclone:
rclone lsf seaweedfs:my-bucket/dir
If the directory dir exists in my-bucket, the orphaned metadata will be cleaned up. Note that due to slight API usage differences, rclone ls does not trigger cleanup, but rclone lsf will.
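For reference, this cleanup behavior requires the S3 gateway to run with the flag mentioned above; a sketch of starting the filer with its embedded S3 gateway (the port is illustrative):

weed filer -s3 -s3.port=8333 -s3.allowEmptyFolder=false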
Setting TTL
It is possible to set a TTL for a specific directory using the S3 API, via PutBucketLifecycleConfiguration.
An example JSON configuration is below. It is equivalent to calling the following in weed shell:
fs.configure -locationPrefix /buckets/f341868e-baff-4e20-896a-08bc148e32f9/my-directory-whose-files-will-expire-in-20-days -ttl 20d -apply
{
"Rules": [
{
"Status": "Enabled",
"Filter": {
"Prefix": "my-directory-whose-files-will-expire-in-20-days"
},
"Expiration": {
"Days": 20
}
}
]
}
Save that in a .json file and apply it, for example, via the AWS CLI:
BUCKET_NAME=f341868e-baff-4e20-896a-08bc148e32f9
aws --endpoint-url http://127.0.0.1:8333 s3api put-bucket-lifecycle-configuration --bucket $BUCKET_NAME --lifecycle-configuration "file://lifecycle_policy.json"
Note that you don't need to include the /buckets/$BUCKET_NAME part in the configuration's "Filter.Prefix" (unlike when using fs.configure); this is taken care of for you by the S3 API.
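To confirm the policy was applied, you can read it back with the same endpoint (an optional quick check):

aws --endpoint-url http://127.0.0.1:8333 s3api get-bucket-lifecycle-configuration --bucket $BUCKET_NAME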
Does SeaweedFS support S3 object versioning?
Yes, SeaweedFS supports S3 object versioning. You can enable versioning on a bucket using the PutBucketVersioning API. When enabled, SeaweedFS will store multiple versions of an object in the same bucket, providing data protection against accidental deletion or modification.
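For example, enabling versioning on a bucket with the AWS CLI (the endpoint and bucket name are illustrative):

aws --endpoint-url http://127.0.0.1:8333 s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled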
Key features supported:
- Enable/suspend versioning on buckets
- List all versions of objects
- Get, copy, and delete specific versions
- Delete markers for soft deletion
For detailed documentation and examples, see Amazon S3 API#s3-object-versioning.
How does versioning affect storage usage?
When versioning is enabled, each uploaded object creates a new version instead of overwriting the existing one. This means:
- Storage usage will increase as you accumulate versions
- All versions are preserved until explicitly deleted
- Delete operations create delete markers (soft delete) rather than immediately removing data
To manage storage growth, you should:
- Monitor storage usage regularly
- Implement lifecycle policies to automatically clean up old versions
- Use version-specific deletions for permanent removal when needed (see the example below)
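For example, listing versions and permanently deleting a specific one with the AWS CLI (a sketch; the bucket, key, and version id are placeholders):

aws --endpoint-url http://127.0.0.1:8333 s3api list-object-versions --bucket my-bucket --prefix my-object
aws --endpoint-url http://127.0.0.1:8333 s3api delete-object --bucket my-bucket --key my-object --version-id <versionId>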
Does SeaweedFS support encrypted range requests?
Yes. Range requests work with encrypted objects across all SSE modes (see the example after this list):
- SSE-KMS: Supported
- SSE-C: Supported
- SSE-S3: Supported
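For example, a ranged read of an SSE-KMS or SSE-S3 object needs no extra headers with the AWS CLI (a sketch; names are placeholders). SSE-C objects additionally require the same customer-key headers used at upload.

aws --endpoint-url http://127.0.0.1:8333 s3api get-object --bucket my-bucket --key my-object --range bytes=0-1023 part.bin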
Does SeaweedFS support bucket default encryption?
Yes. You can set a bucket-level default encryption policy using the standard S3 bucket encryption API. Uploads without explicit encryption headers will follow the bucket policy. This applies to SSE-KMS and SSE-S3.
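For example, setting a bucket default of SSE-S3 with the AWS CLI (the endpoint and bucket name are illustrative):

aws --endpoint-url http://127.0.0.1:8333 s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'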
For setup guides, see Server-Side-Encryption.
Does SeaweedFS support S3 Object Lock?
Yes! SeaweedFS provides comprehensive support for S3 Object Lock features, including:
Object Lock Features
- Governance Mode: Objects can be deleted/modified by users with the s3:BypassGovernanceRetention permission
- Compliance Mode: Objects cannot be deleted/modified by any user until retention expires
- Legal Hold: Objects cannot be deleted/modified until legal hold is explicitly removed
Supported APIs
- GetObjectLockConfiguration / PutObjectLockConfiguration (bucket-level)
- GetObjectRetention / PutObjectRetention (object-level)
- GetObjectLegalHold / PutObjectLegalHold (object-level)
- Governance bypass via the x-amz-bypass-governance-retention header
Requirements
- Object Lock must be enabled when creating the bucket (cannot be added later), as shown in the example below
- Versioning is automatically enabled and required for Object Lock
- Compatible with standard AWS S3 tools and SDKs
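For example, a sketch of creating an Object Lock-enabled bucket and applying a Governance-mode retention with the AWS CLI (the endpoint, names, and date are placeholders):

aws --endpoint-url http://127.0.0.1:8333 s3api create-bucket --bucket locked-bucket --object-lock-enabled-for-bucket
aws --endpoint-url http://127.0.0.1:8333 s3 cp ./report.pdf s3://locked-bucket/report.pdf
aws --endpoint-url http://127.0.0.1:8333 s3api put-object-retention --bucket locked-bucket --key report.pdf --retention '{"Mode":"GOVERNANCE","RetainUntilDate":"2030-01-01T00:00:00Z"}'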
For complete documentation, examples, and best practices, see S3 Object Lock and Retention.
What's the difference between Governance and Compliance modes?
Governance Mode:
- Designed for internal governance and compliance requirements
- Can be bypassed by users with proper permissions (s3:BypassGovernanceRetention)
- Admin users can always bypass governance retention
- Suitable for testing and development environments
Compliance Mode:
- Designed for regulatory compliance (SEC, FINRA, etc.)
- Cannot be bypassed by any user, including root/admin
- Provides the highest level of data protection
- Suitable for production environments with strict compliance requirements
Both modes prevent accidental deletion and provide audit trails for compliance purposes.
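For example, a user with the s3:BypassGovernanceRetention permission can remove a governance-protected version by passing the bypass flag (a sketch; the bucket, key, and version id are placeholders):

aws --endpoint-url http://127.0.0.1:8333 s3api delete-object --bucket locked-bucket --key report.pdf --version-id <versionId> --bypass-governance-retention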
S3 authentication fails when using reverse proxy
Symptom
When accessing SeaweedFS S3 API through a reverse proxy, you might encounter signature verification errors such as:
- SignatureDoesNotMatch errors
- Authentication failures for presigned URLs
- Inconsistent behavior between direct access and proxied access
Common Causes and Solutions
1. Missing X-Forwarded-Host header
The reverse proxy must set the X-Forwarded-Host header to preserve the original host information for signature calculation.
proxy_set_header X-Forwarded-Host $host;
2. URL path prefix stripping without X-Forwarded-Prefix
If your reverse proxy strips URL prefixes (e.g., /s3/bucket/object → /bucket/object), you must set the X-Forwarded-Prefix header:
# For /s3/ subpath
location /s3/ {
proxy_set_header X-Forwarded-Prefix /s3;
rewrite ^/s3/(.*) /$1 break;
proxy_pass http://seaweedfs;
}
3. Request buffering enabled
Nginx request buffering can interfere with chunked transfer encoding and signature verification:
proxy_request_buffering off;
proxy_buffering off;
4. Missing or incorrect forwarded headers
Ensure all necessary headers are forwarded:
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $remote_addr;
Note: SeaweedFS automatically combines X-Forwarded-Host and X-Forwarded-Port for signature verification, omitting standard ports (80 for HTTP, 443 for HTTPS).
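Putting these pieces together, a minimal nginx configuration might look like the following sketch (assuming the SeaweedFS S3 gateway listens on 127.0.0.1:8333; names are placeholders, and TLS directives are omitted for brevity):

upstream seaweedfs_s3 {
    server 127.0.0.1:8333;
}

server {
    listen 80;    # add 'listen 443 ssl;' plus certificate directives for HTTPS
    server_name yourdomain.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_request_buffering off;
        proxy_buffering off;
        proxy_pass http://seaweedfs_s3;
    }
}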
Testing Your Configuration
You can test your reverse proxy configuration using AWS CLI:
# Test basic bucket listing
aws s3 ls --endpoint-url https://yourdomain.com/s3
# Test presigned URL generation and access
aws s3 presign s3://test-bucket/test-object --endpoint-url https://yourdomain.com/s3
For detailed configuration examples, see the S3-Nginx-Proxy documentation.