Table of Contents
- How it works
- Supported APIs
- User Metadata (Custom Attributes)
- Feature differences
- Server-Side Encryption
- S3 Conditional Operations
- S3 Authentication
- S3 Object Versioning
- S3 Cross-Origin Resource Sharing (CORS)
- Reverse Proxy Support
- Authentication with Filer
To be compatible with the Amazon S3 API, a separate "weed s3" command is provided. Compared to operating on files in the cloud, it provides much faster access when reading or writing files.
How it works
weed s3 will start a stateless gateway server to bridge the Amazon S3 API to SeaweedFS Filer.
For convenience, weed server -s3 will start a master, a volume server, a filer, and the S3 gateway in one process. Likewise, weed filer -s3 will start a filer together with the S3 gateway.
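For example, a minimal local setup (the filer address and S3 port shown are the defaults and may differ in your deployment):
# All-in-one: master, volume server, filer, and S3 gateway (S3 listens on port 8333 by default)
weed server -s3
# Or run the S3 gateway on its own, pointing at an existing filer
weed s3 -filer=localhost:8888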
Each bucket is stored in one collection, and mapped to folder /buckets/<bucket_name> by default.
A bucket can be deleted efficiently by deleting the whole collection.
Since version 3.51, the collection is named filerGroup_bucketname, or just bucketname if no filer group is set.
Support Many Buckets
Each bucket has its own collection. Usually one collection uses 7 volumes, where each volume is 30GB by default. So if you want to create multiple buckets, you may run out of volumes very quickly unless you have a large disk.
Try to keep the volume size low. For example,
weed master -volumeSizeLimitMB=1024
In addition, you can configure per-bucket storage in weed shell:
fs.configure -locationPrefix=/buckets/ -volumeGrowthCount=1 -apply
This will add 1 physical volume when the existing volumes are full. If you use replication, you will need to grow by more volumes at a time, so that the growth count is a multiple of the number of replicas:
fs.configure -locationPrefix=/buckets/ -replication=001 -volumeGrowthCount=2 -apply
See https://github.com/seaweedfs/seaweedfs/wiki/Path-Specific-Configuration
Supported APIs
Currently, the following APIs are supported.
Some additional endpoints may be (partially) supported but are not in this list.
To be sure, check the handler functions defined in the files weed/s3api/s3api_*_handlers_*.go.
// Object operations
* PutObject
* GetObject
* HeadObject
* CopyObject
* DeleteObject
* ListObjectsV2
* ListObjectsV1
* DeleteMultipleObjects
* PostPolicy
// Object Tagging
* GetObjectTagging
* PutObjectTagging
* DeleteObjectTagging
// User Metadata
* PutObject (with x-amz-meta-* headers)
* GetObject / HeadObject (returns x-amz-meta-* headers)
* CopyObject (with metadata directive)
// Server-Side Encryption
* PutObject (with SSE-KMS, SSE-C, SSE-S3)
* GetObject (with automatic decryption)
* HeadObject (with encryption metadata)
* CopyObject (with encryption/decryption)
* Multipart uploads with encryption
* Bucket default encryption
// Conditional Operations
* All object operations support conditional headers:
- If-Match
- If-None-Match
- If-Modified-Since
- If-Unmodified-Since
// Bucket operations
* PutBucket
* DeleteBucket
* HeadBucket
* ListBuckets
* PutBucketLifecycleConfiguration (partially, only for TTL)
* GetBucketLifecycleConfiguration (partially, only for TTL)
* DeleteBucketLifecycleConfiguration (partially, only for TTL)
* GetBucketCors
* PutBucketCors
* DeleteBucketCors
// Multipart upload operations
* NewMultipartUpload
* CompleteMultipartUpload
* AbortMultipartUpload
* ListMultipartUploads
* PutObjectPart
* CopyObjectPart
* ListObjectParts
// Object Versioning
* PutBucketVersioning
* GetBucketVersioning
* ListObjectVersions
* GetObject (with version ID)
* PutObject (with versioning)
* DeleteObject (with version ID)
* CopyObject (with version ID)
* RestoreObject (partial)
// Object Lock and Retention
* GetObjectLockConfiguration
* PutObjectLockConfiguration
* GetObjectRetention
* PutObjectRetention
* GetObjectLegalHold
* PutObjectLegalHold
* BypassGovernanceRetention (via x-amz-bypass-governance-retention header)
// Bucket Policies
* PutBucketPolicy
* GetBucketPolicy
* DeleteBucketPolicy
* Supported Conditions:
- s3:ExistingObjectTag
Not included:
User Metadata (Custom Attributes)
SeaweedFS supports S3 user-defined metadata via x-amz-meta-* headers. This allows you to attach custom key-value pairs to objects.
Setting User Metadata
# Using AWS CLI
aws s3 cp myfile.txt s3://mybucket/myfile.txt \
--metadata "expire=2025-12-01,author=john,project=demo"
# Using curl
curl -X PUT \
-H "x-amz-meta-expire: 2025-12-01" \
-H "x-amz-meta-author: john" \
--data-binary @myfile.txt \
"http://localhost:8333/mybucket/myfile.txt"
Reading User Metadata
User metadata is returned in response headers when you GET or HEAD an object:
# Using AWS CLI
aws s3api head-object --bucket mybucket --key myfile.txt
# Using curl
curl -I "http://localhost:8333/mybucket/myfile.txt"
# Response includes:
# x-amz-meta-expire: 2025-12-01
# x-amz-meta-author: john
Updating User Metadata
To update metadata, use CopyObject with x-amz-metadata-directive: REPLACE:
aws s3 cp s3://mybucket/myfile.txt s3://mybucket/myfile.txt \
--metadata "expire=2026-01-01" \
--metadata-directive REPLACE
Limits
Metadata keys are case-insensitive and stored in canonical format (e.g., x-amz-meta-My-Key becomes X-Amz-Meta-My-Key).
Feature differences
| Feature | SeaweedFS | Amazon S3 |
|---|---|---|
| Multiple byte ranges in a single read (example below) | Yes | No |
| DeleteObject deletes a folder | Yes | No |
| Same path for both a file and a folder | No | Yes |
| Allows delimiters other than "/" | No | Yes |
| Object Versioning | Yes | Yes |
| MFA Delete for versioning | No | Yes |
| Server-Side Encryption (SSE-KMS) | Yes | Yes |
| Server-Side Encryption (SSE-C) | Yes | Yes |
| Server-Side Encryption (SSE-S3) | Yes | Yes |
| KMS Providers (Multi-cloud) | Yes | No |
| Conditional Headers (All operations) | Yes | Yes |
| Range requests with SSE-KMS | Yes | Yes |
| Range requests with SSE-C | Yes | Yes |
| Range requests with SSE-S3 | Yes | Yes |
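As a quick illustration of the multiple-byte-ranges row above, a single GET can fetch several ranges at once (a sketch; mybucket/file.txt is a placeholder):
# Request two byte ranges in one GET; the response body is multipart/byteranges
curl -H "Range: bytes=0-99,200-299" "http://localhost:8333/mybucket/file.txt"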
Empty folders
SeaweedFS has real directories, while AWS S3 only has objects with "fake" directories derived from key prefixes. In AWS S3, when the last file in a directory is deleted, the directory disappears as well.
To be consistent with AWS S3, SeaweedFS automatically cleans up empty folders asynchronously after file deletions.
Server-Side Encryption
Need encryption at rest? SeaweedFS speaks the same SSE dialects as Amazon S3, so your existing tools and SDKs just work. You can choose from three options:
- SSE-KMS: Use an external KMS (AWS KMS, Google Cloud KMS, OpenBao/Vault)
- SSE-C: Bring your own keys for maximum control
- SSE-S3: Let SeaweedFS manage keys (explicit AES256 header or bucket default encryption)
All encryption types support:
- Automatic encryption/decryption
- Bucket default encryption
- Multipart upload encryption
- Cross-encryption copy operations
- AWS S3 compatibility
For detailed setup guides and examples, see the dedicated Server-Side Encryption page.
Quick Examples
# SSE-KMS (Key Management Service)
aws s3 cp file.txt s3://mybucket/kms-encrypted.txt --server-side-encryption aws:kms --ssekms-key-id alias/my-key
# SSE-C (Customer-provided keys)
aws s3 cp file.txt s3://mybucket/customer-encrypted.txt --sse-c AES256 --sse-c-key fileb://my-key.bin
# SSE-S3 (Server-managed)
aws s3 cp file.txt s3://mybucket/server-encrypted.txt --server-side-encryption AES256
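Bucket default encryption can be configured with the standard AWS CLI call (a sketch; the endpoint and bucket name are placeholders):
# Make SSE-S3 the default for all new objects in the bucket
aws --endpoint-url http://localhost:8333 s3api put-bucket-encryption --bucket mybucket \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'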
S3 Conditional Operations
SeaweedFS supports AWS S3-compatible conditional headers for safe concurrent operations and efficient caching:
- If-Match: Execute only if ETag matches (optimistic locking)
- If-None-Match: Execute only if ETag doesn't match (prevent overwrites, caching)
- If-Modified-Since: Execute only if modified after date (conditional downloads)
- If-Unmodified-Since: Execute only if not modified after date (safe updates)
Conditional operations enable:
- Optimistic concurrency control: Prevent lost updates
- Efficient caching: Reduce bandwidth with 304 Not Modified
- Atomic operations: Operations only proceed when safe
- Data integrity: Prevent accidental overwrites
For detailed usage patterns and examples, see S3 Conditional Operations.
Quick Examples
# Get current ETag
ETAG=$(aws s3api head-object --bucket mybucket --key file.txt --query ETag --output text)
# Conditional update (optimistic locking)
curl -X PUT -H "If-Match: $ETAG" -d "updated content" "http://localhost:8333/mybucket/file.txt"
# Conditional download (caching)
curl -H "If-None-Match: $ETAG" "http://localhost:8333/mybucket/file.txt"
# Returns 304 Not Modified if unchanged
# Prevent overwrite (atomic create)
curl -X PUT -H "If-None-Match: *" -d "new content" "http://localhost:8333/mybucket/newfile.txt"
S3 Authentication
For a complete overview of S3 configuration options, see S3 Configuration.
By default, weed s3 does not check credentials: any access key and secret key are accepted. To enable credential-based access, you can choose from:
| Method | Option | Documentation |
|---|---|---|
| Static Configuration | -s3.config=config.json | S3 Credentials |
| Dynamic Configuration | s3.configure in weed shell | See below |
| Admin UI | Web interface | Admin UI |
| OIDC/JWT (Web Identity) | -s3.iam.config=iam.json | OIDC Integration |
Dynamic Configuration
Example command:
s3.configure -access_key=any -secret_key=any -buckets=bucket1 -user=me -actions=Read,Write,List,Tagging,Admin -apply
Output of the s3.configure command above:
{
"identities": [
{
"name": "me",
"credentials": [
{
"accessKey": "any",
"secretKey": "any"
}
],
"actions": [
"Read:bucket1",
"Write:bucket1",
"List:bucket1",
"Tagging:bucket1",
"Admin:bucket1"
]
}
]
}
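The new credentials can be verified with the AWS CLI (a sketch; the endpoint assumes the S3 gateway runs on localhost:8333):
# Use the access/secret keys configured above
export AWS_ACCESS_KEY_ID=any
export AWS_SECRET_ACCESS_KEY=any
aws --endpoint-url http://localhost:8333 s3 ls s3://bucket1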
Static Configuration
To enable credential-based access, create a config.json file similar to the example below, and specify it via weed s3 -config=config.json. The config file can be re-read without restarting the main process by sending a HUP signal, e.g. pkill -HUP weed.
You just need to create a user with all "Admin", "Read", "Write", "List", "Tagging" actions. You can create as many users as needed. Each user can have multiple credentials.
- The "Admin" action is needed to list, create, and delete buckets.
- The "Write" action allows uploading files to all buckets.
- The "WriteAcp" action allows writing the access control list (ACL) from all buckets.
- The "Read" action allows reading files from all buckets.
- The "ReadAcp" action allows reading the access control list (ACL) from all buckets.
- The "List" action allows listing files from all buckets.
- The "Tagging" action allows tagging files from all buckets.
- The "Write:<bucket_name>" action allows uploading files within a bucket, e.g., "Write:bucket1".
- The "WriteAcp:<bucket_name>" action allows writing ACL within a bucket, e.g., "Write:bucket1".
- The "Read:<bucket_name>" action allows reading files within a bucket, e.g., "Read:bucket2".
- The "ReadAcp:<bucket_name>" action allows reading ACL within a bucket, e.g., "Read:bucket2".
- The "List:<bucket_name>" action allows listing files within a bucket, e.g., "List:bucket2".
- The "Tagging:<bucket_name>" action allows tagging files within a bucket, e.g., "Tagging:bucket2".
Public access (with anonymous download)
For public access, you can configure an identity with name "anonymous", usually with just "Read" action, or access to specific buckets.
{
"identities": [
{
"name": "anonymous",
"actions": [
"Read"
]
},
{
"name": "some_name",
"credentials": [
{
"accessKey": "some_access_key1",
"secretKey": "some_secret_key1"
}
],
"actions": [
"Admin",
"Read",
"ReadAcp",
"List",
"Tagging",
"Write",
"WriteAcp"
]
},
{
"name": "some_read_only_user",
"credentials": [
{
"accessKey": "some_access_key2",
"secretKey": "some_secret_key2"
}
],
"actions": [
"Read",
"List"
]
},
{
"name": "some_normal_user",
"credentials": [
{
"accessKey": "some_access_key3",
"secretKey": "some_secret_key3"
}
],
"actions": [
"Read",
"List",
"Tagging",
"Write"
]
},
{
"name": "user_limited_by_bucket",
"credentials": [
{
"accessKey": "some_access_key4",
"secretKey": "some_secret_key4"
}
],
"actions": [
"Read:bucket1",
"Read:bucket2",
"Read:bucket3",
"Write:bucket1"
]
}
]
}
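With the "anonymous" identity above, unauthenticated downloads work directly (a sketch; the bucket and key are placeholders):
# No credentials needed for read access
curl "http://localhost:8333/mybucket/myfile.txt"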
Actions with wildcard
Wildcards are partially supported for prefix matching. The following example actions are allowed:
Read
Read:bucket
Read:bucket_prefix*
Read:bucket/*
Read:bucket/a/b/*
Presigned URL
Presigned URLs are supported. See AWS-CLI-with-SeaweedFS#presigned-url for an example.
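A quick sketch with the AWS CLI (assumes configured credentials and the gateway on localhost:8333):
# Generate a download URL that is valid for one hour
aws --endpoint-url http://localhost:8333 s3 presign s3://mybucket/myfile.txt --expires-in 3600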
S3 Object Versioning
SeaweedFS supports S3 object versioning, which allows you to keep multiple variants of an object in the same bucket. This provides data protection against accidental deletion or modification.
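A quick sketch with the AWS CLI (the endpoint and bucket name are placeholders):
# Enable versioning on a bucket
aws --endpoint-url http://localhost:8333 s3api put-bucket-versioning --bucket mybucket --versioning-configuration Status=Enabled
# List all versions of the objects in the bucket
aws --endpoint-url http://localhost:8333 s3api list-object-versions --bucket mybucket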
For detailed information about object versioning, see the dedicated S3-Object-Versioning page.
S3 Cross-Origin Resource Sharing (CORS)
SeaweedFS supports S3-compatible Cross-Origin Resource Sharing (CORS) configuration, allowing web applications to make cross-origin requests to your S3 buckets. CORS is essential for web applications that need to access resources from different domains.
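A quick sketch with the AWS CLI (the origin, endpoint, and bucket name are placeholders):
# Allow GET and PUT requests from a specific web origin
aws --endpoint-url http://localhost:8333 s3api put-bucket-cors --bucket mybucket \
  --cors-configuration '{"CORSRules":[{"AllowedOrigins":["https://example.com"],"AllowedMethods":["GET","PUT"],"AllowedHeaders":["*"]}]}'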
For detailed information about CORS configuration, see the dedicated S3-CORS page.
Reverse Proxy Support
SeaweedFS S3 API supports deployment behind reverse proxies with full AWS Signature v4 authentication compatibility. This includes support for:
- X-Forwarded-Host: Preserves original host header for signature verification
- X-Forwarded-Port: Automatically combines with X-Forwarded-Host for non-standard ports
- X-Forwarded-Prefix: Handles URL path prefix stripping by reverse proxies
- Standard forwarded headers: X-Forwarded-For, X-Forwarded-Proto, etc.
Path Prefix Handling
When using reverse proxies that strip URL prefixes (e.g., /s3/, /api/s3/), SeaweedFS automatically handles signature verification for both the original prefixed path and the stripped path. This ensures seamless operation with:
- API gateways
- Multi-tenant deployments
- Subpath hosting scenarios
For detailed configuration examples and setup instructions, see the dedicated S3-Nginx-Proxy page.
Multiple S3 Nodes
If you need to set up multiple S3 nodes, you can simply start multiple S3 instances pointing at the same filer.
Usually you would also want multiple filers. The easiest way is to run a filer together with an S3 gateway in the same process:
weed filer -s3
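For example, two such nodes might be started as follows (a sketch; the master address and ports are assumptions for a local test):
# Node 1: filer on port 8888, S3 gateway on port 8333
weed filer -master=localhost:9333 -port=8888 -s3 -s3.port=8333
# Node 2: filer on port 8889, S3 gateway on port 8334
weed filer -master=localhost:9333 -port=8889 -s3 -s3.port=8334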
Authentication with Filer
You can use mTLS for the gRPC connection between the S3 gateway and the filer, as explained in Security-Configuration; this is controlled by the grpc.* configuration in security.toml.
Starting with version 2.84, it is also possible to authenticate the HTTP operations between the S3 gateway and the filer (especially uploading new files). This is configured by setting jwt.filer_signing.key and jwt.filer_signing.read.key in security.toml.
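A sketch of the relevant security.toml section (the key values are placeholders; the structure follows the stock security.toml template):
[jwt.filer_signing]
key = "some_write_signing_key"
expires_after_seconds = 10

[jwt.filer_signing.read]
key = "some_read_signing_key"
expires_after_seconds = 10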
With both configurations (gRPC and JWT) in place, the filer and the S3 gateway communicate in a fully authenticated fashion, and the filer will reject any unauthenticated communication.