Table of Contents
- S3 Credentials
- Authentication Methods
- 1. Configuration File (Highest Priority)
- 2. Filer Configuration (Medium Priority)
- 3. Admin UI (Web Interface)
- 4. Environment Variables (Fallback)
- Priority System
- Configuration Examples
- Credential Features
- Bucket-Specific Permissions
- Single Bucket Full Access
- Bucket-Specific Actions
- Multiple Bucket Access
- Wildcard Support
- Object-Level Permissions
- Configuration Methods
- Best Practices
- Troubleshooting
- Anonymous Access
- Configuration Reloading
- Static Configuration Files
- Filer-based Configuration
- Admin UI Configuration
- Environment Variables
- Verifying Configuration Reloads
- Troubleshooting
- Security Best Practices
S3 Credentials
SeaweedFS S3 API supports multiple authentication methods with a clear priority system. This page explains how to configure S3 credentials for your SeaweedFS setup.
Authentication Methods
1. Configuration File (Highest Priority)
Create a JSON configuration file and use the `-config` option:
```json
{
  "identities": [
    {
      "name": "admin_user",
      "credentials": [
        {
          "accessKey": "admin_access_key",
          "secretKey": "admin_secret_key"
        }
      ],
      "actions": ["Admin", "Read", "Write"]
    },
    {
      "name": "read_only_user",
      "credentials": [
        {
          "accessKey": "readonly_access_key",
          "secretKey": "readonly_secret_key"
        }
      ],
      "actions": ["Read"]
    }
  ]
}
```
Start the S3 server with the config file:
```bash
weed s3 -config=/path/to/s3.json -filer=localhost:8888
```
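Because a malformed config file prevents the identities from loading, it can help to sanity-check the JSON before starting the server. A minimal Python sketch (the field names follow the example above; `check_s3_config` is a hypothetical helper, not part of SeaweedFS):

```python
import json
import tempfile

def check_s3_config(path):
    """Lightweight sanity check for an s3.json identities file."""
    with open(path) as f:
        config = json.load(f)  # raises ValueError on malformed JSON
    for identity in config.get("identities", []):
        assert identity.get("name"), "every identity needs a name"
        for cred in identity.get("credentials", []):
            assert cred.get("accessKey") and cred.get("secretKey"), \
                f"identity {identity['name']} has an incomplete credential"
        assert identity.get("actions"), \
            f"identity {identity['name']} grants no actions"
    return config

# Example: validate a small config written to a temp file
sample = {
    "identities": [
        {"name": "admin_user",
         "credentials": [{"accessKey": "a", "secretKey": "s"}],
         "actions": ["Admin"]}
    ]
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
config = check_s3_config(f.name)
print(len(config["identities"]))  # → 1
```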
2. Filer Configuration (Medium Priority)
Store configuration in the filer using the credential manager. This allows dynamic configuration updates without restarting the S3 server.
3. Admin UI (Web Interface)
Use the SeaweedFS Admin UI to manage S3 credentials through a web interface:
```bash
# Start the admin interface (separate from filer)
weed admin -masters=localhost:9333

# Access the admin UI (default port 23646)
http://localhost:23646
```
Navigate to Object Store → Users (/object-store/users) to:
- Create Users: Add new S3 users with email and permissions
- Edit Permissions: Modify existing user access levels
- Manage Access Keys: Generate and delete access key pairs
- View User Details: Check user activity and current permissions
The Admin UI stores credentials in the filer using the same filer configuration method, so changes are automatically synchronized across all S3 servers connected to the same filer.
4. Environment Variables (Fallback)
Use AWS standard environment variables as a fallback when no other configuration is available:
```bash
export AWS_ACCESS_KEY_ID=your_access_key
export AWS_SECRET_ACCESS_KEY=your_secret_key
weed s3 -filer=localhost:8888
```
Important: Environment variables are only used when:
- No `-config` option is provided
- No configuration is available from the filer
- Both `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are set
Priority System
SeaweedFS uses the following priority order for S3 credentials:
1. Configuration File (if the `-config` option is provided)
2. Filer Configuration (if available and no config file)
3. Admin UI (web interface that stores in filer configuration)
4. Environment Variables (fallback only)
Higher-priority methods completely override lower-priority methods; there is no merging or supplementing.
Important: Admin UI and Filer Configuration both use the same underlying storage (filer), so they have the same effective priority. The Admin UI provides a user-friendly interface for managing what is stored as filer configuration.
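The override (no-merge) behavior described above can be illustrated with a small sketch. The function name and return shape here are hypothetical, chosen only to mirror the documented priority order, not SeaweedFS internals:

```python
import os

def resolve_credential_source(config_file=None, filer_config=None):
    """Pick the single identity source, mirroring the documented priority:
    config file > filer configuration (incl. Admin UI) > env fallback."""
    if config_file is not None:
        return ("config_file", config_file)   # highest priority, nothing merged
    if filer_config is not None:
        return ("filer", filer_config)        # Admin UI changes land here too
    if os.environ.get("AWS_ACCESS_KEY_ID") and os.environ.get("AWS_SECRET_ACCESS_KEY"):
        return ("env", {"accessKey": os.environ["AWS_ACCESS_KEY_ID"],
                        "secretKey": os.environ["AWS_SECRET_ACCESS_KEY"]})
    return ("anonymous", None)                # no auth configured at all

# A config file wins even when filer config and env vars are both present
os.environ["AWS_ACCESS_KEY_ID"] = "k"
os.environ["AWS_SECRET_ACCESS_KEY"] = "s"
source, _ = resolve_credential_source(config_file={"identities": []},
                                      filer_config={"identities": []})
print(source)  # → config_file
```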
Configuration Examples
Production Setup
```bash
# Use dedicated configuration file
weed s3 -config=/etc/seaweedfs/s3.json -filer=filer1:8888,filer2:8888
```
Development Setup
```bash
# Use environment variables for quick setup
export AWS_ACCESS_KEY_ID=dev_access_key
export AWS_SECRET_ACCESS_KEY=dev_secret_key
weed s3 -filer=localhost:8888
```
Docker Compose
```yaml
version: '3.9'
services:
  s3:
    image: chrislusf/seaweedfs:latest
    ports:
      - 8333:8333
    environment:
      AWS_ACCESS_KEY_ID: s3admin
      AWS_SECRET_ACCESS_KEY: s3secret
    entrypoint: weed
    command: s3 -filer=filer:8888
    depends_on:
      - filer
```
Credential Features
Actions
Identities can have different permission levels:
- `Admin`: Full access to all S3 operations
- `Read`: Read-only access
- `Write`: Read and write access
- `Read_ACP`: Read access control permissions
- `Write_ACP`: Write access control permissions
Multiple Credentials
Each identity can have multiple access key/secret key pairs:
```json
{
  "name": "multi_key_user",
  "credentials": [
    {
      "accessKey": "key1",
      "secretKey": "secret1"
    },
    {
      "accessKey": "key2",
      "secretKey": "secret2"
    }
  ],
  "actions": ["Read", "Write"]
}
```
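Every access key of an identity maps back to the same name and set of actions. A short illustrative sketch of such a lookup (hypothetical structure, not SeaweedFS code):

```python
identity = {
    "name": "multi_key_user",
    "credentials": [
        {"accessKey": "key1", "secretKey": "secret1"},
        {"accessKey": "key2", "secretKey": "secret2"},
    ],
    "actions": ["Read", "Write"],
}

# Index every access key to its owning identity for O(1) lookup
by_access_key = {c["accessKey"]: identity for c in identity["credentials"]}

print(by_access_key["key1"]["actions"])  # → ['Read', 'Write']
print(by_access_key["key2"]["name"])     # → multi_key_user
```

Either key authenticates as the same identity, which is useful for rotating keys without downtime: add the new key, migrate clients, then delete the old one.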
Account Management
Identities can be associated with accounts for better organization and cross-account access control.
Bucket-Specific Permissions
SeaweedFS supports restricting user access to specific buckets using bucket-scoped actions. This allows you to create users who have full access to one bucket but no access to other buckets.
Single Bucket Full Access
To create a user with full access to only one specific bucket, use bucket-scoped actions:
```json
{
  "identities": [
    {
      "name": "bucket1_user",
      "credentials": [
        {
          "accessKey": "bucket1_access_key",
          "secretKey": "bucket1_secret_key"
        }
      ],
      "actions": [
        "Read:mybucket",
        "Write:mybucket",
        "List:mybucket",
        "Tagging:mybucket",
        "Admin:mybucket"
      ]
    }
  ]
}
```
This user can:
- ✅ Read, write, list, and tag objects in `mybucket`
- ✅ Create and delete objects in `mybucket`
- ✅ Manage bucket settings for `mybucket`
- ❌ Access any other buckets
- ❌ Create new buckets (requires the global `Admin` action)
Bucket-Specific Actions
Actions can be scoped to specific buckets using the format `Action:BucketName`:
| Action Format | Description | Example |
|---|---|---|
| `Read:bucket1` | Read access to bucket1 only | Get objects from bucket1 |
| `Write:bucket1` | Write access to bucket1 only | Put/delete objects in bucket1 |
| `List:bucket1` | List access to bucket1 only | List objects in bucket1 |
| `Admin:bucket1` | Admin access to bucket1 only | Bucket management for bucket1 |
| `Tagging:bucket1` | Tagging access to bucket1 only | Manage object tags in bucket1 |
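The `Action:BucketName` format can be checked mechanically: split on the first colon, and treat a grant with no scope as global. A sketch of how such a check might look (`is_allowed` is a hypothetical helper, not SeaweedFS's actual authorizer):

```python
def is_allowed(granted_actions, action, bucket):
    """Check a request against grants such as 'Read:bucket1'.

    A bare action (e.g. 'Read') is global; 'Read:bucket1' is bucket-scoped."""
    for grant in granted_actions:
        name, _, scope = grant.partition(":")
        if name != action:
            continue
        if scope == "" or scope == bucket:  # global grant, or exact bucket match
            return True
    return False

grants = ["Read:bucket1", "Write:bucket1", "List:bucket1"]
print(is_allowed(grants, "Read", "bucket1"))   # → True
print(is_allowed(grants, "Read", "bucket2"))   # → False
print(is_allowed(grants, "Admin", "bucket1"))  # → False
```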
Multiple Bucket Access
Users can have access to multiple specific buckets:
```json
{
  "name": "multi_bucket_user",
  "credentials": [{"accessKey": "key", "secretKey": "secret"}],
  "actions": [
    "Read:bucket1",
    "Write:bucket1",
    "List:bucket1",
    "Read:bucket2",
    "List:bucket2"
  ]
}
```
This user has:
- Full read/write access to `bucket1`
- Read-only access to `bucket2`
- No access to any other buckets
Wildcard Support
SeaweedFS supports wildcard patterns for flexible bucket access:
```json
{
  "name": "prefix_user",
  "credentials": [{"accessKey": "key", "secretKey": "secret"}],
  "actions": [
    "Read:user-*",
    "Write:user-*",
    "List:user-*"
  ]
}
```
This user can access all buckets starting with `user-` (like `user-data`, `user-logs`, etc.).
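Wildcard matching of this kind behaves like shell-style globbing, which Python's standard library can mimic for experimentation (this only illustrates the pattern semantics; SeaweedFS's actual matcher may differ):

```python
from fnmatch import fnmatchcase

def bucket_matches(pattern, bucket):
    """Shell-style wildcard match, e.g. 'user-*' matches 'user-data'."""
    return fnmatchcase(bucket, pattern)

print(bucket_matches("user-*", "user-data"))  # → True
print(bucket_matches("user-*", "user-logs"))  # → True
print(bucket_matches("user-*", "shared"))     # → False
```

Testing candidate patterns like this before granting them helps avoid wildcards that are broader than intended.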
Object-Level Permissions
You can restrict access to specific paths within a bucket:
```json
{
  "name": "path_limited_user",
  "credentials": [{"accessKey": "key", "secretKey": "secret"}],
  "actions": [
    "Read:mybucket/uploads/*",
    "Write:mybucket/uploads/*",
    "List:mybucket"
  ]
}
```
This user can:
- Only read/write objects under the `mybucket/uploads/` path
- List the bucket to see the directory structure
- Cannot access objects in other paths within the bucket
Configuration Methods
Bucket-specific permissions work with all configuration methods:
Dynamic Configuration (weed shell)
```bash
# Create user with access to specific bucket
s3.configure -access_key=bucket1user -secret_key=bucket1pass -buckets=mybucket -user=bucket1_user -actions=Read,Write,List,Tagging,Admin -apply
```
Static Configuration File
Use the JSON examples shown above in your configuration file.
Admin UI
- Navigate to Object Store → Users
- Create a new user
- In the permissions section, specify bucket-scoped actions like `Read:mybucket`
Environment Variables
Environment variables create global admin access and cannot be scoped to specific buckets.
Best Practices
- Principle of Least Privilege: Grant only the minimum permissions needed
- Use Specific Bucket Names: Avoid wildcards unless necessary for flexibility
- Separate Users for Different Buckets: Create dedicated users for each bucket or application
- Test Permissions: Verify users can only access intended buckets
- Monitor Access: Use audit logs to track bucket access patterns
Troubleshooting
User can access other buckets:
- Verify no global actions (`Read`, `Write`, `Admin`) are granted
- Check for wildcard patterns that might be too broad
- Ensure bucket names in actions match exactly
User cannot access intended bucket:
- Verify bucket name spelling in actions
- Check that all required actions are granted (e.g., `List` for listing objects)
- Test with the AWS CLI: `aws --endpoint-url=http://localhost:8333 s3 ls s3://mybucket`
Anonymous Access
By default, if no credentials are configured, SeaweedFS allows anonymous access to all S3 operations. To enable authentication:
- Configure at least one identity using any of the methods above
- Authentication will be automatically enabled
- All requests will require valid credentials
Configuration Reloading
SeaweedFS supports different reloading mechanisms depending on which authentication method you use:
| Configuration Method | Auto Reload | Manual Reload | Live Reload |
|---|---|---|---|
| Configuration File (`-config` option) | ❌ No | ✅ SIGHUP | ❌ No |
| Filer Configuration (credential manager) | ✅ Yes | ✅ Yes | ✅ Yes |
| Admin UI (web interface) | ✅ Yes | ✅ Yes | ✅ Yes |
| Environment Variables | ❌ No | ❌ No | ❌ No |
Static Configuration Files
When using the `-config` option, you can reload the configuration by sending a `SIGHUP` signal:
```bash
# Find the S3 server process ID
ps aux | grep "weed s3"

# Send SIGHUP signal to reload configuration
kill -HUP <seaweedfs_s3_pid>

# Or if using systemd
systemctl reload seaweedfs-s3
```
The server will log the reload:
```
I0723 12:34:56.789 s3api_server.go:98] Loaded 3 identities from config file /etc/seaweedfs/s3.json
```
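Under the hood this is the standard SIGHUP reload pattern used by long-running daemons: install a handler, re-read the config when the signal arrives. A generic Python sketch of the pattern (SeaweedFS itself is written in Go; `on_sighup` is illustrative only):

```python
import os
import signal

reload_count = 0

def on_sighup(signum, frame):
    """Re-read the configuration when SIGHUP arrives (Unix only)."""
    global reload_count
    reload_count += 1  # a real server would re-parse s3.json here

signal.signal(signal.SIGHUP, on_sighup)

# Simulate `kill -HUP <pid>` by signalling our own process
os.kill(os.getpid(), signal.SIGHUP)
print(reload_count)  # → 1
```

The key property, matching the table above, is that the process keeps serving requests; only the identity list is refreshed.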
Filer-based Configuration
Filer-based configurations automatically reload when changes are detected:
```bash
# Changes are automatically applied
weed shell
> s3.configure -user=newuser -access_key=key123 -secret_key=secret123 -actions=Admin -apply
```
The server will automatically detect and apply changes:
```
I0723 12:35:12.456 auth_credentials_subscribe.go:55] updated /etc/seaweedfs/iam/identity.json
```
Admin UI Configuration
Admin UI changes are automatically applied in real-time since they use the same filer-based storage:
- Access Admin UI: Navigate to `http://localhost:23646`
- Go to Users: Click Object Store → Users
- Make Changes: Create, edit, or delete users through the web interface
- Automatic Sync: Changes are immediately applied to all connected S3 servers
The server will show the same automatic detection messages as filer-based configuration since they share the same underlying storage mechanism.
Environment Variables
Environment variable changes require a complete restart of the S3 server:
```bash
# Update environment variables
export AWS_ACCESS_KEY_ID=new_access_key
export AWS_SECRET_ACCESS_KEY=new_secret_key

# Restart the S3 server
systemctl restart seaweedfs-s3
```
Verifying Configuration Reloads
Monitor the logs to verify configuration updates:
```bash
# Watch for reload messages
tail -f /var/log/seaweedfs/s3.log | grep -E "updated|Loaded.*identities"

# Check current configuration via shell
weed shell
> s3.configure
```
Troubleshooting
Common Issues
Environment variables not working:
- Check that no `-config` option is provided
- Verify no configuration exists in the filer
- Ensure both `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` are set
Configuration file not loading:
- Verify the file path is correct
- Check JSON syntax is valid
- Ensure the file is readable by the SeaweedFS process
Invalid credentials error:
- Verify access key and secret key are correct
- Check that the identity has the required actions/permissions
- Ensure the credential store is properly configured
Debug Commands
Check current configuration:
```bash
# View current identities (if using filer store)
weed shell
> s3.configure -list
```
Test credentials:
```bash
# Test with AWS CLI
aws --endpoint-url=http://localhost:8333 s3 ls
```
Security Best Practices
- Use Configuration Files in Production: Environment variables are visible in process lists
- Rotate Credentials Regularly: Update access keys and secret keys periodically
- Principle of Least Privilege: Grant only the minimum required permissions
- Secure Storage: Store configuration files with appropriate file permissions
- Monitor Access: Enable audit logging to track S3 API usage