S3 Rate Limiting
chrislusf edited this page 2025-12-10 22:23:26 -08:00
SeaweedFS provides rate limiting features to control resource usage and prevent overload on the S3 API server. There are two mechanisms available:
- Concurrent Upload Limiting - Command-line flags for simple upload throttling
- Circuit Breaker - Advanced per-action and per-bucket rate limiting
Concurrent Upload Limiting
These command-line flags control the total concurrent upload capacity when starting weed s3:
| Flag | Default | Description |
|---|---|---|
| `-concurrentUploadLimitMB` | 128 | Limit total concurrent upload data size in MB |
| `-concurrentFileUploadLimit` | 0 | Limit number of concurrent file uploads (0 = unlimited) |
Example
# Limit concurrent uploads to 256MB total data and max 100 files
weed s3 -filer=localhost:8888 -concurrentUploadLimitMB=256 -concurrentFileUploadLimit=100
Behavior
- These limits apply to write operations (PUT/POST)
- When limits are exceeded, new requests wait until capacity is available (backpressure)
- Requests are not rejected; they queue until the in-flight data/count drops below the limit
- Metrics are exposed via Prometheus: `seaweedfs_s3_inflight_upload_count` and `seaweedfs_s3_inflight_upload_bytes`
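The backpressure behavior described above can be pictured as a weighted gate on in-flight upload bytes: a request blocks until enough capacity frees up, rather than being rejected. Below is a conceptual Python sketch of that queuing semantics, not SeaweedFS's actual Go implementation; the class and method names are illustrative only.

```python
import threading

class UploadGate:
    """Conceptual sketch of byte-based backpressure: callers block
    until enough in-flight capacity is free, mirroring how
    -concurrentUploadLimitMB queues uploads instead of rejecting them."""

    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.inflight = 0
        self.cond = threading.Condition()

    def acquire(self, size):
        with self.cond:
            # Wait (rather than fail) while admitting this upload
            # would push in-flight bytes over the limit.
            while self.inflight + size > self.limit:
                self.cond.wait()
            self.inflight += size

    def release(self, size):
        with self.cond:
            self.inflight -= size
            self.cond.notify_all()

# A 256 MB gate admits a 200 MB upload immediately; a second
# 100 MB upload would block at acquire() until the first releases.
gate = UploadGate(256 * 1024 * 1024)
gate.acquire(200 * 1024 * 1024)
gate.release(200 * 1024 * 1024)
```

This is why the flags are a good fit for clients that prefer latency over errors: slow responses signal overload without surfacing failures.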
Circuit Breaker
The circuit breaker provides more granular rate limiting that can be configured globally or per-bucket, and by action type.
Configuration
The circuit breaker is configured via weed shell and stored in the filer at /etc/s3/circuit_breaker.json. Changes are picked up dynamically by the S3 server.
Available Actions
| Action | Description |
|---|---|
| Read | GET/HEAD object operations |
| Write | PUT/POST object operations |
| List | List bucket/objects operations |
| Tagging | Object tagging operations |
| Admin | Administrative operations |
Limit Types
| Type | Description |
|---|---|
| count | Maximum number of simultaneous requests |
| MB | Maximum total content size (in MB) of simultaneous requests |
Examples
Global Limits
# Limit globally: max 500 concurrent Read requests, max 200 concurrent Write requests
weed shell
> s3.circuitBreaker -global -type count -actions Read,Write -values 500,200 -apply
# Limit by content size: max 1024MB concurrent Write data globally
> s3.circuitBreaker -global -type MB -actions Write -values 1024 -apply
# Apply same limit to all actions
> s3.circuitBreaker -global -type count -actions Read,Write,List,Tagging,Admin -values 100 -apply
Per-Bucket Limits
# Limit specific buckets: max 200 concurrent Reads, 100 concurrent Writes
> s3.circuitBreaker -buckets mybucket,otherbucket -type count -actions Read,Write -values 200,100 -apply
# Different limits per action for a bucket
> s3.circuitBreaker -buckets critical-bucket -type count -actions Read -values 50 -apply
> s3.circuitBreaker -buckets critical-bucket -type count -actions Write -values 10 -apply
Managing Configuration
# View current configuration (without -apply)
> s3.circuitBreaker
# Disable circuit breaker for specific buckets
> s3.circuitBreaker -buckets mybucket -disable -apply
# Disable global circuit breaker
> s3.circuitBreaker -global -disable -apply
# Delete circuit breaker config for specific buckets
> s3.circuitBreaker -buckets mybucket -delete -apply
# Delete specific actions from global config
> s3.circuitBreaker -global -actions Read -type count -delete -apply
# Clear all circuit breaker configuration
> s3.circuitBreaker -delete -apply
Behavior
- When request-count limits are exceeded, the server returns HTTP 429 (Too Many Requests) with error code `SlowDown`
- When byte limits are exceeded, the server returns error code `RequestBytesExceed`
- Both global and bucket-specific limits are checked; a request must pass both
- Configuration changes are applied dynamically without restarting the S3 server
Configuration File Format
The circuit breaker configuration is stored as JSON. Here's an example of what the configuration looks like:
{
  "global": {
    "enabled": true,
    "actions": {
      "Read:count": 500,
      "Write:count": 200,
      "Write:bytes": 1073741824
    }
  },
  "buckets": {
    "mybucket": {
      "enabled": true,
      "actions": {
        "Read:count": 100,
        "Write:count": 50
      }
    }
  }
}
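The `Action:type` keys in this JSON are easy to unpack programmatically, which is handy when auditing a cluster's effective limits. A small Python sketch, assuming the merge rule stated above (a request must pass both global and bucket limits, so the effective ceiling is the minimum); the sample document mirrors the example configuration:

```python
import json

SAMPLE = """
{
  "global": {
    "enabled": true,
    "actions": {"Read:count": 500, "Write:count": 200, "Write:bytes": 1073741824}
  },
  "buckets": {
    "mybucket": {"enabled": true, "actions": {"Read:count": 100, "Write:count": 50}}
  }
}
"""

def effective_limits(config, bucket):
    """Merge global and per-bucket limits. Since a request must satisfy
    both scopes, the effective ceiling per (action, type) is the minimum."""
    merged = {}
    for scope in (config.get("global", {}),
                  config.get("buckets", {}).get(bucket, {})):
        if not scope.get("enabled"):
            continue
        for key, value in scope.get("actions", {}).items():
            action, limit_type = key.split(":")
            prev = merged.get((action, limit_type))
            merged[(action, limit_type)] = value if prev is None else min(prev, value)
    return merged

limits = effective_limits(json.loads(SAMPLE), "mybucket")
print(limits[("Read", "count")])   # 100: the bucket limit is tighter than the global 500
```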
Choosing the Right Approach
| Use Case | Recommended Approach |
|---|---|
| Simple upload throttling | -concurrentUploadLimitMB and -concurrentFileUploadLimit flags |
| Per-bucket limits | Circuit breaker with -buckets |
| Different limits per action type | Circuit breaker with -actions |
| Dynamic configuration changes | Circuit breaker (no restart needed) |
| Reject requests when overloaded | Circuit breaker (returns 429) |
| Queue requests when overloaded | Concurrent upload flags (backpressure) |
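Because the circuit breaker rejects with HTTP 429 instead of queuing, clients should retry with backoff. A minimal Python sketch of that pattern; `do_request` is a hypothetical stand-in for any S3 call that reports its HTTP status:

```python
import time

def retry_on_slowdown(do_request, max_attempts=5, base_delay=0.1):
    """Retry a request that may be rejected with HTTP 429 (SlowDown),
    backing off exponentially between attempts."""
    for attempt in range(max_attempts):
        status, body = do_request()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))
    return status, body

# Simulated call: rejected twice by the circuit breaker, then accepted.
responses = iter([(429, "SlowDown"), (429, "SlowDown"), (200, "OK")])
status, body = retry_on_slowdown(lambda: next(responses), base_delay=0.01)
print(status, body)  # 200 OK
```

Most S3 SDKs (for example, the AWS SDKs' standard retry mode) already treat 429/SlowDown as retryable, so explicit handling like this is only needed for hand-rolled clients.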
Monitoring
Rate limiting metrics are exposed via Prometheus when -metricsPort is configured:
weed s3 -filer=localhost:8888 -metricsPort=9327
Relevant metrics:
- `seaweedfs_s3_inflight_upload_count` - Current number of in-flight uploads
- `seaweedfs_s3_inflight_upload_bytes` - Current bytes of in-flight upload data
- `seaweedfs_s3_request_total` - Total requests by action and status (look for 429 status)
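The Prometheus exposition format is plain text, so the in-flight gauges can be checked with a few lines of Python. The scrape output below is an illustrative sample, not captured from a real server:

```python
SCRAPE = """\
# TYPE seaweedfs_s3_inflight_upload_count gauge
seaweedfs_s3_inflight_upload_count 3
# TYPE seaweedfs_s3_inflight_upload_bytes gauge
seaweedfs_s3_inflight_upload_bytes 52428800
"""

def read_gauge(text, name):
    """Return the value of an unlabeled gauge from Prometheus text output,
    or None if the metric is absent."""
    for line in text.splitlines():
        if line.startswith(name + " "):
            return float(line.split()[1])
    return None

print(read_gauge(SCRAPE, "seaweedfs_s3_inflight_upload_bytes"))  # 52428800.0
```

In practice you would fetch `http://<s3-host>:9327/metrics` and feed the body to the same parser, or simply let Prometheus scrape the endpoint and alert on the gauges.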