Table of Contents
- TUS Resumable Uploads
- Features
- Configuration
- TUS Endpoints
- Supported Extensions
- Usage Examples
  - 1. Discover Server Capabilities
  - 2. Create Upload Session
  - 3. Upload Data
  - 4. Check Upload Progress
  - 5. Resume Interrupted Upload
  - 6. Cancel Upload
- TUS Headers
- Client Libraries
  - JavaScript Example
  - Python Example
  - Go Example
- How It Works
- Limitations
- See Also
TUS Resumable Uploads
SeaweedFS supports the TUS protocol for resumable file uploads. TUS is an open protocol for resumable uploads built on HTTP, allowing clients to upload files in chunks and resume interrupted uploads.
Features
- Resumable uploads: Resume interrupted uploads without re-sending data
- Chunked uploads: Upload large files in smaller pieces
- Simple protocol: Standard HTTP methods with custom headers
- Wide client support: Libraries available for JavaScript, Python, Go, Java, and more
Configuration
TUS is enabled by default at the /.tus endpoint. You can customize or disable TUS using the -tusBasePath flag:
Filer Command
```sh
# Default: TUS enabled at /.tus
weed filer -master=localhost:9333

# Custom path
weed filer -master=localhost:9333 -tusBasePath=uploads/tus

# Disable TUS
weed filer -master=localhost:9333 -tusBasePath=
```
Server Command
When using weed server, use the -filer.tusBasePath option:
```sh
# Default: TUS enabled at /.tus
weed server -filer=true

# Custom path
weed server -filer=true -filer.tusBasePath=uploads/tus

# Disable TUS
weed server -filer=true -filer.tusBasePath=
```
TUS Endpoints
| Method | Path | Description |
|---|---|---|
| OPTIONS | /.tus/ | Server capability discovery |
| POST | /.tus/{path} | Create new upload session |
| HEAD | /.tus/.uploads/{id} | Get current upload offset |
| PATCH | /.tus/.uploads/{id} | Upload data at offset |
| DELETE | /.tus/.uploads/{id} | Cancel upload |
Supported Extensions
- creation: Create new upload sessions
- creation-with-upload: Upload data in the creation request
- termination: Cancel/delete uploads
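With the creation-with-upload extension, a client can send the first bytes of the file in the same POST that creates the session. A minimal sketch of the request such a client would build; the header names come from the TUS 1.0 spec, but the helper function itself is made up for illustration and is not part of any client library:

```python
import base64


def build_creation_with_upload(filename: str, data: bytes) -> tuple[dict, bytes]:
    """Build headers and body for a TUS creation-with-upload POST.

    Illustrative helper: header names follow the TUS 1.0 spec; the
    function itself is not part of any TUS client library.
    """
    headers = {
        "Tus-Resumable": "1.0.0",
        "Upload-Length": str(len(data)),  # total file size in bytes
        "Content-Type": "application/offset+octet-stream",
        "Upload-Metadata": "filename " + base64.b64encode(filename.encode()).decode(),
    }
    return headers, data


headers, body = build_creation_with_upload("test.txt", b"hello world")
print(headers["Upload-Length"])  # 11
```

Sending these headers plus the body in the creation POST saves one round trip compared to a separate POST and PATCH.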
Usage Examples
1. Discover Server Capabilities
```sh
curl -X OPTIONS http://localhost:8888/.tus/ \
  -H "Tus-Resumable: 1.0.0"
```

Response headers:

```
Tus-Resumable: 1.0.0
Tus-Version: 1.0.0
Tus-Extension: creation,creation-with-upload,termination
Tus-Max-Size: 5368709120
```
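In client code, the discovery response is just a set of headers to parse before deciding which features to use. A minimal sketch, assuming the response headers are available as a dict (the helper name is hypothetical):

```python
def parse_tus_capabilities(headers: dict) -> dict:
    """Turn TUS discovery headers into usable values (illustrative helper)."""
    return {
        "versions": headers.get("Tus-Version", "").split(","),
        "extensions": set(filter(None, headers.get("Tus-Extension", "").split(","))),
        "max_size": int(headers.get("Tus-Max-Size", "0")),
    }


caps = parse_tus_capabilities({
    "Tus-Resumable": "1.0.0",
    "Tus-Version": "1.0.0",
    "Tus-Extension": "creation,creation-with-upload,termination",
    "Tus-Max-Size": "5368709120",
})
print("termination" in caps["extensions"])  # True
```

A client would check `caps["max_size"]` before creating a session and fall back to a plain POST-then-PATCH flow if creation-with-upload is absent.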
2. Create Upload Session
```sh
curl -X POST http://localhost:8888/.tus/path/to/file.txt \
  -H "Tus-Resumable: 1.0.0" \
  -H "Upload-Length: 1000" \
  -H "Upload-Metadata: filename dGVzdC50eHQ=,content-type dGV4dC9wbGFpbg=="
```

Response:

```
HTTP/1.1 201 Created
Location: /.tus/.uploads/abc123-uuid
```
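The Upload-Metadata value above is a comma-separated list of key/base64(value) pairs. A sketch of how a client might build it (the helper name is made up for illustration; the encoding rule is from the TUS 1.0 spec):

```python
import base64


def encode_upload_metadata(meta: dict) -> str:
    """Encode key/value pairs as a TUS Upload-Metadata header value.

    Each value is base64-encoded; pairs are joined with commas,
    per the TUS 1.0 specification.
    """
    return ",".join(
        f"{key} {base64.b64encode(value.encode()).decode()}"
        for key, value in meta.items()
    )


header = encode_upload_metadata({"filename": "test.txt", "content-type": "text/plain"})
print(header)  # filename dGVzdC50eHQ=,content-type dGV4dC9wbGFpbg==
```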
3. Upload Data
```sh
curl -X PATCH http://localhost:8888/.tus/.uploads/abc123-uuid \
  -H "Tus-Resumable: 1.0.0" \
  -H "Upload-Offset: 0" \
  -H "Content-Type: application/offset+octet-stream" \
  --data-binary @file.txt
```

Response:

```
HTTP/1.1 204 No Content
Upload-Offset: 1000
```
4. Check Upload Progress
```sh
curl -I http://localhost:8888/.tus/.uploads/abc123-uuid \
  -H "Tus-Resumable: 1.0.0"
```

Response:

```
HTTP/1.1 200 OK
Upload-Offset: 500
Upload-Length: 1000
```
5. Resume Interrupted Upload
```sh
# First, check current offset
OFFSET=$(curl -s -I http://localhost:8888/.tus/.uploads/abc123-uuid \
  -H "Tus-Resumable: 1.0.0" | grep -i "Upload-Offset" | cut -d' ' -f2 | tr -d '\r')

# Resume from offset
curl -X PATCH http://localhost:8888/.tus/.uploads/abc123-uuid \
  -H "Tus-Resumable: 1.0.0" \
  -H "Upload-Offset: $OFFSET" \
  -H "Content-Type: application/offset+octet-stream" \
  --data-binary @remaining_data.bin
```
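The client-side resume logic reduces to: ask the server for its offset, then send the file from that byte onward. A server-free sketch of the slicing step (hypothetical helper; a real client would obtain the offset from the Upload-Offset header of a HEAD response as shown above):

```python
def bytes_to_resume(data: bytes, server_offset: int) -> bytes:
    """Return the portion of the file still to be sent, given the
    Upload-Offset reported by the server (illustrative helper)."""
    if not 0 <= server_offset <= len(data):
        raise ValueError("server offset outside file bounds")
    return data[server_offset:]


data = b"0123456789"                  # 10-byte file
remaining = bytes_to_resume(data, 4)  # server already has 4 bytes
print(remaining)  # b'456789'
```

Because only the remaining slice is re-sent, an interruption costs at most one in-flight chunk rather than the whole file.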
6. Cancel Upload
```sh
curl -X DELETE http://localhost:8888/.tus/.uploads/abc123-uuid \
  -H "Tus-Resumable: 1.0.0"
```
TUS Headers
Request Headers
| Header | Required On | Description |
|---|---|---|
| Tus-Resumable | All requests | Protocol version (must be 1.0.0) |
| Upload-Length | POST | Total file size in bytes |
| Upload-Offset | PATCH | Current byte offset |
| Upload-Metadata | Optional | Base64-encoded key-value pairs |
| Content-Type | PATCH | Must be application/offset+octet-stream |
| Content-Length | PATCH | Size of data being uploaded |
Response Headers
| Header | Description |
|---|---|
| Tus-Resumable | Protocol version |
| Tus-Version | Supported versions |
| Tus-Extension | Supported extensions |
| Tus-Max-Size | Maximum upload size (5GB default) |
| Upload-Offset | Current byte offset |
| Upload-Length | Total file size |
| Location | Upload URL (on POST) |
Client Libraries
TUS has official and community client libraries for many languages:
- JavaScript: tus-js-client
- Python: tuspy
- Go: go-tus
- Java: tus-java-client
- iOS: TUSKit
- Android: tus-android-client
See the TUS implementations page for more options.
JavaScript Example
```javascript
import * as tus from "tus-js-client";

const file = document.querySelector("input[type=file]").files[0];

const upload = new tus.Upload(file, {
  endpoint: "http://localhost:8888/.tus/uploads/",
  retryDelays: [0, 3000, 5000, 10000, 20000],
  metadata: {
    filename: file.name,
    filetype: file.type,
  },
  onError: (error) => {
    console.log("Upload failed:", error);
  },
  onProgress: (bytesUploaded, bytesTotal) => {
    const percentage = ((bytesUploaded / bytesTotal) * 100).toFixed(2);
    console.log(`${percentage}%`);
  },
  onSuccess: () => {
    console.log("Upload complete:", upload.url);
  },
});

upload.start();
```
Python Example
```python
from tusclient import client

my_client = client.TusClient('http://localhost:8888/.tus/')
uploader = my_client.uploader('path/to/file.txt', chunk_size=1024*1024)
uploader.upload()
```
Go Example
```go
package main

import (
	"log"
	"os"

	"github.com/eventials/go-tus"
)

func main() {
	f, err := os.Open("file.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// remaining errors omitted for brevity
	client, _ := tus.NewClient("http://localhost:8888/.tus/", nil)
	upload, _ := tus.NewUploadFromFile(f)
	uploader, _ := client.CreateUpload(upload)
	if err := uploader.Upload(); err != nil {
		log.Fatal(err)
	}
}
```
How It Works
1. Session Creation: Client sends POST to create an upload session, specifying the total file size. Server returns a unique upload URL.
2. Data Upload: Client sends PATCH requests with chunks of data at the current offset. Server stores chunks and returns the new offset.
3. Resume: If upload is interrupted, client sends HEAD to get current offset, then continues with PATCH from that offset.
4. Completion: When all data is uploaded (offset equals file size), the server assembles the final file at the target path.
5. Cleanup: Upload sessions are stored temporarily and cleaned up after completion or expiration.
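The lifecycle above can be sketched as a small state machine. This is an illustrative in-memory model of one upload session, not SeaweedFS's actual implementation:

```python
import uuid


class TusSession:
    """Minimal in-memory model of one TUS upload session (illustrative)."""

    def __init__(self, length: int):
        self.id = str(uuid.uuid4())  # would become /.tus/.uploads/{id}
        self.length = length         # Upload-Length from the creation POST
        self.offset = 0              # advances with each PATCH
        self.chunks = []

    def patch(self, offset: int, data: bytes) -> int:
        # a real TUS server rejects a PATCH whose offset is stale
        if offset != self.offset:
            raise ValueError(f"offset mismatch: expected {self.offset}")
        self.chunks.append(data)
        self.offset += len(data)
        return self.offset  # the new Upload-Offset returned to the client

    @property
    def complete(self) -> bool:
        # once offset reaches length, the final file can be assembled
        return self.offset == self.length


session = TusSession(length=10)
session.patch(0, b"01234")  # first chunk
session.patch(5, b"56789")  # resumed chunk at the reported offset
print(session.complete)  # True
```

The offset check is what makes resumption safe: a client that reconnects with a stale offset is rejected and must re-fetch the server's offset via HEAD first.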
Limitations
- Maximum upload size: 5GB (configurable)
- Session expiration: 24 hours (configurable)
- Only single-file uploads (no concatenation extension)
See Also
- Filer Setup
- Large File Handling
- Filer JWT Use