Lisandro Pin 3dd5348616 volume.check.disk: add support for uni- or bi-directional sync between volume replicas. (#7484)
* `volume.check.disk`: add support for uni- or bi-directional sync between volume replicas.

We'll need this to support repairing broken replicas, which involves syncing from a known-good source replica without modifying it.

* S3: Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization (#7480)

* Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization

* revert

Reverted the conditional versioning check to always check versioning status
Reverted the conditional SSE entry fetch to always fetch entry metadata

* Lazy Entry Fetch for SSE, Skip Conditional Header Check

* If SSE-KMS headers are present, this is not an SSE-C request (mutually exclusive)

* SSE-C is mutually exclusive with SSE-S3 and SSE-KMS

* refactor

* Removed Premature Mutual Exclusivity Check

* check for the presence of the X-Amz-Server-Side-Encryption header

* not used

* fmt

* Volume Server: avoid aggressive volume assignment (#7501)

* avoid aggressive volume assignment

* also test ec shards

* separate DiskLocation instances for each subtest

* edge cases

* No volumes plus low disk space
* Multiple EC volumes

* simplify

* chore(deps): bump github.com/getsentry/sentry-go from 0.36.1 to 0.38.0 (#7498)

Bumps [github.com/getsentry/sentry-go](https://github.com/getsentry/sentry-go) from 0.36.1 to 0.38.0.
- [Release notes](https://github.com/getsentry/sentry-go/releases)
- [Changelog](https://github.com/getsentry/sentry-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-go/compare/v0.36.1...v0.38.0)

---
updated-dependencies:
- dependency-name: github.com/getsentry/sentry-go
  dependency-version: 0.38.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump go.etcd.io/etcd/client/v3 from 3.6.5 to 3.6.6 (#7496)

Bumps [go.etcd.io/etcd/client/v3](https://github.com/etcd-io/etcd) from 3.6.5 to 3.6.6.
- [Release notes](https://github.com/etcd-io/etcd/releases)
- [Commits](https://github.com/etcd-io/etcd/compare/v3.6.5...v3.6.6)

---
updated-dependencies:
- dependency-name: go.etcd.io/etcd/client/v3
  dependency-version: 3.6.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump github.com/hanwen/go-fuse/v2 from 2.8.0 to 2.9.0 (#7495)

Bumps [github.com/hanwen/go-fuse/v2](https://github.com/hanwen/go-fuse) from 2.8.0 to 2.9.0.
- [Commits](https://github.com/hanwen/go-fuse/compare/v2.8.0...v2.9.0)

---
updated-dependencies:
- dependency-name: github.com/hanwen/go-fuse/v2
  dependency-version: 2.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump github.com/linxGnu/grocksdb from 1.10.2 to 1.10.3 (#7494)

Bumps [github.com/linxGnu/grocksdb](https://github.com/linxGnu/grocksdb) from 1.10.2 to 1.10.3.
- [Release notes](https://github.com/linxGnu/grocksdb/releases)
- [Commits](https://github.com/linxGnu/grocksdb/compare/v1.10.2...v1.10.3)

---
updated-dependencies:
- dependency-name: github.com/linxGnu/grocksdb
  dependency-version: 1.10.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump actions/dependency-review-action from 4.8.1 to 4.8.2 (#7493)

Bumps [actions/dependency-review-action](https://github.com/actions/dependency-review-action) from 4.8.1 to 4.8.2.
- [Release notes](https://github.com/actions/dependency-review-action/releases)
- [Commits](40c09b7dc9...3c4e3dcb1a)

---
updated-dependencies:
- dependency-name: actions/dependency-review-action
  dependency-version: 4.8.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump golang.org/x/image from 0.32.0 to 0.33.0 (#7497)

* chore(deps): bump golang.org/x/image from 0.32.0 to 0.33.0

Bumps [golang.org/x/image](https://github.com/golang/image) from 0.32.0 to 0.33.0.
- [Commits](https://github.com/golang/image/compare/v0.32.0...v0.33.0)

---
updated-dependencies:
- dependency-name: golang.org/x/image
  dependency-version: 0.33.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* go mod tidy

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>

* chore: fix the diagram in RDMA sidecar readme (#7503)

* de/compress the fs meta file if filename ends with gz/gzip (#7500)

* de/compress the fs meta file if filename ends with gz/gzip

* gemini code review

* update help msg

* faster master startup

* chore(deps): bump org.apache.hadoop:hadoop-common from 3.2.4 to 3.4.0 in /other/java/hdfs2 (#7502)

chore(deps): bump org.apache.hadoop:hadoop-common in /other/java/hdfs2

Bumps org.apache.hadoop:hadoop-common from 3.2.4 to 3.4.0.

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-version: 3.4.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* S3: Directly read write volume servers (#7481)

* Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization

* revert

Reverted the conditional versioning check to always check versioning status
Reverted the conditional SSE entry fetch to always fetch entry metadata

* Lazy Entry Fetch for SSE, Skip Conditional Header Check

* If SSE-KMS headers are present, this is not an SSE-C request (mutually exclusive)

* SSE-C is mutually exclusive with SSE-S3 and SSE-KMS

* refactor

* Removed Premature Mutual Exclusivity Check

* check for the presence of the X-Amz-Server-Side-Encryption header

* not used

* fmt

* directly read write volume servers

* HTTP Range Request Support

* set header

* md5

* copy object

* fix sse

* fmt

* implement sse

* sse continue

* fixed the suffix range bug (bytes=-N for "last N bytes")

* debug logs

* Missing PartsCount Header

* profiling

* url encoding

* test_multipart_get_part

* headers

* debug

* adjust log level

* handle part number

* Update s3api_object_handlers.go

* nil safety

* set ModifiedTsNs

* remove

* nil check

* fix sse header

* same logic as filer

* decode values

* decode ivBase64

* s3: Fix SSE decryption JWT authentication and streaming errors

Critical fix for SSE (Server-Side Encryption) test failures:

1. **JWT Authentication Bug** (Root Cause):
   - Changed from GenJwtForFilerServer to GenJwtForVolumeServer
   - S3 API now uses correct JWT when directly reading from volume servers
   - Matches filer's authentication pattern for direct volume access
   - Fixes 'unexpected EOF' and 500 errors in SSE tests

2. **Streaming Error Handling**:
   - Added error propagation in getEncryptedStreamFromVolumes goroutine
   - Use CloseWithError() to properly communicate stream failures
   - Added debug logging for streaming errors

3. **Response Header Timing**:
   - Removed premature WriteHeader(http.StatusOK) call
   - Let Go's http package write status automatically on first write
   - Prevents header lock when errors occur during streaming

4. **Enhanced SSE Decryption Debugging**:
   - Added IV/Key validation and logging for SSE-C, SSE-KMS, SSE-S3
   - Better error messages for missing or invalid encryption metadata
   - Added glog.V(2) debugging for decryption setup

This fixes SSE integration test failures where encrypted objects
could not be retrieved due to volume server authentication failures.
The JWT bug was causing volume servers to reject requests, resulting
in truncated/empty streams (EOF) or internal errors.

* s3: Fix SSE multipart upload metadata preservation

Critical fix for SSE multipart upload test failures (SSE-C and SSE-KMS):

**Root Cause - Incomplete SSE Metadata Copying**:
The old code only tried to copy 'SeaweedFSSSEKMSKey' from the first
part to the completed object. This had TWO bugs:

1. **Wrong Constant Name** (Key Mismatch Bug):
   - Storage uses: SeaweedFSSSEKMSKeyHeader = 'X-SeaweedFS-SSE-KMS-Key'
   - Old code read: SeaweedFSSSEKMSKey = 'x-seaweedfs-sse-kms-key'
   - Result: SSE-KMS metadata was NEVER copied → 500 errors

2. **Missing SSE-C and SSE-S3 Headers**:
   - SSE-C requires: IV, Algorithm, KeyMD5
   - SSE-S3 requires: encrypted key data + standard headers
   - Old code: copied nothing for SSE-C/SSE-S3 → decryption failures

**Fix - Complete SSE Header Preservation**:
Now copies ALL SSE headers from first part to completed object:

- SSE-C: SeaweedFSSSEIV, CustomerAlgorithm, CustomerKeyMD5
- SSE-KMS: SeaweedFSSSEKMSKeyHeader, AwsKmsKeyId, ServerSideEncryption
- SSE-S3: SeaweedFSSSES3Key, ServerSideEncryption

Applied consistently to all 3 code paths:
1. Versioned buckets (creates version file)
2. Suspended versioning (creates main object with null versionId)
3. Non-versioned buckets (creates main object)

**Why This Is Correct**:
The headers copied EXACTLY match what putToFiler stores during part
upload (lines 496-521 in s3api_object_handlers_put.go). This ensures
detectPrimarySSEType() can correctly identify encrypted multipart
objects and trigger inline decryption with proper metadata.

Fixes: TestSSEMultipartUploadIntegration (SSE-C and SSE-KMS subtests)
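
A minimal sketch of the header-preservation step described above, assuming the filer entry's extended attributes are a map[string][]byte; the header names are illustrative stand-ins for the constants listed above, not the exact Go identifiers:

    // copySSEHeaders carries SSE metadata from the first part's entry to the
    // completed object's entry so decryption still works after completion.
    func copySSEHeaders(firstPart, completed map[string][]byte) {
        sseHeaders := []string{
            "X-SeaweedFS-SSE-IV",                              // SSE-C IV
            "X-Amz-Server-Side-Encryption-Customer-Algorithm", // SSE-C
            "X-Amz-Server-Side-Encryption-Customer-Key-MD5",   // SSE-C
            "X-SeaweedFS-SSE-KMS-Key",                         // SSE-KMS key metadata
            "X-Amz-Server-Side-Encryption-Aws-Kms-Key-Id",     // SSE-KMS
            "X-SeaweedFS-SSE-S3-Key",                          // SSE-S3 encrypted key
            "X-Amz-Server-Side-Encryption",                    // SSE-KMS / SSE-S3
        }
        for _, h := range sseHeaders {
            if v, ok := firstPart[h]; ok {
                completed[h] = v
            }
        }
    }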

* s3: Add debug logging for versioning state diagnosis

Temporary debug logging to diagnose test_versioning_obj_plain_null_version_overwrite_suspended failure.

Added glog.V(0) logging to show:
1. setBucketVersioningStatus: when versioning status is changed
2. PutObjectHandler: what versioning state is detected (Enabled/Suspended/none)
3. PutObjectHandler: which code path is taken (putVersionedObject vs putSuspendedVersioningObject)

This will help identify if:
- The versioning status is being set correctly in bucket config
- The cache is returning stale/incorrect versioning state
- The switch statement is correctly routing to suspended vs enabled handlers

* s3: Enhanced versioning state tracing for suspended versioning diagnosis

Added comprehensive logging across the entire versioning state flow:

PutBucketVersioningHandler:
- Log requested status (Enabled/Suspended)
- Log when calling setBucketVersioningStatus
- Log success/failure of status change

setBucketVersioningStatus:
- Log bucket and status being set
- Log when config is updated
- Log completion with error code

updateBucketConfig:
- Log versioning state being written to cache
- Immediate cache verification after Set
- Log if cache verification fails

getVersioningState:
- Log bucket name and state being returned
- Log if object lock forces VersioningEnabled
- Log errors

This will reveal:
1. If PutBucketVersioning(Suspended) is reaching the handler
2. If the cache update succeeds
3. What state getVersioningState returns during PUT
4. Any cache consistency issues

Expected to show why bucket still reports 'Enabled' after 'Suspended' call.

* s3: Add SSE chunk detection debugging for multipart uploads

Added comprehensive logging to diagnose why TestSSEMultipartUploadIntegration fails:

detectPrimarySSEType now logs:
1. Total chunk count and extended header count
2. All extended headers with 'sse'/'SSE'/'encryption' in the name
3. For each chunk: index, SseType, and whether it has metadata
4. Final SSE type counts (SSE-C, SSE-KMS, SSE-S3)

This will reveal if:
- Chunks are missing SSE metadata after multipart completion
- Extended headers are copied correctly from first part
- The SSE detection logic is working correctly

Expected to show if chunks have SseType=0 (none) or proper SSE types set.

* s3: Trace SSE chunk metadata through multipart completion and retrieval

Added end-to-end logging to track SSE chunk metadata lifecycle:

**During Multipart Completion (filer_multipart.go)**:
1. Log finalParts chunks BEFORE mkFile - shows SseType and metadata
2. Log versionEntry.Chunks INSIDE mkFile callback - shows if mkFile preserves SSE info
3. Log success after mkFile completes

**During GET Retrieval (s3api_object_handlers.go)**:
1. Log retrieved entry chunks - shows SseType and metadata after retrieval
2. Log detected SSE type result

This will reveal at which point SSE chunk metadata is lost:
- If finalParts have SSE metadata but versionEntry.Chunks don't → mkFile bug
- If versionEntry.Chunks have SSE metadata but retrieved chunks don't → storage/retrieval bug
- If chunks never have SSE metadata → multipart completion SSE processing bug

Expected to show chunks with SseType=NONE during retrieval even though
they were created with proper SseType during multipart completion.

* s3: Fix SSE-C multipart IV base64 decoding bug

**Critical Bug Found**: SSE-C multipart uploads were failing because:

Root Cause:
- entry.Extended[SeaweedFSSSEIV] stores base64-encoded IV (24 bytes for 16-byte IV)
- SerializeSSECMetadata expects raw IV bytes (16 bytes)
- During multipart completion, we were passing base64 IV directly → serialization error

Error Message:
"Failed to serialize SSE-C metadata for chunk in part X: invalid IV length: expected 16 bytes, got 24"

Fix:
- Base64-decode IV before passing to SerializeSSECMetadata
- Added error handling for decode failures

Impact:
- SSE-C multipart uploads will now correctly serialize chunk metadata
- Chunks will have proper SSE metadata for decryption during GET

This fixes the SSE-C subtest of TestSSEMultipartUploadIntegration.
SSE-KMS still has a separate issue (error code 23) being investigated.
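
A minimal sketch of the decode step using only the standard library; the function name is illustrative:

    import (
        "encoding/base64"
        "fmt"
    )

    // decodeStoredIV converts the base64-encoded IV kept in the entry's extended
    // attributes back into the raw 16 bytes the SSE-C metadata serializer expects.
    func decodeStoredIV(stored []byte) ([]byte, error) {
        iv, err := base64.StdEncoding.DecodeString(string(stored))
        if err != nil {
            return nil, fmt.Errorf("decode SSE-C IV: %w", err)
        }
        if len(iv) != 16 {
            return nil, fmt.Errorf("invalid IV length: expected 16 bytes, got %d", len(iv))
        }
        return iv, nil
    }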

* fixes

* kms sse

* handle retry when the object is not found in the .versions folder and the normal object should be read instead

* quick check (no retries) to see if the .versions/ directory exists

* skip retry if object is not found

* explicit update to avoid sync delay

* fix map update lock

* Remove fmt.Printf debug statements

* Fix SSE-KMS multipart base IV fallback to fail instead of regenerating

* fmt

* Fix ACL grants storage logic

* header handling

* nil handling

* range read for sse content

* test range requests for sse objects

* fmt

* unused code

* upload in chunks

* header case

* fix url

* bucket policy error vs bucket not found

* jwt handling

* fmt

* jwt in request header

* Optimize Case-Insensitive Prefix Check

* dead code

* Eliminated Unnecessary Stream Prefetch for Multipart SSE

* range sse

* sse

* refactor

* context

* fmt

* fix type

* fix SSE-C IV Mismatch

* Fix Headers Being Set After WriteHeader

* fix url parsing

* propagate SSE headers

* multipart sse-s3

* AWS Signature V4 authentication

* sse kms

* set content range

* better errors

* Update s3api_object_handlers_copy.go

* Update s3api_object_handlers.go

* Update s3api_object_handlers.go

* avoid magic number

* clean up

* Update s3api_bucket_policy_handlers.go

* fix url parsing

* context

* data and metadata both use background context

* adjust the offset

* SSE Range Request IV Calculation

* adjust logs

* IV relative to offset in each part, not the whole file

* collect logs

* offset

* fix offset

* fix url

* logs

* variable

* jwt

* Multipart ETag semantics: conditionally set object-level Md5 for single-chunk uploads only.

* sse

* adjust IV and offset

* multipart boundaries

* ensures PUT and GET operations return consistent ETags

* Metadata Header Case

* CommonPrefixes Sorting with URL Encoding

* always sort

* remove the extra PathUnescape call

* fix the multipart get part ETag

* the FileChunk is created without setting ModifiedTsNs

* Sort CommonPrefixes lexicographically to match AWS S3 behavior

* set md5 for multipart uploads

* prevents any potential data loss or corruption in the small-file inline storage path

* compiles correctly

* decryptedReader will now be properly closed after use

* Fixed URL encoding and sort order for CommonPrefixes

* Update s3api_object_handlers_list.go

* SSE-x Chunk View Decryption

* Different IV offset calculations for single-part vs multipart objects

* still too verbose in logs

* less logs

* ensure correct conversion

* fix listing

* nil check

* minor fixes

* nil check

* single character delimiter

* optimize

* range on empty object or zero-length

* correct IV based on its position within that part, not its position in the entire object

* adjust offset

* offset

Fetch FULL encrypted chunk (not just the range)
Adjust IV by PartOffset/ChunkOffset only
Decrypt full chunk
Skip in the DECRYPTED stream to reach OffsetInChunk
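
This is standard AES-CTR offset handling. A generic sketch (not the exact SeaweedFS code path; the last 8 IV bytes are treated as a big-endian counter, and carry into the upper half is ignored): advance the counter by the number of whole blocks skipped, then discard the intra-block remainder from the decrypted stream.

    import (
        "crypto/aes"
        "crypto/cipher"
        "encoding/binary"
        "io"
    )

    // ctrReaderAt returns a plaintext reader positioned at plaintextOffset, given
    // a ciphertext reader already positioned at the enclosing block boundary
    // (plaintextOffset rounded down to a multiple of aes.BlockSize).
    func ctrReaderAt(ciphertext io.Reader, key, baseIV []byte, plaintextOffset int64) (io.Reader, error) {
        block, err := aes.NewCipher(key)
        if err != nil {
            return nil, err
        }
        iv := make([]byte, aes.BlockSize)
        copy(iv, baseIV)
        skippedBlocks := uint64(plaintextOffset / aes.BlockSize)
        binary.BigEndian.PutUint64(iv[8:], binary.BigEndian.Uint64(iv[8:])+skippedBlocks)

        plain := cipher.StreamReader{S: cipher.NewCTR(block, iv), R: ciphertext}
        // Skip the intra-block remainder in the decrypted stream.
        if _, err := io.CopyN(io.Discard, plain, plaintextOffset%aes.BlockSize); err != nil {
            return nil, err
        }
        return plain, nil
    }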

* loop breaking

* refactor

* error on no content

* handle intra-block byte skipping

* Incomplete HTTP Response Error Handling

* multipart SSE

* Update s3api_object_handlers.go

* address comments

* less logs

* handling directory

* Optimized rejectDirectoryObjectWithoutSlash() to avoid unnecessary lookups

* Revert "handling directory"

This reverts commit 3a335f0ac33c63f51975abc63c40e5328857a74b.

* constant

* Consolidate nil entry checks in GetObjectHandler

* add range tests

* Consolidate redundant nil entry checks in HeadObjectHandler

* adjust logs

* SSE type

* large files

* large files

Reverted the plain-object range test

* ErrNoEncryptionConfig

* Fixed SSERangeReader Infinite Loop Vulnerability

* Fixed SSE-KMS Multipart ChunkReader HTTP Body Leak

* handle empty directory in S3, added PyArrow tests

* purge unused code

* Update s3_parquet_test.py

* Update requirements.txt

* According to S3 specifications, when both partNumber and Range are present, the Range should apply within the selected part's boundaries, not to the full object.
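
A small illustrative sketch of that mapping (hypothetical helper; partStart/partSize come from the selected part's chunk metadata, and the Range values are interpreted relative to the part):

    // rangeWithinPart maps a Range that is relative to the selected part onto
    // absolute offsets in the full object, clamped to the part's boundaries.
    func rangeWithinPart(partStart, partSize, rangeStart, rangeEnd int64) (absStart, absEnd int64) {
        if rangeStart < 0 {
            rangeStart = 0
        }
        if rangeEnd >= partSize {
            rangeEnd = partSize - 1
        }
        return partStart + rangeStart, partStart + rangeEnd
    }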

* handle errors

* errors after writing header

* https

* fix: Wait for volume assignment readiness before running Parquet tests

The test-implicit-dir-with-server test was failing with an Internal Error
because volume assignment was not ready when tests started. This fix adds
a check that attempts a volume assignment and waits for it to succeed
before proceeding with tests.

This ensures that:
1. Volume servers are registered with the master
2. Volume growth is triggered if needed
3. The system can successfully assign volumes for writes

Fixes the timeout issue where boto3 would retry 4 times and fail with
'We encountered an internal error, please try again.'
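
A sketch of such a readiness check in Go (the actual check lives in the test Makefile; the master address and the /dir/assign endpoint shown here are standard defaults and assumptions):

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // waitForAssignReady polls the master's /dir/assign endpoint until a volume
    // assignment succeeds, so tests do not start before writes are possible.
    func waitForAssignReady(masterURL string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(masterURL + "/dir/assign")
            if err == nil {
                var result struct {
                    Fid   string `json:"fid"`
                    Error string `json:"error"`
                }
                decodeErr := json.NewDecoder(resp.Body).Decode(&result)
                resp.Body.Close()
                if decodeErr == nil && result.Fid != "" {
                    return nil // assignment works; volume servers are ready
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("volume assignment not ready after %v", timeout)
    }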

* sse tests

* store derived IV

* fix: Clean up gRPC ports between tests to prevent port conflicts

The second test (test-implicit-dir-with-server) was failing because the
volume server's gRPC port (18080 = VOLUME_PORT + 10000) was still in use
from the first test. The cleanup code only killed HTTP port processes,
not gRPC port processes.

Added cleanup for gRPC ports in all stop targets:
- Master gRPC: MASTER_PORT + 10000 (19333)
- Volume gRPC: VOLUME_PORT + 10000 (18080)
- Filer gRPC: FILER_PORT + 10000 (18888)

This ensures clean state between test runs in CI.

* add import

* address comments

* docs: Add placeholder documentation files for Parquet test suite

Added three missing documentation files referenced in test/s3/parquet/README.md:

1. TEST_COVERAGE.md - Documents 43 total test cases (17 Go unit tests,
   6 Python integration tests, 20 Python end-to-end tests)

2. FINAL_ROOT_CAUSE_ANALYSIS.md - Explains the s3fs compatibility issue
   with PyArrow, the implicit directory problem, and how the fix works

3. MINIO_DIRECTORY_HANDLING.md - Compares MinIO's directory handling
   approach with SeaweedFS's implementation

Each file contains:
- Title and overview
- Key technical details relevant to the topic
- TODO sections for future expansion

These placeholder files resolve the broken README links and provide
structure for future detailed documentation.

* clean up if metadata operation failed

* Update s3_parquet_test.py

* clean up

* Update Makefile

* Update s3_parquet_test.py

* Update Makefile

* Handle ivSkip for non-block-aligned offsets

* Update README.md

* stop volume server faster

* stop volume server in 1 second

* different IV for each chunk in SSE-S3 and SSE-KMS

* clean up if fails

* testing upload

* error propagation

* fmt

* simplify

* fix copying

* less logs

* endian

* Added marshaling error handling

* handling invalid ranges

* error handling for adding to log buffer

* fix logging

* avoid returning too quickly and ensure proper cleanup

* Activity Tracking for Disk Reads

* Cleanup Unused Parameters

* Activity Tracking for Kafka Publishers

* Proper Test Error Reporting

* refactoring

* less logs

* less logs

* go fmt

* guard it with if entry.Attributes.TtlSec > 0 to match the pattern used elsewhere.

* Handle bucket-default encryption config errors explicitly for multipart

* consistent activity tracking

* obsolete code for s3 on filer read/write handlers

* Update weed/s3api/s3api_object_handlers_list.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* S3: Add tests for PyArrow with native S3 filesystem (#7508)

* PyArrow native S3 filesystem

* add sse-s3 tests

* update

* minor

* ENABLE_SSE_S3

* Update test_pyarrow_native_s3.py

* clean up

* refactoring

* Update test_pyarrow_native_s3.py

* filer store: add foundationdb (#7178)

* add foundationdb

* Update foundationdb_store.go

* fix

* apply the patch

* avoid panic on error

* address comments

* remove extra data

* address comments

* adds more debug messages

* fix range listing

* delete with prefix range; list with right start key

* fix docker files

* use the more idiomatic FoundationDB KeySelectors

* address comments

* proper errors

* fix API versions

* more efficient

* recursive deletion

* clean up

* clean up

* pagination, one transaction for deletion

* error checking

* Use fdb.Strinc() to compute the lexicographically next string and create a proper range
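
A minimal sketch of that pattern with the FoundationDB Go bindings (assumes fdb.MustAPIVersion has been called at startup; the function name is illustrative):

    import (
        "github.com/apple/foundationdb/bindings/go/src/fdb"
    )

    // clearPrefix deletes every key that starts with prefix in one transaction.
    func clearPrefix(db fdb.Database, prefix []byte) error {
        end, err := fdb.Strinc(prefix) // first key lexicographically after the prefix range
        if err != nil {
            return err
        }
        _, err = db.Transact(func(tr fdb.Transaction) (interface{}, error) {
            tr.ClearRange(fdb.KeyRange{Begin: fdb.Key(prefix), End: fdb.Key(end)})
            return nil, nil
        })
        return err
    }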

* fix docker

* Update README.md

* delete in batches

* delete in batches

* fix build

* add foundationdb build

* Updated FoundationDB Version

* Fixed glibc/musl Incompatibility (Alpine → Debian)

* Update container_foundationdb_version.yml

* build SeaweedFS

* build tag

* address comments

* separate transaction

* address comments

* fix build

* empty vs no data

* fixes

* add go test

* Install FoundationDB client libraries

* nil compare

* chore(deps): bump golang.org/x/crypto from 0.43.0 to 0.45.0 in /test/kafka/kafka-client-loadtest (#7510)

chore(deps): bump golang.org/x/crypto

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.43.0 to 0.45.0.
- [Commits](https://github.com/golang/crypto/compare/v0.43.0...v0.45.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.45.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Account Info (#7507)

* Account Info

Add account info on s3.configure

* address comments

* Update command_s3_configure.go

---------

Co-authored-by: chrislu <chris.lu@gmail.com>

* chore(deps): bump org.apache.hadoop:hadoop-common from 3.2.4 to 3.4.0 in /other/java/hdfs-over-ftp (#7513)

chore(deps): bump org.apache.hadoop:hadoop-common

Bumps org.apache.hadoop:hadoop-common from 3.2.4 to 3.4.0.

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-version: 3.4.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump golang.org/x/crypto from 0.43.0 to 0.45.0 (#7511)

* chore(deps): bump golang.org/x/crypto from 0.43.0 to 0.45.0

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.43.0 to 0.45.0.
- [Commits](https://github.com/golang/crypto/compare/v0.43.0...v0.45.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.45.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* go mod tidy

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>

* chore(deps): bump org.apache.hadoop:hadoop-common from 3.2.4 to 3.4.0 in /other/java/hdfs3 (#7512)

* chore(deps): bump org.apache.hadoop:hadoop-common in /other/java/hdfs3

Bumps org.apache.hadoop:hadoop-common from 3.2.4 to 3.4.0.

---
updated-dependencies:
- dependency-name: org.apache.hadoop:hadoop-common
  dependency-version: 3.4.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* add java client unit tests

* Update dependency-reduced-pom.xml

* add java integration tests

* fix

* fix buffer

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>

* S3: JWT generation for volume server authentication (#7514)

* Refactor JWT generation for volume server authentication to use centralized function from filer package, improving code clarity and reducing redundancy.

* Update s3api_object_handlers.go

* S3: S3 Object Retention API to include XML namespace support (#7517)

* Refactor S3 Object Retention API to include XML namespace support and improve compatibility with Veeam. Updated XML tags to remove hardcoded namespaces and added test cases for retention and legal hold configurations without namespaces.

* Added XMLNS field setting in both places

* S3: adds FilerClient to use cached volume id (#7518)

* adds FilerClient to use cached volume id

* refactor: MasterClient embeds vidMapClient to eliminate ~150 lines of duplication

- Create masterVolumeProvider that implements VolumeLocationProvider
- MasterClient now embeds vidMapClient instead of maintaining duplicate cache logic
- Removed duplicate methods: LookupVolumeIdsWithFallback, getStableVidMap, etc.
- MasterClient still receives real-time updates via KeepConnected streaming
- Updates call inherited addLocation/deleteLocation from vidMapClient
- Benefits: DRY principle, shared singleflight, cache chain logic reused
- Zero behavioral changes - only architectural improvement

* refactor: mount uses FilerClient for efficient volume location caching

- Add configurable vidMap cache size (default: 5 historical snapshots)
- Add FilerClientOption struct for clean configuration
  * GrpcTimeout: default 5 seconds (prevents hanging requests)
  * UrlPreference: PreferUrl or PreferPublicUrl
  * CacheSize: number of historical vidMap snapshots (for volume moves)
- NewFilerClient uses option struct for better API extensibility
- Improved error handling in filerVolumeProvider.LookupVolumeIds:
  * Distinguish genuine 'not found' from communication failures
  * Log volumes missing from filer response
  * Return proper error context with volume count
  * Document that filer Locations lacks Error field (unlike master)
- FilerClient.GetLookupFileIdFunction() handles URL preference automatically
- Mount (WFS) creates FilerClient with appropriate options
- Benefits for weed mount:
  * Singleflight: Deduplicates concurrent volume lookups
  * Cache history: Old volume locations available briefly when volumes move
  * Configurable cache depth: Tune for different deployment environments
  * Battle-tested vidMap cache with cache chain
  * Better concurrency handling with timeout protection
  * Improved error visibility and debugging
- Old filer.LookupFn() kept for backward compatibility
- Performance improvement for mount operations with high concurrency

* fix: prevent vidMap swap race condition in LookupFileIdWithFallback

- Hold vidMapLock.RLock() during entire vm.LookupFileId() call
- Prevents resetVidMap() from swapping vidMap mid-operation
- Ensures atomic access to the current vidMap instance
- Added documentation warnings to getStableVidMap() about swap risks
- Enhanced withCurrentVidMap() documentation for clarity

This fixes a subtle race condition where:
1. Thread A: acquires lock, gets vm pointer, releases lock
2. Thread B: calls resetVidMap(), swaps vc.vidMap
3. Thread A: calls vm.LookupFileId() on old/stale vidMap

While the old vidMap remains valid (in cache chain), holding the lock
ensures we consistently use the current vidMap for the entire operation.
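
The locking pattern, as an illustrative sketch (simplified types, not the actual vidMapClient):

    import "sync"

    type locationCache struct {
        mu sync.RWMutex
        m  map[string][]string // volume id -> server URLs
    }

    // Lookup holds the read lock for the entire operation, so a concurrent
    // Reset cannot swap the map out from under the caller.
    func (c *locationCache) Lookup(vid string) ([]string, bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        urls, ok := c.m[vid]
        return urls, ok
    }

    // Reset swaps in a fresh map, e.g. after reconnecting to a new master.
    func (c *locationCache) Reset() {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.m = make(map[string][]string)
    }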

* fix: FilerClient supports multiple filer addresses for high availability

Critical fix: FilerClient now accepts []ServerAddress instead of a single address
- Prevents mount failure when first filer is down (regression fix)
- Implements automatic failover to remaining filers
- Uses round-robin with atomic index tracking (same pattern as WFS.WithFilerClient)
- Retries all configured filers before giving up
- Updates successful filer index for future requests

Changes:
- NewFilerClient([]pb.ServerAddress, ...) instead of (pb.ServerAddress, ...)
- filerVolumeProvider references FilerClient for failover access
- LookupVolumeIds tries all filers with util.Retry pattern
- Mount passes all option.FilerAddresses for HA
- S3 wraps single filer in slice for API consistency

This restores the high availability that existed in the old implementation
where mount would automatically failover between configured filers.

* fix: restore leader change detection in KeepConnected stream loop

Critical fix: Leader change detection was accidentally removed from the streaming loop
- Master can announce leader changes during an active KeepConnected stream
- Without this check, client continues talking to non-leader until connection breaks
- This can lead to stale data or operational errors

The check needs to be in TWO places:
1. Initial response (lines 178-187): Detect redirect on first connect
2. Stream loop (lines 203-209): Detect leader changes during active stream

Restored the loop check that was accidentally removed during refactoring.
This ensures the client immediately reconnects to new leader when announced.

* improve: address code review findings on error handling and documentation

1. Master provider now preserves per-volume errors
   - Surface detailed errors from master (e.g., misconfiguration, deletion)
   - Return partial results with aggregated errors using errors.Join
   - Callers can now distinguish specific volume failures from general errors
   - Addresses issue of losing vidLoc.Error details

2. Document GetMaster initialization contract
   - Add comprehensive documentation explaining blocking behavior
   - Clarify that KeepConnectedToMaster must be started first
   - Provide typical initialization pattern example
   - Prevent confusing timeouts during warm-up

3. Document partial results API contract
   - LookupVolumeIdsWithFallback explicitly documents partial results
   - Clear examples of how to handle result + error combinations
   - Helps prevent callers from discarding valid partial results

4. Add safeguards to legacy filer.LookupFn
   - Add deprecation warning with migration guidance
   - Implement simple 10,000 entry cache limit
   - Log warning when limit reached
   - Recommend wdclient.FilerClient for new code
   - Prevents unbounded memory growth in long-running processes

These changes improve API clarity and operational safety while maintaining
backward compatibility.

* fix: handle partial results correctly in LookupVolumeIdsWithFallback callers

Two callers were discarding partial results by checking err before processing
the result map. While these are currently single-volume lookups (so partial
results aren't possible), the code was fragile and would break if we ever
batched multiple volumes together.

Changes:
- Check result map FIRST, then conditionally check error
- If volume is found in result, use it (ignore errors about other volumes)
- If volume is NOT found and err != nil, include error context with %w
- Add defensive comments explaining the pattern for future maintainers

This makes the code:
1. Correct for future batched lookups
2. More informative (preserves underlying error details)
3. Consistent with filer_grpc_server.go which already handles this correctly

Example: If looking up ["1", "2", "999"] and only 999 fails, callers
looking for volumes 1 or 2 will succeed instead of failing unnecessarily.
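
The caller-side pattern, sketched with illustrative names (lookupVolumeIdsWithFallback stands in for the actual method):

    // lookupOne checks the result map first and only then consults the
    // aggregated error, so partial results are never discarded.
    func lookupOne(ctx context.Context, vid string) ([]string, error) {
        locations, err := lookupVolumeIdsWithFallback(ctx, []string{vid})
        if loc, found := locations[vid]; found {
            return loc, nil // our volume resolved, even if other volumes errored
        }
        if err != nil {
            return nil, fmt.Errorf("lookup volume %s: %w", vid, err)
        }
        return nil, fmt.Errorf("volume %s not found", vid)
    }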

* improve: address remaining code review findings

1. Lazy initialize FilerClient in mount for proxy-only setups
   - Only create FilerClient when VolumeServerAccess != "filerProxy"
   - Avoids wasted work when all reads proxy through filer
   - filerClient is nil for proxy mode, initialized for direct access

2. Fix inaccurate deprecation comment in filer.LookupFn
   - Updated comment to reflect current behavior (10k bounded cache)
   - Removed claim of "unbounded growth" after adding size limit
   - Still directs new code to wdclient.FilerClient for better features

3. Audit all MasterClient usages for KeepConnectedToMaster
   - Verified all production callers start KeepConnectedToMaster early
   - Filer, Shell, Master, Broker, Benchmark, Admin all correct
   - IAM creates MasterClient but never uses it (harmless)
   - Test code doesn't need KeepConnectedToMaster (mocks)

All callers properly follow the initialization pattern documented in
GetMaster(), preventing unexpected blocking or timeouts.

* fix: restore observability instrumentation in MasterClient

During the refactoring, several important stats counters and logging
statements were accidentally removed from tryConnectToMaster. These are
critical for monitoring and debugging the health of master client connections.

Restored instrumentation:
1. stats.MasterClientConnectCounter("total") - tracks all connection attempts
2. stats.MasterClientConnectCounter(FailedToKeepConnected) - when KeepConnected stream fails
3. stats.MasterClientConnectCounter(FailedToReceive) - when Recv() fails in loop
4. stats.MasterClientConnectCounter(Failed) - when overall grpcErr occurs
5. stats.MasterClientConnectCounter(OnPeerUpdate) - when peer updates detected

Additionally restored peer update logging:
- "+ filer@host noticed group.type address" for node additions
- "- filer@host noticed group.type address" for node removals
- Only logs updates matching the client's FilerGroup for noise reduction

This information is valuable for:
- Monitoring cluster health and connection stability
- Debugging cluster membership changes
- Tracking master failover and reconnection patterns
- Identifying network issues between clients and masters

No functional changes - purely observability restoration.

* improve: implement gRPC-aware retry for FilerClient volume lookups

The previous implementation used util.Retry which only retries errors
containing the string "transport". This is insufficient for handling
the full range of transient gRPC errors.

Changes:
1. Added isRetryableGrpcError() to properly inspect gRPC status codes
   - Retries: Unavailable, DeadlineExceeded, ResourceExhausted, Aborted
   - Falls back to string matching for non-gRPC network errors

2. Replaced util.Retry with custom retry loop
   - 3 attempts with exponential backoff (1s, 1.5s, 2.25s)
   - Tries all N filers on each attempt (N*3 total attempts max)
   - Fast-fails on non-retryable errors (NotFound, PermissionDenied, etc.)

3. Improved logging
   - Shows both filer attempt (x/N) and retry attempt (y/3)
   - Logs retry reason and wait time for debugging

Benefits:
- Better handling of transient gRPC failures (server restarts, load spikes)
- Faster failure for permanent errors (no wasted retries)
- More informative logs for troubleshooting
- Maintains existing HA failover across multiple filers

Example: If all 3 filers return Unavailable (server overload):
- Attempt 1: try all 3 filers, wait 1s
- Attempt 2: try all 3 filers, wait 1.5s
- Attempt 3: try all 3 filers, fail

Example: If filer returns NotFound (volume doesn't exist):
- Attempt 1: try all 3 filers, fast-fail (no retry)

* fmt

* improve: add circuit breaker to skip known-unhealthy filers

The previous implementation tried all filers on every failure, including
known-unhealthy ones. This wasted time retrying permanently down filers.

Problem scenario (3 filers, filer0 is down):
- Last successful: filer1 (saved as filerIndex=1)
- Next lookup when filer1 fails:
  Retry 1: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
  Retry 2: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
  Retry 3: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
  Total wasted: 15 seconds on known-bad filer!

Solution: Circuit breaker pattern
- Track consecutive failures per filer (atomic int32)
- Skip filers with 3+ consecutive failures
- Re-check unhealthy filers every 30 seconds
- Reset failure count on success

New behavior:
- filer0 fails 3 times → marked unhealthy
- Future lookups skip filer0 for 30 seconds
- After 30s, re-check filer0 (allows recovery)
- If filer0 succeeds, reset failure count to 0

Benefits:
1. Avoids wasting time on known-down filers
2. Still sticks to last healthy filer (via filerIndex)
3. Allows recovery (30s re-check window)
4. No configuration needed (automatic)

Implementation details:
- filerHealth struct tracks failureCount (atomic) + lastFailureTime
- shouldSkipUnhealthyFiler(): checks if we should skip this filer
- recordFilerSuccess(): resets failure count to 0
- recordFilerFailure(): increments count, updates timestamp
- Logs when skipping unhealthy filers (V(2) level)

Example with circuit breaker:
- filer0 down, saved filerIndex=1 (filer1 healthy)
- Lookup 1: filer1(ok) → Done (0.01s)
- Lookup 2: filer1(fail) → filer2(ok) → Done, save filerIndex=2 (0.01s)
- Lookup 3: filer2(fail) → skip filer0 (unhealthy) → filer1(ok) → Done (0.01s)

Much better than wasting 15s trying filer0 repeatedly!
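
A sketch of the per-filer health tracking (illustrative names; uses the atomic timestamp form adopted in the data-race fix further below, with the default threshold of 3 failures and 30s re-check window):

    import (
        "sync/atomic"
        "time"
    )

    type filerHealth struct {
        failureCount      int32 // accessed atomically
        lastFailureTimeNs int64 // accessed atomically; 0 means never failed
    }

    func (h *filerHealth) recordFailure() {
        atomic.AddInt32(&h.failureCount, 1)
        atomic.StoreInt64(&h.lastFailureTimeNs, time.Now().UnixNano())
    }

    func (h *filerHealth) recordSuccess() {
        atomic.StoreInt32(&h.failureCount, 0)
    }

    // shouldSkip reports whether this filer should be skipped for now; after 30s
    // it is re-checked so a recovered filer can rejoin the rotation.
    func (h *filerHealth) shouldSkip() bool {
        if atomic.LoadInt32(&h.failureCount) < 3 {
            return false
        }
        last := atomic.LoadInt64(&h.lastFailureTimeNs)
        return last != 0 && time.Since(time.Unix(0, last)) <= 30*time.Second
    }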

* fix: OnPeerUpdate should only process updates for matching FilerGroup

Critical bug: The OnPeerUpdate callback was incorrectly moved outside the
FilerGroup check when restoring observability instrumentation. This caused
clients to process peer updates for ALL filer groups, not just their own.

Problem:
  Before: mc.OnPeerUpdate only called for update.FilerGroup == mc.FilerGroup
  Bug:    mc.OnPeerUpdate called for ALL updates regardless of FilerGroup

Impact:
- Multi-tenant deployments with separate filer groups would see cross-group
  updates (e.g., group A clients processing group B updates)
- Could cause incorrect cluster membership tracking
- OnPeerUpdate handlers (like Filer's DLM ring updates) would receive
  irrelevant updates from other groups

Example scenario:
  Cluster has two filer groups: "production" and "staging"
  Production filer connects with FilerGroup="production"

  Incorrect behavior (bug):
    - Receives "staging" group updates
    - Incorrectly adds staging filers to production DLM ring
    - Cross-tenant data access issues

  Correct behavior (fixed):
    - Only receives "production" group updates
    - Only adds production filers to production DLM ring
    - Proper isolation between groups

Fix:
  Moved mc.OnPeerUpdate(update, time.Now()) back INSIDE the FilerGroup check
  where it belongs, matching the original implementation.

The logging and stats counter were already correctly scoped to matching
FilerGroup, so they remain inside the if block as intended.

* improve: clarify Aborted error handling in volume lookups

Added documentation and logging to address the concern that codes.Aborted
might not always be retryable in all contexts.

Context-specific justification for treating Aborted as retryable:

Volume location lookups (LookupVolume RPC) are simple, read-only operations:
  - No transactions
  - No write conflicts
  - No application-level state changes
  - Idempotent (safe to retry)

In this context, Aborted is most likely caused by:
  - Filer restarting/recovering (transient)
  - Connection interrupted mid-request (transient)
  - Server-side resource cleanup (transient)

NOT caused by:
  - Application-level conflicts (no writes)
  - Transaction failures (no transactions)
  - Logical errors (read-only lookup)

Changes:
1. Added detailed comment explaining the context-specific reasoning
2. Added V(1) logging when treating Aborted as retryable
   - Helps detect misclassification if it occurs
   - Visible in verbose logs for troubleshooting
3. Split switch statement for clarity (one case per line)

If future analysis shows Aborted should not be retried, operators will
now have visibility via logs to make that determination. The logging
provides evidence for future tuning decisions.

Alternative approaches considered but not implemented:
  - Removing Aborted entirely (too conservative for read-only ops)
  - Message content inspection (adds complexity, no known patterns yet)
  - Different handling per RPC type (premature optimization)

* fix: IAM server must start KeepConnectedToMaster for masterClient usage

The IAM server creates and uses a MasterClient but never started
KeepConnectedToMaster, which could cause blocking if IAM config files
have chunks requiring volume lookups.

Problem flow:
  NewIamApiServerWithStore()
    → creates masterClient
    →  NEVER starts KeepConnectedToMaster

  GetS3ApiConfigurationFromFiler()
    → filer.ReadEntry(iama.masterClient, ...)
      → StreamContent(masterClient, ...) if file has chunks
        → masterClient.GetLookupFileIdFunction()
          → GetMaster(ctx) ← BLOCKS indefinitely waiting for connection!

While IAM config files (identity & policies) are typically small and
stored inline without chunks, the code path exists and would block
if the files ever had chunks.

Fix:
  Start KeepConnectedToMaster in background goroutine right after
  creating masterClient, following the documented pattern:

    mc := wdclient.NewMasterClient(...)
    go mc.KeepConnectedToMaster(ctx)

This ensures masterClient is usable if ReadEntry ever needs to
stream chunked content from volume servers.

Note: This bug was dormant because IAM config files are small (<256 bytes)
and SeaweedFS stores small files inline in Entry.Content, not as chunks.
The bug would only manifest if:
  - IAM config grew > 256 bytes (inline threshold)
  - Config was stored as chunks on volume servers
  - ReadEntry called StreamContent
  - GetMaster blocked indefinitely

Now all 9 production MasterClient instances correctly follow the pattern.

* fix: data race on filerHealth.lastFailureTime in circuit breaker

The circuit breaker tracked lastFailureTime as time.Time, which was
written in recordFilerFailure and read in shouldSkipUnhealthyFiler
without synchronization, causing a data race.

Data race scenario:
  Goroutine 1: recordFilerFailure(0)
    health.lastFailureTime = time.Now()  //  unsynchronized write

  Goroutine 2: shouldSkipUnhealthyFiler(0)
    time.Since(health.lastFailureTime)   //  unsynchronized read

  → RACE DETECTED by -race detector

Fix:
  Changed lastFailureTime from time.Time to int64 (lastFailureTimeNs)
  storing Unix nanoseconds for atomic access:

  Write side (recordFilerFailure):
    atomic.StoreInt64(&health.lastFailureTimeNs, time.Now().UnixNano())

  Read side (shouldSkipUnhealthyFiler):
    lastFailureNs := atomic.LoadInt64(&health.lastFailureTimeNs)
    if lastFailureNs == 0 { return false }  // Never failed
    lastFailureTime := time.Unix(0, lastFailureNs)
    time.Since(lastFailureTime) > 30*time.Second

Benefits:
  - Atomic reads/writes (no data race)
  - Efficient (int64 is 8 bytes, always atomic on 64-bit systems)
  - Zero value (0) naturally means "never failed"
  - No mutex needed (lock-free circuit breaker)

Note: sync/atomic was already imported for failureCount, so no new
import needed.

* fix: create fresh timeout context for each filer retry attempt

The timeout context was created once at function start and reused across
all retry attempts, causing subsequent retries to run with progressively
shorter (or expired) deadlines.

Problem flow:
  Line 244: timeoutCtx, cancel := context.WithTimeout(ctx, 5s)
  defer cancel()

  Retry 1, filer 0: client.LookupVolume(timeoutCtx, ...) ← 5s available 
  Retry 1, filer 1: client.LookupVolume(timeoutCtx, ...) ← 3s left
  Retry 1, filer 2: client.LookupVolume(timeoutCtx, ...) ← 0.5s left
  Retry 2, filer 0: client.LookupVolume(timeoutCtx, ...) ← EXPIRED! 

Result: Retries always fail with DeadlineExceeded, defeating the purpose
of retries.

Fix:
  Moved context.WithTimeout inside the per-filer loop, creating a fresh
  timeout context for each attempt:

    for x := 0; x < n; x++ {
      timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
      err := pb.WithGrpcFilerClient(..., func(client) {
        resp, err := client.LookupVolume(timeoutCtx, ...)
        ...
      })
      cancel()  // Clean up immediately after call
    }

Benefits:
  - Each filer attempt gets full fc.grpcTimeout (default 5s)
  - Retries actually have time to complete
  - No context leaks (cancel called after each attempt)
  - More predictable timeout behavior

Example with fix:
  Retry 1, filer 0: fresh 5s timeout 
  Retry 1, filer 1: fresh 5s timeout 
  Retry 2, filer 0: fresh 5s timeout 

Total max time: 3 retries × 3 filers × 5s = 45s (plus backoff)

Note: The outer ctx (from caller) still provides overall cancellation if
the caller cancels or times out the entire operation.

* fix: always reset vidMap cache on master reconnection

The previous refactoring removed the else block that resets vidMap when
the first message from a newly connected master is not a VolumeLocation.

Problem scenario:
  1. Client connects to master-1 and builds vidMap cache
  2. Master-1 fails, client connects to master-2
  3. First message from master-2 is a ClusterNodeUpdate (not VolumeLocation)
  4. Old code: vidMap is reset and updated 
  5. New code: vidMap is NOT reset 
  6. Result: Client uses stale cache from master-1 → data access errors

Example flow with bug:
  Connect to master-2
  First message: ClusterNodeUpdate {filer.x added}
  → No resetVidMap() call
  → vidMap still has master-1's stale volume locations
  → Client reads from wrong volume servers → 404 errors

Fix:
  Restored the else block that resets vidMap when first message is not
  a VolumeLocation:

    if resp.VolumeLocation != nil {
      // ... check leader, reset, and update ...
    } else {
      // First message is ClusterNodeUpdate or other type
      // Must still reset to avoid stale data
      mc.resetVidMap()
    }

This ensures the cache is always cleared when establishing a new master
connection, regardless of what the first message type is.

Root cause:
  During the vidMapClient refactoring, this else block was accidentally
  dropped, making failover behavior fragile and non-deterministic (depends
  on which message type arrives first from the new master).

Impact:
  - High severity for master failover scenarios
  - Could cause read failures, 404s, or wrong data access
  - Only manifests when first message is not VolumeLocation

* fix: goroutine and connection leak in IAM server shutdown

The IAM server's KeepConnectedToMaster goroutine used context.Background(),
which is non-cancellable, causing the goroutine and its gRPC connections
to leak on server shutdown.

Problem:
  go masterClient.KeepConnectedToMaster(context.Background())

  - context.Background() never cancels
  - KeepConnectedToMaster goroutine runs forever
  - gRPC connection to master stays open
  - No way to stop cleanly on server shutdown

Result: Resource leaks when IAM server is stopped

Fix:
  1. Added shutdownContext and shutdownCancel to IamApiServer struct
  2. Created cancellable context in NewIamApiServerWithStore:
       shutdownCtx, shutdownCancel := context.WithCancel(context.Background())
  3. Pass shutdownCtx to KeepConnectedToMaster:
       go masterClient.KeepConnectedToMaster(shutdownCtx)
  4. Added Shutdown() method to invoke cancel:
       func (iama *IamApiServer) Shutdown() {
           if iama.shutdownCancel != nil {
               iama.shutdownCancel()
           }
       }

  5. Stored masterClient reference on IamApiServer for future use

Benefits:
  - Goroutine stops cleanly when Shutdown() is called
  - gRPC connections are closed properly
  - No resource leaks on server restart/stop
  - Shutdown() is idempotent (safe to call multiple times)

Usage (for future graceful shutdown):
  iamServer, _ := iamapi.NewIamApiServer(...)
  defer iamServer.Shutdown()

  // or in signal handler:
  sigChan := make(chan os.Signal, 1)
  signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
  go func() {
      <-sigChan
      iamServer.Shutdown()
      os.Exit(0)
  }()

Note: Current command implementations (weed/command/iam.go) don't have
shutdown paths yet, but this makes IAM server ready for proper lifecycle
management when that infrastructure is added.

* refactor: remove unnecessary KeepMasterClientConnected wrapper in filer

The Filer.KeepMasterClientConnected() method was an unnecessary wrapper that
just forwarded to MasterClient.KeepConnectedToMaster(). This wrapper added
no value and created inconsistency with other components that call
KeepConnectedToMaster directly.

Removed:
  filer.go:178-180
    func (fs *Filer) KeepMasterClientConnected(ctx context.Context) {
        fs.MasterClient.KeepConnectedToMaster(ctx)
    }

Updated caller:
  filer_server.go:181
    - go fs.filer.KeepMasterClientConnected(context.Background())
    + go fs.filer.MasterClient.KeepConnectedToMaster(context.Background())

Benefits:
  - Consistent with other components (S3, IAM, Shell, Mount)
  - Removes unnecessary indirection
  - Clearer that KeepConnectedToMaster runs in background goroutine
  - Follows the documented pattern from MasterClient.GetMaster()

Note: shell/commands.go was verified and already correctly starts
KeepConnectedToMaster in a background goroutine (shell_liner.go:51):
  go commandEnv.MasterClient.KeepConnectedToMaster(ctx)

* fix: use client ID instead of timeout for gRPC signature parameter

The pb.WithGrpcFilerClient signature parameter is meant to be a client
identifier for logging and tracking (added as 'sw-client-id' gRPC metadata
in streaming mode), not a timeout value.

Problem:
  timeoutMs := int32(fc.grpcTimeout.Milliseconds())  // 5000 (5 seconds)
  err := pb.WithGrpcFilerClient(false, timeoutMs, filerAddress, ...)

  - Passing timeout (5000ms) as signature/client ID
  - Misuse of API: signature should be a unique client identifier
  - Timeout is already handled by timeoutCtx passed to gRPC call
  - Inconsistent with other callers (all use 0 or proper client ID)

How WithGrpcFilerClient uses signature parameter:
  func WithGrpcClient(..., signature int32, ...) {
    if streamingMode && signature != 0 {
      md := metadata.New(map[string]string{"sw-client-id": fmt.Sprintf("%d", signature)})
      ctx = metadata.NewOutgoingContext(ctx, md)
    }
    ...
  }

It's for client identification, not timeout control!

Fix:
  1. Added clientId int32 field to FilerClient struct
  2. Initialize with rand.Int31() in NewFilerClient for unique ID
  3. Removed timeoutMs variable (and misleading comment)
  4. Use fc.clientId in pb.WithGrpcFilerClient call

Before:
  err := pb.WithGrpcFilerClient(false, timeoutMs, ...)
                                      ^^^^^^^^^ Wrong! (5000)

After:
  err := pb.WithGrpcFilerClient(false, fc.clientId, ...)
                                      ^^^^^^^^^^^^ Correct! (random int31)

Benefits:
  - Correct API usage (signature = client ID, not timeout)
  - Timeout still works via timeoutCtx (unchanged)
  - Consistent with other pb.WithGrpcFilerClient callers
  - Enables proper client tracking on filer side via gRPC metadata
  - Each FilerClient instance has unique ID for debugging

Examples of correct usage elsewhere:
  weed/iamapi/iamapi_server.go:145     pb.WithGrpcFilerClient(false, 0, ...)
  weed/command/s3.go:215               pb.WithGrpcFilerClient(false, 0, ...)
  weed/shell/commands.go:110           pb.WithGrpcFilerClient(streamingMode, 0, ...)

All use 0 (or a proper signature), not a timeout value.

* fix: add timeout to master volume lookup to prevent indefinite blocking

The masterVolumeProvider.LookupVolumeIds method was using the context
directly without a timeout, which could cause it to block indefinitely
if the master is slow to respond or unreachable.

Problem:
  err := pb.WithMasterClient(false, p.masterClient.GetMaster(ctx), ...)
  resp, err := client.LookupVolume(ctx, &master_pb.LookupVolumeRequest{...})

  - No timeout on gRPC call to master
  - Could block indefinitely if master is unresponsive
  - Inconsistent with FilerClient which uses 5s timeout
  - This is a fallback path (cache miss) but still needs protection

Scenarios where this could hang:
  1. Master server under heavy load (slow response)
  2. Network issues between client and master
  3. Master server hung or deadlocked
  4. Master in process of shutting down

Fix:
  timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
  defer cancel()

  err := pb.WithMasterClient(false, p.masterClient.GetMaster(timeoutCtx), ...)
  resp, err := client.LookupVolume(timeoutCtx, &master_pb.LookupVolumeRequest{...})

Benefits:
  - Prevents indefinite blocking on master lookup
  - Consistent with FilerClient timeout pattern (5 seconds)
  - Faster failure detection when master is unresponsive
  - Caller's context still honored (timeout is in addition, not replacement)
  - Improves overall system resilience

Note: 5 seconds is a reasonable default for volume lookups:
  - Long enough for normal master response (~10-50ms)
  - Short enough to fail fast on issues
  - Matches FilerClient's grpcTimeout default

* purge

* refactor: address code review feedback on comments and style

Fixed several code quality issues identified during review:

1. Corrected backoff algorithm description in filer_client.go:
   - Changed "Exponential backoff" to "Multiplicative backoff with 1.5x factor"
   - The formula waitTime * 3/2 produces 1s, 1.5s, 2.25s, not exponential 2^n
   - More accurate terminology prevents confusion

2. Removed redundant nil check in vidmap_client.go:
   - After the for loop, node is guaranteed to be non-nil
   - Loop either returns early or assigns non-nil value to node
   - Simplified: if node != nil { node.cache.Store(nil) } → node.cache.Store(nil)

3. Added startup logging to IAM server for consistency:
   - Log when master client connection starts
   - Matches pattern in S3ApiServer (line 100 in s3api_server.go)
   - Improves operational visibility during startup
   - Added missing glog import

4. Fixed indentation in filer/reader_at.go:
   - Lines 76-91 had incorrect indentation (extra tab level)
   - Line 93 also misaligned
   - Now properly aligned with surrounding code

5. Updated deprecation comment to follow Go convention:
   - Changed "DEPRECATED:" to "Deprecated:" (standard Go format)
   - Tools like staticcheck and IDEs recognize the standard format
   - Enables automated deprecation warnings in tooling
   - Better developer experience

All changes are cosmetic and do not affect functionality.

* fmt

* refactor: make circuit breaker parameters configurable in FilerClient

The circuit breaker failure threshold (3) and reset timeout (30s) were
hardcoded, making it difficult to tune the client's behavior in different
deployment environments without modifying the code.

Problem:
  func shouldSkipUnhealthyFiler(index int32) bool {
    if failureCount < 3 {              // Hardcoded threshold
      return false
    }
    if time.Since(lastFailureTime) > 30*time.Second {  // Hardcoded timeout
      return false
    }
  }

Different environments have different needs:
  - High-traffic production: may want lower threshold (2) for faster failover
  - Development/testing: may want higher threshold (5) to tolerate flaky networks
  - Low-latency services: may want shorter reset timeout (10s)
  - Batch processing: may want longer reset timeout (60s)

Solution:
  1. Added fields to FilerClientOption:
     - FailureThreshold int32 (default: 3)
     - ResetTimeout time.Duration (default: 30s)

  2. Added fields to FilerClient:
     - failureThreshold int32
     - resetTimeout time.Duration

  3. Applied defaults in NewFilerClient with option override:
     failureThreshold := int32(3)
     resetTimeout := 30 * time.Second
     if opt.FailureThreshold > 0 {
       failureThreshold = opt.FailureThreshold
     }
     if opt.ResetTimeout > 0 {
       resetTimeout = opt.ResetTimeout
     }

  4. Updated shouldSkipUnhealthyFiler to use configurable values:
     if failureCount < fc.failureThreshold { ... }
     if time.Since(lastFailureTime) > fc.resetTimeout { ... }

Benefits:
  ✓ Tunable for different deployment environments
  ✓ Backward compatible (defaults match previous hardcoded values)
  ✓ No breaking changes to existing code
  ✓ Better maintainability and flexibility

Example usage:
  // Aggressive failover for low-latency production
  fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
    FailureThreshold: 2,
    ResetTimeout:     10 * time.Second,
  })

  // Tolerant of flaky networks in development
  fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
    FailureThreshold: 5,
    ResetTimeout:     60 * time.Second,
  })
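
For reference, a rough sketch of what such a configurable circuit-breaker check could look like (struct and field names are illustrative, not the exact ones in the codebase):

  package main

  import (
    "fmt"
    "sync/atomic"
    "time"
  )

  type filerHealth struct {
    failureCount    atomic.Int32
    lastFailureTime atomic.Int64 // unix nanoseconds of the most recent failure
  }

  type FilerClient struct {
    failureThreshold int32
    resetTimeout     time.Duration
    health           []filerHealth
  }

  // shouldSkipUnhealthyFiler reports whether the filer at index has failed at
  // least failureThreshold times and the reset window has not yet elapsed.
  func (fc *FilerClient) shouldSkipUnhealthyFiler(index int32) bool {
    h := &fc.health[index]
    if h.failureCount.Load() < fc.failureThreshold {
      return false
    }
    last := time.Unix(0, h.lastFailureTime.Load())
    if time.Since(last) > fc.resetTimeout {
      return false // window expired: give the filer another chance
    }
    return true
  }

  func main() {
    fc := &FilerClient{failureThreshold: 3, resetTimeout: 30 * time.Second, health: make([]filerHealth, 1)}
    fc.health[0].failureCount.Store(3)
    fc.health[0].lastFailureTime.Store(time.Now().UnixNano())
    fmt.Println(fc.shouldSkipUnhealthyFiler(0)) // true: skip this filer for now
  }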

* retry parameters

* refactor: make retry and timeout parameters configurable

Made retry logic and gRPC timeouts configurable across FilerClient and
MasterClient to support different deployment environments and network
conditions.

Problem 1: Hardcoded retry parameters in FilerClient
  waitTime := time.Second          // Fixed at 1s
  maxRetries := 3                  // Fixed at 3 attempts
  waitTime = waitTime * 3 / 2      // Fixed 1.5x multiplier

Different environments have different needs:
  - Unstable networks: may want more retries (5) with longer waits (2s)
  - Low-latency production: may want fewer retries (2) with shorter waits (500ms)
  - Batch processing: may want exponential backoff (2x) instead of 1.5x

Problem 2: Hardcoded gRPC timeout in MasterClient
  timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)

Master lookups may need different timeouts:
  - High-latency cross-region: may need 10s timeout
  - Local network: may use 2s timeout for faster failure detection

Solution for FilerClient:
  1. Added fields to FilerClientOption:
     - MaxRetries int (default: 3)
     - InitialRetryWait time.Duration (default: 1s)
     - RetryBackoffFactor float64 (default: 1.5)

  2. Added fields to FilerClient:
     - maxRetries int
     - initialRetryWait time.Duration
     - retryBackoffFactor float64

  3. Updated LookupVolumeIds to use configurable values:
     waitTime := fc.initialRetryWait
     maxRetries := fc.maxRetries
     for retry := 0; retry < maxRetries; retry++ {
       ...
       waitTime = time.Duration(float64(waitTime) * fc.retryBackoffFactor)
     }

Solution for MasterClient:
  1. Added grpcTimeout field to MasterClient (default: 5s)
  2. Initialize in NewMasterClient with 5 * time.Second default
  3. Updated masterVolumeProvider to use p.masterClient.grpcTimeout

Benefits:
  ✓ Tunable for different network conditions and deployment scenarios
  ✓ Backward compatible (defaults match previous hardcoded values)
  ✓ No breaking changes to existing code
  ✓ Consistent configuration pattern across FilerClient and MasterClient

Example usage:
  // Fast-fail for low-latency production with stable network
  fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
    MaxRetries:         2,
    InitialRetryWait:   500 * time.Millisecond,
    RetryBackoffFactor: 2.0,  // Exponential backoff
    GrpcTimeout:        2 * time.Second,
  })

  // Patient retries for unstable network or batch processing
  fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
    MaxRetries:         5,
    InitialRetryWait:   2 * time.Second,
    RetryBackoffFactor: 1.5,
    GrpcTimeout:        10 * time.Second,
  })

Note: MasterClient timeout is currently set at construction time and not
user-configurable via NewMasterClient parameters. Future enhancement could
add a MasterClientOption struct similar to FilerClientOption.
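
A minimal, self-contained sketch of the retry and backoff behaviour described above (the withRetry helper and its option names are illustrative):

  package main

  import (
    "errors"
    "fmt"
    "time"
  )

  type retryOptions struct {
    MaxRetries         int           // default 3
    InitialRetryWait   time.Duration // default 1s
    RetryBackoffFactor float64       // default 1.5
  }

  func withRetry(opt retryOptions, op func() error) error {
    waitTime := opt.InitialRetryWait
    var err error
    for attempt := 0; attempt < opt.MaxRetries; attempt++ {
      if err = op(); err == nil {
        return nil
      }
      if attempt < opt.MaxRetries-1 {
        time.Sleep(waitTime)
        // 1s -> 1.5s -> 2.25s with the default 1.5 factor
        waitTime = time.Duration(float64(waitTime) * opt.RetryBackoffFactor)
      }
    }
    return err
  }

  func main() {
    opt := retryOptions{MaxRetries: 3, InitialRetryWait: time.Second, RetryBackoffFactor: 1.5}
    err := withRetry(opt, func() error { return errors.New("lookup failed") })
    fmt.Println(err)
  }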

* fix: rename vicCacheLock to vidCacheLock for consistency

Fixed typo in variable name for better code consistency and readability.

Problem:
  vidCache := make(map[string]*filer_pb.Locations)
  var vicCacheLock sync.RWMutex  // Typo: vic instead of vid

  vicCacheLock.RLock()
  locations, found := vidCache[vid]
  vicCacheLock.RUnlock()

The variable name 'vicCacheLock' is inconsistent with 'vidCache'.
Both should use 'vid' prefix (volume ID) not 'vic'.

Fix:
  Renamed all 5 occurrences:
  - var vicCacheLock → var vidCacheLock (line 56)
  - vicCacheLock.RLock() → vidCacheLock.RLock() (line 62)
  - vicCacheLock.RUnlock() → vidCacheLock.RUnlock() (line 64)
  - vicCacheLock.Lock() → vidCacheLock.Lock() (line 81)
  - vicCacheLock.Unlock() → vidCacheLock.Unlock() (line 91)

Benefits:
  ✓ Consistent variable naming convention
  ✓ Clearer intent (volume ID cache lock)
  ✓ Better code readability
  ✓ Easier code navigation

* fix: use defer cancel() with anonymous function for proper context cleanup

Fixed context cancellation to use defer pattern correctly in loop iteration.

Problem:
  for x := 0; x < n; x++ {
    timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
    err := pb.WithGrpcFilerClient(...)
    cancel() // Only called on normal return, not on panic
  }

Issues with original approach:
  1. If pb.WithGrpcFilerClient panics, cancel() is never called → context leak
  2. If callback returns early (though unlikely here), cleanup might be missed
  3. Not following Go best practices for context.WithTimeout usage

Problem with naive defer in loop:
  for x := 0; x < n; x++ {
    timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
    defer cancel() //  WRONG: All defers accumulate until function returns
  }

In Go, defer executes when the surrounding *function* returns, not when
the loop iteration ends. This would accumulate n deferred cancel() calls
and leak contexts until LookupVolumeIds returns.

Solution: Wrap in anonymous function
  for x := 0; x < n; x++ {
    err := func() error {
      timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
      defer cancel() //  Executes when anonymous function returns (per iteration)
      return pb.WithGrpcFilerClient(...)
    }()
  }

Benefits:
  ✓ Context always cancelled, even on panic
  ✓ defer executes after each iteration (not accumulated)
  ✓ Follows Go best practices for context.WithTimeout
  ✓ No resource leaks during retry loop execution
  ✓ Cleaner error handling

Reference:
  Go documentation for context.WithTimeout explicitly shows:
    ctx, cancel := context.WithTimeout(...)
    defer cancel()

This is the idiomatic pattern that should always be followed.
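
A runnable sketch of the per-iteration defer pattern (doLookup is a hypothetical stand-in for the wrapped gRPC call):

  package main

  import (
    "context"
    "fmt"
    "time"
  )

  func doLookup(ctx context.Context) error {
    select {
    case <-time.After(10 * time.Millisecond):
      return nil
    case <-ctx.Done():
      return ctx.Err()
    }
  }

  func main() {
    ctx := context.Background()
    for i := 0; i < 3; i++ {
      err := func() error {
        timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
        defer cancel() // runs when this anonymous function returns, i.e. every iteration
        return doLookup(timeoutCtx)
      }()
      fmt.Println("attempt", i, "err:", err)
    }
  }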

* Can't use defer directly in loop

* improve: add data center preference and URL shuffling for consistent performance

Added missing data center preference and load distribution (URL shuffling)
to ensure consistent performance and behavior across all code paths.

Problem 1: PreferPublicUrl path missing DC preference and shuffling
Location: weed/wdclient/filer_client.go lines 184-192

The custom PreferPublicUrl implementation was simply iterating through
locations and building URLs without considering:
  1. Data center proximity (latency optimization)
  2. Load distribution across volume servers

Before:
  for _, loc := range locations {
    url := loc.PublicUrl
    if url == "" { url = loc.Url }
    fullUrls = append(fullUrls, "http://"+url+"/"+fileId)
  }
  return fullUrls, nil

After:
  var sameDcUrls, otherDcUrls []string
  dataCenter := fc.GetDataCenter()
  for _, loc := range locations {
    url := loc.PublicUrl
    if url == "" { url = loc.Url }
    httpUrl := "http://" + url + "/" + fileId
    if dataCenter != "" && dataCenter == loc.DataCenter {
      sameDcUrls = append(sameDcUrls, httpUrl)
    } else {
      otherDcUrls = append(otherDcUrls, httpUrl)
    }
  }
  rand.Shuffle(len(sameDcUrls), ...)
  rand.Shuffle(len(otherDcUrls), ...)
  fullUrls = append(sameDcUrls, otherDcUrls...)

Problem 2: Cache miss path missing URL shuffling
Location: weed/wdclient/vidmap_client.go lines 95-108

The cache miss path (fallback lookup) was missing URL shuffling, while
the cache hit path (vm.LookupFileId) already shuffles URLs. This
inconsistency meant:
  - Cache hit: URLs shuffled → load distributed
  - Cache miss: URLs not shuffled → first server always hit

Before:
  var sameDcUrls, otherDcUrls []string
  // ... build URLs ...
  fullUrls = append(sameDcUrls, otherDcUrls...)
  return fullUrls, nil

After:
  var sameDcUrls, otherDcUrls []string
  // ... build URLs ...
  rand.Shuffle(len(sameDcUrls), ...)
  rand.Shuffle(len(otherDcUrls), ...)
  fullUrls = append(sameDcUrls, otherDcUrls...)
  return fullUrls, nil

Benefits:
  ✓ Reduced latency by preferring same-DC volume servers
  ✓ Even load distribution across all volume servers
  ✓ Consistent behavior between cache hit/miss paths
  ✓ Consistent behavior between PreferUrl and PreferPublicUrl
  ✓ Matches behavior of existing vidMap.LookupFileId implementation

Impact on performance:
  - Lower read latency (same-DC preference)
  - Better volume server utilization (load spreading)
  - No single volume server becomes a hotspot

Note: Added math/rand import to vidmap_client.go for shuffle support.
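
A condensed, self-contained sketch of the DC-preferred ordering with shuffling (the Location struct is simplified for illustration):

  package main

  import (
    "fmt"
    "math/rand"
  )

  type Location struct {
    Url        string
    PublicUrl  string
    DataCenter string
  }

  func buildUrls(locations []Location, fileId, dataCenter string) []string {
    var sameDcUrls, otherDcUrls []string
    for _, loc := range locations {
      url := loc.PublicUrl
      if url == "" {
        url = loc.Url
      }
      httpUrl := "http://" + url + "/" + fileId
      if dataCenter != "" && dataCenter == loc.DataCenter {
        sameDcUrls = append(sameDcUrls, httpUrl)
      } else {
        otherDcUrls = append(otherDcUrls, httpUrl)
      }
    }
    // Shuffle within each group so no single volume server becomes a hotspot,
    // while still preferring same-DC servers overall.
    rand.Shuffle(len(sameDcUrls), func(i, j int) { sameDcUrls[i], sameDcUrls[j] = sameDcUrls[j], sameDcUrls[i] })
    rand.Shuffle(len(otherDcUrls), func(i, j int) { otherDcUrls[i], otherDcUrls[j] = otherDcUrls[j], otherDcUrls[i] })
    return append(sameDcUrls, otherDcUrls...)
  }

  func main() {
    locs := []Location{
      {Url: "10.0.1.1:8080", DataCenter: "dc1"},
      {Url: "10.0.2.1:8080", DataCenter: "dc2"},
    }
    fmt.Println(buildUrls(locs, "3,01637037d6", "dc1"))
  }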

* Update weed/wdclient/masterclient.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* improve: call IAM server Shutdown() for best-effort cleanup

Added call to iamApiServer.Shutdown() to ensure cleanup happens when possible,
and documented the limitations of the current approach.

Problem:
  The Shutdown() method was defined in IamApiServer but never called anywhere,
  meaning the KeepConnectedToMaster goroutine would continue running even when
  the IAM server stopped, causing resource leaks.

Changes:
  1. Store iamApiServer instance in weed/command/iam.go
     - Changed: _, iamApiServer_err := iamapi.NewIamApiServer(...)
     - To: iamApiServer, iamApiServer_err := iamapi.NewIamApiServer(...)

  2. Added defer call for best-effort cleanup
     - defer iamApiServer.Shutdown()
     - This will execute if startIamServer() returns normally

  3. Added logging in Shutdown() method
     - Log when shutdown is triggered for visibility

  4. Documented limitations and future improvements
     - Added note that defer only works for normal function returns
     - SeaweedFS commands don't currently have signal handling
     - Suggested future enhancement: add SIGTERM/SIGINT handling

Current behavior:
  - ✓ Cleanup happens if HTTP server fails to start (glog.Fatalf path)
  - ✓ Cleanup happens if Serve() returns with error (unlikely)
  - ✗ Cleanup does NOT happen on SIGTERM/SIGINT (process killed)

The last case is a limitation of the current command architecture - all
SeaweedFS commands (s3, filer, volume, master, iam) lack signal handling
for graceful shutdown. This is a systemic issue that affects all services.

Future enhancement:
  To properly handle SIGTERM/SIGINT, the command layer would need:

    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)

    go func() {
      httpServer.Serve(listener) // Serve blocks; running it in a goroutine keeps main free
    }()

    <-sigChan
    glog.V(0).Infof("Received shutdown signal")
    iamApiServer.Shutdown()
    httpServer.Shutdown(context.Background())

This would require refactoring the command structure for all services,
which is out of scope for this change.
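
For completeness, a self-contained sketch of what that signal handling could look like (shutdownIamServer is a hypothetical stand-in for iamApiServer.Shutdown()):

  package main

  import (
    "context"
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"
    "time"
  )

  func shutdownIamServer() {
    log.Println("stopping IAM background goroutines")
  }

  func main() {
    httpServer := &http.Server{Addr: ":8111"}

    go func() {
      if err := httpServer.ListenAndServe(); err != nil && err != http.ErrServerClosed {
        log.Fatalf("serve: %v", err)
      }
    }()

    sigChan := make(chan os.Signal, 1)
    signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
    <-sigChan

    log.Println("received shutdown signal")
    shutdownIamServer()

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    _ = httpServer.Shutdown(ctx)
  }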

Benefits of current approach:
  ✓ Best-effort cleanup (better than nothing)
  ✓ Proper cleanup in error paths
  ✓ Documented for future improvement
  ✓ Consistent with how other SeaweedFS services handle lifecycle

* data racing in test

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* test read write by s3fs and PyArrow native file system for s3 (#7520)

* test read write by s3fs and PyArrow native file system for s3

* address comments

* add github action

* S3: list owned buckets (#7519)

* list owned buckets

* simplify

* add unit tests

* no-owner buckets

* set identity id

* fallback to request header if iam is not enabled

* refactor to test

* fix comparing

* fix security vulnerability

* Update s3api_bucket_handlers.go

* Update s3api_bucket_handlers.go

* Update s3api_bucket_handlers.go

* S3: set identity to request context, and remove obsolete code (#7523)

* list owned buckets

* simplify

* add unit tests

* no-owner buckets

* set identity id

* fallback to request header if iam is not enabled

* refactor to test

* fix comparing

* fix security vulnerability

* Update s3api_bucket_handlers.go

* Update s3api_bucket_handlers.go

* Update s3api_bucket_handlers.go

* set identity to request context

* remove SeaweedFSIsDirectoryKey

* remove obsolete

* simplify

* reuse

* refactor or remove obsolete logic on filer

* Removed the redundant check in GetOrHeadHandler

* surfacing invalid X-Amz-Tagging as a client error

* clean up

* constant

* reuse

* multiple header values

* code reuse

* err on duplicated tag key

* check errors

* read inside filer

* add debugging for InvalidAccessKeyId

* fix read only volumes

* error format

* do not implement checkReadOnlyVolumes

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Dima Tisnek <dimaqq@gmail.com>
Co-authored-by: Feng Shao <88640691+shaofeng66@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Leonardo Lara <49646901+digitalinfobr@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

SeaweedFS

Sponsor SeaweedFS via Patreon

SeaweedFS is an independent Apache-licensed open source project with its ongoing development made possible entirely thanks to the support of these awesome backers. If you'd like to grow SeaweedFS even stronger, please consider joining our sponsors on Patreon.

Your support will be really appreciated by me and other supporters!

Gold Sponsors

nodion piknik keepsec


Table of Contents

Quick Start

Quick Start for S3 API on Docker

docker run -p 8333:8333 chrislusf/seaweedfs server -s3

Quick Start with Single Binary

  • Download the latest binary from https://github.com/seaweedfs/seaweedfs/releases and unzip a single binary file weed or weed.exe. Or run go install github.com/seaweedfs/seaweedfs/weed@latest.
  • export AWS_ACCESS_KEY_ID=admin ; export AWS_SECRET_ACCESS_KEY=key as the admin credentials to access the object store.
  • Run weed server -dir=/some/data/dir -s3 to start one master, one volume server, one filer, and one S3 gateway.

Also, to increase capacity, just add more volume servers by running weed volume -dir="/some/data/dir2" -mserver="<master_host>:9333" -port=8081 locally, or on a different machine, or on thousands of machines. That is it!

Quick Start SeaweedFS S3 on AWS

Introduction

SeaweedFS is a simple and highly scalable distributed file system. There are two objectives:

  1. to store billions of files!
  2. to serve the files fast!

SeaweedFS started as an Object Store to handle small files efficiently. Instead of managing all file metadata in a central master, the central master only manages volumes on volume servers, and these volume servers manage files and their metadata. This relieves concurrency pressure from the central master and spreads file metadata into volume servers, allowing faster file access (O(1), usually just one disk read operation).

There are only 40 bytes of disk storage overhead for each file's metadata. It is so simple with O(1) disk reads that you are welcome to challenge the performance with your actual use cases.

SeaweedFS started by implementing Facebook's Haystack design paper. Also, SeaweedFS implements erasure coding with ideas from f4: Facebook's Warm BLOB Storage System, and has a lot of similarities with Facebook's Tectonic Filesystem.

On top of the object store, the optional Filer can support directories and POSIX attributes. Filer is a separate linearly-scalable stateless server with customizable metadata stores, e.g., MySql, Postgres, Redis, Cassandra, HBase, Mongodb, Elastic Search, LevelDB, RocksDB, Sqlite, MemSql, TiDB, Etcd, CockroachDB, YDB, etc.

For any distributed key-value store, large values can be offloaded to SeaweedFS. With its fast access speed and linearly scalable capacity, SeaweedFS can work as a distributed Key-Large-Value store.

SeaweedFS can transparently integrate with the cloud. With hot data on local cluster, and warm data on the cloud with O(1) access time, SeaweedFS can achieve both fast local access time and elastic cloud storage capacity. What's more, the cloud storage access API cost is minimized. Faster and cheaper than direct cloud storage!

Back to TOC

Features

Additional Features

  • Can choose no replication or different replication levels, rack and data center aware.
  • Automatic master servers failover - no single point of failure (SPOF).
  • Automatic Gzip compression depending on file MIME type.
  • Automatic compaction to reclaim disk space after deletion or update.
  • Automatic entry TTL expiration.
  • Any server with some disk space can be added to the total storage space.
  • Adding/Removing servers does not cause any data re-balancing unless triggered by admin commands.
  • Optional picture resizing.
  • Support ETag, Accept-Range, Last-Modified, etc.
  • Support in-memory/leveldb/readonly mode tuning for memory/performance balance.
  • Support rebalancing the writable and readonly volumes.
  • Customizable Multiple Storage Tiers: Customizable storage disk types to balance performance and cost.
  • Transparent cloud integration: unlimited capacity via tiered cloud storage for warm data.
  • Erasure Coding for warm storage: Rack-Aware 10.4 erasure coding reduces storage cost and increases availability.

Back to TOC

Filer Features

Kubernetes

Back to TOC

Example: Using Seaweed Object Store

By default, the master node runs on port 9333, and the volume nodes run on port 8080. Let's start one master node, and two volume nodes on port 8080 and 8081. Ideally, they should be started from different machines. We'll use localhost as an example.

SeaweedFS uses HTTP REST operations to read, write, and delete. The responses are in JSON or JSONP format.

Start Master Server

> ./weed master

Start Volume Servers

> weed volume -dir="/tmp/data1" -max=5  -mserver="localhost:9333" -port=8080 &
> weed volume -dir="/tmp/data2" -max=10 -mserver="localhost:9333" -port=8081 &

Write File

To upload a file: first, send an HTTP POST, PUT, or GET request to /dir/assign to get an fid and a volume server URL:

> curl http://localhost:9333/dir/assign
{"count":1,"fid":"3,01637037d6","url":"127.0.0.1:8080","publicUrl":"localhost:8080"}

Second, to store the file content, send an HTTP multi-part POST request to url + '/' + fid from the response:

> curl -F file=@/home/chris/myphoto.jpg http://127.0.0.1:8080/3,01637037d6
{"name":"myphoto.jpg","size":43234,"eTag":"1cc0118e"}

To update, send another POST request with updated file content.

For deletion, send an HTTP DELETE request to the same url + '/' + fid URL:

> curl -X DELETE http://127.0.0.1:8080/3,01637037d6
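
The same assign-then-upload flow, sketched in Go against a local cluster on the default ports:

  package main

  import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "mime/multipart"
    "net/http"
  )

  type assignResult struct {
    Fid       string `json:"fid"`
    Url       string `json:"url"`
    PublicUrl string `json:"publicUrl"`
  }

  func main() {
    // Step 1: ask the master for a file id and a volume server URL.
    resp, err := http.Get("http://localhost:9333/dir/assign")
    if err != nil {
      panic(err)
    }
    defer resp.Body.Close()
    var assign assignResult
    if err := json.NewDecoder(resp.Body).Decode(&assign); err != nil {
      panic(err)
    }

    // Step 2: multipart POST the file content to url + "/" + fid.
    var body bytes.Buffer
    writer := multipart.NewWriter(&body)
    part, _ := writer.CreateFormFile("file", "hello.txt")
    io.WriteString(part, "hello, seaweedfs")
    writer.Close()

    uploadUrl := "http://" + assign.Url + "/" + assign.Fid
    upResp, err := http.Post(uploadUrl, writer.FormDataContentType(), &body)
    if err != nil {
      panic(err)
    }
    defer upResp.Body.Close()
    result, _ := io.ReadAll(upResp.Body)
    fmt.Println(string(result)) // e.g. {"name":"hello.txt","size":16,...}
  }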

Save File Id

Now, you can save the fid, 3,01637037d6 in this case, to a database field.

The number 3 at the start represents a volume id. After the comma, it's one file key, 01, and a file cookie, 637037d6.

The volume id is an unsigned 32-bit integer. The file key is an unsigned 64-bit integer. The file cookie is an unsigned 32-bit integer, used to prevent URL guessing.

The file key and file cookie are both coded in hex. You can store the <volume id, file key, file cookie> tuple in your own format, or simply store the fid as a string.

If stored as a string, in theory, you would need 8+1+16+8=33 bytes. A char(33) would be enough, if not more than enough, since most uses will not need 2^32 volumes.

If space is really a concern, you can store the file id in your own format. You would need one 4-byte integer for volume id, 8-byte long number for file key, and a 4-byte integer for the file cookie. So 16 bytes are more than enough.
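
For example, a small Go sketch that splits a string fid such as 3,01637037d6 back into the <volume id, file key, file cookie> tuple:

  package main

  import (
    "fmt"
    "strconv"
    "strings"
  )

  func parseFid(fid string) (volumeId uint32, fileKey uint64, cookie uint32, err error) {
    parts := strings.SplitN(fid, ",", 2)
    if len(parts) != 2 || len(parts[1]) <= 8 {
      return 0, 0, 0, fmt.Errorf("malformed fid: %s", fid)
    }
    vid, err := strconv.ParseUint(parts[0], 10, 32)
    if err != nil {
      return 0, 0, 0, err
    }
    hexPart := parts[1]
    // The last 8 hex characters are the 32-bit cookie; the rest is the file key.
    key, err := strconv.ParseUint(hexPart[:len(hexPart)-8], 16, 64)
    if err != nil {
      return 0, 0, 0, err
    }
    ck, err := strconv.ParseUint(hexPart[len(hexPart)-8:], 16, 32)
    if err != nil {
      return 0, 0, 0, err
    }
    return uint32(vid), key, uint32(ck), nil
  }

  func main() {
    fmt.Println(parseFid("3,01637037d6")) // 3 1 1668298710 <nil>
  }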

Read File

Here is an example of how to render the URL.

First look up the volume server's URLs by the file's volumeId:

> curl http://localhost:9333/dir/lookup?volumeId=3
{"volumeId":"3","locations":[{"publicUrl":"localhost:8080","url":"localhost:8080"}]}

Since (usually) there are not too many volume servers, and volumes don't move often, you can cache the results most of the time. Depending on the replication type, one volume can have multiple replica locations. Just randomly pick one location to read.

Now you can take the public URL, render the URL or directly read from the volume server via URL:

 http://localhost:8080/3,01637037d6.jpg

Notice we add a file extension ".jpg" here. It's optional and just one way for the client to specify the file content type.
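
Putting lookup and read together, a minimal Go sketch (assuming the local setup from the examples above):

  package main

  import (
    "encoding/json"
    "fmt"
    "math/rand"
    "net/http"
    "strings"
  )

  type lookupResult struct {
    Locations []struct {
      Url       string `json:"url"`
      PublicUrl string `json:"publicUrl"`
    } `json:"locations"`
  }

  func main() {
    fid := "3,01637037d6"
    volumeId := strings.SplitN(fid, ",", 2)[0]

    resp, err := http.Get("http://localhost:9333/dir/lookup?volumeId=" + volumeId)
    if err != nil {
      panic(err)
    }
    defer resp.Body.Close()

    var result lookupResult
    if err := json.NewDecoder(resp.Body).Decode(&result); err != nil || len(result.Locations) == 0 {
      panic("volume not found")
    }

    // Randomly pick one replica location to spread the read load.
    loc := result.Locations[rand.Intn(len(result.Locations))]
    fmt.Println("http://" + loc.PublicUrl + "/" + fid)
  }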

If you want a nicer URL, you can use one of these alternative URL formats:

 http://localhost:8080/3/01637037d6/my_preferred_name.jpg
 http://localhost:8080/3/01637037d6.jpg
 http://localhost:8080/3,01637037d6.jpg
 http://localhost:8080/3/01637037d6
 http://localhost:8080/3,01637037d6

If you want to get a scaled version of an image, you can add some params:

http://localhost:8080/3/01637037d6.jpg?height=200&width=200
http://localhost:8080/3/01637037d6.jpg?height=200&width=200&mode=fit
http://localhost:8080/3/01637037d6.jpg?height=200&width=200&mode=fill

Rack-Aware and Data Center-Aware Replication

SeaweedFS applies the replication strategy at a volume level. So, when you are getting a file id, you can specify the replication strategy. For example:

curl http://localhost:9333/dir/assign?replication=001

The replication parameter options are:

000: no replication
001: replicate once on the same rack
010: replicate once on a different rack, but same data center
100: replicate once on a different data center
200: replicate twice on two different data centers
110: replicate once on a different rack, and once on a different data center

More details about replication can be found on the wiki.

You can also set the default replication strategy when starting the master server.

Allocate File Key on Specific Data Center

Volume servers can be started with a specific data center name:

 weed volume -dir=/tmp/1 -port=8080 -dataCenter=dc1
 weed volume -dir=/tmp/2 -port=8081 -dataCenter=dc2

When requesting a file key, an optional "dataCenter" parameter can limit the assigned volume to the specific data center. For example, this specifies that the assigned volume should be limited to 'dc1':

 http://localhost:9333/dir/assign?dataCenter=dc1

Other Features

Back to TOC

Object Store Architecture

Usually distributed file systems split each file into chunks; a central master keeps a mapping from filenames and chunk indices to chunk handles, and also tracks which chunks each chunk server has.

The main drawback is that the central master can't handle many small files efficiently, and since all read requests need to go through the chunk master, it might not scale well for many concurrent users.

Instead of managing chunks, SeaweedFS manages data volumes in the master server. Each data volume is 32GB in size, and can hold a lot of files. And each storage node can have many data volumes. So the master node only needs to store the metadata about the volumes, which is a fairly small amount of data and is generally stable.

The actual file metadata is stored in each volume on volume servers. Since each volume server only manages metadata of files on its own disk, with only 16 bytes for each file, all file access can read file metadata just from memory and only needs one disk operation to actually read file data.

For comparison, consider that an xfs inode structure in Linux is 536 bytes.

Master Server and Volume Server

The architecture is fairly simple. The actual data is stored in volumes on storage nodes. One volume server can have multiple volumes, and can support both read and write access with basic authentication.

All volumes are managed by a master server. The master server contains the volume id to volume server mapping. This is fairly static information, and can be easily cached.

On each write request, the master server also generates a file key, which is a growing 64-bit unsigned integer. Since write requests are not generally as frequent as read requests, one master server should be able to handle the concurrency well.

Write and Read files

When a client sends a write request, the master server returns (volume id, file key, file cookie, volume node URL) for the file. The client then contacts the volume node and POSTs the file content.

When a client needs to read a file based on (volume id, file key, file cookie), it asks the master server by the volume id for the (volume node URL, volume node public URL), or retrieves this from a cache. Then the client can GET the content, or just render the URL on web pages and let browsers fetch the content.

Please see the example for details on the write-read process.

Storage Size

In the current implementation, each volume can hold 32 gibibytes (32GiB or 8x2^32 bytes). This is because we align content to 8 bytes. We can easily increase this to 64GiB, or 128GiB, or more, by changing 2 lines of code, at the cost of some wasted padding space due to alignment.

There can be 2^32 (about 4 billion) volumes. So the total system size is 8 x 2^32 bytes x 2^32 volumes, which is 128 exbibytes (128EiB or 2^67 bytes).

Each individual file size is limited to the volume size.
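
As a quick sanity check of that arithmetic, in Go:

  package main

  import "fmt"

  func main() {
    const bytesPerVolume = 8 << 32                           // 8 x 2^32 bytes = 32 GiB per volume
    const maxVolumes = 1 << 32                               // 2^32 possible volume ids
    const totalEiB = bytesPerVolume * maxVolumes / (1 << 60) // 2^67 bytes expressed in EiB
    fmt.Println("per-volume size (GiB):", bytesPerVolume/(1<<30)) // 32
    fmt.Println("total capacity (EiB):", totalEiB)                // 128
  }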

Saving memory

All file meta information stored on a volume server is readable from memory without disk access. Each file takes just a 16-byte map entry of <64bit key, 32bit offset, 32bit size>. Of course, each map entry has its own space cost for the map. But usually the disk space runs out before the memory does.
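
Roughly speaking, each in-memory entry looks like the following struct (a simplified sketch; the real implementation differs in details such as how the offset unit is encoded):

  package main

  import (
    "fmt"
    "unsafe"
  )

  type needleMapEntry struct {
    Key    uint64 // file key
    Offset uint32 // where the needle starts in the volume file (stored in 8-byte units)
    Size   uint32 // needle size in bytes
  }

  func main() {
    fmt.Println(unsafe.Sizeof(needleMapEntry{})) // 16
  }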

Tiered Storage to the cloud

The local volume servers are much faster, while cloud storages have elastic capacity and are actually more cost-efficient if not accessed often (usually free to upload, but relatively costly to access). With the append-only structure and O(1) access time, SeaweedFS can take advantage of both local and cloud storage by offloading the warm data to the cloud.

Usually hot data are fresh and warm data are old. SeaweedFS puts the newly created volumes on local servers, and optionally uploads the older volumes to the cloud. If the older data are accessed less often, this literally gives you unlimited capacity with limited local servers, while staying fast for new data.

With the O(1) access time, the network latency cost is kept at minimum.

If the hot/warm data is split as 20/80, with 20 servers, you can achieve storage capacity of 100 servers. That's a cost saving of 80%! Or you can repurpose the 80 servers to store new data also, and get 5X storage throughput.

Back to TOC

Compared to Other File Systems

Most other distributed file systems seem more complicated than necessary.

SeaweedFS is meant to be fast and simple, in both setup and operation. If you do not understand how it works when you reach here, we've failed! Please raise an issue with any questions or update this file with clarifications.

SeaweedFS is constantly moving forward. Same with other systems. These comparisons can be outdated quickly. Please help to keep them updated.

Back to TOC

Compared to HDFS

HDFS uses the chunk approach for each file, and is ideal for storing large files.

SeaweedFS is ideal for serving relatively smaller files quickly and concurrently.

SeaweedFS can also store extra large files by splitting them into manageable data chunks and storing the file ids of the data chunks in a meta chunk. This is managed by the "weed upload/download" tool, and the weed master and volume servers are agnostic about it.

Back to TOC

Compared to GlusterFS, Ceph

The architectures are mostly the same. SeaweedFS aims to store and read files fast, with a simple and flat architecture. The main differences are:

  • SeaweedFS optimizes for small files, ensuring O(1) disk seek operation, and can also handle large files.
  • SeaweedFS statically assigns a volume id for a file. Locating file content becomes just a lookup of the volume id, which can be easily cached.
  • SeaweedFS Filer metadata store can be any well-known and proven data store, e.g., Redis, Cassandra, HBase, Mongodb, Elastic Search, MySql, Postgres, Sqlite, MemSql, TiDB, CockroachDB, Etcd, YDB etc, and is easy to customize.
  • SeaweedFS Volume server also communicates directly with clients via HTTP, supporting range queries, direct uploads, etc.
System          | File Metadata                    | File Content Read | POSIX     | REST API | Optimized for large number of small files
SeaweedFS       | lookup volume id, cacheable      | O(1) disk seek    |           | Yes      | Yes
SeaweedFS Filer | Linearly Scalable, Customizable  | O(1) disk seek    | FUSE      | Yes      | Yes
GlusterFS       | hashing                          |                   | FUSE, NFS |          |
Ceph            | hashing + rules                  |                   | FUSE      | Yes      |
MooseFS         | in memory                        |                   | FUSE      |          | No
MinIO           | separate meta file for each file |                   |           | Yes      | No

Back to TOC

Compared to GlusterFS

GlusterFS stores files, both directories and content, in configurable volumes called "bricks".

GlusterFS hashes the path and filename into ids, assigns them to virtual volumes, and then maps those to "bricks".

Back to TOC

Compared to MooseFS

MooseFS chooses to neglect the small file issue. From the moosefs 3.0 manual: "even a small file will occupy 64KiB plus additionally 4KiB of checksums and 1KiB for the header", because it "was initially designed for keeping large amounts (like several thousands) of very big files".

MooseFS Master Server keeps all meta data in memory. Same issue as HDFS namenode.

Back to TOC

Compared to Ceph

Ceph can be set up similarly to SeaweedFS as a key->blob store. It is much more complicated, with the need to support layers on top of it. Here is a more detailed comparison.

SeaweedFS has a centralized master group to look up free volumes, while Ceph uses hashing and metadata servers to locate its objects. Having a centralized master makes it easy to code and manage.

Ceph, like SeaweedFS, is based on the object store RADOS. Ceph is rather complicated with mixed reviews.

Ceph uses CRUSH hashing to automatically manage data placement, which is efficient at locating the data. But the data has to be placed according to the CRUSH algorithm. Any wrong configuration could cause data loss. Topology changes, such as adding new servers to increase capacity, will cause data migration with high IO cost to fit the CRUSH algorithm. SeaweedFS places data by assigning it to any writable volume. If writes to one volume fail, just pick another volume to write to. Adding more volumes is also as simple as it can be.

SeaweedFS is optimized for small files. Small files are stored as one continuous block of content, with at most 8 unused bytes between files. Small file access is O(1) disk read.

SeaweedFS Filer uses off-the-shelf stores, such as MySql, Postgres, Sqlite, Mongodb, Redis, Elastic Search, Cassandra, HBase, MemSql, TiDB, CockroachDB, Etcd, YDB, to manage file directories. These stores are proven, scalable, and easier to manage.

SeaweedFS | comparable to Ceph | advantage
Master    | MDS                | simpler
Volume    | OSD                | optimized for small files
Filer     | Ceph FS            | linearly scalable, Customizable, O(1) or O(logN)

Back to TOC

Compared to MinIO

MinIO follows AWS S3 closely and is ideal for testing the S3 API. It has a good UI, policies, versioning, etc. SeaweedFS is trying to catch up here. It is also possible to put MinIO as a gateway in front of SeaweedFS later.

MinIO metadata are in simple files. Each file write incurs extra writes to the corresponding meta file.

MinIO does not have optimization for lots of small files. The files are simply stored as-is on local disks. Combined with the extra meta file and the erasure-coding shards, this only amplifies the LOSF (lots of small files) problem.

MinIO requires multiple disk IOs to read one file. SeaweedFS has O(1) disk reads, even for erasure coded files.

MinIO has full-time erasure coding. SeaweedFS uses replication on hot data for faster speed and optionally applies erasure coding on warm data.

MinIO does not have POSIX-like API support.

MinIO has specific requirements on storage layout. It is not flexible to adjust capacity. In SeaweedFS, just start one volume server pointing to the master. That's all.

Dev Plan

  • More tools and documentation, on how to manage and scale the system.
  • Read and write stream data.
  • Support structured data.

This is a super exciting project! And we need helpers and support!

Back to TOC

Installation Guide

Installation guide for users who are not familiar with golang

Step 1: install go on your machine and setup the environment by following the instructions at:

https://golang.org/doc/install

make sure to define your $GOPATH

Step 2: checkout this repo:

git clone https://github.com/seaweedfs/seaweedfs.git

Step 3: download, compile, and install the project by executing the following command

cd seaweedfs/weed && make install

Once this is done, you will find the executable "weed" in your $GOPATH/bin directory

Back to TOC

Hard Drive Performance

When testing read performance on SeaweedFS, it basically becomes a performance test of your hard drive's random read speed. Hard drives usually get 100MB/s~200MB/s.

Solid State Disk

To modify or delete small files, an SSD must erase a whole block at a time and move the remaining content of existing blocks to a new block. An SSD is fast when brand new, but gets fragmented over time, forcing garbage collection and block compaction. SeaweedFS is friendly to SSDs since it is append-only. Deletion and compaction are done at the volume level in the background, neither slowing reads nor causing fragmentation.

Back to TOC

Benchmark

My Own Unscientific Single Machine Results on a MacBook with Solid State Disk, CPU: 1 Intel Core i7 2.6GHz.

Write 1 million 1KB file:

Concurrency Level:      16
Time taken for tests:   66.753 seconds
Completed requests:      1048576
Failed requests:        0
Total transferred:      1106789009 bytes
Requests per second:    15708.23 [#/sec]
Transfer rate:          16191.69 [Kbytes/sec]

Connection Times (ms)
              min      avg        max      std
Total:        0.3      1.0       84.3      0.9

Percentage of the requests served within a certain time (ms)
   50%      0.8 ms
   66%      1.0 ms
   75%      1.1 ms
   80%      1.2 ms
   90%      1.4 ms
   95%      1.7 ms
   98%      2.1 ms
   99%      2.6 ms
  100%     84.3 ms

Randomly read 1 million files:

Concurrency Level:      16
Time taken for tests:   22.301 seconds
Completed requests:      1048576
Failed requests:        0
Total transferred:      1106812873 bytes
Requests per second:    47019.38 [#/sec]
Transfer rate:          48467.57 [Kbytes/sec]

Connection Times (ms)
              min      avg        max      std
Total:        0.0      0.3       54.1      0.2

Percentage of the requests served within a certain time (ms)
   50%      0.3 ms
   90%      0.4 ms
   98%      0.6 ms
   99%      0.7 ms
  100%     54.1 ms

Run WARP and launch a mixed benchmark.

make benchmark
warp: Benchmark data written to "warp-mixed-2023-10-16[102354]-l70a.csv.zst"                                                                                                                                                                                               
Mixed operations.
Operation: DELETE, 10%, Concurrency: 20, Ran 4m59s.
 * Throughput: 6.19 obj/s

Operation: GET, 45%, Concurrency: 20, Ran 5m0s.
 * Throughput: 279.85 MiB/s, 27.99 obj/s

Operation: PUT, 15%, Concurrency: 20, Ran 5m0s.
 * Throughput: 89.86 MiB/s, 8.99 obj/s

Operation: STAT, 30%, Concurrency: 20, Ran 5m0s.
 * Throughput: 18.63 obj/s

Cluster Total: 369.74 MiB/s, 61.79 obj/s, 0 errors over 5m0s.

To see segmented request statistics, use the --analyze.v parameter.

warp analyze --analyze.v warp-mixed-2023-10-16[102354]-l70a.csv.zst
18642 operations loaded... Done!
Mixed operations.
----------------------------------------
Operation: DELETE - total: 1854, 10.0%, Concurrency: 20, Ran 5m0s, starting 2023-10-16 10:23:57.115 +0500 +05
 * Throughput: 6.19 obj/s

Requests considered: 1855:
 * Avg: 104ms, 50%: 30ms, 90%: 207ms, 99%: 1.355s, Fastest: 1ms, Slowest: 4.613s, StdDev: 320ms

----------------------------------------
Operation: GET - total: 8388, 45.3%, Size: 10485760 bytes. Concurrency: 20, Ran 5m0s, starting 2023-10-16 10:23:57.12 +0500 +05
 * Throughput: 279.77 MiB/s, 27.98 obj/s

Requests considered: 8389:
 * Avg: 221ms, 50%: 106ms, 90%: 492ms, 99%: 1.739s, Fastest: 8ms, Slowest: 8.633s, StdDev: 383ms
 * TTFB: Avg: 81ms, Best: 2ms, 25th: 24ms, Median: 39ms, 75th: 65ms, 90th: 171ms, 99th: 669ms, Worst: 4.783s StdDev: 163ms
 * First Access: Avg: 240ms, 50%: 105ms, 90%: 511ms, 99%: 2.08s, Fastest: 12ms, Slowest: 8.633s, StdDev: 480ms
 * First Access TTFB: Avg: 88ms, Best: 2ms, 25th: 24ms, Median: 38ms, 75th: 64ms, 90th: 179ms, 99th: 919ms, Worst: 4.783s StdDev: 199ms
 * Last Access: Avg: 219ms, 50%: 106ms, 90%: 463ms, 99%: 1.782s, Fastest: 9ms, Slowest: 8.633s, StdDev: 416ms
 * Last Access TTFB: Avg: 81ms, Best: 2ms, 25th: 24ms, Median: 39ms, 75th: 65ms, 90th: 161ms, 99th: 657ms, Worst: 4.783s StdDev: 176ms

----------------------------------------
Operation: PUT - total: 2688, 14.5%, Size: 10485760 bytes. Concurrency: 20, Ran 5m0s, starting 2023-10-16 10:23:57.115 +0500 +05
 * Throughput: 89.83 MiB/s, 8.98 obj/s

Requests considered: 2689:
 * Avg: 1.165s, 50%: 878ms, 90%: 2.015s, 99%: 5.74s, Fastest: 99ms, Slowest: 8.264s, StdDev: 968ms

----------------------------------------
Operation: STAT - total: 5586, 30.2%, Concurrency: 20, Ran 5m0s, starting 2023-10-16 10:23:57.113 +0500 +05
 * Throughput: 18.63 obj/s

Requests considered: 5587:
 * Avg: 15ms, 50%: 11ms, 90%: 34ms, 99%: 80ms, Fastest: 0s, Slowest: 245ms, StdDev: 17ms
 * First Access: Avg: 14ms, 50%: 10ms, 90%: 33ms, 99%: 69ms, Fastest: 0s, Slowest: 203ms, StdDev: 16ms
 * Last Access: Avg: 15ms, 50%: 11ms, 90%: 34ms, 99%: 74ms, Fastest: 0s, Slowest: 203ms, StdDev: 17ms

Cluster Total: 369.64 MiB/s, 61.77 obj/s, 0 errors over 5m0s.
Total Errors:0.

Back to TOC

Enterprise

For enterprise users, please visit seaweedfs.com for the SeaweedFS Enterprise Edition, which has a self-healing storage format with better data protection.

Back to TOC

License

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

The text of this page is available for modification and reuse under the terms of the Creative Commons Attribution-Sharealike 3.0 Unported License and the GNU Free Documentation License (unversioned, with no invariant sections, front-cover texts, or back-cover texts).

Back to TOC

Stargazers over time

