Mirror of https://github.com/seaweedfs/seaweedfs.git
More efficient copy object (#6665)
* it compiles
* refactored
* reduce to 4 concurrent chunk uploads
* CopyObjectPartHandler
* copy a range of the chunk data, fix offset and size in copied chunks
* Update s3api_object_handlers_copy.go

What the PR Accomplishes:

* CopyObjectHandler - now copies entire objects by copying chunks individually instead of downloading/uploading the entire file
* CopyObjectPartHandler - handles copying parts of objects for multipart uploads by copying only the relevant chunk portions
* Efficient Chunk Copying - uses direct chunk-to-chunk copying with proper volume assignment and concurrent processing (limited to 4 concurrent operations; see the sketch after this list)
* Range Support - properly handles range-based copying for partial object copies

* fix compilation
* fix part destination
* handle small objects
* use mkFile
* copy to existing file or part
* add testing tools
* adjust tests
* fix chunk lookup
* refactoring
* fix TestObjectCopyRetainingMetadata
* ensure bucket names do not conflict
* fix conditional copying tests
* remove debug messages
* add custom s3 copy tests
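The concurrent chunk-to-chunk copy described above can be pictured with the minimal Go sketch below. This is not the code from s3api_object_handlers_copy.go: the `chunk` type and `copyChunk` helper are hypothetical stand-ins, and only the concurrency pattern (at most 4 chunk copies in flight, as stated in the commit message) reflects what the PR describes.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// chunk is a hypothetical descriptor for one piece of the source object.
type chunk struct {
	fileID string // volume file id of the source chunk
	offset int64  // logical offset within the object
	size   int64  // chunk length in bytes
}

// copyChunk is a hypothetical helper that would assign a destination volume,
// stream the source chunk's bytes to it, and return the new chunk descriptor.
func copyChunk(ctx context.Context, src chunk) (chunk, error) {
	// ... assign a new fileID and copy src.size bytes from the source volume ...
	return chunk{fileID: "copied-" + src.fileID, offset: src.offset, size: src.size}, nil
}

// copyObjectChunks copies all source chunks concurrently, never more than
// four at a time, and preserves each chunk's position in the result slice.
func copyObjectChunks(ctx context.Context, srcChunks []chunk) ([]chunk, error) {
	dst := make([]chunk, len(srcChunks))
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(4) // cap concurrent chunk copies, matching the PR's limit of 4

	for i, c := range srcChunks {
		i, c := i, c // capture loop variables for the goroutine
		g.Go(func() error {
			copied, err := copyChunk(ctx, c)
			if err != nil {
				return err
			}
			dst[i] = copied // each goroutine writes a distinct index, so no race
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		return nil, err
	}
	return dst, nil
}

func main() {
	src := []chunk{
		{fileID: "3,01", offset: 0, size: 4 << 20},
		{fileID: "3,02", offset: 4 << 20, size: 4 << 20},
	}
	copied, err := copyObjectChunks(context.Background(), src)
	fmt.Println(copied, err)
}
```

Capping the number of in-flight copies bounds memory use and load on the volume servers while still overlapping network transfers for objects made of many chunks.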
Makefile
@@ -18,7 +18,7 @@ full_install: admin-generate
 	cd weed; go install -tags "elastic gocdk sqlite ydb tarantool tikv rclone"
 
 server: install
-	weed -v 0 server -s3 -filer -filer.maxMB=64 -volume.max=0 -master.volumeSizeLimitMB=1024 -volume.preStopSeconds=1 -s3.port=8000 -s3.allowEmptyFolder=false -s3.allowDeleteBucketNotEmpty=true -s3.config=./docker/compose/s3.json -metricsPort=9324
+	weed -v 0 server -s3 -filer -filer.maxMB=64 -volume.max=0 -master.volumeSizeLimitMB=100 -volume.preStopSeconds=1 -s3.port=8000 -s3.allowEmptyFolder=false -s3.allowDeleteBucketNotEmpty=true -s3.config=./docker/compose/s3.json -metricsPort=9324
 
 benchmark: install warp_install
 	pkill weed || true