seaweedfs/test
Chris Lu d892538d32
More efficient copy object (#6665)
* it compiles

* refactored

* reduce to 4 concurrent chunk upload

* CopyObjectPartHandler

* copy a range of the chunk data, fix offset size in copied chunks

* Update s3api_object_handlers_copy.go

What the PR Accomplishes (two illustrative sketches follow this list):
CopyObjectHandler - now copies entire objects by copying their chunks individually instead of downloading and re-uploading the whole file
CopyObjectPartHandler - copies parts of objects for multipart uploads by copying only the relevant chunk portions
Efficient Chunk Copying - uses direct chunk-to-chunk copying with proper volume assignment and concurrent processing (limited to 4 concurrent operations)
Range Support - handles range-based copying for partial object copies
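
To illustrate the chunk-level copy described above, here is a minimal Go sketch that copies an object's chunks with at most 4 copies in flight, instead of streaming the whole object through the S3 gateway. The Chunk struct and chunkCopier interface are hypothetical stand-ins rather than the actual SeaweedFS types; only the idea of per-chunk copying and the 4-way concurrency limit come from the PR itself.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// Chunk is a hypothetical stand-in for a filer chunk entry; the real
// SeaweedFS types are not reproduced here.
type Chunk struct {
	FileID string // volume file id holding the chunk bytes
	Offset int64  // logical offset of the chunk within the object
	Size   int64  // size of the chunk in bytes
}

// chunkCopier is a hypothetical interface standing in for "assign a
// destination volume, then copy the chunk bytes volume-to-volume".
type chunkCopier interface {
	CopyChunk(ctx context.Context, src Chunk) (Chunk, error)
}

// copyChunksConcurrently copies an object chunk by chunk with at most
// 4 copies in flight, instead of downloading and re-uploading the
// whole object through the gateway.
func copyChunksConcurrently(ctx context.Context, c chunkCopier, src []Chunk) ([]Chunk, error) {
	dst := make([]Chunk, len(src))

	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(4) // mirrors the "reduce to 4 concurrent chunk upload" commit

	for i, chunk := range src {
		i, chunk := i, chunk // capture loop variables (pre-Go 1.22 semantics)
		g.Go(func() error {
			copied, err := c.CopyChunk(ctx, chunk)
			if err != nil {
				return fmt.Errorf("copy chunk %s: %w", chunk.FileID, err)
			}
			copied.Offset = chunk.Offset // destination keeps the same logical layout
			dst[i] = copied
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		return nil, err
	}
	return dst, nil
}

// fakeCopier pretends each chunk was re-assigned to a new file id.
type fakeCopier struct{}

func (fakeCopier) CopyChunk(_ context.Context, src Chunk) (Chunk, error) {
	return Chunk{FileID: src.FileID + "-copy", Offset: src.Offset, Size: src.Size}, nil
}

func main() {
	src := []Chunk{
		{FileID: "3,0100", Offset: 0, Size: 4 << 20},
		{FileID: "4,0200", Offset: 4 << 20, Size: 4 << 20},
	}
	dst, err := copyChunksConcurrently(context.Background(), fakeCopier{}, src)
	fmt.Println(dst, err)
}
```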

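For the range-based part copy handled by CopyObjectPartHandler, the sketch below shows one way to select the chunks that overlap a requested byte range and fix up the per-chunk offsets and sizes, in the spirit of the "copy a range of the chunk data, fix offset size in copied chunks" commit above. The type and field names (rangedCopy, InChunkOffset, DstOffset) are illustrative, not the SeaweedFS API.

```go
package main

import "fmt"

// Chunk reuses the same hypothetical shape as the previous sketch:
// a logical offset within the source object and a size in bytes.
type Chunk struct {
	FileID string
	Offset int64
	Size   int64
}

// rangedCopy describes one piece of a part copy: copy Size bytes of the
// source chunk starting at InChunkOffset, and place them at DstOffset
// within the destination part.
type rangedCopy struct {
	Src           Chunk
	InChunkOffset int64
	Size          int64
	DstOffset     int64
}

// chunksForRange keeps only the chunks overlapping [start, end] (an
// inclusive byte range, as in x-amz-copy-source-range) and computes the
// portion of each chunk to copy, so the copied chunks line up inside
// the destination part.
func chunksForRange(chunks []Chunk, start, end int64) []rangedCopy {
	var out []rangedCopy
	for _, c := range chunks {
		chunkStart, chunkEnd := c.Offset, c.Offset+c.Size-1
		if chunkEnd < start || chunkStart > end {
			continue // no overlap with the requested range
		}
		from := max(start, chunkStart) // first byte of this chunk to copy (Go 1.21+ built-in max)
		to := min(end, chunkEnd)       // last byte of this chunk to copy
		out = append(out, rangedCopy{
			Src:           c,
			InChunkOffset: from - chunkStart,
			Size:          to - from + 1,
			DstOffset:     from - start, // offset within the destination part
		})
	}
	return out
}

func main() {
	chunks := []Chunk{
		{FileID: "3,0100", Offset: 0, Size: 8 << 20},
		{FileID: "4,0200", Offset: 8 << 20, Size: 8 << 20},
	}
	// A range spanning the tail of the first chunk and the head of the second.
	for _, rc := range chunksForRange(chunks, 5<<20, (12<<20)-1) {
		fmt.Printf("%s: copy %d bytes from in-chunk offset %d to part offset %d\n",
			rc.Src.FileID, rc.Size, rc.InChunkOffset, rc.DstOffset)
	}
}
```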
* fix compilation

* fix part destination

* handling small objects

* use mkFile

* copy to existing file or part

* add testing tools

* adjust tests

* fix chunk lookup

* refactoring

* fix TestObjectCopyRetainingMetadata

* ensure bucket name not conflicting

* fix conditional copying tests

* remove debug messages

* add custom s3 copy tests
2025-07-11 18:51:32 -07:00
data           volume: large_volume version has bug when using in memory index  2021-06-28 15:48:07 -07:00
mq             Admin UI: Add message queue to admin UI (#6958)  2025-07-11 10:19:27 -07:00
random_access  Java 3.59  2023-11-13 08:23:53 -08:00
s3             More efficient copy object (#6665)  2025-07-11 18:51:32 -07:00