Mirror of https://github.com/seaweedfs/seaweedfs.git (synced 2025-08-20 07:02:09 +08:00)
Squashed commit history of the PR:

* it compiles
* refactored
* reduce to 4 concurrent chunk uploads
* CopyObjectPartHandler
* copy a range of the chunk data, fix offset and size in copied chunks
* Update s3api_object_handlers_copy.go

What the PR Accomplishes:

* CopyObjectHandler - now copies entire objects by copying their chunks individually instead of downloading and re-uploading the whole file
* CopyObjectPartHandler - handles copying parts of objects for multipart uploads by copying only the relevant chunk portions
* Efficient chunk copying - uses direct chunk-to-chunk copying with proper volume assignment and concurrent processing, limited to 4 concurrent operations (see the sketch after this list)
* Range support - properly handles range-based copying for partial object copies

* fix compilation
* fix part destination
* handle small objects
* use mkFile
* copy to an existing file or part
* add testing tools
* adjust tests
* fix chunk lookup
* refactoring
* fix TestObjectCopyRetainingMetadata
* ensure bucket names do not conflict
* fix conditional copying tests
* remove debug messages
* add custom s3 copy tests
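To make the chunk-level copy concrete, below is a minimal, self-contained Go sketch of the two ideas the description names: selecting the portion of each chunk that overlaps a requested byte range, and capping the number of in-flight chunk copies (the PR limits this to 4). The `Chunk` struct and the `overlap`, `copyChunks`, and `copyOne` names are illustrative assumptions for this sketch, not the actual SeaweedFS types or the code in `s3api_object_handlers_copy.go`.

```go
package main

import (
	"fmt"
	"sync"
)

// Chunk is a hypothetical, simplified stand-in for a chunk entry in the file
// metadata: its offset within the object and its size. The real SeaweedFS
// chunk record carries more fields (file id, e-tag, modified time, ...).
type Chunk struct {
	Offset int64
	Size   int64
}

// overlap returns the portion of chunk c that falls inside the requested byte
// range [rangeStart, rangeEnd): an offset into the chunk and a length.
// ok is false when the chunk lies entirely outside the range.
func overlap(c Chunk, rangeStart, rangeEnd int64) (inChunkOffset, length int64, ok bool) {
	start := rangeStart
	if c.Offset > start {
		start = c.Offset
	}
	end := rangeEnd
	if c.Offset+c.Size < end {
		end = c.Offset + c.Size
	}
	if start >= end {
		return 0, 0, false
	}
	return start - c.Offset, end - start, true
}

// copyChunks copies every chunk overlapping [rangeStart, rangeEnd), keeping at
// most maxConcurrent copy operations in flight via a counting semaphore.
// copyOne is a placeholder for the per-chunk work: assign a destination
// volume, read the source range, and upload it.
func copyChunks(chunks []Chunk, rangeStart, rangeEnd int64, maxConcurrent int,
	copyOne func(c Chunk, inChunkOffset, length int64) error) error {

	sem := make(chan struct{}, maxConcurrent)
	var wg sync.WaitGroup
	var mu sync.Mutex
	var firstErr error

	for _, c := range chunks {
		off, n, ok := overlap(c, rangeStart, rangeEnd)
		if !ok {
			continue // chunk lies outside the requested range
		}
		wg.Add(1)
		sem <- struct{}{} // blocks while maxConcurrent copies are running
		go func(c Chunk, off, n int64) {
			defer wg.Done()
			defer func() { <-sem }()
			if err := copyOne(c, off, n); err != nil {
				mu.Lock()
				if firstErr == nil {
					firstErr = err
				}
				mu.Unlock()
			}
		}(c, off, n)
	}
	wg.Wait()
	return firstErr
}

func main() {
	// Three 4 MiB chunks; copy bytes [2 MiB, 10 MiB) of the object,
	// with at most 4 chunk copies running at once.
	chunks := []Chunk{{0, 4 << 20}, {4 << 20, 4 << 20}, {8 << 20, 4 << 20}}
	err := copyChunks(chunks, 2<<20, 10<<20, 4, func(c Chunk, off, n int64) error {
		fmt.Printf("copy chunk@%d: in-chunk offset %d, length %d\n", c.Offset, off, n)
		return nil
	})
	if err != nil {
		fmt.Println("copy failed:", err)
	}
}
```

The buffered-channel semaphore is just one simple way to bound concurrency in Go; the same cap could be expressed with an errgroup and a worker limit. The key point from the PR description is that only chunk metadata and the overlapping byte ranges move, so the whole object is never downloaded and re-uploaded.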
Directories in this path:

* data
* mq
* random_access
* s3