mirror of
https://github.com/seaweedfs/seaweedfs.git
synced 2025-11-08 04:46:13 +08:00
S3 API: Add SSE-KMS (#7144)
* implement sse-c
* fix Content-Range
* adding tests
* Update s3_sse_c_test.go
* copy sse-c objects
* adding tests
* refactor
* multi reader
* remove extra write header call
* refactor
* SSE-C encrypted objects do not support HTTP Range requests
* robust
* fix server starts
* Update Makefile
* Update Makefile
* ci: remove SSE-C integration tests and workflows; delete test/s3/encryption/
* s3: SSE-C MD5 must be base64 (case-sensitive); fix validation, comparisons, metadata storage; update tests
* minor
* base64
* Update SSE-C_IMPLEMENTATION.md

  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update weed/s3api/s3api_object_handlers.go

  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update SSE-C_IMPLEMENTATION.md

  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* address comments
* fix test
* fix compilation
* Bucket Default Encryption

  To complete the SSE-KMS implementation for production use:
  - Add AWS KMS Provider - implement weed/kms/aws/aws_kms.go using the AWS SDK
  - Integrate with S3 Handlers - update PUT/GET object handlers to use SSE-KMS
  - Add Multipart Upload Support - extend SSE-KMS to multipart uploads
  - Configuration Integration - add KMS configuration to filer.toml
  - Documentation - update the SeaweedFS wiki with SSE-KMS usage examples

* store bucket sse config in proto
* add more tests
* Update SSE-C_IMPLEMENTATION.md

  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Fix rebase errors and restore structured BucketMetadata API

  Merge Conflict Fixes:
  - Fixed merge conflicts in header.go (SSE-C and SSE-KMS headers)
  - Fixed merge conflicts in s3api_errors.go (SSE-C and SSE-KMS error codes)
  - Fixed merge conflicts in s3_sse_c.go (copy strategy constants)
  - Fixed merge conflicts in s3api_object_handlers_copy.go (copy strategy usage)

  API Restoration:
  - Restored BucketMetadata struct with Tags, CORS, and Encryption fields
  - Restored structured API functions: GetBucketMetadata, SetBucketMetadata, UpdateBucketMetadata
  - Restored helper functions: UpdateBucketTags, UpdateBucketCORS, UpdateBucketEncryption
  - Restored clear functions: ClearBucketTags, ClearBucketCORS, ClearBucketEncryption

  Handler Updates:
  - Updated GetBucketTaggingHandler to use GetBucketMetadata() directly
  - Updated PutBucketTaggingHandler to use UpdateBucketTags()
  - Updated DeleteBucketTaggingHandler to use ClearBucketTags()
  - Updated CORS handlers to use UpdateBucketCORS() and ClearBucketCORS()
  - Updated loadCORSFromBucketContent to use GetBucketMetadata()

  Internal Function Updates:
  - Updated getBucketMetadata() to return a *BucketMetadata struct
  - Updated setBucketMetadata() to accept a *BucketMetadata struct
  - Updated getBucketEncryptionMetadata() to use GetBucketMetadata()
  - Updated setBucketEncryptionMetadata() to use SetBucketMetadata()

  Benefits:
  - Resolved all rebase conflicts while preserving both SSE-C and SSE-KMS functionality
  - Maintained a consistent structured API throughout the codebase
  - Eliminated intermediate wrapper functions for cleaner code
  - Proper error handling with better granularity
  - All tests passing and build successful

  The bucket metadata system now uses a unified, type-safe, structured API that supports tags, CORS, and encryption configuration consistently.
* Fix updateEncryptionConfiguration for first-time bucket encryption setup

  - Change getBucketEncryptionMetadata to getBucketMetadata to avoid failures when no encryption config exists
  - Change setBucketEncryptionMetadata to setBucketMetadataWithEncryption for consistency
  - This fixes the critical issue where bucket encryption configuration failed for buckets without existing encryption

  Fixes: https://github.com/seaweedfs/seaweedfs/pull/7144#discussion_r2285669572

* Fix rebase conflicts and maintain structured BucketMetadata API

  Resolved Conflicts:
  - Fixed merge conflicts in s3api_bucket_config.go between the structured API (HEAD) and old intermediate functions
  - Kept the modern structured API approach: UpdateBucketCORS, ClearBucketCORS, UpdateBucketEncryption
  - Removed old intermediate functions: setBucketTags, deleteBucketTags, setBucketMetadataWithEncryption

  API Consistency Maintained:
  - updateCORSConfiguration: uses UpdateBucketCORS() directly
  - removeCORSConfiguration: uses ClearBucketCORS() directly
  - updateEncryptionConfiguration: uses UpdateBucketEncryption() directly
  - All structured API functions preserved: GetBucketMetadata, SetBucketMetadata, UpdateBucketMetadata

  Benefits:
  - Maintains clean separation between API layers
  - Preserves atomic metadata updates with proper error handling
  - Eliminates function indirection for better performance
  - Consistent API usage pattern throughout the codebase
  - All tests passing and build successful

  The bucket metadata system continues to use the unified, type-safe, structured API that properly handles tags, CORS, and encryption configuration without any intermediate wrapper functions.

* Fix complex rebase conflicts and maintain clean structured BucketMetadata API

  Resolved Complex Conflicts:
  - Fixed merge conflicts between the modern structured API (HEAD) and a mixed approach
  - Removed duplicate function declarations that caused compilation errors
  - Consistently chose the structured API approach over intermediate functions

  Fixed Functions:
  - BucketMetadata struct: maintained clean field alignment
  - loadCORSFromBucketContent: uses GetBucketMetadata() directly
  - updateCORSConfiguration: uses UpdateBucketCORS() directly
  - removeCORSConfiguration: uses ClearBucketCORS() directly
  - getBucketMetadata: returns a *BucketMetadata struct consistently
  - setBucketMetadata: accepts a *BucketMetadata struct consistently

  Removed Duplicates:
  - Eliminated duplicate GetBucketMetadata implementations
  - Eliminated duplicate SetBucketMetadata implementations
  - Eliminated duplicate UpdateBucketMetadata implementations
  - Eliminated duplicate helper functions (UpdateBucketTags, etc.)

  API Consistency Achieved:
  - Single, unified BucketMetadata struct for all operations
  - Atomic updates through UpdateBucketMetadata with function callbacks
  - Type-safe operations with proper error handling
  - No intermediate wrapper functions cluttering the API

  Benefits:
  - Clean, maintainable codebase with no function duplication
  - Consistent structured API usage throughout all bucket operations
  - Proper error handling and type safety
  - Build successful and all tests passing

  The bucket metadata system now has a completely clean, structured API without any conflicts, duplicates, or inconsistencies.
* Update remaining functions to use the new structured BucketMetadata APIs directly

  Updated functions to follow the pattern established in bucket config:
  - getEncryptionConfiguration() -> uses GetBucketMetadata() directly
  - removeEncryptionConfiguration() -> uses ClearBucketEncryption() directly

  Benefits:
  - Consistent API usage pattern across all bucket metadata operations
  - Simpler, more readable code that leverages the structured API
  - Eliminates calls to intermediate legacy functions
  - Better error handling and logging consistency
  - All tests pass with improved functionality

  This completes the transition to using the new structured BucketMetadata API throughout the entire bucket configuration and encryption subsystem.

* Fix GitHub PR #7144 code review comments

  Address all code review comments from the Gemini Code Assist bot:

  1. **High Priority - SSE-KMS Key Validation**: Fixed ValidateSSEKMSKey to allow an empty KMS key ID
     - An empty key ID now indicates use of the default KMS key (consistent with AWS behavior)
     - Updated ParseSSEKMSHeaders to call validation after parsing
     - Enhanced isValidKMSKeyID to reject keys with spaces and invalid characters

  2. **Medium Priority - KMS Registry Error Handling**: Improved error collection in CloseAll
     - Now collects all provider close errors instead of only returning the last one
     - Uses proper error formatting with the %w verb for error wrapping
     - Returns a single error for one failure, a combined message for multiple failures

  3. **Medium Priority - Local KMS Aliases Consistency**: Fixed alias handling in CreateKey
     - Now updates the aliases slice in-place to maintain consistency
     - Ensures both the p.keys map and the key.Aliases slice use the same prefixed format

  All changes maintain backward compatibility and improve error-handling robustness. Tests updated and passing for all scenarios including edge cases.

* Use errors.Join for KMS registry error handling

  Replace manual string building with the more idiomatic errors.Join function:
  - Removed manual error message concatenation with strings.Builder
  - Simplified error handling logic by using errors.Join(allErrors...)
  - Removed the now-unnecessary strings import
  - Added the errors import for errors.Join

  This approach is cleaner, more idiomatic, and automatically handles:
  - Returning nil for an empty error slice
  - Returning a single error for a one-element slice
  - Properly formatting multiple errors with newlines

  The errors.Join function was introduced in Go 1.20 and is the recommended way to combine multiple errors.

* Update registry.go
* Fix GitHub PR #7144 latest review comments

  Address all new code review comments from the Gemini Code Assist bot:

  1. **High Priority - SSE-KMS Detection Logic**: Tightened the IsSSEKMSEncrypted function
     - Now relies only on the canonical x-amz-server-side-encryption header
     - Removed the redundant check for x-amz-encrypted-data-key metadata
     - Prevents misinterpretation of objects with an inconsistent metadata state
     - Updated the test case to reflect correct behavior (encrypted data key only = false)

  2. **Medium Priority - UUID Validation**: Enhanced KMS key ID validation
     - Replaced a simplistic length/hyphen-count check with proper regex validation
     - Added the regexp import for robust UUID format checking
     - Regex pattern: ^[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}$
     - Prevents invalid formats like '------------------------------------' from passing

  3. **Medium Priority - Alias Mutation Fix**: Avoided input slice modification
     - Changed CreateKey to not mutate the input aliases slice in-place
     - Uses a local variable for the modified alias to prevent side effects
     - Maintains backward compatibility while being safer for callers

  All changes improve code robustness and follow AWS S3 standards more closely. Tests updated and passing for all scenarios including edge cases.

* Fix failing SSE tests

  Address two failing test cases:

  1. **TestSSEHeaderConflicts**: Fixed SSE-C and SSE-KMS mutual exclusion
     - Modified IsSSECRequest to return false if SSE-KMS headers are present
     - Modified IsSSEKMSRequest to return false if SSE-C headers are present
     - This prevents both detection functions from returning true simultaneously
     - Aligns with AWS S3 behavior, where SSE-C and SSE-KMS are mutually exclusive

  2. **TestBucketEncryptionEdgeCases**: Fixed XML namespace validation
     - Added namespace validation in the encryptionConfigFromXMLBytes function
     - Now rejects XML with invalid namespaces (only allows the empty or AWS standard namespace)
     - Validates XMLName.Space to ensure proper XML structure
     - Prevents acceptance of malformed XML with incorrect namespaces

  Both fixes improve compliance with AWS S3 standards and prevent invalid configurations from being accepted. All SSE and bucket encryption tests now pass successfully.

* Fix GitHub PR #7144 latest review comments

  Address two new code review comments from the Gemini Code Assist bot:

  1. **High Priority - Race Condition in UpdateBucketMetadata**: Fixed a thread-safety issue
     - Added a per-bucket locking mechanism to prevent race conditions
     - Introduced a bucketMetadataLocks map with an RWMutex for each bucket
     - Added a getBucketMetadataLock helper with a double-checked locking pattern
     - UpdateBucketMetadata now uses bucket-specific locks to serialize metadata updates
     - Prevents last-writer-wins scenarios when concurrent requests update different metadata parts

  2. **Medium Priority - KMS Key ARN Validation**: Improved the robustness of ARN validation
     - Enhanced the isValidKMSKeyID function to strictly validate ARN structure
     - Changed from 'len(parts) >= 6' to 'len(parts) != 6' for an exact part count
     - Added proper resource validation for the key/ and alias/ prefixes
     - Prevents malformed ARNs with incorrect structure from being accepted
     - Now validates: arn:aws:kms:region:account:key/keyid or arn:aws:kms:region:account:alias/aliasname

  Both fixes improve system reliability and prevent edge cases that could cause data corruption or security issues. All existing tests continue to pass.
* format
* address comments
* Configuration Adapter
* Regex Optimization
* Caching Integration
* add negative cache for non-existent buckets
* remove bucketMetadataLocks
* address comments
* address comments
* copying objects with sse-kms
* copying strategy
* store IV in entry metadata
* implement compression reader
* extract json map as sse kms context
* bucket key
* comments
* rotate sse chunks
* KMS Data Keys use AES-GCM + nonce
* add comments
* Update weed/s3api/s3_sse_kms.go

  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update s3api_object_handlers_put.go
* get IV from response header
* set sse headers
* Update s3api_object_handlers.go
* deterministic JSON marshaling
* store iv in entry metadata
* address comments
* not used
* store iv in destination metadata

  Ensures that SSE-C copy operations with re-encryption (the decrypt/re-encrypt scenario) now properly store the destination encryption metadata.

* add todo
* address comments
* SSE-S3 Deserialization
* add BucketKMSCache to BucketConfig
* fix test compilation
* already not empty
* use constants
* fix: critical metadata (encrypted data keys, encryption context, etc.) was never stored during PUT/copy operations
* address comments
* fix tests
* Fix SSE-KMS Copy Re-encryption
* Cache now persists across requests
* fix test
* iv in metadata only
* SSE-KMS copy operations should follow the same pattern as SSE-C
* fix size overhead calculation
* Filer-Side SSE Metadata Processing
* SSE Integration Tests
* fix tests
* clean up
* Update s3_sse_multipart_test.go
* add s3 sse tests
* unused
* add logs
* Update Makefile
* Update Makefile
* s3 health check
* The tests were failing because they tried to run both SSE-C and SSE-KMS tests
* Update weed/s3api/s3_sse_c.go

  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update Makefile
* add back
* Update Makefile
* address comments
* fix tests
* Update s3-sse-tests.yml
* Update s3-sse-tests.yml
* fix sse-kms for PUT operation
* IV
* Update auth_credentials.go
* fix multipart with kms
* constants
* multipart sse kms

  Modified handleSSEKMSResponse to detect multipart SSE-KMS objects.
  Added createMultipartSSEKMSDecryptedReader to handle each chunk independently.
  Each chunk now gets its own decrypted reader before combining into the final stream.

* validate key id
* add SSEType
* permissive kms key format
* Update s3_sse_kms_test.go
* format
* assert equal
* uploading SSE-KMS metadata per chunk
* persist sse type and metadata
* avoid re-chunk multipart uploads
* decryption process to use stored PartOffset values
* constants
* sse-c multipart upload
* Unified Multipart SSE Copy
* purge
* fix fatalf
* avoid io.MultiReader which does not close underlying readers
* unified cross-encryption
* fix Single-object SSE-C
* adjust constants
* range read sse files
* remove debug logs

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
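A minimal sketch of the errors.Join-based cleanup described in the message above. The `Provider` interface, `Registry` type, and field names are assumptions for illustration, not the actual weed/kms types:

```go
package kms

import (
	"errors"
	"fmt"
)

// Provider is assumed to be something closeable; the real interface is larger.
type Provider interface {
	Close() error
}

type Registry struct {
	providers map[string]Provider
}

// CloseAll closes every registered provider and reports all failures,
// not just the last one.
func (r *Registry) CloseAll() error {
	var allErrors []error
	for name, p := range r.providers {
		if err := p.Close(); err != nil {
			// %w keeps the underlying error unwrappable via errors.Is/As.
			allErrors = append(allErrors, fmt.Errorf("close provider %q: %w", name, err))
		}
	}
	// errors.Join (Go 1.20+) returns nil for an empty slice, the error itself
	// for a single element, and a newline-joined error for multiple elements.
	return errors.Join(allErrors...)
}
```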
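A sketch of the stricter key-ID validation described above, built from the exact UUID regex quoted in the commit message and the six-part ARN check. Names and the precise accepted formats are illustrative; the real `isValidKMSKeyID` in the SeaweedFS code may differ (a later commit notes the format was made more permissive):

```go
package kms

import (
	"regexp"
	"strings"
)

var uuidPattern = regexp.MustCompile(
	`^[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}$`)

// isValidKMSKeyID accepts an empty ID (use the default key, matching AWS
// behavior), a bare UUID key ID, or an ARN with exactly six colon-separated
// parts whose resource starts with "key/" or "alias/".
func isValidKMSKeyID(keyID string) bool {
	if keyID == "" {
		return true // empty selects the default KMS key
	}
	if strings.ContainsAny(keyID, " \t\n") {
		return false // reject whitespace anywhere in the ID
	}
	if uuidPattern.MatchString(keyID) {
		return true
	}
	parts := strings.Split(keyID, ":")
	if len(parts) != 6 || parts[0] != "arn" || parts[2] != "kms" {
		return false
	}
	return strings.HasPrefix(parts[5], "key/") || strings.HasPrefix(parts[5], "alias/")
}
```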
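The bullet "KMS Data Keys use AES-GCM + nonce" refers to envelope encryption: a per-object data key is used for the object bytes, while the data key itself is sealed with AES-GCM under a fresh random nonce and only the KMS-wrapped copy is persisted. A generic AES-GCM seal helper, as a sketch only (the real chunk and metadata handling in weed/s3api is more involved):

```go
package sse

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
)

// sealWithKey encrypts plaintext under a 256-bit key using AES-GCM.
// The random nonce is prepended to the ciphertext so the decryptor
// can recover it; GCM also authenticates the payload.
func sealWithKey(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Seal appends ciphertext+tag to the nonce, yielding nonce||ciphertext||tag.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}
```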
454	test/s3/sse/Makefile	Normal file
@@ -0,0 +1,454 @@
# Makefile for S3 SSE Integration Tests
# This Makefile provides targets for running comprehensive S3 Server-Side Encryption tests

# Default values
SEAWEEDFS_BINARY ?= weed
S3_PORT ?= 8333
FILER_PORT ?= 8888
VOLUME_PORT ?= 8080
MASTER_PORT ?= 9333
TEST_TIMEOUT ?= 15m
BUCKET_PREFIX ?= test-sse-
ACCESS_KEY ?= some_access_key1
SECRET_KEY ?= some_secret_key1
VOLUME_MAX_SIZE_MB ?= 50
VOLUME_MAX_COUNT ?= 100

# SSE-KMS configuration
KMS_KEY_ID ?= test-key-123
KMS_TYPE ?= local

# Test directory
TEST_DIR := $(shell pwd)
SEAWEEDFS_ROOT := $(shell cd ../../../ && pwd)

# Colors for output
RED := \033[0;31m
GREEN := \033[0;32m
YELLOW := \033[1;33m
NC := \033[0m # No Color

.PHONY: all test test-basic test-ssec test-ssekms test-copy test-multipart test-errors test-quick benchmark perf stress clean start-seaweedfs stop-seaweedfs stop-seaweedfs-safe start-seaweedfs-ci check-binary build-weed help help-extended test-with-server test-quick-with-server test-metadata-persistence debug-logs debug-status manual-start manual-stop ci-test

all: test-basic

# Build SeaweedFS binary (GitHub Actions compatible)
build-weed:
	@echo "Building SeaweedFS binary..."
	@cd $(SEAWEEDFS_ROOT)/weed && go install -buildvcs=false
	@echo "✅ SeaweedFS binary built successfully"

help:
	@echo "SeaweedFS S3 SSE Integration Tests"
	@echo ""
	@echo "Available targets:"
	@echo "  test-basic       - Run basic S3 put/get tests first"
	@echo "  test             - Run all S3 SSE integration tests"
	@echo "  test-ssec        - Run SSE-C tests only"
	@echo "  test-ssekms      - Run SSE-KMS tests only"
	@echo "  test-copy        - Run SSE copy operation tests"
	@echo "  test-multipart   - Run SSE multipart upload tests"
	@echo "  test-errors      - Run SSE error condition tests"
	@echo "  benchmark        - Run SSE performance benchmarks"
	@echo "  start-seaweedfs  - Start SeaweedFS server for testing"
	@echo "  stop-seaweedfs   - Stop SeaweedFS server"
	@echo "  clean            - Clean up test artifacts"
	@echo "  check-binary     - Check if SeaweedFS binary exists"
	@echo ""
	@echo "Configuration:"
	@echo "  SEAWEEDFS_BINARY=$(SEAWEEDFS_BINARY)"
	@echo "  S3_PORT=$(S3_PORT)"
	@echo "  FILER_PORT=$(FILER_PORT)"
	@echo "  VOLUME_PORT=$(VOLUME_PORT)"
	@echo "  MASTER_PORT=$(MASTER_PORT)"
	@echo "  TEST_TIMEOUT=$(TEST_TIMEOUT)"
	@echo "  VOLUME_MAX_SIZE_MB=$(VOLUME_MAX_SIZE_MB)"

check-binary:
	@if ! command -v $(SEAWEEDFS_BINARY) > /dev/null 2>&1; then \
		echo "$(RED)Error: SeaweedFS binary '$(SEAWEEDFS_BINARY)' not found in PATH$(NC)"; \
		echo "Please build SeaweedFS first by running 'make' in the root directory"; \
		exit 1; \
	fi
	@echo "$(GREEN)SeaweedFS binary found: $$(which $(SEAWEEDFS_BINARY))$(NC)"

start-seaweedfs: check-binary
	@echo "$(YELLOW)Starting SeaweedFS server for SSE testing...$(NC)"
	@# Use port-based cleanup for consistency and safety
	@echo "Cleaning up any existing processes..."
	@lsof -ti :$(MASTER_PORT) | xargs -r kill -TERM || true
	@lsof -ti :$(VOLUME_PORT) | xargs -r kill -TERM || true
	@lsof -ti :$(FILER_PORT) | xargs -r kill -TERM || true
	@lsof -ti :$(S3_PORT) | xargs -r kill -TERM || true
	@sleep 2

	# Create necessary directories
	@mkdir -p /tmp/seaweedfs-test-sse-master
	@mkdir -p /tmp/seaweedfs-test-sse-volume
	@mkdir -p /tmp/seaweedfs-test-sse-filer

	# Start master server with volume size limit and explicit gRPC port
	@nohup $(SEAWEEDFS_BINARY) master -port=$(MASTER_PORT) -port.grpc=$$(( $(MASTER_PORT) + 10000 )) -mdir=/tmp/seaweedfs-test-sse-master -volumeSizeLimitMB=$(VOLUME_MAX_SIZE_MB) -ip=127.0.0.1 > /tmp/seaweedfs-sse-master.log 2>&1 &
	@sleep 3

	# Start volume server with master HTTP port and increased capacity
	@nohup $(SEAWEEDFS_BINARY) volume -port=$(VOLUME_PORT) -mserver=127.0.0.1:$(MASTER_PORT) -dir=/tmp/seaweedfs-test-sse-volume -max=$(VOLUME_MAX_COUNT) -ip=127.0.0.1 > /tmp/seaweedfs-sse-volume.log 2>&1 &
	@sleep 5

	# Start filer server (using standard SeaweedFS gRPC port convention: HTTP port + 10000)
	@nohup $(SEAWEEDFS_BINARY) filer -port=$(FILER_PORT) -port.grpc=$$(( $(FILER_PORT) + 10000 )) -master=127.0.0.1:$(MASTER_PORT) -dataCenter=defaultDataCenter -ip=127.0.0.1 > /tmp/seaweedfs-sse-filer.log 2>&1 &
	@sleep 3

	# Create S3 configuration with SSE-KMS support
	@printf '{"identities":[{"name":"%s","credentials":[{"accessKey":"%s","secretKey":"%s"}],"actions":["Admin","Read","Write"]}],"kms":{"type":"%s","configs":{"keyId":"%s","encryptionContext":{},"bucketKey":false}}}' "$(ACCESS_KEY)" "$(ACCESS_KEY)" "$(SECRET_KEY)" "$(KMS_TYPE)" "$(KMS_KEY_ID)" > /tmp/seaweedfs-sse-s3.json

	# Start S3 server with KMS configuration
	@nohup $(SEAWEEDFS_BINARY) s3 -port=$(S3_PORT) -filer=127.0.0.1:$(FILER_PORT) -config=/tmp/seaweedfs-sse-s3.json -ip.bind=127.0.0.1 > /tmp/seaweedfs-sse-s3.log 2>&1 &
	@sleep 5

	# Wait for S3 service to be ready
	@echo "$(YELLOW)Waiting for S3 service to be ready...$(NC)"
	@for i in $$(seq 1 30); do \
		if curl -s -f http://127.0.0.1:$(S3_PORT) > /dev/null 2>&1; then \
			echo "$(GREEN)S3 service is ready$(NC)"; \
			break; \
		fi; \
		echo "Waiting for S3 service... ($$i/30)"; \
		sleep 1; \
	done

	# Additional wait for filer gRPC to be ready
	@echo "$(YELLOW)Waiting for filer gRPC to be ready...$(NC)"
	@sleep 2
	@echo "$(GREEN)SeaweedFS server started successfully for SSE testing$(NC)"
	@echo "Master: http://localhost:$(MASTER_PORT)"
	@echo "Volume: http://localhost:$(VOLUME_PORT)"
	@echo "Filer: http://localhost:$(FILER_PORT)"
	@echo "S3: http://localhost:$(S3_PORT)"
	@echo "Volume Max Size: $(VOLUME_MAX_SIZE_MB)MB"
	@echo "SSE-KMS Support: Enabled"

stop-seaweedfs:
	@echo "$(YELLOW)Stopping SeaweedFS server...$(NC)"
	@# Use port-based cleanup for consistency and safety
	@lsof -ti :$(MASTER_PORT) | xargs -r kill -TERM || true
	@lsof -ti :$(VOLUME_PORT) | xargs -r kill -TERM || true
	@lsof -ti :$(FILER_PORT) | xargs -r kill -TERM || true
	@lsof -ti :$(S3_PORT) | xargs -r kill -TERM || true
	@sleep 2
	@echo "$(GREEN)SeaweedFS server stopped$(NC)"

# CI-safe server stop that's more conservative
stop-seaweedfs-safe:
	@echo "$(YELLOW)Safely stopping SeaweedFS server...$(NC)"
	@# Use port-based cleanup which is safer in CI
	@if command -v lsof >/dev/null 2>&1; then \
		echo "Using lsof for port-based cleanup..."; \
		lsof -ti :$(MASTER_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$(VOLUME_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$(FILER_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$(S3_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
	else \
		echo "lsof not available, using netstat approach..."; \
		netstat -tlnp 2>/dev/null | grep :$(MASTER_PORT) | awk '{print $$7}' | cut -d/ -f1 | head -5 | while read pid; do [ "$$pid" != "-" ] && kill -TERM $$pid 2>/dev/null || true; done; \
		netstat -tlnp 2>/dev/null | grep :$(VOLUME_PORT) | awk '{print $$7}' | cut -d/ -f1 | head -5 | while read pid; do [ "$$pid" != "-" ] && kill -TERM $$pid 2>/dev/null || true; done; \
		netstat -tlnp 2>/dev/null | grep :$(FILER_PORT) | awk '{print $$7}' | cut -d/ -f1 | head -5 | while read pid; do [ "$$pid" != "-" ] && kill -TERM $$pid 2>/dev/null || true; done; \
		netstat -tlnp 2>/dev/null | grep :$(S3_PORT) | awk '{print $$7}' | cut -d/ -f1 | head -5 | while read pid; do [ "$$pid" != "-" ] && kill -TERM $$pid 2>/dev/null || true; done; \
	fi
	@sleep 2
	@echo "$(GREEN)SeaweedFS server safely stopped$(NC)"

clean:
	@echo "$(YELLOW)Cleaning up SSE test artifacts...$(NC)"
	@rm -rf /tmp/seaweedfs-test-sse-*
	@rm -f /tmp/seaweedfs-sse-*.log
	@rm -f /tmp/seaweedfs-sse-s3.json
	@echo "$(GREEN)SSE test cleanup completed$(NC)"

test-basic: check-binary
	@echo "$(YELLOW)Running basic S3 SSE integration tests...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@echo "$(GREEN)Starting basic SSE tests...$(NC)"
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestSSECIntegrationBasic|TestSSEKMSIntegrationBasic" ./test/s3/sse || (echo "$(RED)Basic SSE tests failed$(NC)" && $(MAKE) stop-seaweedfs-safe && exit 1)
	@$(MAKE) stop-seaweedfs-safe
	@echo "$(GREEN)Basic SSE tests completed successfully!$(NC)"

test: test-basic
	@echo "$(YELLOW)Running all S3 SSE integration tests...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@echo "$(GREEN)Starting comprehensive SSE tests...$(NC)"
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestSSE.*Integration" ./test/s3/sse || (echo "$(RED)SSE tests failed$(NC)" && $(MAKE) stop-seaweedfs-safe && exit 1)
	@$(MAKE) stop-seaweedfs-safe
	@echo "$(GREEN)All SSE integration tests completed successfully!$(NC)"

test-ssec: check-binary
	@echo "$(YELLOW)Running SSE-C integration tests...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@echo "$(GREEN)Starting SSE-C tests...$(NC)"
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestSSEC.*Integration" ./test/s3/sse || (echo "$(RED)SSE-C tests failed$(NC)" && $(MAKE) stop-seaweedfs-safe && exit 1)
	@$(MAKE) stop-seaweedfs-safe
	@echo "$(GREEN)SSE-C tests completed successfully!$(NC)"

test-ssekms: check-binary
	@echo "$(YELLOW)Running SSE-KMS integration tests...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@echo "$(GREEN)Starting SSE-KMS tests...$(NC)"
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestSSEKMS.*Integration" ./test/s3/sse || (echo "$(RED)SSE-KMS tests failed$(NC)" && $(MAKE) stop-seaweedfs-safe && exit 1)
	@$(MAKE) stop-seaweedfs-safe
	@echo "$(GREEN)SSE-KMS tests completed successfully!$(NC)"

test-copy: check-binary
	@echo "$(YELLOW)Running SSE copy operation tests...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@echo "$(GREEN)Starting SSE copy tests...$(NC)"
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run ".*CopyIntegration" ./test/s3/sse || (echo "$(RED)SSE copy tests failed$(NC)" && $(MAKE) stop-seaweedfs-safe && exit 1)
	@$(MAKE) stop-seaweedfs-safe
	@echo "$(GREEN)SSE copy tests completed successfully!$(NC)"

test-multipart: check-binary
	@echo "$(YELLOW)Running SSE multipart upload tests...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@echo "$(GREEN)Starting SSE multipart tests...$(NC)"
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestSSEMultipartUploadIntegration" ./test/s3/sse || (echo "$(RED)SSE multipart tests failed$(NC)" && $(MAKE) stop-seaweedfs-safe && exit 1)
	@$(MAKE) stop-seaweedfs-safe
	@echo "$(GREEN)SSE multipart tests completed successfully!$(NC)"

test-errors: check-binary
	@echo "$(YELLOW)Running SSE error condition tests...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@echo "$(GREEN)Starting SSE error tests...$(NC)"
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestSSEErrorConditions" ./test/s3/sse || (echo "$(RED)SSE error tests failed$(NC)" && $(MAKE) stop-seaweedfs-safe && exit 1)
	@$(MAKE) stop-seaweedfs-safe
	@echo "$(GREEN)SSE error tests completed successfully!$(NC)"

test-quick: check-binary
	@echo "$(YELLOW)Running quick SSE tests...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@echo "$(GREEN)Starting quick SSE tests...$(NC)"
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=5m -run "TestSSECIntegrationBasic|TestSSEKMSIntegrationBasic" ./test/s3/sse || (echo "$(RED)Quick SSE tests failed$(NC)" && $(MAKE) stop-seaweedfs-safe && exit 1)
	@$(MAKE) stop-seaweedfs-safe
	@echo "$(GREEN)Quick SSE tests completed successfully!$(NC)"

benchmark: check-binary
	@echo "$(YELLOW)Running SSE performance benchmarks...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@echo "$(GREEN)Starting SSE benchmarks...$(NC)"
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=30m -bench=. -run=Benchmark ./test/s3/sse || (echo "$(RED)SSE benchmarks failed$(NC)" && $(MAKE) stop-seaweedfs-safe && exit 1)
	@$(MAKE) stop-seaweedfs-safe
	@echo "$(GREEN)SSE benchmarks completed!$(NC)"

# Debug targets
debug-logs:
	@echo "$(YELLOW)=== Master Log ===$(NC)"
	@tail -n 50 /tmp/seaweedfs-sse-master.log || echo "No master log found"
	@echo "$(YELLOW)=== Volume Log ===$(NC)"
	@tail -n 50 /tmp/seaweedfs-sse-volume.log || echo "No volume log found"
	@echo "$(YELLOW)=== Filer Log ===$(NC)"
	@tail -n 50 /tmp/seaweedfs-sse-filer.log || echo "No filer log found"
	@echo "$(YELLOW)=== S3 Log ===$(NC)"
	@tail -n 50 /tmp/seaweedfs-sse-s3.log || echo "No S3 log found"

debug-status:
	@echo "$(YELLOW)=== Process Status ===$(NC)"
	@ps aux | grep -E "(weed|seaweedfs)" | grep -v grep || echo "No SeaweedFS processes found"
	@echo "$(YELLOW)=== Port Status ===$(NC)"
	@netstat -an | grep -E "($(MASTER_PORT)|$(VOLUME_PORT)|$(FILER_PORT)|$(S3_PORT))" || echo "No ports in use"

# Manual test targets for development
manual-start: start-seaweedfs
	@echo "$(GREEN)SeaweedFS with SSE support is now running for manual testing$(NC)"
	@echo "You can now run SSE tests manually or use S3 clients to test SSE functionality"
	@echo "Run 'make manual-stop' when finished"

manual-stop: stop-seaweedfs clean

# CI/CD targets
ci-test: test-quick

# Stress test
stress: check-binary
	@echo "$(YELLOW)Running SSE stress tests...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=60m -run="TestSSE.*Integration" -count=5 ./test/s3/sse || (echo "$(RED)SSE stress tests failed$(NC)" && $(MAKE) stop-seaweedfs-safe && exit 1)
	@$(MAKE) stop-seaweedfs-safe
	@echo "$(GREEN)SSE stress tests completed!$(NC)"

# Performance test with various data sizes
perf: check-binary
	@echo "$(YELLOW)Running SSE performance tests with various data sizes...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=60m -run=".*VariousDataSizes" ./test/s3/sse || (echo "$(RED)SSE performance tests failed$(NC)" && $(MAKE) -C $(TEST_DIR) stop-seaweedfs-safe && exit 1)
	@$(MAKE) -C $(TEST_DIR) stop-seaweedfs-safe
	@echo "$(GREEN)SSE performance tests completed!$(NC)"

# Test specific scenarios that would catch the metadata bug
test-metadata-persistence: check-binary
	@echo "$(YELLOW)Running SSE metadata persistence tests (would catch filer metadata bugs)...$(NC)"
	@$(MAKE) start-seaweedfs-ci
	@sleep 5
	@echo "$(GREEN)Testing that SSE metadata survives full PUT/GET cycle...$(NC)"
	@cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestSSECIntegrationBasic" ./test/s3/sse || (echo "$(RED)SSE metadata persistence tests failed$(NC)" && $(MAKE) -C $(TEST_DIR) stop-seaweedfs-safe && exit 1)
	@$(MAKE) -C $(TEST_DIR) stop-seaweedfs-safe
	@echo "$(GREEN)SSE metadata persistence tests completed successfully!$(NC)"
	@echo "$(GREEN)✅ These tests would have caught the filer metadata storage bug!$(NC)"

# GitHub Actions compatible test-with-server target that handles server lifecycle
test-with-server: build-weed
	@echo "🚀 Starting SSE integration tests with automated server management..."
	@echo "Starting SeaweedFS cluster..."
	@# Use the CI-safe startup directly without aggressive cleanup
	@if $(MAKE) start-seaweedfs-ci > weed-test.log 2>&1; then \
		echo "✅ SeaweedFS cluster started successfully"; \
		echo "Running SSE integration tests..."; \
		trap '$(MAKE) -C $(TEST_DIR) stop-seaweedfs-safe || true' EXIT; \
		if [ -n "$(TEST_PATTERN)" ]; then \
			echo "🔍 Running tests matching pattern: $(TEST_PATTERN)"; \
			cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "$(TEST_PATTERN)" ./test/s3/sse || exit 1; \
		else \
			echo "🔍 Running all SSE integration tests"; \
			cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestSSE.*Integration" ./test/s3/sse || exit 1; \
		fi; \
		echo "✅ All tests completed successfully"; \
		$(MAKE) -C $(TEST_DIR) stop-seaweedfs-safe || true; \
	else \
		echo "❌ Failed to start SeaweedFS cluster"; \
		echo "=== Server startup logs ==="; \
		tail -100 weed-test.log 2>/dev/null || echo "No startup log available"; \
		echo "=== System information ==="; \
		ps aux | grep -E "weed|make" | grep -v grep || echo "No relevant processes found"; \
		exit 1; \
	fi

# CI-safe server startup that avoids process conflicts
start-seaweedfs-ci: check-binary
	@echo "$(YELLOW)Starting SeaweedFS server for CI testing...$(NC)"

	# Create necessary directories
	@mkdir -p /tmp/seaweedfs-test-sse-master
	@mkdir -p /tmp/seaweedfs-test-sse-volume
	@mkdir -p /tmp/seaweedfs-test-sse-filer

	# Clean up any old server logs
	@rm -f /tmp/seaweedfs-sse-*.log || true

	# Start master server with volume size limit and explicit gRPC port
	@echo "Starting master server..."
	@nohup $(SEAWEEDFS_BINARY) master -port=$(MASTER_PORT) -port.grpc=$$(( $(MASTER_PORT) + 10000 )) -mdir=/tmp/seaweedfs-test-sse-master -volumeSizeLimitMB=$(VOLUME_MAX_SIZE_MB) -ip=127.0.0.1 > /tmp/seaweedfs-sse-master.log 2>&1 &
	@sleep 3

	# Start volume server with master HTTP port and increased capacity
	@echo "Starting volume server..."
	@nohup $(SEAWEEDFS_BINARY) volume -port=$(VOLUME_PORT) -mserver=127.0.0.1:$(MASTER_PORT) -dir=/tmp/seaweedfs-test-sse-volume -max=$(VOLUME_MAX_COUNT) -ip=127.0.0.1 > /tmp/seaweedfs-sse-volume.log 2>&1 &
	@sleep 5

	# Start filer server (using standard SeaweedFS gRPC port convention: HTTP port + 10000)
	@echo "Starting filer server..."
	@nohup $(SEAWEEDFS_BINARY) filer -port=$(FILER_PORT) -port.grpc=$$(( $(FILER_PORT) + 10000 )) -master=127.0.0.1:$(MASTER_PORT) -dataCenter=defaultDataCenter -ip=127.0.0.1 > /tmp/seaweedfs-sse-filer.log 2>&1 &
	@sleep 3

	# Create S3 configuration with SSE-KMS support
	@printf '{"identities":[{"name":"%s","credentials":[{"accessKey":"%s","secretKey":"%s"}],"actions":["Admin","Read","Write"]}],"kms":{"type":"%s","configs":{"keyId":"%s","encryptionContext":{},"bucketKey":false}}}' "$(ACCESS_KEY)" "$(ACCESS_KEY)" "$(SECRET_KEY)" "$(KMS_TYPE)" "$(KMS_KEY_ID)" > /tmp/seaweedfs-sse-s3.json

	# Start S3 server with KMS configuration
	@echo "Starting S3 server..."
	@nohup $(SEAWEEDFS_BINARY) s3 -port=$(S3_PORT) -filer=127.0.0.1:$(FILER_PORT) -config=/tmp/seaweedfs-sse-s3.json -ip.bind=127.0.0.1 > /tmp/seaweedfs-sse-s3.log 2>&1 &
	@sleep 5

	# Wait for S3 service to be ready - use port-based checking for reliability
	@echo "$(YELLOW)Waiting for S3 service to be ready...$(NC)"
	@for i in $$(seq 1 20); do \
		if netstat -an 2>/dev/null | grep -q ":$(S3_PORT).*LISTEN" || \
		   ss -an 2>/dev/null | grep -q ":$(S3_PORT).*LISTEN" || \
		   lsof -i :$(S3_PORT) >/dev/null 2>&1; then \
			echo "$(GREEN)S3 service is listening on port $(S3_PORT)$(NC)"; \
			sleep 1; \
			break; \
		fi; \
		if [ $$i -eq 20 ]; then \
			echo "$(RED)S3 service failed to start within 20 seconds$(NC)"; \
			echo "=== Detailed Logs ==="; \
			echo "Master log:"; tail -30 /tmp/seaweedfs-sse-master.log || true; \
			echo "Volume log:"; tail -30 /tmp/seaweedfs-sse-volume.log || true; \
			echo "Filer log:"; tail -30 /tmp/seaweedfs-sse-filer.log || true; \
			echo "S3 log:"; tail -30 /tmp/seaweedfs-sse-s3.log || true; \
			echo "=== Port Status ==="; \
			netstat -an 2>/dev/null | grep ":$(S3_PORT)" || \
			ss -an 2>/dev/null | grep ":$(S3_PORT)" || \
			echo "No port listening on $(S3_PORT)"; \
			echo "=== Process Status ==="; \
			ps aux | grep -E "weed.*s3.*$(S3_PORT)" | grep -v grep || echo "No S3 process found"; \
			exit 1; \
		fi; \
		echo "Waiting for S3 service... ($$i/20)"; \
		sleep 1; \
	done

	# Additional wait for filer gRPC to be ready
	@echo "$(YELLOW)Waiting for filer gRPC to be ready...$(NC)"
	@sleep 2
	@echo "$(GREEN)SeaweedFS server started successfully for SSE testing$(NC)"
	@echo "Master: http://localhost:$(MASTER_PORT)"
	@echo "Volume: http://localhost:$(VOLUME_PORT)"
	@echo "Filer: http://localhost:$(FILER_PORT)"
	@echo "S3: http://localhost:$(S3_PORT)"
	@echo "Volume Max Size: $(VOLUME_MAX_SIZE_MB)MB"
	@echo "SSE-KMS Support: Enabled"

# GitHub Actions compatible quick test subset
test-quick-with-server: build-weed
	@echo "🚀 Starting quick SSE tests with automated server management..."
	@trap 'make stop-seaweedfs-safe || true' EXIT; \
	echo "Starting SeaweedFS cluster..."; \
	if make start-seaweedfs-ci > weed-test.log 2>&1; then \
		echo "✅ SeaweedFS cluster started successfully"; \
		echo "Running quick SSE integration tests..."; \
		cd $(SEAWEEDFS_ROOT) && go test -v -timeout=$(TEST_TIMEOUT) -run "TestSSECIntegrationBasic|TestSSEKMSIntegrationBasic|TestSimpleSSECIntegration" ./test/s3/sse || exit 1; \
		echo "✅ Quick tests completed successfully"; \
		make stop-seaweedfs-safe || true; \
	else \
		echo "❌ Failed to start SeaweedFS cluster"; \
		echo "=== Server startup logs ==="; \
		tail -50 weed-test.log; \
		exit 1; \
	fi

# Help target - extended version
help-extended:
	@echo "Available targets:"
	@echo "  test                      - Run all SSE integration tests (requires running server)"
	@echo "  test-with-server          - Run all tests with automatic server management (GitHub Actions compatible)"
	@echo "  test-quick-with-server    - Run quick tests with automatic server management"
	@echo "  test-ssec                 - Run only SSE-C tests"
	@echo "  test-ssekms               - Run only SSE-KMS tests"
	@echo "  test-copy                 - Run only copy operation tests"
	@echo "  test-multipart            - Run only multipart upload tests"
	@echo "  benchmark                 - Run performance benchmarks"
	@echo "  perf                      - Run performance tests with various data sizes"
	@echo "  test-metadata-persistence - Test metadata persistence (catches filer bugs)"
	@echo "  build-weed                - Build SeaweedFS binary"
	@echo "  check-binary              - Check if SeaweedFS binary exists"
	@echo "  start-seaweedfs           - Start SeaweedFS cluster"
	@echo "  start-seaweedfs-ci        - Start SeaweedFS cluster (CI-safe version)"
	@echo "  stop-seaweedfs            - Stop SeaweedFS cluster"
	@echo "  stop-seaweedfs-safe       - Stop SeaweedFS cluster (CI-safe version)"
	@echo "  clean                     - Clean up test artifacts"
	@echo "  debug-logs                - Show recent logs from all services"
	@echo ""
	@echo "Environment Variables:"
	@echo "  ACCESS_KEY         - S3 access key (default: some_access_key1)"
	@echo "  SECRET_KEY         - S3 secret key (default: some_secret_key1)"
	@echo "  KMS_KEY_ID         - KMS key ID for SSE-KMS (default: test-key-123)"
	@echo "  KMS_TYPE           - KMS type (default: local)"
	@echo "  VOLUME_MAX_SIZE_MB - Volume maximum size in MB (default: 50)"
	@echo "  TEST_TIMEOUT       - Test timeout (default: 15m)"
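For reference, a few typical invocations of the targets above, run from the repository root; `TEST_PATTERN` is the variable that `test-with-server` reads to filter Go tests, and the port overrides mirror the variables defined at the top of the Makefile:

```bash
# Quick CI-style run with automatic server lifecycle
make -C test/s3/sse test-quick-with-server

# Full suite with server management, or a filtered subset
make -C test/s3/sse test-with-server
make -C test/s3/sse test-with-server TEST_PATTERN='TestSSEKMS.*Integration'

# Override ports when the defaults are already taken
S3_PORT=8444 FILER_PORT=8889 make -C test/s3/sse test-basic
```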
234	test/s3/sse/README.md	Normal file
@@ -0,0 +1,234 @@
# S3 Server-Side Encryption (SSE) Integration Tests

This directory contains comprehensive integration tests for SeaweedFS S3 API Server-Side Encryption functionality. These tests validate the complete end-to-end encryption/decryption pipeline, from S3 API requests through filer metadata storage.

## Overview

The SSE integration tests cover three main encryption methods (example requests below):

- **SSE-C (Customer-Provided Keys)**: Client provides encryption keys via request headers
- **SSE-KMS (Key Management Service)**: Server manages encryption keys through a KMS provider
- **SSE-S3 (Server-Managed Keys)**: Server automatically manages encryption keys
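For orientation, these are the request shapes the three methods use, shown with the AWS CLI against a local gateway. Endpoint, bucket, and key-ID values are placeholders matching this test setup's defaults; exact key-encoding behavior can vary by CLI version (recent versions derive the key-MD5 header from `--sse-customer-key` automatically):

```bash
# SSE-C: the client supplies the key with every request (32-byte key shown raw)
aws --endpoint-url http://127.0.0.1:8333 s3api put-object \
  --bucket test-sse-demo --key hello.txt --body hello.txt \
  --sse-customer-algorithm AES256 \
  --sse-customer-key "12345678901234567890123456789012"

# SSE-KMS: the server encrypts with a KMS-managed key
aws --endpoint-url http://127.0.0.1:8333 s3api put-object \
  --bucket test-sse-demo --key hello.txt --body hello.txt \
  --server-side-encryption aws:kms --ssekms-key-id test-key-123

# SSE-S3: the server manages the key automatically
aws --endpoint-url http://127.0.0.1:8333 s3api put-object \
  --bucket test-sse-demo --key hello.txt --body hello.txt \
  --server-side-encryption AES256
```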
## Why Integration Tests Matter

These integration tests were created to address a **critical gap in test coverage** that previously existed. While the SeaweedFS codebase had comprehensive unit tests for SSE components, it lacked integration tests that validated the complete request flow:

```
Client Request → S3 API → Filer Storage → Metadata Persistence → Retrieval → Decryption
```

### The Bug These Tests Would Have Caught

A critical bug was discovered where:
- ✅ The S3 API correctly encrypted data and sent metadata headers to the filer
- ❌ **The filer did not process SSE metadata headers**, losing all encryption metadata
- ❌ Objects could be encrypted but **never decrypted** (the metadata was lost)

**Unit tests passed** because they tested components in isolation, but the **integration was broken**. These integration tests specifically validate that:

1. Encryption metadata is correctly sent to the filer
2. The filer properly processes and stores the metadata
3. Objects can be successfully retrieved and decrypted
4. Copy operations preserve encryption metadata
5. Multipart uploads maintain encryption consistency

## Test Structure

### Core Integration Tests

#### Basic Functionality
- TestSSECIntegrationBasic - Basic SSE-C PUT/GET cycle
- TestSSEKMSIntegrationBasic - Basic SSE-KMS PUT/GET cycle

#### Data Size Validation
- TestSSECIntegrationVariousDataSizes - SSE-C with various data sizes (0B to 1MB)
- TestSSEKMSIntegrationVariousDataSizes - SSE-KMS with various data sizes

#### Object Copy Operations
- TestSSECObjectCopyIntegration - SSE-C object copying (key rotation, encryption changes)
- TestSSEKMSObjectCopyIntegration - SSE-KMS object copying

#### Multipart Uploads
- TestSSEMultipartUploadIntegration - SSE multipart uploads for large objects

#### Error Conditions
- TestSSEErrorConditions - Invalid keys, malformed requests, error handling

### Performance Tests
- BenchmarkSSECThroughput - SSE-C performance benchmarking
- BenchmarkSSEKMSThroughput - SSE-KMS performance benchmarking

## Running Tests

### Prerequisites

1. **Build SeaweedFS**: Ensure the `weed` binary is built and available in PATH
   ```bash
   cd /path/to/seaweedfs
   make
   ```

2. **Dependencies**: Tests use AWS SDK Go v2 and testify - these are handled by Go modules

### Quick Test

Run basic SSE integration tests:
```bash
make test-basic
```

### Comprehensive Testing

Run all SSE integration tests:
```bash
make test
```

### Specific Test Categories

```bash
make test-ssec        # SSE-C tests only
make test-ssekms      # SSE-KMS tests only
make test-copy        # Copy operation tests
make test-multipart   # Multipart upload tests
make test-errors      # Error condition tests
```

### Performance Testing

```bash
make benchmark        # Performance benchmarks
make perf             # Various data size performance tests
```

### Development Testing

```bash
make manual-start     # Start SeaweedFS for manual testing
# ... run manual tests ...
make manual-stop      # Stop and cleanup
```

## Test Configuration

### Default Configuration

The tests use these default settings:
- **S3 Endpoint**: http://127.0.0.1:8333
- **Access Key**: some_access_key1
- **Secret Key**: some_secret_key1
- **Region**: us-east-1
- **Bucket Prefix**: test-sse-

### Custom Configuration

Override defaults via environment variables:
```bash
S3_PORT=8444 FILER_PORT=8889 make test
```

### Test Environment

Each test run:
1. Starts a complete SeaweedFS cluster (master, volume, filer, s3)
2. Configures KMS support for SSE-KMS tests
3. Creates temporary buckets with unique names
4. Runs tests with real HTTP requests
5. Cleans up all test artifacts

## Test Data Coverage

### Data Sizes Tested
- **0 bytes**: Empty files (edge case)
- **1 byte**: Minimal data
- **16 bytes**: Single AES block
- **31 bytes**: Just under two blocks
- **32 bytes**: Exactly two blocks
- **100 bytes**: Small file
- **1 KB**: Small text file
- **8 KB**: Medium file
- **64 KB**: Large file
- **1 MB**: Very large file

### Encryption Key Scenarios
- **SSE-C**: Random 256-bit keys, key rotation, wrong keys (see the key-generation sketch below)
- **SSE-KMS**: Various key IDs, encryption contexts, bucket keys
- **Copy Operations**: Same key, different keys, encryption transitions
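A sketch of what a test helper like `generateSSECKey()` (used by the test files in this directory) plausibly does: produce a random 256-bit key together with the base64 key and the base64-encoded MD5 digest that the `x-amz-server-side-encryption-customer-key` and `-key-MD5` headers require. The struct fields mirror how the tests reference them (`KeyB64`, `KeyMD5`); the actual helper lives in the test sources and may differ in detail:

```go
package sse_test

import (
	"crypto/md5"
	"crypto/rand"
	"encoding/base64"
)

// SSECKey bundles a customer-provided key with its wire encodings.
type SSECKey struct {
	Key    []byte // raw 256-bit key
	KeyB64 string // base64 key, sent in the customer-key header
	KeyMD5 string // base64 MD5 of the raw key (base64 is case-sensitive)
}

func generateSSECKey() *SSECKey {
	key := make([]byte, 32) // 256-bit key for AES256
	if _, err := rand.Read(key); err != nil {
		panic(err) // crypto/rand failure is unrecoverable in a test helper
	}
	sum := md5.Sum(key) // MD5 is computed over the raw key bytes, not the base64
	return &SSECKey{
		Key:    key,
		KeyB64: base64.StdEncoding.EncodeToString(key),
		KeyMD5: base64.StdEncoding.EncodeToString(sum[:]),
	}
}
```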
## Critical Test Scenarios

### Metadata Persistence Validation

The integration tests specifically validate scenarios that would catch metadata storage bugs:

```go
// 1. Upload with SSE-C
client.PutObject(..., SSECustomerKey: key)   // ← Metadata sent to filer

// 2. Retrieve with SSE-C
client.GetObject(..., SSECustomerKey: key)   // ← Metadata retrieved from filer

// 3. Verify decryption works
assert.Equal(originalData, decryptedData)    // ← Would fail if metadata lost
```

### Content-Length Validation

Tests verify that Content-Length headers are correct, which would catch bugs related to IV handling:

```go
assert.Equal(int64(originalSize), resp.ContentLength) // ← Would catch IV-in-stream bugs
```

## Debugging

### View Logs
```bash
make debug-logs       # Show recent log entries
make debug-status     # Show process and port status
```

### Manual Testing
```bash
make manual-start     # Start SeaweedFS
# Test with S3 clients, curl, etc.
make manual-stop      # Cleanup
```

## Integration Test Benefits

These integration tests provide:

1. **End-to-End Validation**: Complete request pipeline testing
2. **Metadata Persistence**: Validates filer storage/retrieval of encryption metadata
3. **Real Network Communication**: Uses actual HTTP requests and responses
4. **Production-Like Environment**: Full SeaweedFS cluster with all components
5. **Regression Protection**: Prevents critical integration bugs
6. **Performance Baselines**: Benchmarking for performance monitoring

## Continuous Integration

For CI/CD pipelines, use:
```bash
make ci-test          # Quick tests suitable for CI
make stress           # Stress testing for stability validation
```

## Key Differences from Unit Tests

| Aspect | Unit Tests | Integration Tests |
|--------|------------|-------------------|
| **Scope** | Individual functions | Complete request pipeline |
| **Dependencies** | Mocked/simulated | Real SeaweedFS cluster |
| **Network** | None | Real HTTP requests |
| **Storage** | In-memory | Real filer database |
| **Metadata** | Manual simulation | Actual storage/retrieval |
| **Speed** | Fast (milliseconds) | Slower (seconds) |
| **Coverage** | Component logic | System integration |

## Conclusion

These integration tests ensure that SeaweedFS SSE functionality works correctly in production-like environments. They complement the existing unit tests by validating that all components work together properly, providing confidence that encryption/decryption operations will succeed for real users.

**Most importantly**, these tests would have immediately caught the critical filer metadata storage bug that previously went undetected, demonstrating the crucial importance of integration testing for distributed systems.
1178	test/s3/sse/s3_sse_integration_test.go	Normal file
File diff suppressed because it is too large
373	test/s3/sse/s3_sse_multipart_copy_test.go	Normal file
@@ -0,0 +1,373 @@
|
||||
package sse_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/md5"
|
||||
"fmt"
|
||||
"io"
|
||||
"testing"
|
||||
|
||||
"github.com/aws/aws-sdk-go-v2/aws"
|
||||
"github.com/aws/aws-sdk-go-v2/service/s3"
|
||||
"github.com/aws/aws-sdk-go-v2/service/s3/types"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
// TestSSEMultipartCopy tests copying multipart encrypted objects
|
||||
func TestSSEMultipartCopy(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
client, err := createS3Client(ctx, defaultConfig)
|
||||
require.NoError(t, err, "Failed to create S3 client")
|
||||
|
||||
bucketName, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"sse-multipart-copy-")
|
||||
require.NoError(t, err, "Failed to create test bucket")
|
||||
defer cleanupTestBucket(ctx, client, bucketName)
|
||||
|
||||
// Generate test data for multipart upload (7.5MB)
|
||||
originalData := generateTestData(7*1024*1024 + 512*1024)
|
||||
originalMD5 := fmt.Sprintf("%x", md5.Sum(originalData))
|
||||
|
||||
t.Run("Copy SSE-C Multipart Object", func(t *testing.T) {
|
||||
testSSECMultipartCopy(t, ctx, client, bucketName, originalData, originalMD5)
|
||||
})
|
||||
|
||||
t.Run("Copy SSE-KMS Multipart Object", func(t *testing.T) {
|
||||
testSSEKMSMultipartCopy(t, ctx, client, bucketName, originalData, originalMD5)
|
||||
})
|
||||
|
||||
t.Run("Copy SSE-C to SSE-KMS", func(t *testing.T) {
|
||||
testSSECToSSEKMSCopy(t, ctx, client, bucketName, originalData, originalMD5)
|
||||
})
|
||||
|
||||
t.Run("Copy SSE-KMS to SSE-C", func(t *testing.T) {
|
||||
testSSEKMSToSSECCopy(t, ctx, client, bucketName, originalData, originalMD5)
|
||||
})
|
||||
|
||||
t.Run("Copy SSE-C to Unencrypted", func(t *testing.T) {
|
||||
testSSECToUnencryptedCopy(t, ctx, client, bucketName, originalData, originalMD5)
|
||||
})
|
||||
|
||||
t.Run("Copy SSE-KMS to Unencrypted", func(t *testing.T) {
|
||||
testSSEKMSToUnencryptedCopy(t, ctx, client, bucketName, originalData, originalMD5)
|
||||
})
|
||||
}
|
||||
|
||||
// testSSECMultipartCopy tests copying SSE-C multipart objects with same key
|
||||
func testSSECMultipartCopy(t *testing.T, ctx context.Context, client *s3.Client, bucketName string, originalData []byte, originalMD5 string) {
|
||||
sseKey := generateSSECKey()
|
||||
|
||||
// Upload original multipart SSE-C object
|
||||
sourceKey := "source-ssec-multipart-object"
|
||||
err := uploadMultipartSSECObject(ctx, client, bucketName, sourceKey, originalData, *sseKey)
|
||||
require.NoError(t, err, "Failed to upload source SSE-C multipart object")
|
||||
|
||||
// Copy with same SSE-C key
|
||||
destKey := "dest-ssec-multipart-object"
|
||||
_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
|
||||
Bucket: aws.String(bucketName),
|
||||
Key: aws.String(destKey),
|
||||
CopySource: aws.String(fmt.Sprintf("%s/%s", bucketName, sourceKey)),
|
||||
// Copy source SSE-C headers
|
||||
CopySourceSSECustomerAlgorithm: aws.String("AES256"),
|
||||
CopySourceSSECustomerKey: aws.String(sseKey.KeyB64),
|
||||
CopySourceSSECustomerKeyMD5: aws.String(sseKey.KeyMD5),
|
||||
// Destination SSE-C headers (same key)
|
||||
SSECustomerAlgorithm: aws.String("AES256"),
|
||||
SSECustomerKey: aws.String(sseKey.KeyB64),
|
||||
SSECustomerKeyMD5: aws.String(sseKey.KeyMD5),
|
||||
})
|
||||
require.NoError(t, err, "Failed to copy SSE-C multipart object")
|
||||
|
||||
// Verify copied object
|
||||
verifyEncryptedObject(t, ctx, client, bucketName, destKey, originalData, originalMD5, sseKey, nil)
|
||||
}
|
||||
|
||||
// testSSEKMSMultipartCopy tests copying SSE-KMS multipart objects with same key
|
||||
func testSSEKMSMultipartCopy(t *testing.T, ctx context.Context, client *s3.Client, bucketName string, originalData []byte, originalMD5 string) {
|
||||
// Upload original multipart SSE-KMS object
|
||||
sourceKey := "source-ssekms-multipart-object"
|
||||
err := uploadMultipartSSEKMSObject(ctx, client, bucketName, sourceKey, "test-multipart-key", originalData)
|
||||
require.NoError(t, err, "Failed to upload source SSE-KMS multipart object")
|
||||
|
||||
// Copy with same SSE-KMS key
|
||||
destKey := "dest-ssekms-multipart-object"
|
||||
_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
|
||||
Bucket: aws.String(bucketName),
|
||||
Key: aws.String(destKey),
|
||||
CopySource: aws.String(fmt.Sprintf("%s/%s", bucketName, sourceKey)),
|
||||
ServerSideEncryption: types.ServerSideEncryptionAwsKms,
|
||||
SSEKMSKeyId: aws.String("test-multipart-key"),
|
||||
BucketKeyEnabled: aws.Bool(false),
|
||||
})
|
||||
require.NoError(t, err, "Failed to copy SSE-KMS multipart object")
|
||||
|
||||
// Verify copied object
|
||||
verifyEncryptedObject(t, ctx, client, bucketName, destKey, originalData, originalMD5, nil, aws.String("test-multipart-key"))
|
||||
}
|
||||
|
||||
// testSSECToSSEKMSCopy tests copying SSE-C multipart objects to SSE-KMS
|
||||
func testSSECToSSEKMSCopy(t *testing.T, ctx context.Context, client *s3.Client, bucketName string, originalData []byte, originalMD5 string) {
|
||||
sseKey := generateSSECKey()
|
||||
|
||||
// Upload original multipart SSE-C object
|
||||
sourceKey := "source-ssec-multipart-for-kms"
|
||||
err := uploadMultipartSSECObject(ctx, client, bucketName, sourceKey, originalData, *sseKey)
|
||||
require.NoError(t, err, "Failed to upload source SSE-C multipart object")
|
||||
|
||||
// Copy to SSE-KMS
|
||||
destKey := "dest-ssekms-from-ssec"
|
||||
_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
|
||||
Bucket: aws.String(bucketName),
|
||||
Key: aws.String(destKey),
|
||||
CopySource: aws.String(fmt.Sprintf("%s/%s", bucketName, sourceKey)),
|
||||
// Copy source SSE-C headers
|
||||
CopySourceSSECustomerAlgorithm: aws.String("AES256"),
|
||||
CopySourceSSECustomerKey: aws.String(sseKey.KeyB64),
|
||||
CopySourceSSECustomerKeyMD5: aws.String(sseKey.KeyMD5),
|
||||
// Destination SSE-KMS headers
|
||||
ServerSideEncryption: types.ServerSideEncryptionAwsKms,
|
||||
SSEKMSKeyId: aws.String("test-multipart-key"),
|
||||
BucketKeyEnabled: aws.Bool(false),
|
||||
})
|
||||
require.NoError(t, err, "Failed to copy SSE-C to SSE-KMS")
|
||||
|
||||
// Verify copied object as SSE-KMS
|
||||
verifyEncryptedObject(t, ctx, client, bucketName, destKey, originalData, originalMD5, nil, aws.String("test-multipart-key"))
|
||||
}

// testSSEKMSToSSECCopy tests copying SSE-KMS multipart objects to SSE-C
func testSSEKMSToSSECCopy(t *testing.T, ctx context.Context, client *s3.Client, bucketName string, originalData []byte, originalMD5 string) {
	sseKey := generateSSECKey()

	// Upload original multipart SSE-KMS object
	sourceKey := "source-ssekms-multipart-for-ssec"
	err := uploadMultipartSSEKMSObject(ctx, client, bucketName, sourceKey, "test-multipart-key", originalData)
	require.NoError(t, err, "Failed to upload source SSE-KMS multipart object")

	// Copy to SSE-C
	destKey := "dest-ssec-from-ssekms"
	_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
		Bucket:     aws.String(bucketName),
		Key:        aws.String(destKey),
		CopySource: aws.String(fmt.Sprintf("%s/%s", bucketName, sourceKey)),
		// Destination SSE-C headers
		SSECustomerAlgorithm: aws.String("AES256"),
		SSECustomerKey:       aws.String(sseKey.KeyB64),
		SSECustomerKeyMD5:    aws.String(sseKey.KeyMD5),
	})
	require.NoError(t, err, "Failed to copy SSE-KMS to SSE-C")

	// Verify copied object as SSE-C
	verifyEncryptedObject(t, ctx, client, bucketName, destKey, originalData, originalMD5, sseKey, nil)
}

// testSSECToUnencryptedCopy tests copying SSE-C multipart objects to unencrypted
func testSSECToUnencryptedCopy(t *testing.T, ctx context.Context, client *s3.Client, bucketName string, originalData []byte, originalMD5 string) {
	sseKey := generateSSECKey()

	// Upload original multipart SSE-C object
	sourceKey := "source-ssec-multipart-for-plain"
	err := uploadMultipartSSECObject(ctx, client, bucketName, sourceKey, originalData, *sseKey)
	require.NoError(t, err, "Failed to upload source SSE-C multipart object")

	// Copy to unencrypted
	destKey := "dest-plain-from-ssec"
	_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
		Bucket:     aws.String(bucketName),
		Key:        aws.String(destKey),
		CopySource: aws.String(fmt.Sprintf("%s/%s", bucketName, sourceKey)),
		// Copy source SSE-C headers
		CopySourceSSECustomerAlgorithm: aws.String("AES256"),
		CopySourceSSECustomerKey:       aws.String(sseKey.KeyB64),
		CopySourceSSECustomerKeyMD5:    aws.String(sseKey.KeyMD5),
		// No destination encryption headers
	})
	require.NoError(t, err, "Failed to copy SSE-C to unencrypted")

	// Verify copied object as unencrypted
	verifyEncryptedObject(t, ctx, client, bucketName, destKey, originalData, originalMD5, nil, nil)
}

// testSSEKMSToUnencryptedCopy tests copying SSE-KMS multipart objects to unencrypted
func testSSEKMSToUnencryptedCopy(t *testing.T, ctx context.Context, client *s3.Client, bucketName string, originalData []byte, originalMD5 string) {
	// Upload original multipart SSE-KMS object
	sourceKey := "source-ssekms-multipart-for-plain"
	err := uploadMultipartSSEKMSObject(ctx, client, bucketName, sourceKey, "test-multipart-key", originalData)
	require.NoError(t, err, "Failed to upload source SSE-KMS multipart object")

	// Copy to unencrypted
	destKey := "dest-plain-from-ssekms"
	_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
		Bucket:     aws.String(bucketName),
		Key:        aws.String(destKey),
		CopySource: aws.String(fmt.Sprintf("%s/%s", bucketName, sourceKey)),
		// No destination encryption headers
	})
	require.NoError(t, err, "Failed to copy SSE-KMS to unencrypted")

	// Verify copied object as unencrypted
	verifyEncryptedObject(t, ctx, client, bucketName, destKey, originalData, originalMD5, nil, nil)
}
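
The six copy scenarios above share one signature, so they could be driven from a single table-driven wrapper. A minimal sketch, not part of the diff: newTestS3Client is a hypothetical fixture helper, and testSSECMultipartCopy is assumed to share the signature of its siblings.

// Hypothetical aggregate test; newTestS3Client and the bucket/payload setup
// are assumptions standing in for the suite's real fixtures.
func TestMultipartEncryptedCopyMatrix(t *testing.T) {
	ctx := context.Background()
	client := newTestS3Client(t) // hypothetical fixture helper
	bucketName := "test-multipart-copy-matrix"

	// Two 5MB parts, so copies exercise the multipart path.
	originalData := make([]byte, 10*1024*1024)
	_, err := rand.Read(originalData)
	require.NoError(t, err)
	originalMD5 := fmt.Sprintf("%x", md5.Sum(originalData))

	scenarios := []struct {
		name string
		run  func(*testing.T, context.Context, *s3.Client, string, []byte, string)
	}{
		{"SSEC-to-SSEC", testSSECMultipartCopy},
		{"SSEKMS-to-SSEKMS", testSSEKMSMultipartCopy},
		{"SSEC-to-SSEKMS", testSSECToSSEKMSCopy},
		{"SSEKMS-to-SSEC", testSSEKMSToSSECCopy},
		{"SSEC-to-plain", testSSECToUnencryptedCopy},
		{"SSEKMS-to-plain", testSSEKMSToUnencryptedCopy},
	}
	for _, sc := range scenarios {
		t.Run(sc.name, func(t *testing.T) {
			sc.run(t, ctx, client, bucketName, originalData, originalMD5)
		})
	}
}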

// uploadMultipartSSECObject uploads a multipart SSE-C object
func uploadMultipartSSECObject(ctx context.Context, client *s3.Client, bucketName, objectKey string, data []byte, sseKey SSECKey) error {
	// Create multipart upload
	createResp, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
		Bucket:               aws.String(bucketName),
		Key:                  aws.String(objectKey),
		SSECustomerAlgorithm: aws.String("AES256"),
		SSECustomerKey:       aws.String(sseKey.KeyB64),
		SSECustomerKeyMD5:    aws.String(sseKey.KeyMD5),
	})
	if err != nil {
		return err
	}
	uploadID := aws.ToString(createResp.UploadId)

	// Upload parts
	partSize := 5 * 1024 * 1024 // 5MB
	var completedParts []types.CompletedPart

	for i := 0; i < len(data); i += partSize {
		end := i + partSize
		if end > len(data) {
			end = len(data)
		}

		partNumber := int32(len(completedParts) + 1)
		partResp, err := client.UploadPart(ctx, &s3.UploadPartInput{
			Bucket:               aws.String(bucketName),
			Key:                  aws.String(objectKey),
			PartNumber:           aws.Int32(partNumber),
			UploadId:             aws.String(uploadID),
			Body:                 bytes.NewReader(data[i:end]),
			SSECustomerAlgorithm: aws.String("AES256"),
			SSECustomerKey:       aws.String(sseKey.KeyB64),
			SSECustomerKeyMD5:    aws.String(sseKey.KeyMD5),
		})
		if err != nil {
			return err
		}

		completedParts = append(completedParts, types.CompletedPart{
			ETag:       partResp.ETag,
			PartNumber: aws.Int32(partNumber),
		})
	}

	// Complete multipart upload
	_, err = client.CompleteMultipartUpload(ctx, &s3.CompleteMultipartUploadInput{
		Bucket:   aws.String(bucketName),
		Key:      aws.String(objectKey),
		UploadId: aws.String(uploadID),
		MultipartUpload: &types.CompletedMultipartUpload{
			Parts: completedParts,
		},
	})

	return err
}

// uploadMultipartSSEKMSObject uploads a multipart SSE-KMS object
func uploadMultipartSSEKMSObject(ctx context.Context, client *s3.Client, bucketName, objectKey, keyID string, data []byte) error {
	// Create multipart upload
	createResp, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
		Bucket:               aws.String(bucketName),
		Key:                  aws.String(objectKey),
		ServerSideEncryption: types.ServerSideEncryptionAwsKms,
		SSEKMSKeyId:          aws.String(keyID),
		BucketKeyEnabled:     aws.Bool(false),
	})
	if err != nil {
		return err
	}
	uploadID := aws.ToString(createResp.UploadId)

	// Upload parts
	partSize := 5 * 1024 * 1024 // 5MB
	var completedParts []types.CompletedPart

	for i := 0; i < len(data); i += partSize {
		end := i + partSize
		if end > len(data) {
			end = len(data)
		}

		partNumber := int32(len(completedParts) + 1)
		partResp, err := client.UploadPart(ctx, &s3.UploadPartInput{
			Bucket:     aws.String(bucketName),
			Key:        aws.String(objectKey),
			PartNumber: aws.Int32(partNumber),
			UploadId:   aws.String(uploadID),
			Body:       bytes.NewReader(data[i:end]),
		})
		if err != nil {
			return err
		}

		completedParts = append(completedParts, types.CompletedPart{
			ETag:       partResp.ETag,
			PartNumber: aws.Int32(partNumber),
		})
	}

	// Complete multipart upload
	_, err = client.CompleteMultipartUpload(ctx, &s3.CompleteMultipartUploadInput{
		Bucket:   aws.String(bucketName),
		Key:      aws.String(objectKey),
		UploadId: aws.String(uploadID),
		MultipartUpload: &types.CompletedMultipartUpload{
			Parts: completedParts,
		},
	})

	return err
}
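
Note the asymmetry between the two upload helpers above: SSE-C requires the customer key headers on CreateMultipartUpload and again on every UploadPart, while SSE-KMS takes its encryption parameters once at CreateMultipartUpload and the parts inherit them. Both helpers use 5MB parts, the S3 minimum size for every part except the last.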

// verifyEncryptedObject verifies that a copied object can be retrieved and matches the original data
func verifyEncryptedObject(t *testing.T, ctx context.Context, client *s3.Client, bucketName, objectKey string, expectedData []byte, expectedMD5 string, sseKey *SSECKey, kmsKeyID *string) {
	var getInput *s3.GetObjectInput

	if sseKey != nil {
		// SSE-C object
		getInput = &s3.GetObjectInput{
			Bucket:               aws.String(bucketName),
			Key:                  aws.String(objectKey),
			SSECustomerAlgorithm: aws.String("AES256"),
			SSECustomerKey:       aws.String(sseKey.KeyB64),
			SSECustomerKeyMD5:    aws.String(sseKey.KeyMD5),
		}
	} else {
		// SSE-KMS or unencrypted object
		getInput = &s3.GetObjectInput{
			Bucket: aws.String(bucketName),
			Key:    aws.String(objectKey),
		}
	}

	getResp, err := client.GetObject(ctx, getInput)
	require.NoError(t, err, "Failed to retrieve copied object %s", objectKey)
	defer getResp.Body.Close()

	// Read and verify data
	retrievedData, err := io.ReadAll(getResp.Body)
	require.NoError(t, err, "Failed to read copied object data")

	require.Equal(t, len(expectedData), len(retrievedData), "Data size mismatch for object %s", objectKey)

	// Verify data using MD5
	retrievedMD5 := fmt.Sprintf("%x", md5.Sum(retrievedData))
	require.Equal(t, expectedMD5, retrievedMD5, "Data MD5 mismatch for object %s", objectKey)

	// Verify encryption headers
	if sseKey != nil {
		require.Equal(t, "AES256", aws.ToString(getResp.SSECustomerAlgorithm), "SSE-C algorithm mismatch")
		require.Equal(t, sseKey.KeyMD5, aws.ToString(getResp.SSECustomerKeyMD5), "SSE-C key MD5 mismatch")
	} else if kmsKeyID != nil {
		require.Equal(t, types.ServerSideEncryptionAwsKms, getResp.ServerSideEncryption, "SSE-KMS encryption mismatch")
		require.Contains(t, aws.ToString(getResp.SSEKMSKeyId), *kmsKeyID, "SSE-KMS key ID mismatch")
	}

	t.Logf("✅ Successfully verified copied object %s: %d bytes, MD5=%s", objectKey, len(retrievedData), retrievedMD5)
}
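
These tests lean on a generateSSECKey fixture defined earlier in the suite. For orientation, a minimal sketch of what it must provide; only the KeyB64 and KeyMD5 fields are confirmed by the calls above, the rest is assumption (the key derivation mirrors the one in simple_sse_test.go below):

// Hypothetical reconstruction of the SSE-C key fixture; the real definition
// lives elsewhere in this test suite.
type SSECKey struct {
	Key    []byte // raw 32-byte AES-256 key (assumed field)
	KeyB64 string // base64 key, sent as x-amz-server-side-encryption-customer-key
	KeyMD5 string // MD5 of the raw key, base64-encoded (not hex)
}

func generateSSECKey() *SSECKey {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err) // test fixture; a crypto/rand failure is unrecoverable
	}
	sum := md5.Sum(key)
	return &SSECKey{
		Key:    key,
		KeyB64: base64.StdEncoding.EncodeToString(key),
		KeyMD5: base64.StdEncoding.EncodeToString(sum[:]),
	}
}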
115
test/s3/sse/simple_sse_test.go
Normal file
@@ -0,0 +1,115 @@
package sse_test

import (
	"bytes"
	"context"
	"crypto/md5"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"io"
	"testing"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// TestSimpleSSECIntegration tests basic SSE-C with a fixed bucket name
func TestSimpleSSECIntegration(t *testing.T) {
	ctx := context.Background()

	// Create S3 client
	customResolver := aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
		return aws.Endpoint{
			URL:               "http://127.0.0.1:8333",
			HostnameImmutable: true,
		}, nil
	})

	awsCfg, err := config.LoadDefaultConfig(ctx,
		config.WithRegion("us-east-1"),
		config.WithEndpointResolverWithOptions(customResolver),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
			"some_access_key1",
			"some_secret_key1",
			"",
		)),
	)
	require.NoError(t, err)

	client := s3.NewFromConfig(awsCfg, func(o *s3.Options) {
		o.UsePathStyle = true
	})

	bucketName := "test-debug-bucket"
	objectKey := fmt.Sprintf("test-object-prefixed-%d", time.Now().UnixNano())

	// Generate SSE-C key (32 random bytes; key and its MD5 are base64-encoded)
	key := make([]byte, 32)
	_, err = rand.Read(key)
	require.NoError(t, err, "Failed to generate random key")
	keyB64 := base64.StdEncoding.EncodeToString(key)
	keyMD5Hash := md5.Sum(key)
	keyMD5 := base64.StdEncoding.EncodeToString(keyMD5Hash[:])

	testData := []byte("Hello, simple SSE-C integration test!")

	// Ensure bucket exists
	_, err = client.CreateBucket(ctx, &s3.CreateBucketInput{
		Bucket: aws.String(bucketName),
	})
	if err != nil {
		t.Logf("Bucket creation result: %v (may be OK if the bucket already exists)", err)
	}

	// Wait a moment for the bucket to be ready
	time.Sleep(1 * time.Second)

	t.Run("PUT with SSE-C", func(t *testing.T) {
		_, err := client.PutObject(ctx, &s3.PutObjectInput{
			Bucket:               aws.String(bucketName),
			Key:                  aws.String(objectKey),
			Body:                 bytes.NewReader(testData),
			SSECustomerAlgorithm: aws.String("AES256"),
			SSECustomerKey:       aws.String(keyB64),
			SSECustomerKeyMD5:    aws.String(keyMD5),
		})
		require.NoError(t, err, "Failed to upload SSE-C object")
		t.Log("✅ SSE-C PUT succeeded!")
	})

	t.Run("GET with SSE-C", func(t *testing.T) {
		resp, err := client.GetObject(ctx, &s3.GetObjectInput{
			Bucket:               aws.String(bucketName),
			Key:                  aws.String(objectKey),
			SSECustomerAlgorithm: aws.String("AES256"),
			SSECustomerKey:       aws.String(keyB64),
			SSECustomerKeyMD5:    aws.String(keyMD5),
		})
		require.NoError(t, err, "Failed to retrieve SSE-C object")
		defer resp.Body.Close()

		retrievedData, err := io.ReadAll(resp.Body)
		require.NoError(t, err, "Failed to read retrieved data")
		assert.Equal(t, testData, retrievedData, "Retrieved data doesn't match original")

		// Verify SSE-C headers
		assert.Equal(t, "AES256", aws.ToString(resp.SSECustomerAlgorithm))
		assert.Equal(t, keyMD5, aws.ToString(resp.SSECustomerKeyMD5))

		t.Log("✅ SSE-C GET succeeded and data matches!")
	})

	t.Run("GET without key should fail", func(t *testing.T) {
		_, err := client.GetObject(ctx, &s3.GetObjectInput{
			Bucket: aws.String(bucketName),
			Key:    aws.String(objectKey),
		})
		assert.Error(t, err, "Should fail to retrieve SSE-C object without key")
		t.Log("✅ GET without key correctly failed")
	})
}
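
The final subtest only asserts that some error occurred. If a tighter check is wanted later, the SDK error can be unwrapped into a smithy API error; a sketch, with the specific error code an assumption:

// Sketch only (uses errors and github.com/aws/smithy-go). Assumes the server
// surfaces AWS's "InvalidRequest" error code for a GET of an SSE-C object
// without the key; SeaweedFS may map this case differently.
func assertMissingSSECKeyError(t *testing.T, err error) {
	var apiErr smithy.APIError
	require.True(t, errors.As(err, &apiErr), "expected an API error, got: %v", err)
	assert.Equal(t, "InvalidRequest", apiErr.ErrorCode())
}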

1
test/s3/sse/test_single_ssec.txt
Normal file
@@ -0,0 +1 @@
Test data for single object SSE-C