S3 API: Add SSE-C (#7143)

* implement sse-c

* fix Content-Range

* adding tests

* Update s3_sse_c_test.go

* copy sse-c objects

* adding tests

* refactor

* multi reader

* remove extra write header call

* refactor

* SSE-C encrypted objects do not support HTTP Range requests

* robust

* fix server starts

* Update Makefile

* Update Makefile

* ci: remove SSE-C integration tests and workflows; delete test/s3/encryption/

* s3: SSE-C MD5 must be base64 (case-sensitive); fix validation, comparisons, metadata storage; update tests

* minor

* base64

* Update SSE-C_IMPLEMENTATION.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update weed/s3api/s3api_object_handlers.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update SSE-C_IMPLEMENTATION.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* address comments

* fix test

* fix compilation

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Chris Lu
2025-08-19 08:19:30 -07:00
committed by GitHub
parent 6e56cac9e5
commit 2714b70955
12 changed files with 1267 additions and 23 deletions


@@ -410,3 +410,5 @@ jobs:
name: s3-versioning-stress-logs
path: test/s3/versioning/weed-test*.log
retention-days: 7
# Removed SSE-C integration tests and compatibility job

.gitignore vendored

@@ -116,3 +116,4 @@ test/s3/versioning/weed-test.log
docker/agent_pub_record
docker/admin_integration/weed-local
/seaweedfs-rdma-sidecar/bin
/test/s3/encryption/filerldb2

SSE-C_IMPLEMENTATION.md Normal file

@@ -0,0 +1,169 @@
# Server-Side Encryption with Customer-Provided Keys (SSE-C) Implementation
This document describes the implementation of SSE-C support in SeaweedFS, addressing the feature request from [GitHub Discussion #5361](https://github.com/seaweedfs/seaweedfs/discussions/5361).
## Overview
SSE-C allows clients to provide their own encryption keys for server-side encryption of objects stored in SeaweedFS. The server encrypts the data using the customer-provided AES-256 key but does not store the key itself; only a base64-encoded MD5 digest of the key is kept for validation.
## Implementation Details
### Architecture
The SSE-C implementation follows a transparent encryption/decryption pattern:
1. **Upload (PUT/POST)**: Data is encrypted with the customer key before being stored
2. **Download (GET/HEAD)**: Encrypted data is decrypted on-the-fly using the customer key
3. **Metadata Storage**: Only the encryption algorithm and key MD5 are stored as metadata
### Key Components
#### 1. Constants and Headers (`weed/s3api/s3_constants/header.go`)
- Added AWS-compatible SSE-C header constants
- Support for both regular and copy-source SSE-C headers
#### 2. Core SSE-C Logic (`weed/s3api/s3_sse_c.go`)
- **SSECustomerKey**: Structure to hold customer encryption key and metadata
- **SSECEncryptedReader**: Streaming encryption with AES-256-CTR mode
- **SSECDecryptedReader**: Streaming decryption with IV extraction
- **validateAndParseSSECHeaders**: Shared validation logic (DRY principle)
- **ParseSSECHeaders**: Parse regular SSE-C headers
- **ParseSSECCopySourceHeaders**: Parse copy-source SSE-C headers
- Header validation and parsing functions
- Metadata extraction and response handling
#### 3. Error Handling (`weed/s3api/s3err/s3api_errors.go`)
- New error codes for SSE-C validation failures
- AWS-compatible error messages and HTTP status codes
#### 4. S3 API Integration
- **PUT Object Handler**: Encrypts data streams transparently
- **GET Object Handler**: Decrypts data streams transparently
- **HEAD Object Handler**: Validates keys and returns appropriate headers
- **Metadata Storage**: Integrates with existing `SaveAmzMetaData` function
### Encryption Scheme
- **Algorithm**: AES-256-CTR (Counter mode)
- **Key Size**: 256 bits (32 bytes)
- **IV Generation**: Random 16-byte IV per object
- **Storage Format**: `[IV][EncryptedData]` where IV is prepended to encrypted content
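As a rough, self-contained illustration of this layout (not the production code; the real readers are `CreateSSECEncryptedReader` and `CreateSSECDecryptedReader` in `weed/s3api/s3_sse_c.go`):
```go
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
	"io"
)

// encrypt produces the [IV][EncryptedData] layout described above: a random
// 16-byte IV followed by the AES-256-CTR ciphertext.
func encrypt(key []byte, plaintext io.Reader) (io.Reader, error) {
	block, err := aes.NewCipher(key) // key must be 32 bytes (AES-256)
	if err != nil {
		return nil, err
	}
	iv := make([]byte, aes.BlockSize)
	if _, err := io.ReadFull(rand.Reader, iv); err != nil {
		return nil, err
	}
	stream := cipher.NewCTR(block, iv)
	// IV first, then the encrypted payload.
	return io.MultiReader(bytes.NewReader(iv), &cipher.StreamReader{S: stream, R: plaintext}), nil
}

// decrypt reads the IV back off the front of the stream and undoes CTR.
func decrypt(key []byte, ciphertext io.Reader) (io.Reader, error) {
	iv := make([]byte, aes.BlockSize)
	if _, err := io.ReadFull(ciphertext, iv); err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	return &cipher.StreamReader{S: cipher.NewCTR(block, iv), R: ciphertext}, nil
}

func main() {
	key := bytes.Repeat([]byte{0x42}, 32)
	enc, _ := encrypt(key, bytes.NewReader([]byte("hello sse-c")))
	dec, _ := decrypt(key, enc)
	out, _ := io.ReadAll(dec)
	fmt.Println(string(out)) // hello sse-c
}
```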
### Metadata Storage
SSE-C metadata is stored in the filer's extended attributes:
```
x-amz-server-side-encryption-customer-algorithm: "AES256"
x-amz-server-side-encryption-customer-key-md5: "<base64-encoded-md5-of-key>"
```
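Conceptually, the filer-side mapping looks like the following sketch (simplified from `SaveAmzMetaData` in `weed/server/filer_server_handlers_write_autochunk.go`; header names are the constants from `s3_constants/header.go`):
```go
package main

import (
	"fmt"
	"net/http"
)

// saveSSECMetadata mirrors, in simplified form, what SaveAmzMetaData does:
// persist only the algorithm and the base64 key MD5, never the key itself.
func saveSSECMetadata(r *http.Request, metadata map[string][]byte) {
	if alg := r.Header.Get("X-Amz-Server-Side-Encryption-Customer-Algorithm"); alg != "" {
		metadata["X-Amz-Server-Side-Encryption-Customer-Algorithm"] = []byte(alg)
	}
	if keyMD5 := r.Header.Get("X-Amz-Server-Side-Encryption-Customer-Key-MD5"); keyMD5 != "" {
		// Stored verbatim: the value is base64-encoded and case-sensitive.
		metadata["X-Amz-Server-Side-Encryption-Customer-Key-MD5"] = []byte(keyMD5)
	}
}

func main() {
	r, _ := http.NewRequest(http.MethodPut, "http://localhost:8333/bucket/object", nil)
	r.Header.Set("X-Amz-Server-Side-Encryption-Customer-Algorithm", "AES256")
	r.Header.Set("X-Amz-Server-Side-Encryption-Customer-Key-MD5", "base64-md5-placeholder")

	metadata := map[string][]byte{}
	saveSSECMetadata(r, metadata)
	fmt.Println(len(metadata)) // 2
}
```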
## API Compatibility
### Required Headers for Encryption (PUT/POST)
```
x-amz-server-side-encryption-customer-algorithm: AES256
x-amz-server-side-encryption-customer-key: <base64-encoded-256-bit-key>
x-amz-server-side-encryption-customer-key-md5: <base64-encoded-md5-of-key>
```
### Required Headers for Decryption (GET/HEAD)
Same headers as for encryption; the server validates that the provided key's MD5 matches the value stored with the object.
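The key check itself boils down to comparing the supplied MD5 against base64(MD5(raw key bytes)); a minimal sketch of the validation performed by `validateAndParseSSECHeaders`:
```go
package main

import (
	"crypto/md5"
	"encoding/base64"
	"errors"
	"fmt"
)

// validateKeyMD5 sketches the core check: decode the base64 key, require a
// 256-bit length, and compare the supplied MD5 case-sensitively against
// base64(md5(raw key bytes)).
func validateKeyMD5(keyBase64, keyMD5 string) ([]byte, error) {
	keyBytes, err := base64.StdEncoding.DecodeString(keyBase64)
	if err != nil || len(keyBytes) != 32 {
		return nil, errors.New("invalid encryption key")
	}
	sum := md5.Sum(keyBytes)
	if base64.StdEncoding.EncodeToString(sum[:]) != keyMD5 {
		return nil, errors.New("customer key MD5 mismatch")
	}
	return keyBytes, nil
}

func main() {
	key := make([]byte, 32)
	sum := md5.Sum(key)
	_, err := validateKeyMD5(base64.StdEncoding.EncodeToString(key),
		base64.StdEncoding.EncodeToString(sum[:]))
	fmt.Println(err) // <nil>
}
```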
### Copy Operations
Support for copy-source SSE-C headers:
```
x-amz-copy-source-server-side-encryption-customer-algorithm
x-amz-copy-source-server-side-encryption-customer-key
x-amz-copy-source-server-side-encryption-customer-key-md5
```
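These headers feed the copy-strategy decision: when the source object's stored key MD5 matches both the copy-source key and the destination key, chunks are copied directly; otherwise the copy falls back to decrypt/re-encrypt. A simplified sketch of that check (the full logic is `CanDirectCopySSEC`/`DetermineSSECCopyStrategy` in `weed/s3api/s3_sse_c.go`):
```go
package main

import "fmt"

// canDirectCopy is a simplified take on CanDirectCopySSEC: chunks can be
// copied verbatim only when the encryption state and key MD5 are unchanged.
func canDirectCopy(srcKeyMD5, copySourceKeyMD5, destKeyMD5 string) bool {
	if srcKeyMD5 == "" { // source object is not encrypted
		return copySourceKeyMD5 == "" && destKeyMD5 == ""
	}
	// Source is encrypted: both the copy-source key and the destination key
	// must match the stored MD5 exactly (base64 comparison is case-sensitive).
	return copySourceKeyMD5 == srcKeyMD5 && destKeyMD5 == srcKeyMD5
}

func main() {
	fmt.Println(canDirectCopy("abc=", "abc=", "abc=")) // true  -> fast path (direct chunk copy)
	fmt.Println(canDirectCopy("abc=", "abc=", "xyz=")) // false -> slow path (decrypt/re-encrypt)
}
```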
## Error Handling
The implementation provides AWS-compatible error responses:
- **InvalidEncryptionAlgorithmError**: Non-AES256 algorithm specified
- **InvalidArgument**: Invalid key format, size, or MD5 mismatch
- **Missing customer key**: Object encrypted but no key provided
- **Unnecessary customer key**: Object not encrypted but key provided
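Internally, validation failures are translated to AWS-style error codes via `MapSSECErrorToS3Error`; a condensed, self-contained sketch of that mapping (the error variables are stand-ins for the ones defined in `s3_sse_c.go`):
```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// Stand-in errors; the real ones live in weed/s3api/s3_sse_c.go.
var (
	errInvalidAlgorithm = errors.New("invalid encryption algorithm")
	errInvalidKey       = errors.New("invalid encryption key")
	errKeyMD5Mismatch   = errors.New("customer key MD5 mismatch")
)

// toS3Error condenses MapSSECErrorToS3Error plus the error table in
// s3err/s3api_errors.go: every SSE-C validation failure is an HTTP 400.
func toS3Error(err error) (code string, status int) {
	switch err {
	case errInvalidAlgorithm:
		return "InvalidEncryptionAlgorithmError", http.StatusBadRequest
	case errInvalidKey, errKeyMD5Mismatch:
		return "InvalidArgument", http.StatusBadRequest
	default:
		// Anything unrecognized falls back to a generic invalid-request error.
		return "InvalidRequest", http.StatusBadRequest
	}
}

func main() {
	code, status := toS3Error(errInvalidAlgorithm)
	fmt.Println(code, status) // InvalidEncryptionAlgorithmError 400
}
```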
## Security Considerations
1. **Key Management**: Customer keys are never stored - only MD5 hashes for validation
2. **IV Randomness**: Fresh random IV generated for each object
3. **Transparent Security**: Volume servers never see unencrypted data
4. **Key Validation**: Strict validation of key format, size, and MD5
## Testing
Comprehensive test suite covers:
- Header validation and parsing (regular and copy-source)
- Encryption/decryption round-trip
- Error condition handling
- Metadata extraction
- Code reuse validation (DRY principle)
- AWS S3 compatibility
Run tests with:
```bash
go test -v ./weed/s3api
```
## Usage Example
### Upload with SSE-C
```bash
# Generate a 256-bit key
KEY=$(openssl rand -base64 32)
KEY_MD5=$(echo -n "$KEY" | base64 -d | openssl dgst -md5 -binary | base64)
# Upload object with SSE-C
curl -X PUT "http://localhost:8333/bucket/object" \
-H "x-amz-server-side-encryption-customer-algorithm: AES256" \
-H "x-amz-server-side-encryption-customer-key: $KEY" \
-H "x-amz-server-side-encryption-customer-key-md5: $KEY_MD5" \
--data-binary @file.txt
```
### Download with SSE-C
```bash
# Download object with SSE-C (same key required)
curl "http://localhost:8333/bucket/object" \
-H "x-amz-server-side-encryption-customer-algorithm: AES256" \
-H "x-amz-server-side-encryption-customer-key: $KEY" \
-H "x-amz-server-side-encryption-customer-key-md5: $KEY_MD5"
```
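The same two requests in Go, using only the standard library (the endpoint, bucket, and object names are the placeholders from the curl examples; request signing is omitted for brevity):
```go
package main

import (
	"bytes"
	"crypto/md5"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Generate a random 256-bit key and its base64-encoded MD5, as in the
	// openssl example above.
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	sum := md5.Sum(key)
	keyB64 := base64.StdEncoding.EncodeToString(key)
	keyMD5 := base64.StdEncoding.EncodeToString(sum[:])

	sse := func(req *http.Request) {
		req.Header.Set("x-amz-server-side-encryption-customer-algorithm", "AES256")
		req.Header.Set("x-amz-server-side-encryption-customer-key", keyB64)
		req.Header.Set("x-amz-server-side-encryption-customer-key-md5", keyMD5)
	}

	// Upload with SSE-C.
	put, _ := http.NewRequest(http.MethodPut, "http://localhost:8333/bucket/object",
		bytes.NewReader([]byte("hello sse-c")))
	sse(put)
	putResp, err := http.DefaultClient.Do(put)
	if err != nil {
		panic(err)
	}
	putResp.Body.Close()

	// Download with SSE-C: the same key must be supplied again.
	get, _ := http.NewRequest(http.MethodGet, "http://localhost:8333/bucket/object", nil)
	sse(get)
	resp, err := http.DefaultClient.Do(get)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```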
## Integration Points
### Existing SeaweedFS Features
- **Filer Metadata**: Extends existing metadata storage
- **Volume Servers**: No changes required - store encrypted data transparently
- **S3 API**: Integrates seamlessly with existing handlers
- **Versioning**: Compatible with object versioning
- **Multipart Upload**: Ready for multipart upload integration
### Future Enhancements
- **SSE-S3**: Server-managed encryption keys
- **SSE-KMS**: External key management service integration
- **Performance Optimization**: Hardware acceleration for encryption
- **Compliance**: Enhanced audit logging for encrypted objects
## File Changes Summary
1. **`weed/s3api/s3_constants/header.go`** - Added SSE-C header constants
2. **`weed/s3api/s3_sse_c.go`** - Core SSE-C implementation (NEW)
3. **`weed/s3api/s3_sse_c_test.go`** - Comprehensive test suite (NEW)
4. **`weed/s3api/s3err/s3api_errors.go`** - Added SSE-C error codes
5. **`weed/s3api/s3api_object_handlers.go`** - GET/HEAD with SSE-C support
6. **`weed/s3api/s3api_object_handlers_put.go`** - PUT with SSE-C support
7. **`weed/server/filer_server_handlers_write_autochunk.go`** - Metadata storage
## Compliance
This implementation follows the [AWS S3 SSE-C specification](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerSideEncryptionCustomerKeys.html) for maximum compatibility with existing S3 clients and tools.
## Performance Impact
- **Encryption Overhead**: Minimal CPU impact with efficient AES-CTR streaming
- **Memory Usage**: Constant memory usage via streaming encryption/decryption
- **Storage Overhead**: 16 bytes per object for IV storage
- **Network**: No additional network overhead


@@ -64,6 +64,17 @@ const (
AmzCopySourceIfUnmodifiedSince = "X-Amz-Copy-Source-If-Unmodified-Since"
AmzMpPartsCount = "X-Amz-Mp-Parts-Count"
// S3 Server-Side Encryption with Customer-provided Keys (SSE-C)
AmzServerSideEncryptionCustomerAlgorithm = "X-Amz-Server-Side-Encryption-Customer-Algorithm"
AmzServerSideEncryptionCustomerKey = "X-Amz-Server-Side-Encryption-Customer-Key"
AmzServerSideEncryptionCustomerKeyMD5 = "X-Amz-Server-Side-Encryption-Customer-Key-MD5"
AmzServerSideEncryptionContext = "X-Amz-Server-Side-Encryption-Context"
// S3 SSE-C copy source headers
AmzCopySourceServerSideEncryptionCustomerAlgorithm = "X-Amz-Copy-Source-Server-Side-Encryption-Customer-Algorithm"
AmzCopySourceServerSideEncryptionCustomerKey = "X-Amz-Copy-Source-Server-Side-Encryption-Customer-Key"
AmzCopySourceServerSideEncryptionCustomerKeyMD5 = "X-Amz-Copy-Source-Server-Side-Encryption-Customer-Key-MD5"
)
// Non-Standard S3 HTTP request constants

weed/s3api/s3_sse_c.go Normal file

@@ -0,0 +1,275 @@
package s3api
import (
"bytes"
"crypto/aes"
"crypto/cipher"
"crypto/md5"
"crypto/rand"
"encoding/base64"
"errors"
"fmt"
"io"
"net/http"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
)
const (
// SSE-C constants
SSECustomerAlgorithmAES256 = "AES256"
SSECustomerKeySize = 32 // 256 bits
AESBlockSize = 16 // AES block size in bytes
)
// SSE-C related errors
var (
ErrInvalidRequest = errors.New("invalid request")
ErrInvalidEncryptionAlgorithm = errors.New("invalid encryption algorithm")
ErrInvalidEncryptionKey = errors.New("invalid encryption key")
ErrSSECustomerKeyMD5Mismatch = errors.New("customer key MD5 mismatch")
ErrSSECustomerKeyMissing = errors.New("customer key missing")
ErrSSECustomerKeyNotNeeded = errors.New("customer key not needed")
)
// SSECustomerKey represents a customer-provided encryption key for SSE-C
type SSECustomerKey struct {
Algorithm string
Key []byte
KeyMD5 string
}
// SSECDecryptedReader wraps an io.Reader to provide SSE-C decryption
type SSECDecryptedReader struct {
reader io.Reader
cipher cipher.Stream
customerKey *SSECustomerKey
first bool
}
// IsSSECRequest checks if the request contains SSE-C headers
func IsSSECRequest(r *http.Request) bool {
return r.Header.Get(s3_constants.AmzServerSideEncryptionCustomerAlgorithm) != ""
}
// validateAndParseSSECHeaders does the core validation and parsing logic
func validateAndParseSSECHeaders(algorithm, key, keyMD5 string) (*SSECustomerKey, error) {
if algorithm == "" && key == "" && keyMD5 == "" {
return nil, nil // No SSE-C headers
}
if algorithm == "" || key == "" || keyMD5 == "" {
return nil, ErrInvalidRequest
}
if algorithm != SSECustomerAlgorithmAES256 {
return nil, ErrInvalidEncryptionAlgorithm
}
// Decode and validate key
keyBytes, err := base64.StdEncoding.DecodeString(key)
if err != nil {
return nil, ErrInvalidEncryptionKey
}
if len(keyBytes) != SSECustomerKeySize {
return nil, ErrInvalidEncryptionKey
}
// Validate key MD5 (base64-encoded MD5 of the raw key bytes; case-sensitive)
sum := md5.Sum(keyBytes)
expectedMD5 := base64.StdEncoding.EncodeToString(sum[:])
if keyMD5 != expectedMD5 {
return nil, ErrSSECustomerKeyMD5Mismatch
}
return &SSECustomerKey{
Algorithm: algorithm,
Key: keyBytes,
KeyMD5: keyMD5,
}, nil
}
// ValidateSSECHeaders validates SSE-C headers in the request
func ValidateSSECHeaders(r *http.Request) error {
algorithm := r.Header.Get(s3_constants.AmzServerSideEncryptionCustomerAlgorithm)
key := r.Header.Get(s3_constants.AmzServerSideEncryptionCustomerKey)
keyMD5 := r.Header.Get(s3_constants.AmzServerSideEncryptionCustomerKeyMD5)
_, err := validateAndParseSSECHeaders(algorithm, key, keyMD5)
return err
}
// ParseSSECHeaders parses and validates SSE-C headers from the request
func ParseSSECHeaders(r *http.Request) (*SSECustomerKey, error) {
algorithm := r.Header.Get(s3_constants.AmzServerSideEncryptionCustomerAlgorithm)
key := r.Header.Get(s3_constants.AmzServerSideEncryptionCustomerKey)
keyMD5 := r.Header.Get(s3_constants.AmzServerSideEncryptionCustomerKeyMD5)
return validateAndParseSSECHeaders(algorithm, key, keyMD5)
}
// ParseSSECCopySourceHeaders parses and validates SSE-C copy source headers from the request
func ParseSSECCopySourceHeaders(r *http.Request) (*SSECustomerKey, error) {
algorithm := r.Header.Get(s3_constants.AmzCopySourceServerSideEncryptionCustomerAlgorithm)
key := r.Header.Get(s3_constants.AmzCopySourceServerSideEncryptionCustomerKey)
keyMD5 := r.Header.Get(s3_constants.AmzCopySourceServerSideEncryptionCustomerKeyMD5)
return validateAndParseSSECHeaders(algorithm, key, keyMD5)
}
// CreateSSECEncryptedReader creates a new encrypted reader for SSE-C
func CreateSSECEncryptedReader(r io.Reader, customerKey *SSECustomerKey) (io.Reader, error) {
if customerKey == nil {
return r, nil
}
// Create AES cipher
block, err := aes.NewCipher(customerKey.Key)
if err != nil {
return nil, fmt.Errorf("failed to create AES cipher: %v", err)
}
// Generate random IV
iv := make([]byte, AESBlockSize)
if _, err := io.ReadFull(rand.Reader, iv); err != nil {
return nil, fmt.Errorf("failed to generate IV: %v", err)
}
// Create CTR mode cipher
stream := cipher.NewCTR(block, iv)
// The encrypted stream is the IV (initialization vector) followed by the encrypted data.
// The IV is randomly generated for each encryption operation and must be unique and unpredictable.
// This is critical for the security of AES-CTR mode: reusing an IV with the same key breaks confidentiality.
// By prepending the IV to the ciphertext, the decryptor can extract the IV to initialize the cipher.
// Note: AES-CTR provides confidentiality only; use an additional MAC if integrity is required.
// We model this with an io.MultiReader (IV first) and a cipher.StreamReader (encrypted payload).
return io.MultiReader(bytes.NewReader(iv), &cipher.StreamReader{S: stream, R: r}), nil
}
// CreateSSECDecryptedReader creates a new decrypted reader for SSE-C
func CreateSSECDecryptedReader(r io.Reader, customerKey *SSECustomerKey) (io.Reader, error) {
if customerKey == nil {
return r, nil
}
return &SSECDecryptedReader{
reader: r,
customerKey: customerKey,
cipher: nil, // Will be initialized when we read the IV
first: true,
}, nil
}
// Read implements io.Reader for SSECDecryptedReader
func (r *SSECDecryptedReader) Read(p []byte) (n int, err error) {
if r.first {
// First read: extract IV and initialize cipher
r.first = false
iv := make([]byte, AESBlockSize)
// Read IV from the beginning of the data
_, err = io.ReadFull(r.reader, iv)
if err != nil {
return 0, fmt.Errorf("failed to read IV: %v", err)
}
// Create cipher with the extracted IV
block, err := aes.NewCipher(r.customerKey.Key)
if err != nil {
return 0, fmt.Errorf("failed to create AES cipher: %v", err)
}
r.cipher = cipher.NewCTR(block, iv)
}
// Decrypt data
n, err = r.reader.Read(p)
if n > 0 {
r.cipher.XORKeyStream(p[:n], p[:n])
}
return n, err
}
// GetSourceSSECInfo extracts SSE-C information from source object metadata
func GetSourceSSECInfo(metadata map[string][]byte) (algorithm string, keyMD5 string, isEncrypted bool) {
if alg, exists := metadata[s3_constants.AmzServerSideEncryptionCustomerAlgorithm]; exists {
algorithm = string(alg)
}
if md5, exists := metadata[s3_constants.AmzServerSideEncryptionCustomerKeyMD5]; exists {
keyMD5 = string(md5)
}
isEncrypted = algorithm != "" && keyMD5 != ""
return
}
// CanDirectCopySSEC determines if we can directly copy chunks without decrypt/re-encrypt
func CanDirectCopySSEC(srcMetadata map[string][]byte, copySourceKey *SSECustomerKey, destKey *SSECustomerKey) bool {
_, srcKeyMD5, srcEncrypted := GetSourceSSECInfo(srcMetadata)
// Case 1: Source unencrypted, destination unencrypted -> Direct copy
if !srcEncrypted && destKey == nil {
return true
}
// Case 2: Source encrypted, same key for decryption and destination -> Direct copy
if srcEncrypted && copySourceKey != nil && destKey != nil {
// Same key if MD5 matches exactly (base64 encoding is case-sensitive)
return copySourceKey.KeyMD5 == srcKeyMD5 &&
destKey.KeyMD5 == srcKeyMD5
}
// All other cases require decrypt/re-encrypt
return false
}
// SSECCopyStrategy represents the strategy for copying SSE-C objects
type SSECCopyStrategy int
const (
SSECCopyDirect SSECCopyStrategy = iota // Direct chunk copy (fast)
SSECCopyReencrypt // Decrypt and re-encrypt (slow)
)
// DetermineSSECCopyStrategy determines the optimal copy strategy
func DetermineSSECCopyStrategy(srcMetadata map[string][]byte, copySourceKey *SSECustomerKey, destKey *SSECustomerKey) (SSECCopyStrategy, error) {
_, srcKeyMD5, srcEncrypted := GetSourceSSECInfo(srcMetadata)
// Validate source key if source is encrypted
if srcEncrypted {
if copySourceKey == nil {
return SSECCopyReencrypt, ErrSSECustomerKeyMissing
}
if copySourceKey.KeyMD5 != srcKeyMD5 {
return SSECCopyReencrypt, ErrSSECustomerKeyMD5Mismatch
}
} else if copySourceKey != nil {
// Source not encrypted but copy source key provided
return SSECCopyReencrypt, ErrSSECustomerKeyNotNeeded
}
if CanDirectCopySSEC(srcMetadata, copySourceKey, destKey) {
return SSECCopyDirect, nil
}
return SSECCopyReencrypt, nil
}
// MapSSECErrorToS3Error maps SSE-C custom errors to S3 API error codes
func MapSSECErrorToS3Error(err error) s3err.ErrorCode {
switch err {
case ErrInvalidEncryptionAlgorithm:
return s3err.ErrInvalidEncryptionAlgorithm
case ErrInvalidEncryptionKey:
return s3err.ErrInvalidEncryptionKey
case ErrSSECustomerKeyMD5Mismatch:
return s3err.ErrSSECustomerKeyMD5Mismatch
case ErrSSECustomerKeyMissing:
return s3err.ErrSSECustomerKeyMissing
case ErrSSECustomerKeyNotNeeded:
return s3err.ErrSSECustomerKeyNotNeeded
default:
return s3err.ErrInvalidRequest
}
}


@@ -0,0 +1,63 @@
package s3api
import (
"bytes"
"crypto/md5"
"encoding/base64"
"io"
"net/http"
"net/http/httptest"
"testing"
"github.com/gorilla/mux"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
)
// ResponseRecorder that also implements http.Flusher
type recorderFlusher struct{ *httptest.ResponseRecorder }
func (r recorderFlusher) Flush() {}
// TestSSECRangeRequestsNotSupported verifies that HTTP Range requests are rejected
// for SSE-C encrypted objects because the IV is required at the beginning of the stream
func TestSSECRangeRequestsNotSupported(t *testing.T) {
// Create a request with Range header and valid SSE-C headers
req := httptest.NewRequest(http.MethodGet, "/b/o", nil)
req.Header.Set("Range", "bytes=10-20")
req.Header.Set(s3_constants.AmzServerSideEncryptionCustomerAlgorithm, "AES256")
key := make([]byte, 32)
for i := range key {
key[i] = byte(i)
}
s := md5.Sum(key)
keyMD5 := base64.StdEncoding.EncodeToString(s[:])
req.Header.Set(s3_constants.AmzServerSideEncryptionCustomerKey, base64.StdEncoding.EncodeToString(key))
req.Header.Set(s3_constants.AmzServerSideEncryptionCustomerKeyMD5, keyMD5)
// Attach mux vars to avoid panic in error writer
req = mux.SetURLVars(req, map[string]string{"bucket": "b", "object": "o"})
// Create a mock HTTP response that simulates SSE-C encrypted object metadata
proxyResponse := &http.Response{
StatusCode: 200,
Header: make(http.Header),
Body: io.NopCloser(bytes.NewReader([]byte("mock encrypted data"))),
}
proxyResponse.Header.Set(s3_constants.AmzServerSideEncryptionCustomerAlgorithm, "AES256")
proxyResponse.Header.Set(s3_constants.AmzServerSideEncryptionCustomerKeyMD5, keyMD5)
// Call the function under test
s3a := &S3ApiServer{}
rec := httptest.NewRecorder()
w := recorderFlusher{rec}
statusCode, _ := s3a.handleSSECResponse(req, proxyResponse, w)
if statusCode != http.StatusRequestedRangeNotSatisfiable {
t.Fatalf("expected status %d, got %d", http.StatusRequestedRangeNotSatisfiable, statusCode)
}
if rec.Result().StatusCode != http.StatusRequestedRangeNotSatisfiable {
t.Fatalf("writer status expected %d, got %d", http.StatusRequestedRangeNotSatisfiable, rec.Result().StatusCode)
}
}

weed/s3api/s3_sse_c_test.go Normal file

@@ -0,0 +1,412 @@
package s3api
import (
"bytes"
"crypto/md5"
"encoding/base64"
"fmt"
"io"
"net/http"
"testing"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
)
func base64MD5(b []byte) string {
s := md5.Sum(b)
return base64.StdEncoding.EncodeToString(s[:])
}
func TestSSECHeaderValidation(t *testing.T) {
// Test valid SSE-C headers
req := &http.Request{Header: make(http.Header)}
key := make([]byte, 32) // 256-bit key
for i := range key {
key[i] = byte(i)
}
keyBase64 := base64.StdEncoding.EncodeToString(key)
md5sum := md5.Sum(key)
keyMD5 := base64.StdEncoding.EncodeToString(md5sum[:])
req.Header.Set(s3_constants.AmzServerSideEncryptionCustomerAlgorithm, "AES256")
req.Header.Set(s3_constants.AmzServerSideEncryptionCustomerKey, keyBase64)
req.Header.Set(s3_constants.AmzServerSideEncryptionCustomerKeyMD5, keyMD5)
// Test validation
err := ValidateSSECHeaders(req)
if err != nil {
t.Errorf("Expected valid headers, got error: %v", err)
}
// Test parsing
customerKey, err := ParseSSECHeaders(req)
if err != nil {
t.Errorf("Expected successful parsing, got error: %v", err)
}
if customerKey == nil {
t.Error("Expected customer key, got nil")
}
if customerKey.Algorithm != "AES256" {
t.Errorf("Expected algorithm AES256, got %s", customerKey.Algorithm)
}
if !bytes.Equal(customerKey.Key, key) {
t.Error("Key doesn't match original")
}
if customerKey.KeyMD5 != keyMD5 {
t.Errorf("Expected key MD5 %s, got %s", keyMD5, customerKey.KeyMD5)
}
}
func TestSSECCopySourceHeaders(t *testing.T) {
// Test valid SSE-C copy source headers
req := &http.Request{Header: make(http.Header)}
key := make([]byte, 32) // 256-bit key
for i := range key {
key[i] = byte(i) + 1 // Different from regular test
}
keyBase64 := base64.StdEncoding.EncodeToString(key)
md5sum2 := md5.Sum(key)
keyMD5 := base64.StdEncoding.EncodeToString(md5sum2[:])
req.Header.Set(s3_constants.AmzCopySourceServerSideEncryptionCustomerAlgorithm, "AES256")
req.Header.Set(s3_constants.AmzCopySourceServerSideEncryptionCustomerKey, keyBase64)
req.Header.Set(s3_constants.AmzCopySourceServerSideEncryptionCustomerKeyMD5, keyMD5)
// Test parsing copy source headers
customerKey, err := ParseSSECCopySourceHeaders(req)
if err != nil {
t.Errorf("Expected successful copy source parsing, got error: %v", err)
}
if customerKey == nil {
t.Error("Expected customer key from copy source headers, got nil")
}
if customerKey.Algorithm != "AES256" {
t.Errorf("Expected algorithm AES256, got %s", customerKey.Algorithm)
}
if !bytes.Equal(customerKey.Key, key) {
t.Error("Copy source key doesn't match original")
}
// Test that regular headers don't interfere with copy source headers
regularKey, err := ParseSSECHeaders(req)
if err != nil {
t.Errorf("Regular header parsing should not fail: %v", err)
}
if regularKey != nil {
t.Error("Expected nil for regular headers when only copy source headers are present")
}
}
func TestSSECHeaderValidationErrors(t *testing.T) {
tests := []struct {
name string
algorithm string
key string
keyMD5 string
wantErr error
}{
{
name: "invalid algorithm",
algorithm: "AES128",
key: base64.StdEncoding.EncodeToString(make([]byte, 32)),
keyMD5: base64MD5(make([]byte, 32)),
wantErr: ErrInvalidEncryptionAlgorithm,
},
{
name: "invalid key length",
algorithm: "AES256",
key: base64.StdEncoding.EncodeToString(make([]byte, 16)),
keyMD5: base64MD5(make([]byte, 16)),
wantErr: ErrInvalidEncryptionKey,
},
{
name: "mismatched MD5",
algorithm: "AES256",
key: base64.StdEncoding.EncodeToString(make([]byte, 32)),
keyMD5: "wrong==md5",
wantErr: ErrSSECustomerKeyMD5Mismatch,
},
{
name: "incomplete headers",
algorithm: "AES256",
key: "",
keyMD5: "",
wantErr: ErrInvalidRequest,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
req := &http.Request{Header: make(http.Header)}
if tt.algorithm != "" {
req.Header.Set(s3_constants.AmzServerSideEncryptionCustomerAlgorithm, tt.algorithm)
}
if tt.key != "" {
req.Header.Set(s3_constants.AmzServerSideEncryptionCustomerKey, tt.key)
}
if tt.keyMD5 != "" {
req.Header.Set(s3_constants.AmzServerSideEncryptionCustomerKeyMD5, tt.keyMD5)
}
err := ValidateSSECHeaders(req)
if err != tt.wantErr {
t.Errorf("Expected error %v, got %v", tt.wantErr, err)
}
})
}
}
func TestSSECEncryptionDecryption(t *testing.T) {
// Create customer key
key := make([]byte, 32)
for i := range key {
key[i] = byte(i)
}
md5sumKey := md5.Sum(key)
customerKey := &SSECustomerKey{
Algorithm: "AES256",
Key: key,
KeyMD5: base64.StdEncoding.EncodeToString(md5sumKey[:]),
}
// Test data
testData := []byte("Hello, World! This is a test of SSE-C encryption.")
// Create encrypted reader
dataReader := bytes.NewReader(testData)
encryptedReader, err := CreateSSECEncryptedReader(dataReader, customerKey)
if err != nil {
t.Fatalf("Failed to create encrypted reader: %v", err)
}
// Read encrypted data
encryptedData, err := io.ReadAll(encryptedReader)
if err != nil {
t.Fatalf("Failed to read encrypted data: %v", err)
}
// Verify data is actually encrypted (different from original)
if bytes.Equal(encryptedData[16:], testData) { // Skip IV
t.Error("Data doesn't appear to be encrypted")
}
// Create decrypted reader
encryptedReader2 := bytes.NewReader(encryptedData)
decryptedReader, err := CreateSSECDecryptedReader(encryptedReader2, customerKey)
if err != nil {
t.Fatalf("Failed to create decrypted reader: %v", err)
}
// Read decrypted data
decryptedData, err := io.ReadAll(decryptedReader)
if err != nil {
t.Fatalf("Failed to read decrypted data: %v", err)
}
// Verify decrypted data matches original
if !bytes.Equal(decryptedData, testData) {
t.Errorf("Decrypted data doesn't match original.\nOriginal: %s\nDecrypted: %s", testData, decryptedData)
}
}
func TestSSECIsSSECRequest(t *testing.T) {
// Test with SSE-C headers
req := &http.Request{Header: make(http.Header)}
req.Header.Set(s3_constants.AmzServerSideEncryptionCustomerAlgorithm, "AES256")
if !IsSSECRequest(req) {
t.Error("Expected IsSSECRequest to return true when SSE-C headers are present")
}
// Test without SSE-C headers
req2 := &http.Request{Header: make(http.Header)}
if IsSSECRequest(req2) {
t.Error("Expected IsSSECRequest to return false when no SSE-C headers are present")
}
}
// Test encryption with different data sizes (similar to s3tests)
func TestSSECEncryptionVariousSizes(t *testing.T) {
sizes := []int{1, 13, 1024, 1024 * 1024} // 1B, 13B, 1KB, 1MB
for _, size := range sizes {
t.Run(fmt.Sprintf("size_%d", size), func(t *testing.T) {
// Create customer key
key := make([]byte, 32)
for i := range key {
key[i] = byte(i + size) // Make key unique per test
}
md5sumDyn := md5.Sum(key)
customerKey := &SSECustomerKey{
Algorithm: "AES256",
Key: key,
KeyMD5: base64.StdEncoding.EncodeToString(md5sumDyn[:]),
}
// Create test data of specified size
testData := make([]byte, size)
for i := range testData {
testData[i] = byte('A' + (i % 26)) // Pattern of A-Z
}
// Encrypt
dataReader := bytes.NewReader(testData)
encryptedReader, err := CreateSSECEncryptedReader(dataReader, customerKey)
if err != nil {
t.Fatalf("Failed to create encrypted reader: %v", err)
}
encryptedData, err := io.ReadAll(encryptedReader)
if err != nil {
t.Fatalf("Failed to read encrypted data: %v", err)
}
// Verify IV is present and data is encrypted
if len(encryptedData) < AESBlockSize {
t.Fatalf("Encrypted data too short, missing IV")
}
if len(encryptedData) != size+AESBlockSize {
t.Errorf("Expected encrypted data length %d, got %d", size+AESBlockSize, len(encryptedData))
}
// Decrypt
encryptedReader2 := bytes.NewReader(encryptedData)
decryptedReader, err := CreateSSECDecryptedReader(encryptedReader2, customerKey)
if err != nil {
t.Fatalf("Failed to create decrypted reader: %v", err)
}
decryptedData, err := io.ReadAll(decryptedReader)
if err != nil {
t.Fatalf("Failed to read decrypted data: %v", err)
}
// Verify decrypted data matches original
if !bytes.Equal(decryptedData, testData) {
t.Errorf("Decrypted data doesn't match original for size %d", size)
}
})
}
}
func TestSSECEncryptionWithNilKey(t *testing.T) {
testData := []byte("test data")
dataReader := bytes.NewReader(testData)
// Test encryption with nil key (should pass through)
encryptedReader, err := CreateSSECEncryptedReader(dataReader, nil)
if err != nil {
t.Fatalf("Failed to create encrypted reader with nil key: %v", err)
}
result, err := io.ReadAll(encryptedReader)
if err != nil {
t.Fatalf("Failed to read from pass-through reader: %v", err)
}
if !bytes.Equal(result, testData) {
t.Error("Data should pass through unchanged when key is nil")
}
// Test decryption with nil key (should pass through)
dataReader2 := bytes.NewReader(testData)
decryptedReader, err := CreateSSECDecryptedReader(dataReader2, nil)
if err != nil {
t.Fatalf("Failed to create decrypted reader with nil key: %v", err)
}
result2, err := io.ReadAll(decryptedReader)
if err != nil {
t.Fatalf("Failed to read from pass-through reader: %v", err)
}
if !bytes.Equal(result2, testData) {
t.Error("Data should pass through unchanged when key is nil")
}
}
// TestSSECEncryptionSmallBuffers tests the fix for the critical bug where small buffers
// could corrupt the data stream when reading in chunks smaller than the IV size
func TestSSECEncryptionSmallBuffers(t *testing.T) {
testData := []byte("This is a test message for small buffer reads")
// Create customer key
key := make([]byte, 32)
for i := range key {
key[i] = byte(i)
}
md5sumKey3 := md5.Sum(key)
customerKey := &SSECustomerKey{
Algorithm: "AES256",
Key: key,
KeyMD5: base64.StdEncoding.EncodeToString(md5sumKey3[:]),
}
// Create encrypted reader
dataReader := bytes.NewReader(testData)
encryptedReader, err := CreateSSECEncryptedReader(dataReader, customerKey)
if err != nil {
t.Fatalf("Failed to create encrypted reader: %v", err)
}
// Read with very small buffers (smaller than IV size of 16 bytes)
var encryptedData []byte
smallBuffer := make([]byte, 5) // Much smaller than 16-byte IV
for {
n, err := encryptedReader.Read(smallBuffer)
if n > 0 {
encryptedData = append(encryptedData, smallBuffer[:n]...)
}
if err == io.EOF {
break
}
if err != nil {
t.Fatalf("Error reading encrypted data: %v", err)
}
}
// Verify the encrypted data starts with 16-byte IV
if len(encryptedData) < 16 {
t.Fatalf("Encrypted data too short, expected at least 16 bytes for IV, got %d", len(encryptedData))
}
// Expected total size: 16 bytes (IV) + len(testData)
expectedSize := 16 + len(testData)
if len(encryptedData) != expectedSize {
t.Errorf("Expected encrypted data size %d, got %d", expectedSize, len(encryptedData))
}
// Decrypt and verify
encryptedReader2 := bytes.NewReader(encryptedData)
decryptedReader, err := CreateSSECDecryptedReader(encryptedReader2, customerKey)
if err != nil {
t.Fatalf("Failed to create decrypted reader: %v", err)
}
decryptedData, err := io.ReadAll(decryptedReader)
if err != nil {
t.Fatalf("Failed to read decrypted data: %v", err)
}
if !bytes.Equal(decryptedData, testData) {
t.Errorf("Decrypted data doesn't match original.\nOriginal: %s\nDecrypted: %s", testData, decryptedData)
}
}


@@ -328,7 +328,10 @@ func (s3a *S3ApiServer) GetObjectHandler(w http.ResponseWriter, r *http.Request)
destUrl = s3a.toFilerUrl(bucket, object)
}
s3a.proxyToFiler(w, r, destUrl, false, passThroughResponse)
s3a.proxyToFiler(w, r, destUrl, false, func(proxyResponse *http.Response, w http.ResponseWriter) (statusCode int, bytesTransferred int64) {
// Handle SSE-C decryption if needed
return s3a.handleSSECResponse(r, proxyResponse, w)
})
}
func (s3a *S3ApiServer) HeadObjectHandler(w http.ResponseWriter, r *http.Request) {
@@ -423,7 +426,10 @@ func (s3a *S3ApiServer) HeadObjectHandler(w http.ResponseWriter, r *http.Request
destUrl = s3a.toFilerUrl(bucket, object)
}
s3a.proxyToFiler(w, r, destUrl, false, passThroughResponse)
s3a.proxyToFiler(w, r, destUrl, false, func(proxyResponse *http.Response, w http.ResponseWriter) (statusCode int, bytesTransferred int64) {
// Handle SSE-C validation for HEAD requests
return s3a.handleSSECResponse(r, proxyResponse, w)
})
}
func (s3a *S3ApiServer) proxyToFiler(w http.ResponseWriter, r *http.Request, destUrl string, isWrite bool, responseFn func(proxyResponse *http.Response, w http.ResponseWriter) (statusCode int, bytesTransferred int64)) {
@@ -555,6 +561,29 @@ func restoreCORSHeaders(w http.ResponseWriter, capturedCORSHeaders map[string]st
}
}
// writeFinalResponse handles the common response writing logic shared between
// passThroughResponse and handleSSECResponse
func writeFinalResponse(w http.ResponseWriter, proxyResponse *http.Response, bodyReader io.Reader, capturedCORSHeaders map[string]string) (statusCode int, bytesTransferred int64) {
// Restore CORS headers that were set by middleware
restoreCORSHeaders(w, capturedCORSHeaders)
if proxyResponse.Header.Get("Content-Range") != "" && proxyResponse.StatusCode == 200 {
statusCode = http.StatusPartialContent
} else {
statusCode = proxyResponse.StatusCode
}
w.WriteHeader(statusCode)
// Stream response data
buf := mem.Allocate(128 * 1024)
defer mem.Free(buf)
bytesTransferred, err := io.CopyBuffer(w, bodyReader, buf)
if err != nil {
glog.V(1).Infof("response read %d bytes: %v", bytesTransferred, err)
}
return statusCode, bytesTransferred
}
func passThroughResponse(proxyResponse *http.Response, w http.ResponseWriter) (statusCode int, bytesTransferred int64) {
// Capture existing CORS headers that may have been set by middleware
capturedCORSHeaders := captureCORSHeaders(w, corsHeaders)
@@ -564,23 +593,100 @@ func passThroughResponse(proxyResponse *http.Response, w http.ResponseWriter) (s
w.Header()[k] = v
}
// Restore CORS headers that were set by middleware
restoreCORSHeaders(w, capturedCORSHeaders)
return writeFinalResponse(w, proxyResponse, proxyResponse.Body, capturedCORSHeaders)
}
if proxyResponse.Header.Get("Content-Range") != "" && proxyResponse.StatusCode == 200 {
w.WriteHeader(http.StatusPartialContent)
statusCode = http.StatusPartialContent
} else {
statusCode = proxyResponse.StatusCode
}
w.WriteHeader(statusCode)
buf := mem.Allocate(128 * 1024)
defer mem.Free(buf)
bytesTransferred, err := io.CopyBuffer(w, proxyResponse.Body, buf)
// handleSSECResponse handles SSE-C decryption and response processing
func (s3a *S3ApiServer) handleSSECResponse(r *http.Request, proxyResponse *http.Response, w http.ResponseWriter) (statusCode int, bytesTransferred int64) {
// Check if the object has SSE-C metadata
sseAlgorithm := proxyResponse.Header.Get(s3_constants.AmzServerSideEncryptionCustomerAlgorithm)
sseKeyMD5 := proxyResponse.Header.Get(s3_constants.AmzServerSideEncryptionCustomerKeyMD5)
isObjectEncrypted := sseAlgorithm != "" && sseKeyMD5 != ""
// Parse SSE-C headers from request once (avoid duplication)
customerKey, err := ParseSSECHeaders(r)
if err != nil {
glog.V(1).Infof("passthrough response read %d bytes: %v", bytesTransferred, err)
errCode := MapSSECErrorToS3Error(err)
s3err.WriteErrorResponse(w, r, errCode)
return http.StatusBadRequest, 0
}
if isObjectEncrypted {
// This object was encrypted with SSE-C, validate customer key
if customerKey == nil {
s3err.WriteErrorResponse(w, r, s3err.ErrSSECustomerKeyMissing)
return http.StatusBadRequest, 0
}
// SSE-C MD5 is base64 and case-sensitive
if customerKey.KeyMD5 != sseKeyMD5 {
// For GET/HEAD requests, AWS S3 returns 403 Forbidden for a key mismatch.
s3err.WriteErrorResponse(w, r, s3err.ErrAccessDenied)
return http.StatusForbidden, 0
}
// SSE-C encrypted objects do not support HTTP Range requests because the 16-byte IV
// is required at the beginning of the stream for proper decryption
if r.Header.Get("Range") != "" {
s3err.WriteErrorResponse(w, r, s3err.ErrInvalidRange)
return http.StatusRequestedRangeNotSatisfiable, 0
}
// Create decrypted reader
decryptedReader, decErr := CreateSSECDecryptedReader(proxyResponse.Body, customerKey)
if decErr != nil {
glog.Errorf("Failed to create SSE-C decrypted reader: %v", decErr)
s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
return http.StatusInternalServerError, 0
}
// Capture existing CORS headers that may have been set by middleware
capturedCORSHeaders := captureCORSHeaders(w, corsHeaders)
// Copy headers from proxy response (excluding body-related headers that might change)
for k, v := range proxyResponse.Header {
if k != "Content-Length" && k != "Content-Encoding" {
w.Header()[k] = v
}
}
// Set correct Content-Length for SSE-C (only for full object requests)
// Range requests are complex with SSE-C because the entire object needs decryption
if proxyResponse.Header.Get("Content-Range") == "" {
// Full object request: subtract 16-byte IV from encrypted length
if contentLengthStr := proxyResponse.Header.Get("Content-Length"); contentLengthStr != "" {
encryptedLength, err := strconv.ParseInt(contentLengthStr, 10, 64)
if err != nil {
glog.Errorf("Invalid Content-Length header for SSE-C object: %v", err)
s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
return http.StatusInternalServerError, 0
}
originalLength := encryptedLength - 16
if originalLength < 0 {
glog.Errorf("Encrypted object length (%d) is less than IV size (16 bytes)", encryptedLength)
s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
return http.StatusInternalServerError, 0
}
w.Header().Set("Content-Length", strconv.FormatInt(originalLength, 10))
}
}
// For range requests, let the actual bytes transferred determine the response length
// Add SSE-C response headers
w.Header().Set(s3_constants.AmzServerSideEncryptionCustomerAlgorithm, sseAlgorithm)
w.Header().Set(s3_constants.AmzServerSideEncryptionCustomerKeyMD5, sseKeyMD5)
return writeFinalResponse(w, proxyResponse, decryptedReader, capturedCORSHeaders)
} else {
// Object is not encrypted, but check if customer provided SSE-C headers unnecessarily
if customerKey != nil {
s3err.WriteErrorResponse(w, r, s3err.ErrSSECustomerKeyNotNeeded)
return http.StatusBadRequest, 0
}
// Normal pass-through response
return passThroughResponse(proxyResponse, w)
}
return statusCode, bytesTransferred
}
// addObjectLockHeadersToResponse extracts object lock metadata from entry Extended attributes


@@ -1,8 +1,10 @@
package s3api
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
@@ -160,11 +162,17 @@ func (s3a *S3ApiServer) CopyObjectHandler(w http.ResponseWriter, r *http.Request
// Just copy the entry structure without chunks for zero-size files
dstEntry.Chunks = nil
} else {
// Replicate chunks for files with content
dstChunks, err := s3a.copyChunks(entry, r.URL.Path)
// Handle SSE-C copy with smart fast/slow path selection
dstChunks, err := s3a.copyChunksWithSSEC(entry, r)
if err != nil {
glog.Errorf("CopyObjectHandler copy chunks error: %v", err)
s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
glog.Errorf("CopyObjectHandler copy chunks with SSE-C error: %v", err)
// Use shared error mapping helper
errCode := MapSSECErrorToS3Error(err)
// For copy operations, if the error is not recognized, use InternalError
if errCode == s3err.ErrInvalidRequest {
errCode = s3err.ErrInternalError
}
s3err.WriteErrorResponse(w, r, errCode)
return
}
dstEntry.Chunks = dstChunks
@@ -591,7 +599,8 @@ func processMetadataBytes(reqHeader http.Header, existing map[string][]byte, rep
// copyChunks replicates chunks from source entry to destination entry
func (s3a *S3ApiServer) copyChunks(entry *filer_pb.Entry, dstPath string) ([]*filer_pb.FileChunk, error) {
dstChunks := make([]*filer_pb.FileChunk, len(entry.GetChunks()))
executor := util.NewLimitedConcurrentExecutor(4) // Limit to 4 concurrent operations
const defaultChunkCopyConcurrency = 4
executor := util.NewLimitedConcurrentExecutor(defaultChunkCopyConcurrency) // Limit concurrent chunk copy operations
errChan := make(chan error, len(entry.GetChunks()))
for i, chunk := range entry.GetChunks() {
@@ -777,7 +786,8 @@ func (s3a *S3ApiServer) copyChunksForRange(entry *filer_pb.Entry, startOffset, e
// Copy the relevant chunks using a specialized method for range copies
dstChunks := make([]*filer_pb.FileChunk, len(relevantChunks))
executor := util.NewLimitedConcurrentExecutor(4)
const defaultChunkCopyConcurrency = 4
executor := util.NewLimitedConcurrentExecutor(defaultChunkCopyConcurrency)
errChan := make(chan error, len(relevantChunks))
// Create a map to track original chunks for each relevant chunk
@@ -997,3 +1007,136 @@ func (s3a *S3ApiServer) downloadChunkData(srcUrl string, offset, size int64) ([]
}
return chunkData, nil
}
// copyChunksWithSSEC handles SSE-C aware copying with smart fast/slow path selection
func (s3a *S3ApiServer) copyChunksWithSSEC(entry *filer_pb.Entry, r *http.Request) ([]*filer_pb.FileChunk, error) {
// Parse SSE-C headers
copySourceKey, err := ParseSSECCopySourceHeaders(r)
if err != nil {
return nil, err
}
destKey, err := ParseSSECHeaders(r)
if err != nil {
return nil, err
}
// Determine copy strategy
strategy, err := DetermineSSECCopyStrategy(entry.Extended, copySourceKey, destKey)
if err != nil {
return nil, err
}
glog.V(2).Infof("SSE-C copy strategy for %s: %v", r.URL.Path, strategy)
switch strategy {
case SSECCopyDirect:
// FAST PATH: Direct chunk copy
glog.V(2).Infof("Using fast path: direct chunk copy for %s", r.URL.Path)
return s3a.copyChunks(entry, r.URL.Path)
case SSECCopyReencrypt:
// SLOW PATH: Decrypt and re-encrypt
glog.V(2).Infof("Using slow path: decrypt/re-encrypt for %s", r.URL.Path)
return s3a.copyChunksWithReencryption(entry, copySourceKey, destKey, r.URL.Path)
default:
return nil, fmt.Errorf("unknown SSE-C copy strategy: %v", strategy)
}
}
// copyChunksWithReencryption handles the slow path: decrypt source and re-encrypt for destination
func (s3a *S3ApiServer) copyChunksWithReencryption(entry *filer_pb.Entry, copySourceKey *SSECustomerKey, destKey *SSECustomerKey, dstPath string) ([]*filer_pb.FileChunk, error) {
dstChunks := make([]*filer_pb.FileChunk, len(entry.GetChunks()))
const defaultChunkCopyConcurrency = 4
executor := util.NewLimitedConcurrentExecutor(defaultChunkCopyConcurrency) // Limit concurrent chunk copy operations
errChan := make(chan error, len(entry.GetChunks()))
for i, chunk := range entry.GetChunks() {
chunkIndex := i
executor.Execute(func() {
dstChunk, err := s3a.copyChunkWithReencryption(chunk, copySourceKey, destKey, dstPath)
if err != nil {
errChan <- fmt.Errorf("chunk %d: %v", chunkIndex, err)
return
}
dstChunks[chunkIndex] = dstChunk
errChan <- nil
})
}
// Wait for all operations to complete and check for errors
for i := 0; i < len(entry.GetChunks()); i++ {
if err := <-errChan; err != nil {
return nil, err
}
}
return dstChunks, nil
}
// copyChunkWithReencryption copies a single chunk with decrypt/re-encrypt
func (s3a *S3ApiServer) copyChunkWithReencryption(chunk *filer_pb.FileChunk, copySourceKey *SSECustomerKey, destKey *SSECustomerKey, dstPath string) (*filer_pb.FileChunk, error) {
// Create destination chunk
dstChunk := s3a.createDestinationChunk(chunk, chunk.Offset, chunk.Size)
// Prepare chunk copy (assign new volume and get source URL)
assignResult, srcUrl, err := s3a.prepareChunkCopy(chunk.GetFileIdString(), dstPath)
if err != nil {
return nil, err
}
// Set file ID on destination chunk
if err := s3a.setChunkFileId(dstChunk, assignResult); err != nil {
return nil, err
}
// Download encrypted chunk data
encryptedData, err := s3a.downloadChunkData(srcUrl, 0, int64(chunk.Size))
if err != nil {
return nil, fmt.Errorf("download encrypted chunk data: %w", err)
}
var finalData []byte
// Decrypt if source is encrypted
if copySourceKey != nil {
decryptedReader, decErr := CreateSSECDecryptedReader(bytes.NewReader(encryptedData), copySourceKey)
if decErr != nil {
return nil, fmt.Errorf("create decrypted reader: %w", decErr)
}
decryptedData, readErr := io.ReadAll(decryptedReader)
if readErr != nil {
return nil, fmt.Errorf("decrypt chunk data: %w", readErr)
}
finalData = decryptedData
} else {
// Source is unencrypted
finalData = encryptedData
}
// Re-encrypt if destination should be encrypted
if destKey != nil {
encryptedReader, encErr := CreateSSECEncryptedReader(bytes.NewReader(finalData), destKey)
if encErr != nil {
return nil, fmt.Errorf("create encrypted reader: %w", encErr)
}
reencryptedData, readErr := io.ReadAll(encryptedReader)
if readErr != nil {
return nil, fmt.Errorf("re-encrypt chunk data: %w", readErr)
}
finalData = reencryptedData
// Update chunk size to include IV
dstChunk.Size = uint64(len(finalData))
}
// Upload the processed data
if err := s3a.uploadChunkData(finalData, assignResult); err != nil {
return nil, fmt.Errorf("upload processed chunk data: %w", err)
}
return dstChunk, nil
}


@@ -190,6 +190,25 @@ func (s3a *S3ApiServer) PutObjectHandler(w http.ResponseWriter, r *http.Request)
func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader io.Reader, destination string, bucket string) (etag string, code s3err.ErrorCode) {
// Handle SSE-C encryption if requested
customerKey, err := ParseSSECHeaders(r)
if err != nil {
glog.Errorf("SSE-C header validation failed: %v", err)
// Use shared error mapping helper
errCode := MapSSECErrorToS3Error(err)
return "", errCode
}
// Apply SSE-C encryption if customer key is provided
if customerKey != nil {
encryptedReader, encErr := CreateSSECEncryptedReader(dataReader, customerKey)
if encErr != nil {
glog.Errorf("Failed to create SSE-C encrypted reader: %v", encErr)
return "", s3err.ErrInternalError
}
dataReader = encryptedReader
}
hash := md5.New()
var body = io.TeeReader(dataReader, hash)


@@ -116,6 +116,13 @@ const (
ErrInvalidRetentionPeriod
ErrObjectLockConfigurationNotFoundError
ErrInvalidUnorderedWithDelimiter
// SSE-C related errors
ErrInvalidEncryptionAlgorithm
ErrInvalidEncryptionKey
ErrSSECustomerKeyMD5Mismatch
ErrSSECustomerKeyMissing
ErrSSECustomerKeyNotNeeded
)
// Error message constants for checksum validation
@@ -471,6 +478,33 @@ var errorCodeResponse = map[ErrorCode]APIError{
Description: "Unordered listing cannot be used with delimiter",
HTTPStatusCode: http.StatusBadRequest,
},
// SSE-C related error mappings
ErrInvalidEncryptionAlgorithm: {
Code: "InvalidEncryptionAlgorithmError",
Description: "The encryption algorithm specified is not valid.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrInvalidEncryptionKey: {
Code: "InvalidArgument",
Description: "Invalid encryption key. Encryption key must be 256-bit AES256.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrSSECustomerKeyMD5Mismatch: {
Code: "InvalidArgument",
Description: "The provided customer encryption key MD5 does not match the key.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrSSECustomerKeyMissing: {
Code: "InvalidArgument",
Description: "Requests specifying Server Side Encryption with Customer provided keys must provide the customer key.",
HTTPStatusCode: http.StatusBadRequest,
},
ErrSSECustomerKeyNotNeeded: {
Code: "InvalidArgument",
Description: "The object was not encrypted with customer provided keys.",
HTTPStatusCode: http.StatusBadRequest,
},
}
// GetAPIError provides API Error for input API error code.


@@ -488,6 +488,15 @@ func SaveAmzMetaData(r *http.Request, existing map[string][]byte, isReplace bool
}
}
// Handle SSE-C headers
if algorithm := r.Header.Get(s3_constants.AmzServerSideEncryptionCustomerAlgorithm); algorithm != "" {
metadata[s3_constants.AmzServerSideEncryptionCustomerAlgorithm] = []byte(algorithm)
}
if keyMD5 := r.Header.Get(s3_constants.AmzServerSideEncryptionCustomerKeyMD5); keyMD5 != "" {
// Store as-is; SSE-C MD5 is base64 and case-sensitive
metadata[s3_constants.AmzServerSideEncryptionCustomerKeyMD5] = []byte(keyMD5)
}
//acp-owner
acpOwner := r.Header.Get(s3_constants.ExtAmzOwnerKey)
if len(acpOwner) > 0 {