seaweedfs/weed/s3api/s3_list_parts_action_test.go

Commit c5a9c27449 by Chris Lu (2025-10-08): Migrate from deprecated azure-storage-blob-go to modern Azure SDK (#7310)
* Migrate from deprecated azure-storage-blob-go to modern Azure SDK

Migrates Azure Blob Storage integration from the deprecated
github.com/Azure/azure-storage-blob-go to the modern
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob SDK.

## Changes

### Removed Files
- weed/remote_storage/azure/azure_highlevel.go
  - Custom upload helper no longer needed with new SDK

### Updated Files
- weed/remote_storage/azure/azure_storage_client.go
  - Migrated from ServiceURL/ContainerURL/BlobURL to Client-based API
  - Updated client creation using NewClientWithSharedKeyCredential
  - Replaced ListBlobsFlatSegment with NewListBlobsFlatPager
  - Updated Download to DownloadStream with proper HTTPRange
  - Replaced custom uploadReaderAtToBlockBlob with UploadStream
  - Updated GetProperties, SetMetadata, Delete to use new client methods
  - Fixed metadata conversion to return map[string]*string

- weed/replication/sink/azuresink/azure_sink.go
  - Migrated from ContainerURL to Client-based API
  - Updated client initialization
  - Replaced AppendBlobURL with AppendBlobClient
  - Updated error handling to use azcore.ResponseError
  - Added streaming.NopCloser for AppendBlock

### New Test Files
- weed/remote_storage/azure/azure_storage_client_test.go
  - Comprehensive unit tests for all client operations
  - Tests for Traverse, ReadFile, WriteFile, UpdateMetadata, Delete
  - Tests for metadata conversion function
  - Benchmark tests
  - Integration tests (skippable without credentials)

- weed/replication/sink/azuresink/azure_sink_test.go
  - Unit tests for Azure sink operations
  - Tests for CreateEntry, UpdateEntry, DeleteEntry
  - Tests for cleanKey function
  - Tests for configuration-based initialization
  - Integration tests (skippable without credentials)
  - Benchmark tests

### Dependency Updates
- go.mod: Removed github.com/Azure/azure-storage-blob-go v0.15.0
- go.mod: Made github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.2 direct dependency
- All deprecated dependencies automatically cleaned up

## API Migration Summary

Old SDK → New SDK mappings:
- ServiceURL → Client (service-level operations)
- ContainerURL → ContainerClient
- BlobURL → BlobClient
- BlockBlobURL → BlockBlobClient
- AppendBlobURL → AppendBlobClient
- ListBlobsFlatSegment() → NewListBlobsFlatPager()
- Download() → DownloadStream()
- Upload() → UploadStream()
- Marker-based pagination → Pager-based pagination
- azblob.ResponseError → azcore.ResponseError
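
Of these mappings, the pager is the biggest structural change: the marker loop around ListBlobsFlatSegment() becomes a Pager drained with More()/NextPage(). A minimal sketch against the new SDK (function and variable names here are illustrative, not the actual SeaweedFS code):

```go
package main

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

// listAllBlobs walks every page of a flat listing; the pager replaces the
// old marker-based ListBlobsFlatSegment loop.
func listAllBlobs(ctx context.Context, client *azblob.Client, container string) error {
	pager := client.NewListBlobsFlatPager(container, nil)
	for pager.More() {
		page, err := pager.NextPage(ctx)
		if err != nil {
			return err
		}
		for _, item := range page.Segment.BlobItems {
			fmt.Println(*item.Name)
		}
	}
	return nil
}
```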

## Testing

All tests pass:
- Unit tests for metadata conversion
- Unit tests for helper functions (cleanKey)
- Interface implementation tests
- Build successful
- No compilation errors
- Integration tests available (require Azure credentials)

## Benefits

- Uses actively maintained SDK
- Better performance with modern API design
- Improved error handling
- Removes ~200 lines of custom upload code
- Reduces dependency count
- Better async/streaming support
- Future-proof against SDK deprecation

## Backward Compatibility

The changes are transparent to users:
- Same configuration parameters (account name, account key)
- Same functionality and behavior
- No changes to SeaweedFS API or user-facing features
- Existing Azure storage configurations continue to work

## Breaking Changes

None - this is an internal implementation change only.

* Address Gemini Code Assist review comments

Fixed three issues identified by Gemini Code Assist:

1. HIGH: ReadFile now uses blob.CountToEnd when size is 0
   - Old SDK: size=0 meant "read to end"
   - New SDK: size=0 means "read 0 bytes"
   - Fix: Use the blob.CountToEnd sentinel to read the entire blob from the offset

2. MEDIUM: Use to.Ptr() instead of slice trick for DeleteSnapshots
   - Replaced &[]Type{value}[0] with to.Ptr(value)
   - Cleaner, more idiomatic Azure SDK pattern
   - Applied to both azure_storage_client.go and azure_sink.go

3. Added missing imports:
   - github.com/Azure/azure-sdk-for-go/sdk/azcore/to

These changes improve code clarity and correctness while following
Azure SDK best practices.
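
A minimal sketch of both fixes, assuming a service-level azblob.Client; readRange and deleteWithSnapshots are illustrative names, not the actual SeaweedFS helpers:

```go
package main

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
)

// readRange downloads [offset, offset+size) from a blob; size == 0 keeps the
// old SDK's "read to end" semantics via the CountToEnd sentinel.
func readRange(ctx context.Context, client *azblob.Client, container, key string, offset, size int64) error {
	count := size
	if count == 0 {
		count = blob.CountToEnd // sentinel: read from offset to the end of the blob
	}
	resp, err := client.DownloadStream(ctx, container, key, &azblob.DownloadStreamOptions{
		Range: blob.HTTPRange{Offset: offset, Count: count},
	})
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// ... consume resp.Body ...
	return nil
}

// deleteWithSnapshots shows the to.Ptr pattern that replaced &[]T{v}[0].
func deleteWithSnapshots(ctx context.Context, client *azblob.Client, container, key string) error {
	_, err := client.DeleteBlob(ctx, container, key, &azblob.DeleteBlobOptions{
		DeleteSnapshots: to.Ptr(blob.DeleteSnapshotsOptionTypeInclude),
	})
	return err
}
```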

* Address second round of Gemini Code Assist review comments

Fixed all issues identified in the second review:

1. MEDIUM: Added constants for hardcoded values
   - Defined defaultBlockSize (4 MB) and defaultConcurrency (16)
   - Applied to WriteFile UploadStream options
   - Improves maintainability and readability

2. MEDIUM: Made DeleteFile idempotent
   - Now returns nil (no error) if blob doesn't exist
   - Uses bloberror.HasCode(err, bloberror.BlobNotFound)
   - Consistent with idempotent operation expectations

3. Fixed TestToMetadata test failures
   - Test was using lowercase 'x-amz-meta-' but constant is 'X-Amz-Meta-'
   - Updated test to use s3_constants.AmzUserMetaPrefix
   - All tests now pass

Changes:
- Added import: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/bloberror
- Added constants: defaultBlockSize, defaultConcurrency
- Updated WriteFile to use constants
- Updated DeleteFile to be idempotent
- Fixed test to use correct S3 metadata prefix constant

All tests pass. Build succeeds. Code follows Azure SDK best practices.
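
A sketch of both changes under the same assumptions (deleteFile and uploadOpts are illustrative names; the constants carry the values listed above):

```go
package main

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/bloberror"
)

const (
	defaultBlockSize   = 4 * 1024 * 1024 // 4 MB per staged block
	defaultConcurrency = 16              // parallel block uploads
)

// uploadOpts shows the constants applied to UploadStream options.
var uploadOpts = &azblob.UploadStreamOptions{
	BlockSize:   defaultBlockSize,
	Concurrency: defaultConcurrency,
}

// deleteFile is idempotent: a missing blob is treated as success.
func deleteFile(ctx context.Context, client *azblob.Client, container, key string) error {
	if _, err := client.DeleteBlob(ctx, container, key, nil); err != nil &&
		!bloberror.HasCode(err, bloberror.BlobNotFound) {
		return fmt.Errorf("azure delete %s/%s: %w", container, key, err)
	}
	return nil
}
```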

* Address third round of Gemini Code Assist review comments

Fixed all issues identified in the third review:

1. MEDIUM: Use bloberror.HasCode for ContainerAlreadyExists
   - Replaced fragile string check with bloberror.HasCode()
   - More robust and aligned with Azure SDK best practices
   - Applied to CreateBucket test

2. MEDIUM: Use bloberror.HasCode for BlobNotFound in test
   - Replaced generic error check with specific BlobNotFound check
   - Makes test more precise and verifies correct error returned
   - Applied to VerifyDeleted test

3. MEDIUM: Made DeleteEntry idempotent in azure_sink.go
   - Now returns nil (no error) if blob doesn't exist
   - Uses bloberror.HasCode(err, bloberror.BlobNotFound)
   - Consistent with DeleteFile implementation
   - Makes replication sink more robust to retries

Changes:
- Added import to azure_storage_client_test.go: bloberror
- Added import to azure_sink.go: bloberror
- Updated CreateBucket test to use bloberror.HasCode
- Updated VerifyDeleted test to use bloberror.HasCode
- Updated DeleteEntry to be idempotent

All tests pass. Build succeeds. Code uses Azure SDK best practices.
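
Hedged sketches of the test-side pattern, with invented helper names (ensureContainer, verifyDeleted):

```go
package main

import (
	"context"
	"testing"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/bloberror"
)

// ensureContainer tolerates a pre-existing container instead of
// string-matching the error message.
func ensureContainer(ctx context.Context, t *testing.T, client *azblob.Client, name string) {
	t.Helper()
	if _, err := client.CreateContainer(ctx, name, nil); err != nil &&
		!bloberror.HasCode(err, bloberror.ContainerAlreadyExists) {
		t.Fatalf("create container %q: %v", name, err)
	}
}

// verifyDeleted asserts the precise BlobNotFound code rather than any error.
func verifyDeleted(ctx context.Context, t *testing.T, client *azblob.Client, container, key string) {
	t.Helper()
	resp, err := client.DownloadStream(ctx, container, key, nil)
	if err == nil {
		resp.Body.Close()
		t.Fatalf("expected %s/%s to be deleted", container, key)
	}
	if !bloberror.HasCode(err, bloberror.BlobNotFound) {
		t.Errorf("expected BlobNotFound for %s/%s, got %v", container, key, err)
	}
}
```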

* Address fourth round of Gemini Code Assist review comments

Fixed two critical issues identified in the fourth review:

1. HIGH: Handle BlobAlreadyExists in append blob creation
   - Problem: If append blob already exists, Create() fails causing replication failure
   - Fix: Added bloberror.HasCode(err, bloberror.BlobAlreadyExists) check
   - Behavior: Existing append blobs are now acceptable, appends can proceed
   - Impact: Makes replication sink more robust, prevents unnecessary failures
   - Location: azure_sink.go CreateEntry function

2. MEDIUM: Configure custom retry policy for download resiliency
   - Problem: Old SDK had MaxRetryRequests: 20, new SDK defaults to 3 retries
   - Fix: Configured policy.RetryOptions with MaxRetries: 10
   - Settings: TryTimeout=1min, RetryDelay=2s, MaxRetryDelay=1min
   - Impact: Maintains similar resiliency in unreliable network conditions
   - Location: azure_storage_client.go client initialization

Changes:
- Added import: github.com/Azure/azure-sdk-for-go/sdk/azcore/policy
- Updated NewClientWithSharedKeyCredential to include ClientOptions with retry policy
- Updated CreateEntry error handling to allow BlobAlreadyExists

Technical details:
- Retry policy uses exponential backoff (default SDK behavior)
- MaxRetries=10 provides good balance (was 20 in old SDK, default is 3)
- TryTimeout prevents individual requests from hanging indefinitely
- BlobAlreadyExists handling allows idempotent append operations

All tests pass. Build succeeds. Code is more resilient and robust.
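
A combined sketch of the retry configuration and the append-blob handling; newClient and createAppendBlob are illustrative names, and the service URL format is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/appendblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/bloberror"
)

// newClient wires the retry settings described above into the shared-key client.
func newClient(accountName, accountKey string) (*azblob.Client, error) {
	cred, err := azblob.NewSharedKeyCredential(accountName, accountKey)
	if err != nil {
		return nil, fmt.Errorf("invalid credential: %w", err)
	}
	serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", accountName)
	return azblob.NewClientWithSharedKeyCredential(serviceURL, cred, &azblob.ClientOptions{
		ClientOptions: azcore.ClientOptions{
			Retry: policy.RetryOptions{
				MaxRetries:    10,              // old SDK used 20, new SDK defaults to 3
				TryTimeout:    time.Minute,     // cap each individual attempt
				RetryDelay:    2 * time.Second, // initial backoff
				MaxRetryDelay: time.Minute,     // backoff ceiling
			},
		},
	})
}

// createAppendBlob treats an existing append blob as usable, not as a failure.
func createAppendBlob(ctx context.Context, client *appendblob.Client) error {
	if _, err := client.Create(ctx, nil); err != nil &&
		!bloberror.HasCode(err, bloberror.BlobAlreadyExists) {
		return fmt.Errorf("create append blob: %w", err)
	}
	return nil
}
```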

* Update weed/replication/sink/azuresink/azure_sink.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Revert "Update weed/replication/sink/azuresink/azure_sink.go"

This reverts commit 605e41cadf.

* Address fifth round of Gemini Code Assist review comment

Added retry policy to azure_sink.go for consistency and resiliency:

1. MEDIUM: Configure retry policy in azure_sink.go client
   - Problem: azure_sink.go was using default retry policy (3 retries) while
     azure_storage_client.go had custom policy (10 retries)
   - Fix: Added same retry policy configuration for consistency
   - Settings: MaxRetries=10, TryTimeout=1min, RetryDelay=2s, MaxRetryDelay=1min
   - Impact: Replication sink now has same resiliency as storage client
   - Rationale: Replication sink needs to be robust against transient network errors

Changes:
- Added import: github.com/Azure/azure-sdk-for-go/sdk/azcore/policy
- Updated NewClientWithSharedKeyCredential call in initialize() function
- Both azure_storage_client.go and azure_sink.go now have identical retry policies

Benefits:
- Consistency: Both Azure clients now use same retry configuration
- Resiliency: Replication operations more robust to network issues
- Best practices: Follows Azure SDK recommended patterns for production use

All tests pass. Build succeeds. Code is consistent and production-ready.

* fmt

* Address sixth round of Gemini Code Assist review comment

Fixed HIGH priority metadata key validation for Azure compliance:

1. HIGH: Handle metadata keys starting with digits
   - Problem: Azure Blob Storage requires metadata keys to be valid C# identifiers
   - Constraint: C# identifiers cannot start with a digit (0-9)
   - Issue: S3 metadata like 'x-amz-meta-123key' would fail with InvalidInput error
   - Fix: Prefix keys starting with digits with underscore '_'
   - Example: '123key' becomes '_123key', '456-test' becomes '_456_test'

2. Code improvement: Use strings.ReplaceAll for better readability
   - Changed from: strings.Replace(str, "-", "_", -1)
   - Changed to: strings.ReplaceAll(str, "-", "_")
   - Both are functionally equivalent, ReplaceAll is more readable

Changes:
- Updated toMetadata() function in azure_storage_client.go
- Added digit prefix check: if key[0] >= '0' && key[0] <= '9'
- Added comprehensive test case 'keys starting with digits'
- Tests cover: '123key' -> '_123key', '456-test' -> '_456_test', '789' -> '_789'

Technical details:
- Azure SDK validates metadata keys as C# identifiers
- C# identifier rules: must start with letter or underscore
- Digits allowed in identifiers but not as first character
- This prevents SetMetadata() and UploadStream() failures

All tests pass including new test case. Build succeeds.
Code is now fully compliant with Azure metadata requirements.
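
A small sketch of this round's key handling; sanitizeKey is an illustrative name for logic that lives inside toMetadata():

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeKey mirrors the sixth-round logic: dashes become underscores, and a
// key that would start with a digit gets a leading underscore so it remains a
// valid C# identifier.
func sanitizeKey(key string) string {
	key = strings.ReplaceAll(key, "-", "_")
	if len(key) > 0 && key[0] >= '0' && key[0] <= '9' {
		key = "_" + key
	}
	return key
}

func main() {
	fmt.Println(sanitizeKey("123key"))   // _123key
	fmt.Println(sanitizeKey("456-test")) // _456_test
}
```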

* Address seventh round of Gemini Code Assist review comment

Normalize metadata keys to lowercase for S3 compatibility:

1. MEDIUM: Convert metadata keys to lowercase
   - Rationale: S3 specification stores user-defined metadata keys in lowercase
   - Consistency: Azure Blob Storage metadata is case-insensitive
   - Best practice: Normalizing to lowercase ensures consistent behavior
   - Example: 'x-amz-meta-My-Key' -> 'my_key' (not 'My_Key')

Changes:
- Updated toMetadata() to apply strings.ToLower() to keys
- Added comment explaining S3 lowercase normalization
- Order of operations: strip prefix -> lowercase -> replace dashes -> check digits

Test coverage:
- Added new test case 'uppercase and mixed case keys'
- Tests: 'My-Key' -> 'my_key', 'UPPERCASE' -> 'uppercase', 'MiXeD-CaSe' -> 'mixed_case'
- All 6 test cases pass

Benefits:
- S3 compatibility: Matches S3 metadata key behavior
- Azure consistency: Case-insensitive keys work predictably
- Cross-platform: Same metadata keys work identically on both S3 and Azure
- Prevents issues: No surprises from case-sensitive key handling

Implementation:
```go
key := strings.ReplaceAll(strings.ToLower(k[len(s3_constants.AmzUserMetaPrefix):]), "-", "_")
```

All tests pass. Build succeeds. Metadata handling is now fully S3-compatible.

* Address eighth round of Gemini Code Assist review comments

Use %w instead of %v for error wrapping across both files:

1. MEDIUM: Error wrapping in azure_storage_client.go
   - Problem: Using %v in fmt.Errorf loses error type information
   - Modern Go practice: Use %w to preserve error chains
   - Benefit: Enables errors.Is() and errors.As() for callers
   - Example: Can check for bloberror.BlobNotFound after wrapping

2. MEDIUM: Error wrapping in azure_sink.go
   - Applied same improvement for consistency
   - All error wrapping now preserves underlying errors
   - Improved debugging and error handling capabilities

Changes applied to all fmt.Errorf calls:
- azure_storage_client.go: 10 instances changed from %v to %w
  - Invalid credential error
  - Client creation error
  - Traverse errors
  - Download errors (2)
  - Upload error
  - Delete error
  - Create/Delete bucket errors (2)

- azure_sink.go: 4 instances changed from %v to %w
  - Credential creation error
  - Client creation error
  - Delete entry error
  - Create append blob error

Benefits:
- Error inspection: Callers can use errors.Is(err, target)
- Error unwrapping: Callers can use errors.As(err, &target)
- Type preservation: Original error types maintained through wraps
- Better debugging: Full error chain available for inspection
- Modern Go: Follows Go 1.13+ error wrapping best practices

Example usage after this change:
```go
err := client.ReadFile(...)
if bloberror.HasCode(err, bloberror.BlobNotFound) {
    // HasCode unwraps the %w chain (via errors.As), so specific
    // Azure error codes remain detectable even after wrapping
}
```

All tests pass. Build succeeds. Error handling is now modern and robust.

* Address ninth round of Gemini Code Assist review comment

Improve metadata key sanitization with comprehensive character validation:

1. MEDIUM: Complete Azure C# identifier validation
   - Problem: Previous implementation only handled dashes, not all invalid chars
   - Issue: Keys like 'my.key', 'key+plus', 'key@symbol' would cause InvalidMetadata
   - Azure requirement: Metadata keys must be valid C# identifiers
   - Valid characters: letters (a-z, A-Z), digits (0-9), underscore (_) only

2. Implemented robust regex-based sanitization
   - Added package-level regex: `[^a-zA-Z0-9_]`
   - Matches ANY character that's not alphanumeric or underscore
   - Replaces all invalid characters with underscore
   - Compiled once at package init for performance

Implementation details:
- Regex declared at package level: var invalidMetadataChars = regexp.MustCompile(`[^a-zA-Z0-9_]`)
- Avoids recompiling regex on every toMetadata() call
- Efficient single-pass replacement of all invalid characters
- Processing order: lowercase -> regex replace -> digit check

Examples of character transformations:
- Dots: 'my.key' -> 'my_key'
- Plus: 'key+plus' -> 'key_plus'
- At symbol: 'key@symbol' -> 'key_symbol'
- Mixed: 'key-with.' -> 'key_with_'
- Slash: 'key/slash' -> 'key_slash'
- Combined: '123-key.value+test' -> '_123_key_value_test'

Test coverage:
- Added comprehensive test case 'keys with invalid characters'
- Tests: dot, plus, at-symbol, dash+dot, slash
- All 7 test cases pass (was 6, now 7)

Benefits:
- Complete Azure compliance: Handles ALL invalid characters
- Robust: Works with any S3 metadata key format
- Performant: Regex compiled once, reused efficiently
- Maintainable: Single source of truth for valid characters
- Prevents errors: No more InvalidMetadata errors during upload

All tests pass. Build succeeds. Metadata sanitization is now bulletproof.
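
A sketch of the ninth-round behavior, before the hex-encoding change introduced in the next commit (sanitizeKey is again an illustrative stand-in for the logic inside toMetadata()):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Compiled once at package init, as described above.
var invalidMetadataChars = regexp.MustCompile(`[^a-zA-Z0-9_]`)

// sanitizeKey lowercases the key, replaces every invalid character with "_",
// then guards against a leading digit.
func sanitizeKey(key string) string {
	key = invalidMetadataChars.ReplaceAllString(strings.ToLower(key), "_")
	if len(key) > 0 && key[0] >= '0' && key[0] <= '9' {
		key = "_" + key
	}
	return key
}

func main() {
	fmt.Println(sanitizeKey("my.key"))             // my_key
	fmt.Println(sanitizeKey("123-key.value+test")) // _123_key_value_test
}
```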

* Address tenth round review - HIGH: Fix metadata key collision issue

Prevent metadata loss by using hex encoding for invalid characters:

1. HIGH PRIORITY: Metadata key collision prevention
   - Critical Issue: Different S3 keys mapping to same Azure key causes data loss
   - Example collisions (BEFORE):
     * 'my-key' -> 'my_key'
     * 'my.key' -> 'my_key'   COLLISION! The second overwrites the first
     * 'my_key' -> 'my_key'   All three map to the same key!

   - Fixed with hex encoding (AFTER):
     * 'my-key' -> 'my_2d_key' (dash = 0x2d)
     * 'my.key' -> 'my_2e_key' (dot = 0x2e)
     * 'my_key' -> 'my_key'    (underscore is valid)
     * All three are now unique!

2. Implemented collision-proof hex encoding
   - Pattern: Invalid chars -> _XX_ where XX is hex code
   - Dash (0x2d): 'content-type' -> 'content_2d_type'
   - Dot (0x2e): 'my.key' -> 'my_2e_key'
   - Plus (0x2b): 'key+plus' -> 'key_2b_plus'
   - At (0x40): 'key@symbol' -> 'key_40_symbol'
   - Slash (0x2f): 'key/slash' -> 'key_2f_slash'

3. Created sanitizeMetadataKey() function
   - Encapsulates hex encoding logic
   - Uses ReplaceAllStringFunc for efficient transformation
   - Maintains digit prefix check for Azure C# identifier rules
   - Clear documentation with examples

Implementation details:
```go
// Package-level regex, compiled once (introduced in the previous commit):
var invalidMetadataChars = regexp.MustCompile(`[^a-zA-Z0-9_]`)

func sanitizeMetadataKey(key string) string {
    // Replace each invalid character with _XX_ where XX is the hex code
    result := invalidMetadataChars.ReplaceAllStringFunc(key, func(s string) string {
        return fmt.Sprintf("_%02x_", s[0])
    })

    // Azure metadata keys cannot start with a digit
    if len(result) > 0 && result[0] >= '0' && result[0] <= '9' {
        result = "_" + result
    }

    return result
}
```

Why hex encoding solves the collision problem:
- Each invalid character gets unique hex representation
- Two-digit hex ensures no confusion (always _XX_ format)
- Preserves all information from original key
- Reversible (though not needed for this use case)
- Azure-compliant (hex codes don't introduce new invalid chars)

Test coverage:
- Updated all test expectations to match hex encoding
- Added 'collision prevention' test case demonstrating uniqueness:
  * Tests my-key, my.key, my_key all produce different results
  * Proves metadata from different S3 keys won't collide
- Total test cases: 8 (was 7, added collision prevention)

Examples from tests:
- 'content-type' -> 'content_2d_type' (0x2d = dash)
- '456-test' -> '_456_2d_test' (digit prefix + dash)
- 'My-Key' -> 'my_2d_key' (lowercase + hex encode dash)
- 'key-with.' -> 'key_2d_with_2e_' (two invalid chars: dash and trailing dot)

Benefits:
- Zero collision risk: Every unique S3 key -> unique Azure key
- Data integrity: No metadata loss from overwrites
- Complete info preservation: Original key distinguishable
- Azure compliant: Hex-encoded keys are valid C# identifiers
- Maintainable: Clean function with clear purpose
- Testable: Collision prevention explicitly tested

All tests pass. Build succeeds. Metadata integrity is now guaranteed.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

package s3api

import (
	"net/http"
	"net/url"
	"testing"

	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
	"github.com/stretchr/testify/assert"
)

// TestListPartsActionMapping tests the fix for the missing s3:ListParts action mapping
// when GET requests include an uploadId query parameter
func TestListPartsActionMapping(t *testing.T) {
	testCases := []struct {
		name           string
		method         string
		bucket         string
		objectKey      string
		queryParams    map[string]string
		fallbackAction Action
		expectedAction string
		description    string
	}{
		{
			name:           "get_object_without_uploadId",
			method:         "GET",
			bucket:         "test-bucket",
			objectKey:      "test-object.txt",
			queryParams:    map[string]string{},
			fallbackAction: s3_constants.ACTION_READ,
			expectedAction: "s3:GetObject",
			description:    "GET request without uploadId should map to s3:GetObject",
		},
		{
			name:           "get_object_with_uploadId",
			method:         "GET",
			bucket:         "test-bucket",
			objectKey:      "test-object.txt",
			queryParams:    map[string]string{"uploadId": "test-upload-id"},
			fallbackAction: s3_constants.ACTION_READ,
			expectedAction: "s3:ListParts",
			description:    "GET request with uploadId should map to s3:ListParts (this was the missing mapping)",
		},
		{
			name:      "get_object_with_uploadId_and_other_params",
			method:    "GET",
			bucket:    "test-bucket",
			objectKey: "test-object.txt",
			queryParams: map[string]string{
				"uploadId":           "test-upload-id-123",
				"max-parts":          "100",
				"part-number-marker": "50",
			},
			fallbackAction: s3_constants.ACTION_READ,
			expectedAction: "s3:ListParts",
			description:    "GET request with uploadId plus other multipart params should map to s3:ListParts",
		},
		{
			name:           "get_object_versions",
			method:         "GET",
			bucket:         "test-bucket",
			objectKey:      "test-object.txt",
			queryParams:    map[string]string{"versions": ""},
			fallbackAction: s3_constants.ACTION_READ,
			expectedAction: "s3:GetObjectVersion",
			description:    "GET request with versions should still map to s3:GetObjectVersion (precedence check)",
		},
		{
			name:           "get_object_acl_without_uploadId",
			method:         "GET",
			bucket:         "test-bucket",
			objectKey:      "test-object.txt",
			queryParams:    map[string]string{"acl": ""},
			fallbackAction: s3_constants.ACTION_READ_ACP,
			expectedAction: "s3:GetObjectAcl",
			description:    "GET request with acl should map to s3:GetObjectAcl (not affected by uploadId fix)",
		},
		{
			name:           "post_multipart_upload_without_uploadId",
			method:         "POST",
			bucket:         "test-bucket",
			objectKey:      "test-object.txt",
			queryParams:    map[string]string{"uploads": ""},
			fallbackAction: s3_constants.ACTION_WRITE,
			expectedAction: "s3:CreateMultipartUpload",
			description:    "POST request to initiate multipart upload should not be affected by uploadId fix",
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			// Create HTTP request with query parameters
			req := &http.Request{
				Method: tc.method,
				URL:    &url.URL{Path: "/" + tc.bucket + "/" + tc.objectKey},
			}

			// Add query parameters
			query := req.URL.Query()
			for key, value := range tc.queryParams {
				query.Set(key, value)
			}
			req.URL.RawQuery = query.Encode()

			// Call the granular action determination function
			action := determineGranularS3Action(req, tc.fallbackAction, tc.bucket, tc.objectKey)

			// Verify the action mapping
			assert.Equal(t, tc.expectedAction, action,
				"Test case: %s - %s", tc.name, tc.description)
		})
	}
}

// TestListPartsActionMappingSecurityScenarios tests security scenarios for the ListParts fix
func TestListPartsActionMappingSecurityScenarios(t *testing.T) {
	t.Run("privilege_separation_listparts_vs_getobject", func(t *testing.T) {
		// Scenario: User has permission to list multipart upload parts but NOT to get the actual object content
		// This is a common enterprise pattern where users can manage uploads but not read final objects

		// Test request 1: List parts with uploadId
		req1 := &http.Request{
			Method: "GET",
			URL:    &url.URL{Path: "/secure-bucket/confidential-document.pdf"},
		}
		query1 := req1.URL.Query()
		query1.Set("uploadId", "active-upload-123")
		req1.URL.RawQuery = query1.Encode()

		action1 := determineGranularS3Action(req1, s3_constants.ACTION_READ, "secure-bucket", "confidential-document.pdf")

		// Test request 2: Get object without uploadId
		req2 := &http.Request{
			Method: "GET",
			URL:    &url.URL{Path: "/secure-bucket/confidential-document.pdf"},
		}

		action2 := determineGranularS3Action(req2, s3_constants.ACTION_READ, "secure-bucket", "confidential-document.pdf")

		// These should be different actions, allowing different permissions
		assert.Equal(t, "s3:ListParts", action1, "Listing multipart parts should require s3:ListParts permission")
		assert.Equal(t, "s3:GetObject", action2, "Reading object content should require s3:GetObject permission")
		assert.NotEqual(t, action1, action2, "ListParts and GetObject should be separate permissions for security")
	})

	t.Run("policy_enforcement_precision", func(t *testing.T) {
		// This test documents the security improvement - before the fix, both operations
		// would incorrectly map to s3:GetObject, preventing fine-grained access control
		testCases := []struct {
			description    string
			queryParams    map[string]string
			expectedAction string
			securityNote   string
		}{
			{
				description:    "List multipart upload parts",
				queryParams:    map[string]string{"uploadId": "upload-abc123"},
				expectedAction: "s3:ListParts",
				securityNote:   "FIXED: Now correctly maps to s3:ListParts instead of s3:GetObject",
			},
			{
				description:    "Get actual object content",
				queryParams:    map[string]string{},
				expectedAction: "s3:GetObject",
				securityNote:   "UNCHANGED: Still correctly maps to s3:GetObject",
			},
			{
				description:    "Get object with complex upload ID",
				queryParams:    map[string]string{"uploadId": "complex-upload-id-with-hyphens-123-abc-def"},
				expectedAction: "s3:ListParts",
				securityNote:   "FIXED: Complex upload IDs now correctly detected",
			},
		}

		for _, tc := range testCases {
			req := &http.Request{
				Method: "GET",
				URL:    &url.URL{Path: "/test-bucket/test-object"},
			}
			query := req.URL.Query()
			for key, value := range tc.queryParams {
				query.Set(key, value)
			}
			req.URL.RawQuery = query.Encode()

			action := determineGranularS3Action(req, s3_constants.ACTION_READ, "test-bucket", "test-object")
			assert.Equal(t, tc.expectedAction, action,
				"%s - %s", tc.description, tc.securityNote)
		}
	})
}

// TestListPartsActionRealWorldScenarios tests realistic enterprise multipart upload scenarios
func TestListPartsActionRealWorldScenarios(t *testing.T) {
	t.Run("large_file_upload_workflow", func(t *testing.T) {
		// Simulate a large file upload workflow where users need different permissions for each step

		// Step 1: Initiate multipart upload (POST with uploads query)
		req1 := &http.Request{
			Method: "POST",
			URL:    &url.URL{Path: "/data/large-dataset.csv"},
		}
		query1 := req1.URL.Query()
		query1.Set("uploads", "")
		req1.URL.RawQuery = query1.Encode()
		action1 := determineGranularS3Action(req1, s3_constants.ACTION_WRITE, "data", "large-dataset.csv")

		// Step 2: List existing parts (GET with uploadId query) - THIS WAS THE MISSING MAPPING
		req2 := &http.Request{
			Method: "GET",
			URL:    &url.URL{Path: "/data/large-dataset.csv"},
		}
		query2 := req2.URL.Query()
		query2.Set("uploadId", "dataset-upload-20240827-001")
		req2.URL.RawQuery = query2.Encode()
		action2 := determineGranularS3Action(req2, s3_constants.ACTION_READ, "data", "large-dataset.csv")

		// Step 3: Upload a part (PUT with uploadId and partNumber)
		req3 := &http.Request{
			Method: "PUT",
			URL:    &url.URL{Path: "/data/large-dataset.csv"},
		}
		query3 := req3.URL.Query()
		query3.Set("uploadId", "dataset-upload-20240827-001")
		query3.Set("partNumber", "5")
		req3.URL.RawQuery = query3.Encode()
		action3 := determineGranularS3Action(req3, s3_constants.ACTION_WRITE, "data", "large-dataset.csv")

		// Step 4: Complete multipart upload (POST with uploadId)
		req4 := &http.Request{
			Method: "POST",
			URL:    &url.URL{Path: "/data/large-dataset.csv"},
		}
		query4 := req4.URL.Query()
		query4.Set("uploadId", "dataset-upload-20240827-001")
		req4.URL.RawQuery = query4.Encode()
		action4 := determineGranularS3Action(req4, s3_constants.ACTION_WRITE, "data", "large-dataset.csv")

		// Verify each step has the correct action mapping
		assert.Equal(t, "s3:CreateMultipartUpload", action1, "Step 1: Initiate upload")
		assert.Equal(t, "s3:ListParts", action2, "Step 2: List parts (FIXED by this PR)")
		assert.Equal(t, "s3:UploadPart", action3, "Step 3: Upload part")
		assert.Equal(t, "s3:CompleteMultipartUpload", action4, "Step 4: Complete upload")

		// Verify that each step requires different permissions (security principle)
		actions := []string{action1, action2, action3, action4}
		for i, action := range actions {
			for j, otherAction := range actions {
				if i != j {
					assert.NotEqual(t, action, otherAction,
						"Each multipart operation step should require different permissions for fine-grained control")
				}
			}
		}
	})

	t.Run("edge_case_upload_ids", func(t *testing.T) {
		// Test various upload ID formats to ensure the fix works with real AWS-compatible upload IDs
		testUploadIds := []string{
			"simple123",
			"complex-upload-id-with-hyphens",
			"upload_with_underscores_123",
			"2VmVGvGhqM0sXnVeBjMNCqtRvr.ygGz0pWPLKAj.YW3zK7VmpFHYuLKVR8OOXnHEhP3WfwlwLKMYJxoHgkGYYv",
			"very-long-upload-id-that-might-be-generated-by-aws-s3-or-compatible-services-abcd1234",
			"uploadId-with.dots.and-dashes_and_underscores123",
		}

		for _, uploadId := range testUploadIds {
			req := &http.Request{
				Method: "GET",
				URL:    &url.URL{Path: "/test-bucket/test-file.bin"},
			}
			query := req.URL.Query()
			query.Set("uploadId", uploadId)
			req.URL.RawQuery = query.Encode()

			action := determineGranularS3Action(req, s3_constants.ACTION_READ, "test-bucket", "test-file.bin")
			assert.Equal(t, "s3:ListParts", action,
				"Upload ID format %s should be correctly detected and mapped to s3:ListParts", uploadId)
		}
	})
}