seaweedfs/weed/s3api/auth_credentials.go
Chris Lu b7b73016dd S3 API: Add SSE-KMS (#7144)
* implement sse-c

* fix Content-Range

* adding tests

* Update s3_sse_c_test.go

* copy sse-c objects

* adding tests

* refactor

* multi reader

* remove extra write header call

* refactor

* SSE-C encrypted objects do not support HTTP Range requests

* robust

* fix server starts

* Update Makefile

* Update Makefile

* ci: remove SSE-C integration tests and workflows; delete test/s3/encryption/

* s3: SSE-C MD5 must be base64 (case-sensitive); fix validation, comparisons, metadata storage; update tests

* minor

* base64

* Update SSE-C_IMPLEMENTATION.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update weed/s3api/s3api_object_handlers.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update SSE-C_IMPLEMENTATION.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* address comments

* fix test

* fix compilation

* Bucket Default Encryption

To complete the SSE-KMS implementation for production use:
Add AWS KMS Provider - Implement weed/kms/aws/aws_kms.go using AWS SDK
Integrate with S3 Handlers - Update PUT/GET object handlers to use SSE-KMS
Add Multipart Upload Support - Extend SSE-KMS to multipart uploads
Configuration Integration - Add KMS configuration to filer.toml
Documentation - Update SeaweedFS wiki with SSE-KMS usage examples
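
For reference, the configuration hook that landed in this file (see initializeKMSFromConfig below) reads an optional "kms" section from the S3 JSON configuration rather than filer.toml, and currently only accepts the "local" provider type. A rough sketch of what such a config could look like; the surrounding fields and any schema beyond "type" are assumptions:

package sketch

// exampleS3ConfigWithKMS sketches the "kms" section consumed by
// initializeKMSFromConfig; "type" defaults to "local" when omitted, and any
// other provider type is currently rejected. The identities/accounts fields
// that normally live alongside it are elided.
const exampleS3ConfigWithKMS = `{
  "identities": [],
  "kms": {
    "type": "local"
  }
}`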

* store bucket sse config in proto

* add more tests

* Update SSE-C_IMPLEMENTATION.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Fix rebase errors and restore structured BucketMetadata API

Merge Conflict Fixes:
- Fixed merge conflicts in header.go (SSE-C and SSE-KMS headers)
- Fixed merge conflicts in s3api_errors.go (SSE-C and SSE-KMS error codes)
- Fixed merge conflicts in s3_sse_c.go (copy strategy constants)
- Fixed merge conflicts in s3api_object_handlers_copy.go (copy strategy usage)

API Restoration:
- Restored BucketMetadata struct with Tags, CORS, and Encryption fields
- Restored structured API functions: GetBucketMetadata, SetBucketMetadata, UpdateBucketMetadata
- Restored helper functions: UpdateBucketTags, UpdateBucketCORS, UpdateBucketEncryption
- Restored clear functions: ClearBucketTags, ClearBucketCORS, ClearBucketEncryption

Handler Updates:
- Updated GetBucketTaggingHandler to use GetBucketMetadata() directly
- Updated PutBucketTaggingHandler to use UpdateBucketTags()
- Updated DeleteBucketTaggingHandler to use ClearBucketTags()
- Updated CORS handlers to use UpdateBucketCORS() and ClearBucketCORS()
- Updated loadCORSFromBucketContent to use GetBucketMetadata()

Internal Function Updates:
- Updated getBucketMetadata() to return *BucketMetadata struct
- Updated setBucketMetadata() to accept *BucketMetadata struct
- Updated getBucketEncryptionMetadata() to use GetBucketMetadata()
- Updated setBucketEncryptionMetadata() to use SetBucketMetadata()

Benefits:
- Resolved all rebase conflicts while preserving both SSE-C and SSE-KMS functionality
- Maintained consistent structured API throughout the codebase
- Eliminated intermediate wrapper functions for cleaner code
- Proper error handling with better granularity
- All tests passing and build successful

The bucket metadata system now uses a unified, type-safe, structured API
that supports tags, CORS, and encryption configuration consistently.
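
As a rough sketch of the structured API described above (field types, the store interface, and exact signatures are assumptions; only the names come from these notes):

package sketch

type CORSConfiguration struct{ /* CORS rules elided */ }
type EncryptionConfiguration struct{ /* SSE algorithm and KMS key ID elided */ }

// BucketMetadata is the unified, typed view of per-bucket configuration.
type BucketMetadata struct {
	Tags       map[string]string
	CORS       *CORSConfiguration
	Encryption *EncryptionConfiguration
}

// metadataStore stands in for the filer-backed storage used by the S3 API server.
type metadataStore interface {
	GetBucketMetadata(bucket string) (*BucketMetadata, error)
	SetBucketMetadata(bucket string, m *BucketMetadata) error
}

// UpdateBucketMetadata is a read-modify-write helper: the callback mutates one
// part of the metadata (tags, CORS, or encryption) without clobbering the rest.
func UpdateBucketMetadata(store metadataStore, bucket string, update func(*BucketMetadata) error) error {
	m, err := store.GetBucketMetadata(bucket)
	if err != nil {
		return err
	}
	if err := update(m); err != nil {
		return err
	}
	return store.SetBucketMetadata(bucket, m)
}

// UpdateBucketTags is one of the thin helpers layered on top of it.
func UpdateBucketTags(store metadataStore, bucket string, tags map[string]string) error {
	return UpdateBucketMetadata(store, bucket, func(m *BucketMetadata) error {
		m.Tags = tags
		return nil
	})
}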

* Fix updateEncryptionConfiguration for first-time bucket encryption setup

- Change getBucketEncryptionMetadata to getBucketMetadata to avoid failures when no encryption config exists
- Change setBucketEncryptionMetadata to setBucketMetadataWithEncryption for consistency
- This fixes the critical issue where bucket encryption configuration failed for buckets without existing encryption

Fixes: https://github.com/seaweedfs/seaweedfs/pull/7144#discussion_r2285669572

* Fix rebase conflicts and maintain structured BucketMetadata API

Resolved Conflicts:
- Fixed merge conflicts in s3api_bucket_config.go between structured API (HEAD) and old intermediate functions
- Kept modern structured API approach: UpdateBucketCORS, ClearBucketCORS, UpdateBucketEncryption
- Removed old intermediate functions: setBucketTags, deleteBucketTags, setBucketMetadataWithEncryption

API Consistency Maintained:
- updateCORSConfiguration: Uses UpdateBucketCORS() directly
- removeCORSConfiguration: Uses ClearBucketCORS() directly
- updateEncryptionConfiguration: Uses UpdateBucketEncryption() directly
- All structured API functions preserved: GetBucketMetadata, SetBucketMetadata, UpdateBucketMetadata

Benefits:
- Maintains clean separation between API layers
- Preserves atomic metadata updates with proper error handling
- Eliminates function indirection for better performance
- Consistent API usage pattern throughout codebase
- All tests passing and build successful

The bucket metadata system continues to use the unified, type-safe, structured API
that properly handles tags, CORS, and encryption configuration without any
intermediate wrapper functions.

* Fix complex rebase conflicts and maintain clean structured BucketMetadata API

Resolved Complex Conflicts:
- Fixed merge conflicts between modern structured API (HEAD) and mixed approach
- Removed duplicate function declarations that caused compilation errors
- Consistently chose structured API approach over intermediate functions

Fixed Functions:
- BucketMetadata struct: Maintained clean field alignment
- loadCORSFromBucketContent: Uses GetBucketMetadata() directly
- updateCORSConfiguration: Uses UpdateBucketCORS() directly
- removeCORSConfiguration: Uses ClearBucketCORS() directly
- getBucketMetadata: Returns *BucketMetadata struct consistently
- setBucketMetadata: Accepts *BucketMetadata struct consistently

Removed Duplicates:
- Eliminated duplicate GetBucketMetadata implementations
- Eliminated duplicate SetBucketMetadata implementations
- Eliminated duplicate UpdateBucketMetadata implementations
- Eliminated duplicate helper functions (UpdateBucketTags, etc.)

API Consistency Achieved:
- Single, unified BucketMetadata struct for all operations
- Atomic updates through UpdateBucketMetadata with function callbacks
- Type-safe operations with proper error handling
- No intermediate wrapper functions cluttering the API

Benefits:
- Clean, maintainable codebase with no function duplication
- Consistent structured API usage throughout all bucket operations
- Proper error handling and type safety
- Build successful and all tests passing

The bucket metadata system now has a completely clean, structured API
without any conflicts, duplicates, or inconsistencies.

* Update remaining functions to use new structured BucketMetadata APIs directly

Updated functions to follow the pattern established in bucket config:
- getEncryptionConfiguration() -> Uses GetBucketMetadata() directly
- removeEncryptionConfiguration() -> Uses ClearBucketEncryption() directly

Benefits:
- Consistent API usage pattern across all bucket metadata operations
- Simpler, more readable code that leverages the structured API
- Eliminates calls to intermediate legacy functions
- Better error handling and logging consistency
- All tests pass with improved functionality

This completes the transition to using the new structured BucketMetadata API
throughout the entire bucket configuration and encryption subsystem.

* Fix GitHub PR #7144 code review comments

Address all code review comments from Gemini Code Assist bot:

1. **High Priority - SSE-KMS Key Validation**: Fixed ValidateSSEKMSKey to allow empty KMS key ID
   - Empty key ID now indicates use of default KMS key (consistent with AWS behavior)
   - Updated ParseSSEKMSHeaders to call validation after parsing
   - Enhanced isValidKMSKeyID to reject keys with spaces and invalid characters

2. **Medium Priority - KMS Registry Error Handling**: Improved error collection in CloseAll
   - Now collects all provider close errors instead of only returning the last one
   - Uses proper error formatting with %w verb for error wrapping
   - Returns single error for one failure, combined message for multiple failures

3. **Medium Priority - Local KMS Aliases Consistency**: Fixed alias handling in CreateKey
   - Now updates the aliases slice in-place to maintain consistency
   - Ensures both p.keys map and key.Aliases slice use the same prefixed format

All changes maintain backward compatibility and improve error handling robustness.
Tests updated and passing for all scenarios including edge cases.

* Use errors.Join for KMS registry error handling

Replace manual string building with the more idiomatic errors.Join function:

- Removed manual error message concatenation with strings.Builder
- Simplified error handling logic by using errors.Join(allErrors...)
- Removed unnecessary string import
- Added errors import for errors.Join

This approach is cleaner, more idiomatic, and automatically handles:
- Returning nil for empty error slice
- Returning single error for one-element slice
- Properly formatting multiple errors with newlines

The errors.Join function was introduced in Go 1.20 and is the
recommended way to combine multiple errors.
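
Roughly, the resulting pattern looks like this (a sketch only; the provider interface and registry layout are stand-ins, not the actual registry.go types):

package sketch

import (
	"errors"
	"fmt"
)

// closer stands in for the KMS provider interface; only Close matters here.
type closer interface {
	Close() error
}

// CloseAll closes every registered provider, collecting all failures instead of
// keeping only the last one, then combines them with errors.Join (Go 1.20+),
// which returns nil for an empty slice and keeps each error usable with errors.Is/As.
func CloseAll(providers map[string]closer) error {
	var allErrors []error
	for name, p := range providers {
		if err := p.Close(); err != nil {
			allErrors = append(allErrors, fmt.Errorf("close provider %q: %w", name, err))
		}
	}
	return errors.Join(allErrors...)
}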

* Update registry.go

* Fix GitHub PR #7144 latest review comments

Address all new code review comments from Gemini Code Assist bot:

1. **High Priority - SSE-KMS Detection Logic**: Tightened IsSSEKMSEncrypted function
   - Now relies only on the canonical x-amz-server-side-encryption header
   - Removed redundant check for x-amz-encrypted-data-key metadata
   - Prevents misinterpretation of objects with inconsistent metadata state
   - Updated test case to reflect correct behavior (encrypted data key only = false)

2. **Medium Priority - UUID Validation**: Enhanced KMS key ID validation
   - Replaced simplistic length/hyphen count check with proper regex validation
   - Added regexp import for robust UUID format checking
   - Regex pattern: ^[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}$
   - Prevents invalid formats like '------------------------------------' from passing

3. **Medium Priority - Alias Mutation Fix**: Avoided input slice modification
   - Changed CreateKey to not mutate the input aliases slice in-place
   - Uses local variable for modified alias to prevent side effects
   - Maintains backward compatibility while being safer for callers

All changes improve code robustness and follow AWS S3 standards more closely.
Tests updated and passing for all scenarios including edge cases.
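
A minimal sketch of the UUID check described in item 2 (the real isValidKMSKeyID also accepts ARNs, aliases, and an empty ID for the default key, and its exact body may differ):

package sketch

import "regexp"

// uuidRegex mirrors the pattern quoted above: five hex groups of 8-4-4-4-12.
var uuidRegex = regexp.MustCompile(`^[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}$`)

// isUUIDKeyID reports whether a bare KMS key ID looks like a UUID, so inputs
// such as "------------------------------------" no longer pass a naive
// length/hyphen-count check.
func isUUIDKeyID(keyID string) bool {
	return uuidRegex.MatchString(keyID)
}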

* Fix failing SSE tests

Address two failing test cases:

1. **TestSSEHeaderConflicts**: Fixed SSE-C and SSE-KMS mutual exclusion
   - Modified IsSSECRequest to return false if SSE-KMS headers are present
   - Modified IsSSEKMSRequest to return false if SSE-C headers are present
   - This prevents both detection functions from returning true simultaneously
   - Aligns with AWS S3 behavior where SSE-C and SSE-KMS are mutually exclusive

2. **TestBucketEncryptionEdgeCases**: Fixed XML namespace validation
   - Added namespace validation in encryptionConfigFromXMLBytes function
   - Now rejects XML with invalid namespaces (only allows empty or AWS standard namespace)
   - Validates XMLName.Space to ensure proper XML structure
   - Prevents acceptance of malformed XML with incorrect namespaces

Both fixes improve compliance with AWS S3 standards and prevent invalid
configurations from being accepted. All SSE and bucket encryption tests
now pass successfully.
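
A sketch of the mutual exclusion in item 1, using the standard AWS header names; the actual IsSSECRequest/IsSSEKMSRequest helpers may differ in detail:

package sketch

import "net/http"

func hasSSECHeaders(r *http.Request) bool {
	return r.Header.Get("x-amz-server-side-encryption-customer-algorithm") != ""
}

func hasSSEKMSHeader(r *http.Request) bool {
	return r.Header.Get("x-amz-server-side-encryption") == "aws:kms"
}

// isSSECRequest and isSSEKMSRequest can never both return true for the same request.
func isSSECRequest(r *http.Request) bool   { return hasSSECHeaders(r) && !hasSSEKMSHeader(r) }
func isSSEKMSRequest(r *http.Request) bool { return hasSSEKMSHeader(r) && !hasSSECHeaders(r) }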

* Fix GitHub PR #7144 latest review comments

Address two new code review comments from Gemini Code Assist bot:

1. **High Priority - Race Condition in UpdateBucketMetadata**: Fixed thread safety issue
   - Added per-bucket locking mechanism to prevent race conditions
   - Introduced bucketMetadataLocks map with RWMutex for each bucket
   - Added getBucketMetadataLock helper with double-checked locking pattern
   - UpdateBucketMetadata now uses bucket-specific locks to serialize metadata updates
   - Prevents last-writer-wins scenarios when concurrent requests update different metadata parts

2. **Medium Priority - KMS Key ARN Validation**: Improved robustness of ARN validation
   - Enhanced isValidKMSKeyID function to strictly validate ARN structure
   - Changed from 'len(parts) >= 6' to 'len(parts) != 6' for exact part count
   - Added proper resource validation for key/ and alias/ prefixes
   - Prevents malformed ARNs with incorrect structure from being accepted
   - Now validates: arn:aws:kms:region:account:key/keyid or arn:aws:kms:region:account:alias/aliasname

Both fixes improve system reliability and prevent edge cases that could cause
data corruption or security issues. All existing tests continue to pass.
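
A sketch of the stricter ARN structure check in item 2 (the real isValidKMSKeyID also accepts bare key IDs and aliases, and may validate more fields):

package sketch

import "strings"

// isValidKMSKeyARN requires exactly six colon-separated parts and a resource
// beginning with "key/" or "alias/", e.g. arn:aws:kms:us-east-1:123456789012:key/<uuid>.
func isValidKMSKeyARN(arn string) bool {
	parts := strings.Split(arn, ":")
	if len(parts) != 6 || parts[0] != "arn" || parts[2] != "kms" {
		return false
	}
	resource := parts[5]
	return (strings.HasPrefix(resource, "key/") && len(resource) > len("key/")) ||
		(strings.HasPrefix(resource, "alias/") && len(resource) > len("alias/"))
}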

* format

* address comments

* Configuration Adapter

* Regex Optimization

* Caching Integration

* add negative cache for non-existent buckets

* remove bucketMetadataLocks

* address comments

* address comments

* copying objects with sse-kms

* copying strategy

* store IV in entry metadata

* implement compression reader

* extract json map as sse kms context

* bucket key

* comments

* rotate sse chunks

* KMS Data Keys use AES-GCM + nonce
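
A minimal sketch of the primitive this bullet refers to: AES-GCM sealing with a fresh random nonce that is prepended to the ciphertext. How SeaweedFS lays out the resulting envelope around the data key is not spelled out here, so names and layout are assumptions:

package sketch

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// gcmSeal encrypts plaintext (e.g. a KMS data key, or data under it) with
// AES-GCM using a random nonce; the nonce is prepended to the output so the
// decrypting side can recover it.
func gcmSeal(key, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return nil, fmt.Errorf("new cipher: %w", err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, fmt.Errorf("new gcm: %w", err)
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, fmt.Errorf("nonce: %w", err)
	}
	// Seal appends ciphertext and auth tag after the nonce.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}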

* add comments

* Update weed/s3api/s3_sse_kms.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update s3api_object_handlers_put.go

* get IV from response header

* set sse headers

* Update s3api_object_handlers.go

* deterministic JSON marshaling

* store iv in entry metadata

* address comments

* not used

* store iv in destination metadata

ensures that SSE-C copy operations with re-encryption (decrypt/re-encrypt scenario) now properly store the destination encryption metadata

* add todo

* address comments

* SSE-S3 Deserialization

* add BucketKMSCache to BucketConfig

* fix test compilation

* already not empty

* use constants

* fix: critical metadata (encrypted data keys, encryption context, etc.) was never stored during PUT/copy operations

* address comments

* fix tests

* Fix SSE-KMS Copy Re-encryption

* Cache now persists across requests

* fix test

* iv in metadata only

* SSE-KMS copy operations should follow the same pattern as SSE-C

* fix size overhead calculation

* Filer-Side SSE Metadata Processing

* SSE Integration Tests

* fix tests

* clean up

* Update s3_sse_multipart_test.go

* add s3 sse tests

* unused

* add logs

* Update Makefile

* Update Makefile

* s3 health check

* The tests were failing because they tried to run both SSE-C and SSE-KMS tests

* Update weed/s3api/s3_sse_c.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update Makefile

* add back

* Update Makefile

* address comments

* fix tests

* Update s3-sse-tests.yml

* Update s3-sse-tests.yml

* fix sse-kms for PUT operation

* IV

* Update auth_credentials.go

* fix multipart with kms

* constants

* multipart sse kms

Modified handleSSEKMSResponse to detect multipart SSE-KMS objects
Added createMultipartSSEKMSDecryptedReader to handle each chunk independently
Each chunk now gets its own decrypted reader before combining into the final stream

* validate key id

* add SSEType

* permissive kms key format

* Update s3_sse_kms_test.go

* format

* assert equal

* uploading SSE-KMS metadata per chunk

* persist sse type and metadata

* avoid re-chunk multipart uploads

* decryption process to use stored PartOffset values

* constants

* sse-c multipart upload

* Unified Multipart SSE Copy

* purge

* fix fatalf

* avoid io.MultiReader which does not close underlying readers
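
One way to get concatenation plus proper cleanup is a small reader that walks the chunk readers sequentially and closes all of them; this is only a sketch of the idea, not the helper actually added in this PR:

package sketch

import "io"

// multiReadCloser concatenates several ReadClosers (for example, one decrypted
// reader per SSE chunk) and, unlike io.MultiReader, closes every underlying reader.
type multiReadCloser struct {
	readers []io.ReadCloser
	idx     int
}

func newMultiReadCloser(readers ...io.ReadCloser) io.ReadCloser {
	return &multiReadCloser{readers: readers}
}

func (m *multiReadCloser) Read(p []byte) (int, error) {
	for m.idx < len(m.readers) {
		n, err := m.readers[m.idx].Read(p)
		if err == io.EOF {
			m.idx++ // current chunk exhausted, move to the next one
			if n > 0 {
				return n, nil
			}
			continue
		}
		return n, err
	}
	return 0, io.EOF
}

func (m *multiReadCloser) Close() error {
	var firstErr error
	for _, r := range m.readers {
		if err := r.Close(); err != nil && firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}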

* unified cross-encryption

* fix Single-object SSE-C

* adjust constants

* range read sse files

* remove debug logs

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-21 08:28:07 -07:00

package s3api

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"slices"
	"strings"
	"sync"

	"github.com/seaweedfs/seaweedfs/weed/credential"
	"github.com/seaweedfs/seaweedfs/weed/filer"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/kms"
	"github.com/seaweedfs/seaweedfs/weed/kms/local"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/pb/iam_pb"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
	"github.com/seaweedfs/seaweedfs/weed/util"

	"google.golang.org/grpc"
)

type Action string

type Iam interface {
	Check(f http.HandlerFunc, actions ...Action) http.HandlerFunc
}

type IdentityAccessManagement struct {
	m sync.RWMutex

	identities        []*Identity
	accessKeyIdent    map[string]*Identity
	accounts          map[string]*Account
	emailAccount      map[string]*Account
	hashes            map[string]*sync.Pool
	hashCounters      map[string]*int32
	identityAnonymous *Identity
	hashMu            sync.RWMutex
	domain            string
	isAuthEnabled     bool
	credentialManager *credential.CredentialManager
	filerClient       filer_pb.SeaweedFilerClient
	grpcDialOption    grpc.DialOption
}

type Identity struct {
	Name        string
	Account     *Account
	Credentials []*Credential
	Actions     []Action
}

// Account represents a system user. A system user can configure multiple
// IAM-Users, each IAM-User can be granted its own permissions, and each
// IAM-User can configure multiple security credentials.
type Account struct {
	// Name is also used to display the "DisplayName" as the owner of the bucket or object
	DisplayName  string
	EmailAddress string

	// Id is used to identify an Account when granting cross-account access (ACLs) to buckets and objects
	Id string
}

// Predefined Accounts
var (
	// AccountAdmin is used as the default account for IAM-Credentials access without Account configured
	AccountAdmin = Account{
		DisplayName:  "admin",
		EmailAddress: "admin@example.com",
		Id:           s3_constants.AccountAdminId,
	}

	// AccountAnonymous is used to represent the account for anonymous access
	AccountAnonymous = Account{
		DisplayName:  "anonymous",
		EmailAddress: "anonymous@example.com",
		Id:           s3_constants.AccountAnonymousId,
	}
)

type Credential struct {
	AccessKey string
	SecretKey string
}

// "Permission": "FULL_CONTROL"|"WRITE"|"WRITE_ACP"|"READ"|"READ_ACP"
func (action Action) getPermission() Permission {
	switch act := strings.Split(string(action), ":")[0]; act {
	case s3_constants.ACTION_ADMIN:
		return Permission("FULL_CONTROL")
	case s3_constants.ACTION_WRITE:
		return Permission("WRITE")
	case s3_constants.ACTION_WRITE_ACP:
		return Permission("WRITE_ACP")
	case s3_constants.ACTION_READ:
		return Permission("READ")
	case s3_constants.ACTION_READ_ACP:
		return Permission("READ_ACP")
	default:
		return Permission("")
	}
}

func NewIdentityAccessManagement(option *S3ApiServerOption) *IdentityAccessManagement {
	return NewIdentityAccessManagementWithStore(option, "")
}

func NewIdentityAccessManagementWithStore(option *S3ApiServerOption, explicitStore string) *IdentityAccessManagement {
	iam := &IdentityAccessManagement{
		domain:       option.DomainName,
		hashes:       make(map[string]*sync.Pool),
		hashCounters: make(map[string]*int32),
	}

	// Always initialize credential manager with fallback to defaults
	credentialManager, err := credential.NewCredentialManagerWithDefaults(credential.CredentialStoreTypeName(explicitStore))
	if err != nil {
		glog.Fatalf("failed to initialize credential manager: %v", err)
	}

	// For stores that need filer client details, set them
	if store := credentialManager.GetStore(); store != nil {
		if filerClientSetter, ok := store.(interface {
			SetFilerClient(string, grpc.DialOption)
		}); ok {
			filerClientSetter.SetFilerClient(string(option.Filer), option.GrpcDialOption)
		}
	}

	iam.credentialManager = credentialManager

	// Track whether any configuration was successfully loaded
	configLoaded := false

	// First, try to load configurations from file or filer
	if option.Config != "" {
		glog.V(3).Infof("loading static config file %s", option.Config)
		if err := iam.loadS3ApiConfigurationFromFile(option.Config); err != nil {
			glog.Fatalf("fail to load config file %s: %v", option.Config, err)
		}
		configLoaded = true
	} else {
		glog.V(3).Infof("no static config file specified... loading config from credential manager")
		if err := iam.loadS3ApiConfigurationFromFiler(option); err != nil {
			glog.Warningf("fail to load config: %v", err)
		} else {
			// Check if any identities were actually loaded from filer
			iam.m.RLock()
			if len(iam.identities) > 0 {
				configLoaded = true
			}
			iam.m.RUnlock()
		}
	}

	// Only use environment variables as fallback if no configuration was loaded
	if !configLoaded {
		accessKeyId := os.Getenv("AWS_ACCESS_KEY_ID")
		secretAccessKey := os.Getenv("AWS_SECRET_ACCESS_KEY")
		if accessKeyId != "" && secretAccessKey != "" {
			glog.V(0).Infof("No S3 configuration found, using AWS environment variables as fallback")

			// Create environment variable identity name
			identityNameSuffix := accessKeyId
			if len(accessKeyId) > 8 {
				identityNameSuffix = accessKeyId[:8]
			}

			// Create admin identity with environment variable credentials
			envIdentity := &Identity{
				Name:    "admin-" + identityNameSuffix,
				Account: &AccountAdmin,
				Credentials: []*Credential{
					{
						AccessKey: accessKeyId,
						SecretKey: secretAccessKey,
					},
				},
				Actions: []Action{
					s3_constants.ACTION_ADMIN,
				},
			}

			// Set as the only configuration
			iam.m.Lock()
			if len(iam.identities) == 0 {
				iam.identities = []*Identity{envIdentity}
				iam.accessKeyIdent = map[string]*Identity{accessKeyId: envIdentity}
				iam.isAuthEnabled = true
			}
			iam.m.Unlock()

			glog.V(0).Infof("Added admin identity from AWS environment variables: %s", envIdentity.Name)
		}
	}

	return iam
}

func (iam *IdentityAccessManagement) loadS3ApiConfigurationFromFiler(option *S3ApiServerOption) error {
	return iam.LoadS3ApiConfigurationFromCredentialManager()
}

func (iam *IdentityAccessManagement) loadS3ApiConfigurationFromFile(fileName string) error {
	content, readErr := os.ReadFile(fileName)
	if readErr != nil {
		glog.Warningf("fail to read %s : %v", fileName, readErr)
		return fmt.Errorf("fail to read %s : %v", fileName, readErr)
	}

	// Initialize KMS if configuration contains KMS settings
	if err := iam.initializeKMSFromConfig(content); err != nil {
		glog.Warningf("KMS initialization failed: %v", err)
	}

	return iam.LoadS3ApiConfigurationFromBytes(content)
}

func (iam *IdentityAccessManagement) LoadS3ApiConfigurationFromBytes(content []byte) error {
	s3ApiConfiguration := &iam_pb.S3ApiConfiguration{}
	if err := filer.ParseS3ConfigurationFromBytes(content, s3ApiConfiguration); err != nil {
		glog.Warningf("unmarshal error: %v", err)
		return fmt.Errorf("unmarshal error: %w", err)
	}

	if err := filer.CheckDuplicateAccessKey(s3ApiConfiguration); err != nil {
		return err
	}

	if err := iam.loadS3ApiConfiguration(s3ApiConfiguration); err != nil {
		return err
	}
	return nil
}

func (iam *IdentityAccessManagement) loadS3ApiConfiguration(config *iam_pb.S3ApiConfiguration) error {
	var identities []*Identity
	var identityAnonymous *Identity
	accessKeyIdent := make(map[string]*Identity)
	accounts := make(map[string]*Account)
	emailAccount := make(map[string]*Account)
	foundAccountAdmin := false
	foundAccountAnonymous := false

	for _, account := range config.Accounts {
		glog.V(3).Infof("loading account name=%s, id=%s", account.DisplayName, account.Id)
		switch account.Id {
		case AccountAdmin.Id:
			AccountAdmin = Account{
				Id:           account.Id,
				DisplayName:  account.DisplayName,
				EmailAddress: account.EmailAddress,
			}
			accounts[account.Id] = &AccountAdmin
			foundAccountAdmin = true
		case AccountAnonymous.Id:
			AccountAnonymous = Account{
				Id:           account.Id,
				DisplayName:  account.DisplayName,
				EmailAddress: account.EmailAddress,
			}
			accounts[account.Id] = &AccountAnonymous
			foundAccountAnonymous = true
		default:
			t := Account{
				Id:           account.Id,
				DisplayName:  account.DisplayName,
				EmailAddress: account.EmailAddress,
			}
			accounts[account.Id] = &t
		}
		if account.EmailAddress != "" {
			emailAccount[account.EmailAddress] = accounts[account.Id]
		}
	}

	if !foundAccountAdmin {
		accounts[AccountAdmin.Id] = &AccountAdmin
		emailAccount[AccountAdmin.EmailAddress] = &AccountAdmin
	}
	if !foundAccountAnonymous {
		accounts[AccountAnonymous.Id] = &AccountAnonymous
		emailAccount[AccountAnonymous.EmailAddress] = &AccountAnonymous
	}

	for _, ident := range config.Identities {
		glog.V(3).Infof("loading identity %s", ident.Name)
		t := &Identity{
			Name:        ident.Name,
			Credentials: nil,
			Actions:     nil,
		}
		switch {
		case ident.Name == AccountAnonymous.Id:
			t.Account = &AccountAnonymous
			identityAnonymous = t
		case ident.Account == nil:
			t.Account = &AccountAdmin
		default:
			if account, ok := accounts[ident.Account.Id]; ok {
				t.Account = account
			} else {
				t.Account = &AccountAdmin
				glog.Warningf("identity %s is associated with a non-existent account ID; the association is invalid", ident.Name)
			}
		}
		for _, action := range ident.Actions {
			t.Actions = append(t.Actions, Action(action))
		}
		for _, cred := range ident.Credentials {
			t.Credentials = append(t.Credentials, &Credential{
				AccessKey: cred.AccessKey,
				SecretKey: cred.SecretKey,
			})
			accessKeyIdent[cred.AccessKey] = t
		}
		identities = append(identities, t)
	}

	iam.m.Lock()
	// atomically switch
	iam.identities = identities
	iam.identityAnonymous = identityAnonymous
	iam.accounts = accounts
	iam.emailAccount = emailAccount
	iam.accessKeyIdent = accessKeyIdent
	if !iam.isAuthEnabled { // one-directional, no toggling
		iam.isAuthEnabled = len(identities) > 0
	}
	iam.m.Unlock()

	return nil
}

func (iam *IdentityAccessManagement) isEnabled() bool {
	return iam.isAuthEnabled
}

func (iam *IdentityAccessManagement) lookupByAccessKey(accessKey string) (identity *Identity, cred *Credential, found bool) {
	iam.m.RLock()
	defer iam.m.RUnlock()
	if ident, ok := iam.accessKeyIdent[accessKey]; ok {
		for _, credential := range ident.Credentials {
			if credential.AccessKey == accessKey {
				return ident, credential, true
			}
		}
	}
	glog.V(1).Infof("could not find accessKey %s", accessKey)
	return nil, nil, false
}

func (iam *IdentityAccessManagement) lookupAnonymous() (identity *Identity, found bool) {
	iam.m.RLock()
	defer iam.m.RUnlock()
	if iam.identityAnonymous != nil {
		return iam.identityAnonymous, true
	}
	return nil, false
}

func (iam *IdentityAccessManagement) GetAccountNameById(canonicalId string) string {
	iam.m.RLock()
	defer iam.m.RUnlock()
	if account, ok := iam.accounts[canonicalId]; ok {
		return account.DisplayName
	}
	return ""
}

func (iam *IdentityAccessManagement) GetAccountIdByEmail(email string) string {
	iam.m.RLock()
	defer iam.m.RUnlock()
	if account, ok := iam.emailAccount[email]; ok {
		return account.Id
	}
	return ""
}

func (iam *IdentityAccessManagement) Auth(f http.HandlerFunc, action Action) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if !iam.isEnabled() {
			f(w, r)
			return
		}

		identity, errCode := iam.authRequest(r, action)
		glog.V(3).Infof("auth error: %v", errCode)
		if errCode == s3err.ErrNone {
			if identity != nil && identity.Name != "" {
				r.Header.Set(s3_constants.AmzIdentityId, identity.Name)
			}
			f(w, r)
			return
		}
		s3err.WriteErrorResponse(w, r, errCode)
	}
}

// check whether the request has valid access keys
func (iam *IdentityAccessManagement) authRequest(r *http.Request, action Action) (*Identity, s3err.ErrorCode) {
	var identity *Identity
	var s3Err s3err.ErrorCode
	var found bool
	var authType string
	switch getRequestAuthType(r) {
	case authTypeUnknown:
		glog.V(3).Infof("unknown auth type")
		r.Header.Set(s3_constants.AmzAuthType, "Unknown")
		return identity, s3err.ErrAccessDenied
	case authTypePresignedV2, authTypeSignedV2:
		glog.V(3).Infof("v2 auth type")
		identity, s3Err = iam.isReqAuthenticatedV2(r)
		authType = "SigV2"
	case authTypeStreamingSigned, authTypeSigned, authTypePresigned:
		glog.V(3).Infof("v4 auth type")
		identity, s3Err = iam.reqSignatureV4Verify(r)
		authType = "SigV4"
	case authTypePostPolicy:
		glog.V(3).Infof("post policy auth type")
		r.Header.Set(s3_constants.AmzAuthType, "PostPolicy")
		return identity, s3err.ErrNone
	case authTypeStreamingUnsigned:
		glog.V(3).Infof("unsigned streaming upload")
		return identity, s3err.ErrNone
	case authTypeJWT:
		glog.V(3).Infof("jwt auth type")
		r.Header.Set(s3_constants.AmzAuthType, "Jwt")
		return identity, s3err.ErrNotImplemented
	case authTypeAnonymous:
		authType = "Anonymous"
		if identity, found = iam.lookupAnonymous(); !found {
			r.Header.Set(s3_constants.AmzAuthType, authType)
			return identity, s3err.ErrAccessDenied
		}
	default:
		return identity, s3err.ErrNotImplemented
	}

	if len(authType) > 0 {
		r.Header.Set(s3_constants.AmzAuthType, authType)
	}
	if s3Err != s3err.ErrNone {
		return identity, s3Err
	}

	glog.V(3).Infof("user name: %v actions: %v, action: %v", identity.Name, identity.Actions, action)

	bucket, object := s3_constants.GetBucketAndObject(r)
	prefix := s3_constants.GetPrefix(r)

	// For List operations, use prefix for permission checking if available
	if action == s3_constants.ACTION_LIST && object == "" && prefix != "" {
		// List operation with prefix - check permission for the prefix path
		object = prefix
	} else if (object == "/" || object == "") && prefix != "" {
		// With the AWS CLI (s3 and s3api commands) and boto3, the object is often set to "/" or empty,
		// but the prefix is set to the actual object key for permission checking
		object = prefix
	}

	// For ListBuckets, authorization is performed in the handler by iterating
	// through buckets and checking permissions for each. Skip the global check here.
	if action == s3_constants.ACTION_LIST && bucket == "" {
		// ListBuckets operation - authorization handled per-bucket in the handler
	} else {
		if !identity.canDo(action, bucket, object) {
			return identity, s3err.ErrAccessDenied
		}
	}

	r.Header.Set(s3_constants.AmzAccountId, identity.Account.Id)

	return identity, s3err.ErrNone
}

func (identity *Identity) canDo(action Action, bucket string, objectKey string) bool {
	if identity.isAdmin() {
		return true
	}
	for _, a := range identity.Actions {
		// Case where the Resource provided is
		// "Resource": [
		//	"arn:aws:s3:::*"
		// ]
		if a == action {
			return true
		}
	}
	if bucket == "" {
		glog.V(3).Infof("identity %s is not allowed to perform action %s on %s -- bucket is empty", identity.Name, action, bucket+objectKey)
		return false
	}
	glog.V(3).Infof("checking if %s can perform %s on bucket '%s'", identity.Name, action, bucket+objectKey)
	target := string(action) + ":" + bucket + objectKey
	adminTarget := s3_constants.ACTION_ADMIN + ":" + bucket + objectKey
	limitedByBucket := string(action) + ":" + bucket
	adminLimitedByBucket := s3_constants.ACTION_ADMIN + ":" + bucket

	for _, a := range identity.Actions {
		act := string(a)
		if strings.HasSuffix(act, "*") {
			if strings.HasPrefix(target, act[:len(act)-1]) {
				return true
			}
			if strings.HasPrefix(adminTarget, act[:len(act)-1]) {
				return true
			}
		} else {
			if act == limitedByBucket {
				return true
			}
			if act == adminLimitedByBucket {
				return true
			}
		}
	}
	// log error
	glog.V(3).Infof("identity %s is not allowed to perform action %s on %s", identity.Name, action, bucket+objectKey)
	return false
}

func (identity *Identity) isAdmin() bool {
	return slices.Contains(identity.Actions, s3_constants.ACTION_ADMIN)
}

// GetCredentialManager returns the credential manager instance
func (iam *IdentityAccessManagement) GetCredentialManager() *credential.CredentialManager {
	return iam.credentialManager
}

// LoadS3ApiConfigurationFromCredentialManager loads configuration using the credential manager
func (iam *IdentityAccessManagement) LoadS3ApiConfigurationFromCredentialManager() error {
	s3ApiConfiguration, err := iam.credentialManager.LoadConfiguration(context.Background())
	if err != nil {
		return fmt.Errorf("failed to load configuration from credential manager: %w", err)
	}

	return iam.loadS3ApiConfiguration(s3ApiConfiguration)
}

// initializeKMSFromConfig parses JSON configuration and initializes KMS provider if present
func (iam *IdentityAccessManagement) initializeKMSFromConfig(configContent []byte) error {
	// Parse JSON to extract KMS configuration
	var config map[string]interface{}
	if err := json.Unmarshal(configContent, &config); err != nil {
		return fmt.Errorf("failed to parse config JSON: %v", err)
	}

	// Check if KMS configuration exists
	kmsConfig, exists := config["kms"]
	if !exists {
		glog.V(2).Infof("No KMS configuration found in S3 config - SSE-KMS will not be available")
		return nil
	}

	kmsConfigMap, ok := kmsConfig.(map[string]interface{})
	if !ok {
		return fmt.Errorf("invalid KMS configuration format")
	}

	// Extract KMS type (default to "local" for testing)
	kmsType, ok := kmsConfigMap["type"].(string)
	if !ok || kmsType == "" {
		kmsType = "local"
	}

	glog.V(1).Infof("Initializing KMS provider: type=%s", kmsType)

	// Initialize KMS provider based on type
	switch kmsType {
	case "local":
		return iam.initializeLocalKMS(kmsConfigMap)
	default:
		return fmt.Errorf("unsupported KMS provider type: %s", kmsType)
	}
}

// initializeLocalKMS initializes the local KMS provider for development/testing
func (iam *IdentityAccessManagement) initializeLocalKMS(kmsConfig map[string]interface{}) error {
	// Register local KMS provider factory if not already registered
	kms.RegisterProvider("local", func(config util.Configuration) (kms.KMSProvider, error) {
		// Create local KMS provider
		provider, err := local.NewLocalKMSProvider(config)
		if err != nil {
			return nil, fmt.Errorf("failed to create local KMS provider: %v", err)
		}
		// Note: the local KMS provider creates keys on demand, so no test keys
		// need to be pre-created here.
		glog.V(1).Infof("Local KMS provider created successfully")
		return provider, nil
	})

	// Create KMS configuration
	kmsConfigObj := &kms.KMSConfig{
		Provider: "local",
		Config:   nil, // Local provider uses defaults
	}

	// Initialize global KMS
	if err := kms.InitializeGlobalKMS(kmsConfigObj); err != nil {
		return fmt.Errorf("failed to initialize global KMS: %v", err)
	}

	glog.V(0).Infof("✅ KMS provider initialized successfully - SSE-KMS is now available")
	return nil
}