Commit Graph

87 Commits

Author SHA1 Message Date
chrislu
1de251d575 sum from all shards 2025-08-13 18:48:05 -07:00
chrislu
66c688c656 collects metrics from ALL 6 servers/14 shards instead of just one 2025-08-13 18:11:38 -07:00
chrislu
b2d45f1f2d fix overflow 2025-08-13 18:04:48 -07:00
chrislu
256b1c9c28 async persistence 2025-08-13 09:55:48 -07:00
chrislu
1f8ba5c958 task asks admin for master address 2025-08-12 23:47:12 -07:00
chrislu
c0e6d00bd3 collect ec volume total volume size 2025-08-12 22:47:10 -07:00
chrislu
c8c758e639 fix hanging task detail page 2025-08-11 23:55:24 -07:00
chrislu
0d6ea416a7 fix retry link 2025-08-11 23:37:17 -07:00
chrislu
4f9f9ec971 fix routing 2025-08-11 22:36:48 -07:00
chrislu
3bbe4481f2 remove unused code 2025-08-11 21:54:21 -07:00
chrislu
7780a7d652 ec vacuum workflow is correct now 2025-08-11 21:41:18 -07:00
chrislu
2a61cda2cd adding retry button 2025-08-11 18:28:02 -07:00
chrislu
7cf5ddf8ac more self-contained tasks 2025-08-11 17:16:16 -07:00
chrislu
7622d14580 remove unused config 2025-08-11 17:08:48 -07:00
chrislu
9df006b49d fix min volume age 2025-08-11 01:54:10 -07:00
chrislu
97d58e77c6 getting ec volume deletions 2025-08-11 01:03:21 -07:00
chrislu
553a229fd3 reduce lock scope 2025-08-11 00:21:04 -07:00
chrislu
57d025910d cancel context 2025-08-11 00:20:52 -07:00
chrislu
91d641e685 avoid deadlock 2025-08-10 20:23:18 -07:00
chrislu
0e649f710a collect deletion for ec shards 2025-08-10 20:08:45 -07:00
chrislu
772ee0f967 address comments 2025-08-10 19:58:52 -07:00
chrislu
06c012ea60 Update maintenance_scanner.go 2025-08-10 19:37:54 -07:00
chrislu
5a6954be1b sort 2025-08-10 18:42:27 -07:00
chrislu
0f1ca16457 more accurate estimation 2025-08-10 18:42:22 -07:00
chrislu
72f0a47563 CRITICAL: Check ALL task states for volume conflicts
Fix major scheduling bug where only active tasks were checked for conflicts.

Changes:
- Check PENDING tasks: Prevent scheduling if task is queued for same volume
- Check ASSIGNED/ACTIVE tasks: Prevent scheduling if task is running on same volume
- Check RECENT tasks: Prevent immediate re-scheduling on same volume after completion

This prevents dangerous scenarios like:
- Scheduling vacuum while another vacuum is pending on the same volume
- Scheduling balance while erasure coding is queued for the same volume
- Immediately re-scheduling failed tasks without a cooldown period

Critical safety improvement ensuring comprehensive volume-level task isolation.
2025-08-10 18:04:31 -07:00
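A minimal sketch of the volume-level check the commit above describes, assuming simplified types: `Task` and `hasVolumeConflict` are illustrative stand-ins, not SeaweedFS's actual scheduler API.

```go
package maintenance

// Task is an illustrative stand-in for the scheduler's internal task type.
type Task struct {
	VolumeID uint32
}

// hasVolumeConflict mirrors the three checks the commit adds: a new task on
// volumeID is rejected if a PENDING task is queued, an ASSIGNED/ACTIVE task
// is running, or a RECENT task just completed on the same volume (cooldown).
func hasVolumeConflict(volumeID uint32, pending, active, recent []Task) bool {
	for _, group := range [][]Task{pending, active, recent} {
		for _, t := range group {
			if t.VolumeID == volumeID {
				return true
			}
		}
	}
	return false
}
```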
chrislu
751cfac7d7 Implement volume-aware task conflict checking
MAJOR IMPROVEMENT: Tasks now conflict by volume ID, not globally by task type

Changes:
- PRIMARY RULE: Tasks on the same volume ID always conflict (prevents race conditions)
- SECONDARY RULE: Minimal global task type conflicts (currently none)
- Add isDiskAvailableForVolume() for volume-specific availability checking
- Add GetAvailableDisksForVolume() and GetDisksWithEffectiveCapacityForVolume()
- Remove overly restrictive global task type conflicts
- Update planning functions to focus on capacity, not conflicts

Benefits:
- Multiple vacuum tasks can run on different volumes simultaneously
- Balance and erasure coding can run on different volumes
- Still prevents dangerous concurrent operations on the same volume
- Much more efficient resource utilization
- Maintains data integrity and prevents race conditions

This addresses the user feedback that task conflicts should be volume-specific,
not global task type restrictions.
2025-08-10 18:02:42 -07:00
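A sketch of the two-rule conflict policy this commit describes, under assumed simplified types; `tasksConflict` and `globalTypeConflicts` are hypothetical names, not the repository's API.

```go
package maintenance

type TaskType string

// Task is an illustrative stand-in; the real scheduler structs differ.
type Task struct {
	Type     TaskType
	VolumeID uint32
}

// Deliberately empty: per the commit, conflicts are volume-scoped and
// global task-type exclusions are the rare exception.
var globalTypeConflicts = map[TaskType]map[TaskType]bool{}

// tasksConflict applies the primary rule (same volume ID always conflicts)
// and the secondary rule (explicit global type conflicts, currently none),
// so two vacuum tasks on different volumes can run simultaneously.
func tasksConflict(a, b Task) bool {
	if a.VolumeID == b.VolumeID {
		return true
	}
	return globalTypeConflicts[a.Type][b.Type]
}
```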
chrislu
5c1e6e904d CRITICAL: Restore task conflict definitions to prevent data integrity issues
- Restore conflicts between vacuum, balance, erasure_coding, and ec_vacuum tasks
- Prevent dangerous concurrent operations on same volumes/resources
- Add comprehensive task conflict matrix to avoid race conditions
- This addresses a serious safety regression where all conflicts were removed

Critical conflicts restored:
- vacuum ↔ balance, erasure_coding, ec_vacuum
- balance ↔ vacuum, erasure_coding, ec_vacuum
- erasure_coding ↔ vacuum, balance, ec_vacuum
- ec_vacuum ↔ vacuum, balance, erasure_coding
- replication ↔ vacuum, balance (destructive ops)
2025-08-10 17:56:13 -07:00
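The restored matrix, expressed as a lookup table. This is a sketch using plain strings for task types; the actual SeaweedFS definitions use their own types.

```go
package maintenance

// taskConflicts encodes the pairs listed in the commit above.
var taskConflicts = map[string][]string{
	"vacuum":         {"balance", "erasure_coding", "ec_vacuum"},
	"balance":        {"vacuum", "erasure_coding", "ec_vacuum"},
	"erasure_coding": {"vacuum", "balance", "ec_vacuum"},
	"ec_vacuum":      {"vacuum", "balance", "erasure_coding"},
	"replication":    {"vacuum", "balance"}, // destructive ops
}

// typesConflict reports whether two task types may not run concurrently.
func typesConflict(a, b string) bool {
	for _, c := range taskConflicts[a] {
		if c == b {
			return true
		}
	}
	return false
}
```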
chrislu
c220ad1e69 Replace bubble sort with idiomatic sort.Slice in EC shard management
- Replace O(n²) bubble sort implementation with efficient sort.Slice
- More concise, readable, and performant for larger slices
- Uses idiomatic Go sorting pattern
2025-08-10 17:52:30 -07:00
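For reference, the idiomatic pattern this commit adopts; `ecShardInfo` and its fields are assumed for illustration, not the actual struct in the EC shard management code.

```go
package erasure_coding

import "sort"

type ecShardInfo struct {
	ShardID uint8
	Size    int64
}

// Where a hand-rolled O(n^2) bubble sort once ordered the shards,
// sort.Slice does the same in O(n log n) with a comparison closure.
func sortShards(shards []ecShardInfo) {
	sort.Slice(shards, func(i, j int) bool {
		return shards[i].ShardID < shards[j].ShardID
	})
}
```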
chrislu
4ec743583d Address PR #7116 review comments
- Fix CodeQL security issue: Add bounds checking for int64 to uint8 conversion in disk_location_ec.go
- Replace goto with idiomatic map approach in ec_shard_management.go
- Fix EC volume handling in maintenance_scanner.go: add support for EC-only volumes
- Fix test failures in master_grpc_ec_generation_test.go: handle raft leadership issues
2025-08-10 17:50:10 -07:00
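A hedged sketch of the bounds-checked int64-to-uint8 conversion; the function name and error text are illustrative, but the shape of the fix (validate the range, then narrow) is what the CodeQL finding asks for.

```go
package storage

import (
	"fmt"
	"math"
)

// safeUint8 rejects values outside [0, 255] before narrowing, instead of
// letting uint8(v) silently truncate out-of-range input.
func safeUint8(v int64) (uint8, error) {
	if v < 0 || v > math.MaxUint8 {
		return 0, fmt.Errorf("value %d out of uint8 range", v)
	}
	return uint8(v), nil
}
```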
chrislu
cd75202da8 Replaced goto with idiomatic map approach 2025-08-10 17:48:18 -07:00
chrislu
2e51e1dab2 ec volume UI rendering version 2025-08-10 16:54:44 -07:00
chrislu
1b41544f97 detecting ec volumes 2025-08-10 14:41:06 -07:00
chrislu
fc666e2e48 collect ec volume deleted bytes 2025-08-10 12:37:18 -07:00
chrislu
05a0cc156b Self-Contained Design
To prove the system is truly self-contained:
To add a new task:
1. Create a task package (e.g., worker/tasks/compression/)
2. Import it: _ "github.com/.../worker/tasks/compression"
That's it! No other changes needed.
To remove a task:
1. Delete the task package directory
2. Remove the import line
That's it! No other changes needed.
2025-08-10 00:15:26 -07:00
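A self-contained sketch of the registration pattern this commit describes: each task package registers a factory in its own init(), so a blank import alone wires it in. The registry API below is illustrative, not SeaweedFS's actual worker interface.

```go
package main

import "fmt"

// Task and the registry are stand-ins for the real worker task API.
type Task interface{ Run() error }

var registry = map[string]func() Task{}

// Register is what each task package would call from its own init().
func Register(name string, factory func() Task) { registry[name] = factory }

type compressionTask struct{}

func (t *compressionTask) Run() error { return nil }

// In the real layout this init() lives in worker/tasks/compression/, and
// the main binary pulls it in with a single blank import:
//   import _ "github.com/.../worker/tasks/compression"
func init() { Register("compression", func() Task { return &compressionTask{} }) }

func main() {
	for name := range registry {
		fmt.Println("registered task:", name)
	}
}
```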
chrislu
96d6d27607 remove Vacuum and Balance tasks (both completely removed) 2025-08-10 00:00:46 -07:00
Chris Lu
25bbf4c3d4 Admin UI: Fetch task logs (#7114)
* show task details

* loading tasks

* task UI works

* generic rendering

* rendering the export link

* removing placementConflicts from task parameters

* remove TaskSourceLocation

* remove "Server ID" column

* rendering balance task source

* sources and targets

* fix ec task generation

* move info

* render timeline

* simplified worker id

* simplify

* read task logs from worker

* isValidTaskID

* address comments

* Update weed/worker/tasks/balance/execution.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update weed/worker/tasks/erasure_coding/ec_task.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update weed/worker/tasks/task_log_handler.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* fix shard ids

* plan distributing shard id

* rendering planned shards in task details

* remove Conflicts

* worker logs correctly

* pass in dc and rack

* task logging

* Update weed/admin/maintenance/maintenance_queue.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* display log details

* logs have fields now

* sort field keys

* fix link

* fix collection filtering

* avoid hard coded ec shard counts

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-09 21:47:29 -07:00
Chris Lu
0ecb466eda Admin: refactoring active topology (#7073)
* refactoring

* add ec shard size

* address comments

* passing task id

There seems to be a disconnect between the pending tasks created in ActiveTopology and the TaskDetectionResult returned by this function. A taskID is generated locally and used to create pending tasks via AddPendingECShardTask, but this taskID is not stored in the TaskDetectionResult or passed along in any way.

This makes it impossible for the worker that eventually executes the task to know which pending task in ActiveTopology it corresponds to. Without the correct taskID, the worker cannot call AssignTask or CompleteTask on the master, breaking the entire task lifecycle and capacity management feature.

A potential solution is to add a TaskID field to TaskDetectionResult and worker_pb.TaskParams, ensuring the ID is propagated from detection to execution.

* 1 source multiple destinations

* task supports multi source and destination

* ec needs to clean up previous shards

* use erasure coding constants

* getPlanningCapacityUnsafe and getEffectiveAvailableCapacityUnsafe should return StorageSlotChange for calculation

* use CanAccommodate to calculate

* remove dead code

* address comments

* fix Mutex Copying in Protobuf Structs

* use constants

* fix estimatedSize

The calculation for estimatedSize only considers source.EstimatedSize and dest.StorageChange, but omits dest.EstimatedSize. The TaskDestination struct has an EstimatedSize field, which seems to be ignored here. This could lead to an incorrect estimation of the total size of data involved in tasks on a disk. The loop should probably also include estimatedSize += dest.EstimatedSize.

* at.assignTaskToDisk(task)

* refactoring

* Update weed/admin/topology/internal.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* fail fast

* fix compilation

* Update weed/worker/tasks/erasure_coding/detection.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* indexes for volume and shard locations

* dedup with ToVolumeSlots

* return an additional boolean to indicate success, or an error

* Update abstract_sql_store.go

* fix

* Update weed/worker/tasks/erasure_coding/detection.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update weed/admin/topology/task_management.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* faster findVolumeDisk

* Update weed/worker/tasks/erasure_coding/detection.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update weed/admin/topology/storage_slot_test.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* refactor

* simplify

* remove unused GetDiskStorageImpact function

* refactor

* add comments

* Update weed/admin/topology/storage_impact.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update weed/admin/topology/storage_slot_test.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Update storage_impact.go

* AddPendingTask

The unified AddPendingTask function now serves as the single entry point for all task creation, successfully consolidating the previously separate functions while maintaining full functionality and improving code organization.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-03 01:35:38 -07:00
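A sketch of the TaskID propagation the review comment above calls for, generating the ID once at detection and carrying it through to the worker's parameters; all field and type names here are assumptions modeled on the discussion, not the merged code.

```go
package topology

// TaskDetectionResult keeps the taskID instead of discarding it after
// AddPendingECShardTask, so the worker can later call AssignTask and
// CompleteTask against the same pending task in ActiveTopology.
type TaskDetectionResult struct {
	TaskID   string
	VolumeID uint32
	TaskType string
}

// TaskParams mirrors worker_pb.TaskParams gaining a TaskID field.
type TaskParams struct {
	TaskID   string
	VolumeID uint32
}

func toTaskParams(r TaskDetectionResult) TaskParams {
	return TaskParams{TaskID: r.TaskID, VolumeID: r.VolumeID}
}
```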
Chris Lu
9d013ea9b8 Admin UI: include ec shard sizes into volume server info (#7071)
* show ec shards on dashboard, show max in its own column

* master collect shard size info

* master send shard size via VolumeList

* change to more efficient shard sizes slice

* include ec shard sizes into volume server info

* Eliminated Redundant gRPC Calls

* much more efficient

* Efficient Counting: bits.OnesCount32() uses CPU-optimized instructions to count set bits in O(1)

* avoid extra volume list call

* simplify

* preserve existing shard sizes

* avoid hard coded value

* Update weed/storage/erasure_coding/ec_volume_info.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update weed/admin/dash/volume_management.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update ec_volume_info.go

* address comments

* avoid duplicated functions

* Update weed/admin/dash/volume_management.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* simplify

* refactoring

* fix compilation

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-08-02 02:16:49 -07:00
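The bit-counting trick referenced above, sketched with an assumed bitmap type: if an EC volume's shards are stored as one bit per shard in a uint32, counting them is a single hardware popcount rather than a loop.

```go
package erasure_coding

import "math/bits"

// ShardBits is illustrative: bit i set means shard i is present.
type ShardBits uint32

// ShardCount counts set bits in O(1) via a CPU popcount instruction.
func (b ShardBits) ShardCount() int {
	return bits.OnesCount32(uint32(b))
}
```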
Chris Lu
0975968e71 admin: Refactor task destination planning (#7063)
* refactor planning into task detection

* refactoring worker tasks

* refactor

* compiles, but only balance task is registered

* compiles, but has nil exception

* avoid nil logger

* add back ec task

* setting ec log directory

* implement balance and vacuum tasks

* EC tasks will no longer fail with "file not found" errors

* Use ReceiveFile API to send locally generated shards

* distributing shard files and ecx,ecj,vif files

* generate .ecx files correctly

* do not mount all possible EC shards (0-13) on every destination

* use constants

* delete all replicas

* rename files

* pass in volume size to tasks
2025-08-01 11:18:32 -07:00
chrislu
1cba609bfa fix checking 2025-07-31 23:23:20 -07:00
chrislu
027829f3b3 optionally set the default retention when creating buckets 2025-07-31 22:45:58 -07:00
Chris Lu
891a2fb6eb Admin: misc improvements on admin server and workers. EC now works. (#7055)
* initial design

* added simulation as tests

* reorganized the codebase to move the simulation framework and tests into their own dedicated package

* integration test. ec worker task

* remove "enhanced" reference

* start master, volume servers, filer

Current Status:
- Master: Healthy and running (port 9333)
- Filer: Healthy and running (port 8888)
- Volume Servers: All 6 servers running (ports 8080-8085)
- Admin/Workers: Will start when dependencies are ready

* generate write load

* tasks are assigned

* admin starts with grpc port. worker has its own working directory

* Update .gitignore

* working worker and admin. Task detection is not working yet.

* compiles, detection uses volumeSizeLimitMB from master

* compiles

* worker retries connecting to admin

* build and restart

* rendering pending tasks

* skip task ID column

* sticky worker id

* test canScheduleTaskNow

* worker reconnect to admin

* clean up logs

* worker register itself first

* worker can run ec work and report status

but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and source data should be deleted.

* move ec task logic

* listing ec shards

* local copy, ec. Need to distribute.

* ec is mostly working now

* distribution of ec shards needs improvement
* need configuration to enable ec

* show ec volumes

* interval field UI component

* rename

* integration test with vacuuming

* garbage percentage threshold

* fix warning

* display ec shard sizes

* fix ec volumes list

* Update ui.go

* show default values

* ensure correct default value

* MaintenanceConfig use ConfigField

* use schema defined defaults

* config

* reduce duplication

* refactor to use BaseUIProvider

* each task register its schema

* checkECEncodingCandidate use ecDetector

* use vacuumDetector

* use volumeSizeLimitMB

* remove

* remove unused

* refactor

* use new framework

* remove v2 reference

* refactor

* left menu can scroll now

* The maintenance manager was not being initialized when no data directory was configured for persistent storage.

* saving config

* Update task_config_schema_templ.go

* enable/disable tasks

* protobuf encoded task configurations

* fix system settings

* use ui component

* remove logs

* interface{} Reduction

* reduce interface{}

* reduce interface{}

* avoid from/to map

* reduce interface{}

* refactor

* keep it DRY

* added logging

* debug messages

* debug level

* debug

* show the log caller line

* use configured task policy

* log level

* handle admin heartbeat response

* Update worker.go

* fix EC rack and dc count

* Report task status to admin server

* fix task logging, simplify interface checking, use erasure_coding constants

* factor in empty volume server during task planning

* volume.list adds disk id

* track disk id also

* fix locking scheduled and manual scanning

* add active topology

* simplify task detector

* ec task completed, but shards are not showing up

* implement ec in ec_typed.go

* adjust log level

* dedup

* implementing ec copying shards and only ecx files

* use disk id when distributing ec shards

🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned

* Delete original volume from all locations

* clean up existing shard locations

* local encoding and distributing

* Update docker/admin_integration/EC-TESTING-README.md

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* check volume id range

* simplify

* fix tests

* fix types

* clean up logs and tests

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-07-30 12:38:03 -07:00
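Illustrative stand-ins for the disk-id flow sketched in the commit above: planning picks a target disk, the ID is copied into the task's destination, rides along in the shard-copy request, and lets the volume server select the exact disk (conceptually vs.store.Locations[req.DiskId]). The real SeaweedFS structs and protobuf fields differ in detail.

```go
package planning

// DestinationPlan: ActiveTopology chooses a specific disk at planning time.
type DestinationPlan struct {
	TargetNode string
	TargetDisk uint32
}

// ECDestination: task creation copies the disk ID from the plan.
type ECDestination struct {
	Node   string
	DiskId uint32
}

// VolumeEcShardsCopyRequest: execution forwards the disk ID so shards and
// metadata land in the exact disk directory that was planned.
type VolumeEcShardsCopyRequest struct {
	VolumeId uint32
	ShardIds []uint32
	DiskId   uint32
}
```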
chrislu
470d450f17 admin server: fix tls setting
fix https://github.com/seaweedfs/seaweedfs/issues/7038#issuecomment-3124471878
2025-07-27 13:59:43 -07:00
Chris Lu
c6a22ce43a Fix get object lock configuration handler (#6996)
* fix GetObjectLockConfigurationHandler

* cache and use bucket object lock config

* subscribe to bucket configuration changes

* increase bucket config cache TTL

* refactor

* Update weed/s3api/s3api_server.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* avoid duplicated work

* rename variable

* Update s3api_object_handlers_put.go

* fix routing

* admin ui and api handler are consistent now

* use fields instead of xml

* fix test

* address comments

* Update weed/s3api/s3api_object_handlers_put.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update test/s3/retention/s3_retention_test.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Update weed/s3api/object_lock_utils.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* change error style

* errorf

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-18 02:19:50 -07:00
Chris Lu
69553e5ba6 convert error formatting to %w everywhere (#6995) 2025-07-16 23:39:27 -07:00
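A small sketch of the %w convention this commit applies repo-wide, with a hypothetical loadConfig as the example: wrapping with fmt.Errorf("...: %w", err) preserves the cause, so callers can match it with errors.Is / errors.As instead of comparing strings.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func loadConfig(path string) error {
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("load config %s: %w", path, err) // %w, not %v
	}
	return nil
}

func main() {
	err := loadConfig("/no/such/file")
	fmt.Println(errors.Is(err, os.ErrNotExist)) // true: %w keeps the chain
}
```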
chrislu
64c5dde2f3 support multiple masters
fix https://github.com/seaweedfs/seaweedfs/issues/6988
2025-07-15 10:51:07 -07:00
chrislu
406aaf7c14 increase upload limit via browser 2025-07-14 08:42:15 -07:00
chrislu
e7dfc3552c admin ui adds object lock permissions 2025-07-13 20:29:25 -07:00
Chris Lu
7cb1ca1308 Add policy engine (#6970) 2025-07-13 16:21:36 -07:00
Chris Lu
687a6a6c1d Admin UI: Add policies (#6968)
* add policies to UI, accessing filer directly

* view, edit policies

* add back buttons for "users" page

* remove unused

* fix ui dark mode when modal is closed

* bucket view details button

* fix browser buttons

* filer action button works

* clean up masters page

* fix volume servers action buttons

* fix collections page action button

* fix properties page

* more obvious

* fix directory creation file mode

* Update file_browser_handlers.go

* directory permission
2025-07-12 01:13:11 -07:00