mirror of
https://github.com/seaweedfs/seaweedfs.git
synced 2026-01-09 19:24:45 +08:00
Admin: misc improvements on admin server and workers. EC now works. (#7055)
* initial design * added simulation as tests * reorganized the codebase to move the simulation framework and tests into their own dedicated package * integration test. ec worker task * remove "enhanced" reference * start master, volume servers, filer Current Status ✅ Master: Healthy and running (port 9333) ✅ Filer: Healthy and running (port 8888) ✅ Volume Servers: All 6 servers running (ports 8080-8085) 🔄 Admin/Workers: Will start when dependencies are ready * generate write load * tasks are assigned * admin start wtih grpc port. worker has its own working directory * Update .gitignore * working worker and admin. Task detection is not working yet. * compiles, detection uses volumeSizeLimitMB from master * compiles * worker retries connecting to admin * build and restart * rendering pending tasks * skip task ID column * sticky worker id * test canScheduleTaskNow * worker reconnect to admin * clean up logs * worker register itself first * worker can run ec work and report status but: 1. one volume should not be repeatedly worked on. 2. ec shards needs to be distributed and source data should be deleted. * move ec task logic * listing ec shards * local copy, ec. Need to distribute. * ec is mostly working now * distribution of ec shards needs improvement * need configuration to enable ec * show ec volumes * interval field UI component * rename * integration test with vauuming * garbage percentage threshold * fix warning * display ec shard sizes * fix ec volumes list * Update ui.go * show default values * ensure correct default value * MaintenanceConfig use ConfigField * use schema defined defaults * config * reduce duplication * refactor to use BaseUIProvider * each task register its schema * checkECEncodingCandidate use ecDetector * use vacuumDetector * use volumeSizeLimitMB * remove remove * remove unused * refactor * use new framework * remove v2 reference * refactor * left menu can scroll now * The maintenance manager was not being initialized when no data directory was configured for persistent storage. * saving config * Update task_config_schema_templ.go * enable/disable tasks * protobuf encoded task configurations * fix system settings * use ui component * remove logs * interface{} Reduction * reduce interface{} * reduce interface{} * avoid from/to map * reduce interface{} * refactor * keep it DRY * added logging * debug messages * debug level * debug * show the log caller line * use configured task policy * log level * handle admin heartbeat response * Update worker.go * fix EC rack and dc count * Report task status to admin server * fix task logging, simplify interface checking, use erasure_coding constants * factor in empty volume server during task planning * volume.list adds disk id * track disk id also * fix locking scheduled and manual scanning * add active topology * simplify task detector * ec task completed, but shards are not showing up * implement ec in ec_typed.go * adjust log level * dedup * implementing ec copying shards and only ecx files * use disk id when distributing ec shards 🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk 📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId 🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest 💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId]) 📂 File System: EC shards and metadata land in the exact disk directory planned * Delete original volume from all locations * clean up existing shard locations * local encoding and distributing * Update docker/admin_integration/EC-TESTING-README.md Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * check volume id range * simplify * fix tests * fix types * clean up logs and tests --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
This commit is contained in:
@@ -8,7 +8,7 @@ cgo ?= 0
|
||||
binary:
|
||||
export SWCOMMIT=$(shell git rev-parse --short HEAD)
|
||||
export SWLDFLAGS="-X github.com/seaweedfs/seaweedfs/weed/util/version.COMMIT=$(SWCOMMIT)"
|
||||
cd ../weed && CGO_ENABLED=$(cgo) GOOS=linux go build $(options) -tags "$(tags)" -ldflags "-s -w -extldflags -static $(SWLDFLAGS)" && mv weed ../docker/
|
||||
cd ../weed && CGO_ENABLED=$(cgo) GOOS=linux go build $(options) -tags "$(tags)" -ldflags "-s -w -extldflags -static $(SWLDFLAGS)" -o weed_binary && mv weed_binary ../docker/weed
|
||||
cd ../other/mq_client_example/agent_pub_record && CGO_ENABLED=$(cgo) GOOS=linux go build && mv agent_pub_record ../../../docker/
|
||||
cd ../other/mq_client_example/agent_sub_record && CGO_ENABLED=$(cgo) GOOS=linux go build && mv agent_sub_record ../../../docker/
|
||||
|
||||
|
||||
18
docker/admin_integration/Dockerfile.local
Normal file
18
docker/admin_integration/Dockerfile.local
Normal file
@@ -0,0 +1,18 @@
|
||||
FROM alpine:latest
|
||||
|
||||
# Install required packages
|
||||
RUN apk add --no-cache \
|
||||
ca-certificates \
|
||||
fuse \
|
||||
curl \
|
||||
jq
|
||||
|
||||
# Copy our locally built binary
|
||||
COPY weed-local /usr/bin/weed
|
||||
RUN chmod +x /usr/bin/weed
|
||||
|
||||
# Create working directory
|
||||
WORKDIR /data
|
||||
|
||||
# Default command
|
||||
ENTRYPOINT ["/usr/bin/weed"]
|
||||
438
docker/admin_integration/EC-TESTING-README.md
Normal file
438
docker/admin_integration/EC-TESTING-README.md
Normal file
@@ -0,0 +1,438 @@
|
||||
# SeaweedFS EC Worker Testing Environment
|
||||
|
||||
This Docker Compose setup provides a comprehensive testing environment for SeaweedFS Erasure Coding (EC) workers using **official SeaweedFS commands**.
|
||||
|
||||
## 📂 Directory Structure
|
||||
|
||||
The testing environment is located in `docker/admin_integration/` and includes:
|
||||
|
||||
```
|
||||
docker/admin_integration/
|
||||
├── Makefile # Main management interface
|
||||
├── docker-compose-ec-test.yml # Docker compose configuration
|
||||
├── EC-TESTING-README.md # This documentation
|
||||
└── run-ec-test.sh # Quick start script
|
||||
```
|
||||
|
||||
## 🏗️ Architecture
|
||||
|
||||
The testing environment uses **official SeaweedFS commands** and includes:
|
||||
|
||||
- **1 Master Server** (port 9333) - Coordinates the cluster with 50MB volume size limit
|
||||
- **6 Volume Servers** (ports 8080-8085) - Distributed across 2 data centers and 3 racks for diversity
|
||||
- **1 Filer** (port 8888) - Provides file system interface
|
||||
- **1 Admin Server** (port 23646) - Detects volumes needing EC and manages workers using official `admin` command
|
||||
- **3 EC Workers** - Execute erasure coding tasks using official `worker` command with task-specific working directories
|
||||
- **1 Load Generator** - Continuously writes and deletes files using SeaweedFS shell commands
|
||||
- **1 Monitor** - Tracks cluster health and EC progress using shell scripts
|
||||
|
||||
## ✨ New Features
|
||||
|
||||
### **Task-Specific Working Directories**
|
||||
Each worker now creates dedicated subdirectories for different task types:
|
||||
- `/work/erasure_coding/` - For EC encoding tasks
|
||||
- `/work/vacuum/` - For vacuum cleanup tasks
|
||||
- `/work/balance/` - For volume balancing tasks
|
||||
|
||||
This provides:
|
||||
- **Organization**: Each task type gets isolated working space
|
||||
- **Debugging**: Easy to find files/logs related to specific task types
|
||||
- **Cleanup**: Can clean up task-specific artifacts easily
|
||||
- **Concurrent Safety**: Different task types won't interfere with each other's files
|
||||
|
||||
## 🚀 Quick Start
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- Docker and Docker Compose installed
|
||||
- GNU Make installed
|
||||
- At least 4GB RAM available for containers
|
||||
- Ports 8080-8085, 8888, 9333, 23646 available
|
||||
|
||||
### Start the Environment
|
||||
|
||||
```bash
|
||||
# Navigate to the admin integration directory
|
||||
cd docker/admin_integration/
|
||||
|
||||
# Show available commands
|
||||
make help
|
||||
|
||||
# Start the complete testing environment
|
||||
make start
|
||||
```
|
||||
|
||||
The `make start` command will:
|
||||
1. Start all services using official SeaweedFS images
|
||||
2. Configure workers with task-specific working directories
|
||||
3. Wait for services to be ready
|
||||
4. Display monitoring URLs and run health checks
|
||||
|
||||
### Alternative Commands
|
||||
|
||||
```bash
|
||||
# Quick start aliases
|
||||
make up # Same as 'make start'
|
||||
|
||||
# Development mode (higher load for faster testing)
|
||||
make dev-start
|
||||
|
||||
# Build images without starting
|
||||
make build
|
||||
```
|
||||
|
||||
## 📋 Available Make Targets
|
||||
|
||||
Run `make help` to see all available targets:
|
||||
|
||||
### **🚀 Main Operations**
|
||||
- `make start` - Start the complete EC testing environment
|
||||
- `make stop` - Stop all services
|
||||
- `make restart` - Restart all services
|
||||
- `make clean` - Complete cleanup (containers, volumes, images)
|
||||
|
||||
### **📊 Monitoring & Status**
|
||||
- `make health` - Check health of all services
|
||||
- `make status` - Show status of all containers
|
||||
- `make urls` - Display all monitoring URLs
|
||||
- `make monitor` - Open monitor dashboard in browser
|
||||
- `make monitor-status` - Show monitor status via API
|
||||
- `make volume-status` - Show volume status from master
|
||||
- `make admin-status` - Show admin server status
|
||||
- `make cluster-status` - Show complete cluster status
|
||||
|
||||
### **📋 Logs Management**
|
||||
- `make logs` - Show logs from all services
|
||||
- `make logs-admin` - Show admin server logs
|
||||
- `make logs-workers` - Show all worker logs
|
||||
- `make logs-worker1/2/3` - Show specific worker logs
|
||||
- `make logs-load` - Show load generator logs
|
||||
- `make logs-monitor` - Show monitor logs
|
||||
- `make backup-logs` - Backup all logs to files
|
||||
|
||||
### **⚖️ Scaling & Testing**
|
||||
- `make scale-workers WORKERS=5` - Scale workers to 5 instances
|
||||
- `make scale-load RATE=25` - Increase load generation rate
|
||||
- `make test-ec` - Run focused EC test scenario
|
||||
|
||||
### **🔧 Development & Debug**
|
||||
- `make shell-admin` - Open shell in admin container
|
||||
- `make shell-worker1` - Open shell in worker container
|
||||
- `make debug` - Show debug information
|
||||
- `make troubleshoot` - Run troubleshooting checks
|
||||
|
||||
## 📊 Monitoring URLs
|
||||
|
||||
| Service | URL | Description |
|
||||
|---------|-----|-------------|
|
||||
| Master UI | http://localhost:9333 | Cluster status and topology |
|
||||
| Filer | http://localhost:8888 | File operations |
|
||||
| Admin Server | http://localhost:23646/ | Task management |
|
||||
| Monitor | http://localhost:9999/status | Complete cluster monitoring |
|
||||
| Volume Servers | http://localhost:8080-8085/status | Individual volume server stats |
|
||||
|
||||
Quick access: `make urls` or `make monitor`
|
||||
|
||||
## 🔄 How EC Testing Works
|
||||
|
||||
### 1. Continuous Load Generation
|
||||
- **Write Rate**: 10 files/second (1-5MB each)
|
||||
- **Delete Rate**: 2 files/second
|
||||
- **Target**: Fill volumes to 50MB limit quickly
|
||||
|
||||
### 2. Volume Detection
|
||||
- Admin server scans master every 30 seconds
|
||||
- Identifies volumes >40MB (80% of 50MB limit)
|
||||
- Queues EC tasks for eligible volumes
|
||||
|
||||
### 3. EC Worker Assignment
|
||||
- **Worker 1**: EC specialist (max 2 concurrent tasks)
|
||||
- **Worker 2**: EC + Vacuum hybrid (max 2 concurrent tasks)
|
||||
- **Worker 3**: EC + Vacuum hybrid (max 1 concurrent task)
|
||||
|
||||
### 4. Comprehensive EC Process
|
||||
Each EC task follows 6 phases:
|
||||
1. **Copy Volume Data** (5-15%) - Stream .dat/.idx files locally
|
||||
2. **Mark Read-Only** (20-25%) - Ensure data consistency
|
||||
3. **Local Encoding** (30-60%) - Create 14 shards (10+4 Reed-Solomon)
|
||||
4. **Calculate Placement** (65-70%) - Smart rack-aware distribution
|
||||
5. **Distribute Shards** (75-90%) - Upload to optimal servers
|
||||
6. **Verify & Cleanup** (95-100%) - Validate and clean temporary files
|
||||
|
||||
### 5. Real-Time Monitoring
|
||||
- Volume analysis and EC candidate detection
|
||||
- Worker health and task progress
|
||||
- No data loss verification
|
||||
- Performance metrics
|
||||
|
||||
## 📋 Key Features Tested
|
||||
|
||||
### ✅ EC Implementation Features
|
||||
- [x] Local volume data copying with progress tracking
|
||||
- [x] Local Reed-Solomon encoding (10+4 shards)
|
||||
- [x] Intelligent shard placement with rack awareness
|
||||
- [x] Load balancing across available servers
|
||||
- [x] Backup server selection for redundancy
|
||||
- [x] Detailed step-by-step progress tracking
|
||||
- [x] Comprehensive error handling and recovery
|
||||
|
||||
### ✅ Infrastructure Features
|
||||
- [x] Multi-datacenter topology (dc1, dc2)
|
||||
- [x] Rack diversity (rack1, rack2, rack3)
|
||||
- [x] Volume size limits (50MB)
|
||||
- [x] Worker capability matching
|
||||
- [x] Health monitoring and alerting
|
||||
- [x] Continuous workload simulation
|
||||
|
||||
## 🛠️ Common Usage Patterns
|
||||
|
||||
### Basic Testing Workflow
|
||||
```bash
|
||||
# Start environment
|
||||
make start
|
||||
|
||||
# Watch progress
|
||||
make monitor-status
|
||||
|
||||
# Check for EC candidates
|
||||
make volume-status
|
||||
|
||||
# View worker activity
|
||||
make logs-workers
|
||||
|
||||
# Stop when done
|
||||
make stop
|
||||
```
|
||||
|
||||
### High-Load Testing
|
||||
```bash
|
||||
# Start with higher load
|
||||
make dev-start
|
||||
|
||||
# Scale up workers and load
|
||||
make scale-workers WORKERS=5
|
||||
make scale-load RATE=50
|
||||
|
||||
# Monitor intensive EC activity
|
||||
make logs-admin
|
||||
```
|
||||
|
||||
### Debugging Issues
|
||||
```bash
|
||||
# Check port conflicts and system state
|
||||
make troubleshoot
|
||||
|
||||
# View specific service logs
|
||||
make logs-admin
|
||||
make logs-worker1
|
||||
|
||||
# Get shell access for debugging
|
||||
make shell-admin
|
||||
make shell-worker1
|
||||
|
||||
# Check detailed status
|
||||
make debug
|
||||
```
|
||||
|
||||
### Development Iteration
|
||||
```bash
|
||||
# Quick restart after code changes
|
||||
make restart
|
||||
|
||||
# Rebuild and restart
|
||||
make clean
|
||||
make start
|
||||
|
||||
# Monitor specific components
|
||||
make logs-monitor
|
||||
```
|
||||
|
||||
## 📈 Expected Results
|
||||
|
||||
### Successful EC Testing Shows:
|
||||
1. **Volume Growth**: Steady increase in volume sizes toward 50MB limit
|
||||
2. **EC Detection**: Admin server identifies volumes >40MB for EC
|
||||
3. **Task Assignment**: Workers receive and execute EC tasks
|
||||
4. **Shard Distribution**: 14 shards distributed across 6 volume servers
|
||||
5. **No Data Loss**: All files remain accessible during and after EC
|
||||
6. **Performance**: EC tasks complete within estimated timeframes
|
||||
|
||||
### Sample Monitor Output:
|
||||
```bash
|
||||
# Check current status
|
||||
make monitor-status
|
||||
|
||||
# Output example:
|
||||
{
|
||||
"monitor": {
|
||||
"uptime": "15m30s",
|
||||
"master_addr": "master:9333",
|
||||
"admin_addr": "admin:9900"
|
||||
},
|
||||
"stats": {
|
||||
"VolumeCount": 12,
|
||||
"ECTasksDetected": 3,
|
||||
"WorkersActive": 3
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## 🔧 Configuration
|
||||
|
||||
### Environment Variables
|
||||
|
||||
You can customize the environment by setting variables:
|
||||
|
||||
```bash
|
||||
# High load testing
|
||||
WRITE_RATE=25 DELETE_RATE=5 make start
|
||||
|
||||
# Extended test duration
|
||||
TEST_DURATION=7200 make start # 2 hours
|
||||
```
|
||||
|
||||
### Scaling Examples
|
||||
|
||||
```bash
|
||||
# Scale workers
|
||||
make scale-workers WORKERS=6
|
||||
|
||||
# Increase load generation
|
||||
make scale-load RATE=30
|
||||
|
||||
# Combined scaling
|
||||
make scale-workers WORKERS=4
|
||||
make scale-load RATE=40
|
||||
```
|
||||
|
||||
## 🧹 Cleanup Options
|
||||
|
||||
```bash
|
||||
# Stop services only
|
||||
make stop
|
||||
|
||||
# Remove containers but keep volumes
|
||||
make down
|
||||
|
||||
# Remove data volumes only
|
||||
make clean-volumes
|
||||
|
||||
# Remove built images only
|
||||
make clean-images
|
||||
|
||||
# Complete cleanup (everything)
|
||||
make clean
|
||||
```
|
||||
|
||||
## 🐛 Troubleshooting
|
||||
|
||||
### Quick Diagnostics
|
||||
```bash
|
||||
# Run complete troubleshooting
|
||||
make troubleshoot
|
||||
|
||||
# Check specific components
|
||||
make health
|
||||
make debug
|
||||
make status
|
||||
```
|
||||
|
||||
### Common Issues
|
||||
|
||||
**Services not starting:**
|
||||
```bash
|
||||
# Check port availability
|
||||
make troubleshoot
|
||||
|
||||
# View startup logs
|
||||
make logs-master
|
||||
make logs-admin
|
||||
```
|
||||
|
||||
**No EC tasks being created:**
|
||||
```bash
|
||||
# Check volume status
|
||||
make volume-status
|
||||
|
||||
# Increase load to fill volumes faster
|
||||
make scale-load RATE=30
|
||||
|
||||
# Check admin detection
|
||||
make logs-admin
|
||||
```
|
||||
|
||||
**Workers not responding:**
|
||||
```bash
|
||||
# Check worker registration
|
||||
make admin-status
|
||||
|
||||
# View worker logs
|
||||
make logs-workers
|
||||
|
||||
# Restart workers
|
||||
make restart
|
||||
```
|
||||
|
||||
### Performance Tuning
|
||||
|
||||
**For faster testing:**
|
||||
```bash
|
||||
make dev-start # Higher default load
|
||||
make scale-load RATE=50 # Very high load
|
||||
```
|
||||
|
||||
**For stress testing:**
|
||||
```bash
|
||||
make scale-workers WORKERS=8
|
||||
make scale-load RATE=100
|
||||
```
|
||||
|
||||
## 📚 Technical Details
|
||||
|
||||
### Network Architecture
|
||||
- Custom bridge network (172.20.0.0/16)
|
||||
- Service discovery via container names
|
||||
- Health checks for all services
|
||||
|
||||
### Storage Layout
|
||||
- Each volume server: max 100 volumes
|
||||
- Data centers: dc1, dc2
|
||||
- Racks: rack1, rack2, rack3
|
||||
- Volume limit: 50MB per volume
|
||||
|
||||
### EC Algorithm
|
||||
- Reed-Solomon RS(10,4)
|
||||
- 10 data shards + 4 parity shards
|
||||
- Rack-aware distribution
|
||||
- Backup server redundancy
|
||||
|
||||
### Make Integration
|
||||
- Color-coded output for better readability
|
||||
- Comprehensive help system (`make help`)
|
||||
- Parallel execution support
|
||||
- Error handling and cleanup
|
||||
- Cross-platform compatibility
|
||||
|
||||
## 🎯 Quick Reference
|
||||
|
||||
```bash
|
||||
# Essential commands
|
||||
make help # Show all available targets
|
||||
make start # Start complete environment
|
||||
make health # Check all services
|
||||
make monitor # Open dashboard
|
||||
make logs-admin # View admin activity
|
||||
make clean # Complete cleanup
|
||||
|
||||
# Monitoring
|
||||
make volume-status # Check for EC candidates
|
||||
make admin-status # Check task queue
|
||||
make monitor-status # Full cluster status
|
||||
|
||||
# Scaling & Testing
|
||||
make test-ec # Run focused EC test
|
||||
make scale-load RATE=X # Increase load
|
||||
make troubleshoot # Diagnose issues
|
||||
```
|
||||
|
||||
This environment provides a realistic testing scenario for SeaweedFS EC workers with actual data operations, comprehensive monitoring, and easy management through Make targets.
|
||||
346
docker/admin_integration/Makefile
Normal file
346
docker/admin_integration/Makefile
Normal file
@@ -0,0 +1,346 @@
|
||||
# SeaweedFS Admin Integration Test Makefile
|
||||
# Tests the admin server and worker functionality using official weed commands
|
||||
|
||||
.PHONY: help build build-and-restart restart-workers start stop restart logs clean status test admin-ui worker-logs master-logs admin-logs vacuum-test vacuum-demo vacuum-status vacuum-data vacuum-data-high vacuum-data-low vacuum-continuous vacuum-clean vacuum-help
|
||||
.DEFAULT_GOAL := help
|
||||
|
||||
COMPOSE_FILE := docker-compose-ec-test.yml
|
||||
PROJECT_NAME := admin_integration
|
||||
|
||||
build: ## Build SeaweedFS with latest changes and create Docker image
|
||||
@echo "🔨 Building SeaweedFS with latest changes..."
|
||||
@echo "1️⃣ Generating admin templates..."
|
||||
@cd ../../ && make admin-generate
|
||||
@echo "2️⃣ Building Docker image with latest changes..."
|
||||
@cd ../ && make build
|
||||
@echo "3️⃣ Copying binary for local docker-compose..."
|
||||
@cp ../weed ./weed-local
|
||||
@echo "✅ Build complete! Updated image: chrislusf/seaweedfs:local"
|
||||
@echo "💡 Run 'make restart' to apply changes to running services"
|
||||
|
||||
build-and-restart: build ## Build with latest changes and restart services
|
||||
@echo "🔄 Recreating services with new image..."
|
||||
@echo "1️⃣ Recreating admin server with new image..."
|
||||
@docker-compose -f $(COMPOSE_FILE) up -d admin
|
||||
@sleep 5
|
||||
@echo "2️⃣ Recreating workers to reconnect..."
|
||||
@docker-compose -f $(COMPOSE_FILE) up -d worker1 worker2 worker3
|
||||
@echo "✅ All services recreated with latest changes!"
|
||||
@echo "🌐 Admin UI: http://localhost:23646/"
|
||||
@echo "💡 Workers will reconnect to the new admin server"
|
||||
|
||||
restart-workers: ## Restart all workers to reconnect to admin server
|
||||
@echo "🔄 Restarting workers to reconnect to admin server..."
|
||||
@docker-compose -f $(COMPOSE_FILE) restart worker1 worker2 worker3
|
||||
@echo "✅ Workers restarted and will reconnect to admin server"
|
||||
|
||||
help: ## Show this help message
|
||||
@echo "SeaweedFS Admin Integration Test"
|
||||
@echo "================================"
|
||||
@echo "Tests admin server task distribution to workers using official weed commands"
|
||||
@echo ""
|
||||
@echo "🏗️ Cluster Management:"
|
||||
@grep -E '^(start|stop|restart|clean|status|build):.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf " %-18s %s\n", $$1, $$2}'
|
||||
@echo ""
|
||||
@echo "🧪 Testing:"
|
||||
@grep -E '^(test|demo|validate|quick-test):.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf " %-18s %s\n", $$1, $$2}'
|
||||
@echo ""
|
||||
@echo "🗑️ Vacuum Testing:"
|
||||
@grep -E '^vacuum-.*:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf " %-18s %s\n", $$1, $$2}'
|
||||
@echo ""
|
||||
@echo "📜 Monitoring:"
|
||||
@grep -E '^(logs|admin-logs|worker-logs|master-logs|admin-ui):.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf " %-18s %s\n", $$1, $$2}'
|
||||
@echo ""
|
||||
@echo "🚀 Quick Start:"
|
||||
@echo " make start # Start cluster"
|
||||
@echo " make vacuum-test # Test vacuum tasks"
|
||||
@echo " make vacuum-help # Vacuum testing guide"
|
||||
@echo ""
|
||||
@echo "💡 For detailed vacuum testing: make vacuum-help"
|
||||
|
||||
start: ## Start the complete SeaweedFS cluster with admin and workers
|
||||
@echo "🚀 Starting SeaweedFS cluster with admin and workers..."
|
||||
@docker-compose -f $(COMPOSE_FILE) up -d
|
||||
@echo "✅ Cluster started!"
|
||||
@echo ""
|
||||
@echo "📊 Access points:"
|
||||
@echo " • Admin UI: http://localhost:23646/"
|
||||
@echo " • Master UI: http://localhost:9333/"
|
||||
@echo " • Filer: http://localhost:8888/"
|
||||
@echo ""
|
||||
@echo "📈 Services starting up..."
|
||||
@echo " • Master server: ✓"
|
||||
@echo " • Volume servers: Starting (6 servers)..."
|
||||
@echo " • Filer: Starting..."
|
||||
@echo " • Admin server: Starting..."
|
||||
@echo " • Workers: Starting (3 workers)..."
|
||||
@echo ""
|
||||
@echo "⏳ Use 'make status' to check startup progress"
|
||||
@echo "💡 Use 'make logs' to watch the startup process"
|
||||
|
||||
start-staged: ## Start services in proper order with delays
|
||||
@echo "🚀 Starting SeaweedFS cluster in stages..."
|
||||
@echo ""
|
||||
@echo "Stage 1: Starting Master server..."
|
||||
@docker-compose -f $(COMPOSE_FILE) up -d master
|
||||
@sleep 10
|
||||
@echo ""
|
||||
@echo "Stage 2: Starting Volume servers..."
|
||||
@docker-compose -f $(COMPOSE_FILE) up -d volume1 volume2 volume3 volume4 volume5 volume6
|
||||
@sleep 15
|
||||
@echo ""
|
||||
@echo "Stage 3: Starting Filer..."
|
||||
@docker-compose -f $(COMPOSE_FILE) up -d filer
|
||||
@sleep 10
|
||||
@echo ""
|
||||
@echo "Stage 4: Starting Admin server..."
|
||||
@docker-compose -f $(COMPOSE_FILE) up -d admin
|
||||
@sleep 15
|
||||
@echo ""
|
||||
@echo "Stage 5: Starting Workers..."
|
||||
@docker-compose -f $(COMPOSE_FILE) up -d worker1 worker2 worker3
|
||||
@sleep 10
|
||||
@echo ""
|
||||
@echo "Stage 6: Starting Load generator and Monitor..."
|
||||
@docker-compose -f $(COMPOSE_FILE) up -d load_generator monitor
|
||||
@echo ""
|
||||
@echo "✅ All services started!"
|
||||
@echo ""
|
||||
@echo "📊 Access points:"
|
||||
@echo " • Admin UI: http://localhost:23646/"
|
||||
@echo " • Master UI: http://localhost:9333/"
|
||||
@echo " • Filer: http://localhost:8888/"
|
||||
@echo ""
|
||||
@echo "⏳ Services are initializing... Use 'make status' to check progress"
|
||||
|
||||
stop: ## Stop all services
|
||||
@echo "🛑 Stopping SeaweedFS cluster..."
|
||||
@docker-compose -f $(COMPOSE_FILE) down
|
||||
@echo "✅ Cluster stopped"
|
||||
|
||||
restart: stop start ## Restart the entire cluster
|
||||
|
||||
clean: ## Stop and remove all containers, networks, and volumes
|
||||
@echo "🧹 Cleaning up SeaweedFS test environment..."
|
||||
@docker-compose -f $(COMPOSE_FILE) down -v --remove-orphans
|
||||
@docker system prune -f
|
||||
@rm -rf data/
|
||||
@echo "✅ Environment cleaned"
|
||||
|
||||
status: ## Check the status of all services
|
||||
@echo "📊 SeaweedFS Cluster Status"
|
||||
@echo "=========================="
|
||||
@docker-compose -f $(COMPOSE_FILE) ps
|
||||
@echo ""
|
||||
@echo "📋 Service Health:"
|
||||
@echo "Master:"
|
||||
@curl -s http://localhost:9333/cluster/status | jq '.IsLeader' 2>/dev/null || echo " ❌ Master not ready"
|
||||
@echo "Admin:"
|
||||
@curl -s http://localhost:23646/ | grep -q "Admin" && echo " ✅ Admin ready" || echo " ❌ Admin not ready"
|
||||
|
||||
logs: ## Show logs from all services
|
||||
@echo "📜 Following logs from all services..."
|
||||
@echo "💡 Press Ctrl+C to stop following logs"
|
||||
@docker-compose -f $(COMPOSE_FILE) logs -f
|
||||
|
||||
admin-logs: ## Show logs from admin server only
|
||||
@echo "📜 Admin server logs:"
|
||||
@docker-compose -f $(COMPOSE_FILE) logs -f admin
|
||||
|
||||
worker-logs: ## Show logs from all workers
|
||||
@echo "📜 Worker logs:"
|
||||
@docker-compose -f $(COMPOSE_FILE) logs -f worker1 worker2 worker3
|
||||
|
||||
master-logs: ## Show logs from master server
|
||||
@echo "📜 Master server logs:"
|
||||
@docker-compose -f $(COMPOSE_FILE) logs -f master
|
||||
|
||||
admin-ui: ## Open admin UI in browser (macOS)
|
||||
@echo "🌐 Opening admin UI in browser..."
|
||||
@open http://localhost:23646/ || echo "💡 Manually open: http://localhost:23646/"
|
||||
|
||||
test: ## Run integration test to verify task assignment and completion
|
||||
@echo "🧪 Running Admin-Worker Integration Test"
|
||||
@echo "========================================"
|
||||
@echo ""
|
||||
@echo "1️⃣ Checking cluster health..."
|
||||
@sleep 5
|
||||
@curl -s http://localhost:9333/cluster/status | jq '.IsLeader' > /dev/null && echo "✅ Master healthy" || echo "❌ Master not ready"
|
||||
@curl -s http://localhost:23646/ | grep -q "Admin" && echo "✅ Admin healthy" || echo "❌ Admin not ready"
|
||||
@echo ""
|
||||
@echo "2️⃣ Checking worker registration..."
|
||||
@sleep 10
|
||||
@echo "💡 Check admin UI for connected workers: http://localhost:23646/"
|
||||
@echo ""
|
||||
@echo "3️⃣ Generating load to trigger EC tasks..."
|
||||
@echo "📝 Creating test files to fill volumes..."
|
||||
@echo "Creating large files with random data to trigger EC (targeting ~60MB total to exceed 50MB limit)..."
|
||||
@for i in {1..12}; do \
|
||||
echo "Creating 5MB random file $$i..."; \
|
||||
docker run --rm --network admin_integration_seaweed_net -v /tmp:/tmp --entrypoint sh chrislusf/seaweedfs:local -c "dd if=/dev/urandom of=/tmp/largefile$$i.dat bs=1M count=5 2>/dev/null && weed upload -master=master:9333 /tmp/largefile$$i.dat && rm /tmp/largefile$$i.dat"; \
|
||||
sleep 3; \
|
||||
done
|
||||
@echo ""
|
||||
@echo "4️⃣ Waiting for volumes to process large files and reach 50MB limit..."
|
||||
@echo "This may take a few minutes as we're uploading 60MB of data..."
|
||||
@sleep 60
|
||||
@echo ""
|
||||
@echo "5️⃣ Checking for EC task creation and assignment..."
|
||||
@echo "💡 Monitor the admin UI to see:"
|
||||
@echo " • Tasks being created for volumes needing EC"
|
||||
@echo " • Workers picking up tasks"
|
||||
@echo " • Task progress (pending → running → completed)"
|
||||
@echo " • EC shards being distributed"
|
||||
@echo ""
|
||||
@echo "✅ Integration test setup complete!"
|
||||
@echo "📊 Monitor progress at: http://localhost:23646/"
|
||||
|
||||
quick-test: ## Quick verification that core services are running
|
||||
@echo "⚡ Quick Health Check"
|
||||
@echo "===================="
|
||||
@echo "Master: $$(curl -s http://localhost:9333/cluster/status | jq -r '.IsLeader // "not ready"')"
|
||||
@echo "Admin: $$(curl -s http://localhost:23646/ | grep -q "Admin" && echo "ready" || echo "not ready")"
|
||||
@echo "Workers: $$(docker-compose -f $(COMPOSE_FILE) ps worker1 worker2 worker3 | grep -c Up) running"
|
||||
|
||||
validate: ## Validate integration test configuration
|
||||
@echo "🔍 Validating Integration Test Configuration"
|
||||
@echo "==========================================="
|
||||
@chmod +x test-integration.sh
|
||||
@./test-integration.sh
|
||||
|
||||
demo: start ## Start cluster and run demonstration
|
||||
@echo "🎭 SeaweedFS Admin-Worker Demo"
|
||||
@echo "============================="
|
||||
@echo ""
|
||||
@echo "⏳ Waiting for services to start..."
|
||||
@sleep 45
|
||||
@echo ""
|
||||
@echo "🎯 Demo Overview:"
|
||||
@echo " • 1 Master server (coordinates cluster)"
|
||||
@echo " • 6 Volume servers (50MB volume limit)"
|
||||
@echo " • 1 Admin server (task management)"
|
||||
@echo " • 3 Workers (execute EC tasks)"
|
||||
@echo " • Load generator (creates files continuously)"
|
||||
@echo ""
|
||||
@echo "📊 Watch the process:"
|
||||
@echo " 1. Visit: http://localhost:23646/"
|
||||
@echo " 2. Observe workers connecting"
|
||||
@echo " 3. Watch tasks being created and assigned"
|
||||
@echo " 4. See tasks progress from pending → completed"
|
||||
@echo ""
|
||||
@echo "🔄 The demo will:"
|
||||
@echo " • Fill volumes to 50MB limit"
|
||||
@echo " • Admin detects volumes needing EC"
|
||||
@echo " • Workers receive and execute EC tasks"
|
||||
@echo " • Tasks complete with shard distribution"
|
||||
@echo ""
|
||||
@echo "💡 Use 'make worker-logs' to see worker activity"
|
||||
@echo "💡 Use 'make admin-logs' to see admin task management"
|
||||
|
||||
# Vacuum Testing Targets
|
||||
vacuum-test: ## Create test data with garbage and verify vacuum detection
|
||||
@echo "🧪 SeaweedFS Vacuum Task Testing"
|
||||
@echo "================================"
|
||||
@echo ""
|
||||
@echo "1️⃣ Checking cluster health..."
|
||||
@curl -s http://localhost:9333/cluster/status | jq '.IsLeader' > /dev/null && echo "✅ Master ready" || (echo "❌ Master not ready. Run 'make start' first." && exit 1)
|
||||
@curl -s http://localhost:23646/ | grep -q "Admin" && echo "✅ Admin ready" || (echo "❌ Admin not ready. Run 'make start' first." && exit 1)
|
||||
@echo ""
|
||||
@echo "2️⃣ Creating test data with garbage..."
|
||||
@docker-compose -f $(COMPOSE_FILE) exec vacuum-tester go run create_vacuum_test_data.go -files=25 -delete=0.5 -size=200
|
||||
@echo ""
|
||||
@echo "3️⃣ Configuration Instructions:"
|
||||
@echo " Visit: http://localhost:23646/maintenance/config/vacuum"
|
||||
@echo " Set for testing:"
|
||||
@echo " • Enable Vacuum Tasks: ✅ Checked"
|
||||
@echo " • Garbage Threshold: 0.20 (20%)"
|
||||
@echo " • Scan Interval: [30] [Seconds]"
|
||||
@echo " • Min Volume Age: [0] [Minutes]"
|
||||
@echo " • Max Concurrent: 2"
|
||||
@echo ""
|
||||
@echo "4️⃣ Monitor vacuum tasks at: http://localhost:23646/maintenance"
|
||||
@echo ""
|
||||
@echo "💡 Use 'make vacuum-status' to check volume garbage ratios"
|
||||
|
||||
vacuum-demo: ## Run automated vacuum testing demonstration
|
||||
@echo "🎭 Vacuum Task Demo"
|
||||
@echo "=================="
|
||||
@echo ""
|
||||
@echo "⚠️ This demo requires user interaction for configuration"
|
||||
@echo "💡 Make sure cluster is running with 'make start'"
|
||||
@echo ""
|
||||
@docker-compose -f $(COMPOSE_FILE) exec vacuum-tester sh -c "chmod +x demo_vacuum_testing.sh && ./demo_vacuum_testing.sh"
|
||||
|
||||
vacuum-status: ## Check current volume status and garbage ratios
|
||||
@echo "📊 Current Volume Status"
|
||||
@echo "======================="
|
||||
@docker-compose -f $(COMPOSE_FILE) exec vacuum-tester sh -c "chmod +x check_volumes.sh && ./check_volumes.sh"
|
||||
|
||||
vacuum-data: ## Create test data with configurable parameters
|
||||
@echo "📁 Creating vacuum test data..."
|
||||
@echo "Usage: make vacuum-data [FILES=20] [DELETE=0.4] [SIZE=100]"
|
||||
@echo ""
|
||||
@docker-compose -f $(COMPOSE_FILE) exec vacuum-tester go run create_vacuum_test_data.go \
|
||||
-files=$${FILES:-20} \
|
||||
-delete=$${DELETE:-0.4} \
|
||||
-size=$${SIZE:-100}
|
||||
|
||||
vacuum-data-high: ## Create high garbage ratio test data (should trigger vacuum)
|
||||
@echo "📁 Creating high garbage test data (70% garbage)..."
|
||||
@docker-compose -f $(COMPOSE_FILE) exec vacuum-tester go run create_vacuum_test_data.go -files=30 -delete=0.7 -size=150
|
||||
|
||||
vacuum-data-low: ## Create low garbage ratio test data (should NOT trigger vacuum)
|
||||
@echo "📁 Creating low garbage test data (15% garbage)..."
|
||||
@docker-compose -f $(COMPOSE_FILE) exec vacuum-tester go run create_vacuum_test_data.go -files=30 -delete=0.15 -size=150
|
||||
|
||||
vacuum-continuous: ## Generate garbage continuously for testing
|
||||
@echo "🔄 Generating continuous garbage for vacuum testing..."
|
||||
@echo "Creating 5 rounds of test data with 30-second intervals..."
|
||||
@for i in {1..5}; do \
|
||||
echo "Round $$i: Creating garbage..."; \
|
||||
docker-compose -f $(COMPOSE_FILE) exec vacuum-tester go run create_vacuum_test_data.go -files=10 -delete=0.6 -size=100; \
|
||||
echo "Waiting 30 seconds..."; \
|
||||
sleep 30; \
|
||||
done
|
||||
@echo "✅ Continuous test complete. Check vacuum task activity!"
|
||||
|
||||
vacuum-clean: ## Clean up vacuum test data (removes all volumes!)
|
||||
@echo "🧹 Cleaning up vacuum test data..."
|
||||
@echo "⚠️ WARNING: This will delete ALL volumes!"
|
||||
@read -p "Are you sure? (y/N): " confirm && [ "$$confirm" = "y" ] || exit 1
|
||||
@echo "Stopping cluster..."
|
||||
@docker-compose -f $(COMPOSE_FILE) down
|
||||
@echo "Removing volume data..."
|
||||
@rm -rf data/volume*/
|
||||
@echo "Restarting cluster..."
|
||||
@docker-compose -f $(COMPOSE_FILE) up -d
|
||||
@echo "✅ Clean up complete. Fresh volumes ready for testing."
|
||||
|
||||
vacuum-help: ## Show vacuum testing help and examples
|
||||
@echo "🧪 Vacuum Testing Commands (Docker-based)"
|
||||
@echo "=========================================="
|
||||
@echo ""
|
||||
@echo "Quick Start:"
|
||||
@echo " make start # Start SeaweedFS cluster with vacuum-tester"
|
||||
@echo " make vacuum-test # Create test data and instructions"
|
||||
@echo " make vacuum-status # Check volume status"
|
||||
@echo ""
|
||||
@echo "Data Generation:"
|
||||
@echo " make vacuum-data-high # High garbage (should trigger)"
|
||||
@echo " make vacuum-data-low # Low garbage (should NOT trigger)"
|
||||
@echo " make vacuum-continuous # Continuous garbage generation"
|
||||
@echo ""
|
||||
@echo "Monitoring:"
|
||||
@echo " make vacuum-status # Quick volume status check"
|
||||
@echo " make vacuum-demo # Full guided demonstration"
|
||||
@echo ""
|
||||
@echo "Configuration:"
|
||||
@echo " Visit: http://localhost:23646/maintenance/config/vacuum"
|
||||
@echo " Monitor: http://localhost:23646/maintenance"
|
||||
@echo ""
|
||||
@echo "Custom Parameters:"
|
||||
@echo " make vacuum-data FILES=50 DELETE=0.8 SIZE=200"
|
||||
@echo ""
|
||||
@echo "💡 All commands now run inside Docker containers"
|
||||
@echo "Documentation:"
|
||||
@echo " See: VACUUM_TEST_README.md for complete guide"
|
||||
32
docker/admin_integration/check_volumes.sh
Executable file
32
docker/admin_integration/check_volumes.sh
Executable file
@@ -0,0 +1,32 @@
|
||||
#!/bin/sh
|
||||
|
||||
echo "📊 Quick Volume Status Check"
|
||||
echo "============================"
|
||||
echo ""
|
||||
|
||||
# Check if master is running
|
||||
MASTER_URL="${MASTER_HOST:-master:9333}"
|
||||
if ! curl -s http://$MASTER_URL/cluster/status > /dev/null; then
|
||||
echo "❌ Master server not available at $MASTER_URL"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "🔍 Fetching volume status from master..."
|
||||
curl -s "http://$MASTER_URL/vol/status" | jq -r '
|
||||
if .Volumes and .Volumes.DataCenters then
|
||||
.Volumes.DataCenters | to_entries[] | .value | to_entries[] | .value | to_entries[] | .value | if . then .[] else empty end |
|
||||
"Volume \(.Id):
|
||||
Size: \(.Size | if . < 1024 then "\(.) B" elif . < 1048576 then "\(. / 1024 | floor) KB" elif . < 1073741824 then "\(. / 1048576 * 100 | floor / 100) MB" else "\(. / 1073741824 * 100 | floor / 100) GB" end)
|
||||
Files: \(.FileCount) active, \(.DeleteCount) deleted
|
||||
Garbage: \(.DeletedByteCount | if . < 1024 then "\(.) B" elif . < 1048576 then "\(. / 1024 | floor) KB" elif . < 1073741824 then "\(. / 1048576 * 100 | floor / 100) MB" else "\(. / 1073741824 * 100 | floor / 100) GB" end) (\(if .Size > 0 then (.DeletedByteCount / .Size * 100 | floor) else 0 end)%)
|
||||
Status: \(if (.DeletedByteCount / .Size * 100) > 30 then "🎯 NEEDS VACUUM" else "✅ OK" end)
|
||||
"
|
||||
else
|
||||
"No volumes found"
|
||||
end'
|
||||
|
||||
echo ""
|
||||
echo "💡 Legend:"
|
||||
echo " 🎯 NEEDS VACUUM: >30% garbage ratio"
|
||||
echo " ✅ OK: <30% garbage ratio"
|
||||
echo ""
|
||||
280
docker/admin_integration/create_vacuum_test_data.go
Normal file
280
docker/admin_integration/create_vacuum_test_data.go
Normal file
@@ -0,0 +1,280 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/rand"
|
||||
"encoding/json"
|
||||
"flag"
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"net/http"
|
||||
"time"
|
||||
)
|
||||
|
||||
var (
|
||||
master = flag.String("master", "master:9333", "SeaweedFS master server address")
|
||||
fileCount = flag.Int("files", 20, "Number of files to create")
|
||||
deleteRatio = flag.Float64("delete", 0.4, "Ratio of files to delete (0.0-1.0)")
|
||||
fileSizeKB = flag.Int("size", 100, "Size of each file in KB")
|
||||
)
|
||||
|
||||
type AssignResult struct {
|
||||
Fid string `json:"fid"`
|
||||
Url string `json:"url"`
|
||||
PublicUrl string `json:"publicUrl"`
|
||||
Count int `json:"count"`
|
||||
Error string `json:"error"`
|
||||
}
|
||||
|
||||
func main() {
|
||||
flag.Parse()
|
||||
|
||||
fmt.Println("🧪 Creating fake data for vacuum task testing...")
|
||||
fmt.Printf("Master: %s\n", *master)
|
||||
fmt.Printf("Files to create: %d\n", *fileCount)
|
||||
fmt.Printf("Delete ratio: %.1f%%\n", *deleteRatio*100)
|
||||
fmt.Printf("File size: %d KB\n", *fileSizeKB)
|
||||
fmt.Println()
|
||||
|
||||
if *fileCount == 0 {
|
||||
// Just check volume status
|
||||
fmt.Println("📊 Checking volume status...")
|
||||
checkVolumeStatus()
|
||||
return
|
||||
}
|
||||
|
||||
// Step 1: Create test files
|
||||
fmt.Println("📁 Step 1: Creating test files...")
|
||||
fids := createTestFiles()
|
||||
|
||||
// Step 2: Delete some files to create garbage
|
||||
fmt.Println("🗑️ Step 2: Deleting files to create garbage...")
|
||||
deleteFiles(fids)
|
||||
|
||||
// Step 3: Check volume status
|
||||
fmt.Println("📊 Step 3: Checking volume status...")
|
||||
checkVolumeStatus()
|
||||
|
||||
// Step 4: Configure vacuum for testing
|
||||
fmt.Println("⚙️ Step 4: Instructions for testing...")
|
||||
printTestingInstructions()
|
||||
}
|
||||
|
||||
func createTestFiles() []string {
|
||||
var fids []string
|
||||
|
||||
for i := 0; i < *fileCount; i++ {
|
||||
// Generate random file content
|
||||
fileData := make([]byte, *fileSizeKB*1024)
|
||||
rand.Read(fileData)
|
||||
|
||||
// Get file ID assignment
|
||||
assign, err := assignFileId()
|
||||
if err != nil {
|
||||
log.Printf("Failed to assign file ID for file %d: %v", i, err)
|
||||
continue
|
||||
}
|
||||
|
||||
// Upload file
|
||||
err = uploadFile(assign, fileData, fmt.Sprintf("test_file_%d.dat", i))
|
||||
if err != nil {
|
||||
log.Printf("Failed to upload file %d: %v", i, err)
|
||||
continue
|
||||
}
|
||||
|
||||
fids = append(fids, assign.Fid)
|
||||
|
||||
if (i+1)%5 == 0 {
|
||||
fmt.Printf(" Created %d/%d files...\n", i+1, *fileCount)
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Printf("✅ Created %d files successfully\n\n", len(fids))
|
||||
return fids
|
||||
}
|
||||
|
||||
func deleteFiles(fids []string) {
|
||||
deleteCount := int(float64(len(fids)) * *deleteRatio)
|
||||
|
||||
for i := 0; i < deleteCount; i++ {
|
||||
err := deleteFile(fids[i])
|
||||
if err != nil {
|
||||
log.Printf("Failed to delete file %s: %v", fids[i], err)
|
||||
continue
|
||||
}
|
||||
|
||||
if (i+1)%5 == 0 {
|
||||
fmt.Printf(" Deleted %d/%d files...\n", i+1, deleteCount)
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Printf("✅ Deleted %d files (%.1f%% of total)\n\n", deleteCount, *deleteRatio*100)
|
||||
}
|
||||
|
||||
func assignFileId() (*AssignResult, error) {
|
||||
resp, err := http.Get(fmt.Sprintf("http://%s/dir/assign", *master))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
var result AssignResult
|
||||
err = json.NewDecoder(resp.Body).Decode(&result)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if result.Error != "" {
|
||||
return nil, fmt.Errorf("assignment error: %s", result.Error)
|
||||
}
|
||||
|
||||
return &result, nil
|
||||
}
|
||||
|
||||
func uploadFile(assign *AssignResult, data []byte, filename string) error {
|
||||
url := fmt.Sprintf("http://%s/%s", assign.Url, assign.Fid)
|
||||
|
||||
body := &bytes.Buffer{}
|
||||
body.Write(data)
|
||||
|
||||
req, err := http.NewRequest("POST", url, body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
req.Header.Set("Content-Type", "application/octet-stream")
|
||||
if filename != "" {
|
||||
req.Header.Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))
|
||||
}
|
||||
|
||||
client := &http.Client{Timeout: 30 * time.Second}
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusCreated && resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return fmt.Errorf("upload failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func deleteFile(fid string) error {
|
||||
url := fmt.Sprintf("http://%s/%s", *master, fid)
|
||||
|
||||
req, err := http.NewRequest("DELETE", url, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
client := &http.Client{Timeout: 10 * time.Second}
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func checkVolumeStatus() {
|
||||
// Get volume list from master
|
||||
resp, err := http.Get(fmt.Sprintf("http://%s/vol/status", *master))
|
||||
if err != nil {
|
||||
log.Printf("Failed to get volume status: %v", err)
|
||||
return
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
var volumes map[string]interface{}
|
||||
err = json.NewDecoder(resp.Body).Decode(&volumes)
|
||||
if err != nil {
|
||||
log.Printf("Failed to decode volume status: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
fmt.Println("📊 Volume Status Summary:")
|
||||
|
||||
if vols, ok := volumes["Volumes"].([]interface{}); ok {
|
||||
for _, vol := range vols {
|
||||
if v, ok := vol.(map[string]interface{}); ok {
|
||||
id := int(v["Id"].(float64))
|
||||
size := uint64(v["Size"].(float64))
|
||||
fileCount := int(v["FileCount"].(float64))
|
||||
deleteCount := int(v["DeleteCount"].(float64))
|
||||
deletedBytes := uint64(v["DeletedByteCount"].(float64))
|
||||
|
||||
garbageRatio := 0.0
|
||||
if size > 0 {
|
||||
garbageRatio = float64(deletedBytes) / float64(size) * 100
|
||||
}
|
||||
|
||||
fmt.Printf(" Volume %d:\n", id)
|
||||
fmt.Printf(" Size: %s\n", formatBytes(size))
|
||||
fmt.Printf(" Files: %d (active), %d (deleted)\n", fileCount, deleteCount)
|
||||
fmt.Printf(" Garbage: %s (%.1f%%)\n", formatBytes(deletedBytes), garbageRatio)
|
||||
|
||||
if garbageRatio > 30 {
|
||||
fmt.Printf(" 🎯 This volume should trigger vacuum (>30%% garbage)\n")
|
||||
}
|
||||
fmt.Println()
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func formatBytes(bytes uint64) string {
|
||||
if bytes < 1024 {
|
||||
return fmt.Sprintf("%d B", bytes)
|
||||
} else if bytes < 1024*1024 {
|
||||
return fmt.Sprintf("%.1f KB", float64(bytes)/1024)
|
||||
} else if bytes < 1024*1024*1024 {
|
||||
return fmt.Sprintf("%.1f MB", float64(bytes)/(1024*1024))
|
||||
} else {
|
||||
return fmt.Sprintf("%.1f GB", float64(bytes)/(1024*1024*1024))
|
||||
}
|
||||
}
|
||||
|
||||
func printTestingInstructions() {
|
||||
fmt.Println("🧪 Testing Instructions:")
|
||||
fmt.Println()
|
||||
fmt.Println("1. Configure Vacuum for Testing:")
|
||||
fmt.Println(" Visit: http://localhost:23646/maintenance/config/vacuum")
|
||||
fmt.Println(" Set:")
|
||||
fmt.Printf(" - Garbage Percentage Threshold: 20 (20%% - lower than default 30)\n")
|
||||
fmt.Printf(" - Scan Interval: [30] [Seconds] (faster than default)\n")
|
||||
fmt.Printf(" - Min Volume Age: [0] [Minutes] (no age requirement)\n")
|
||||
fmt.Printf(" - Max Concurrent: 2\n")
|
||||
fmt.Printf(" - Min Interval: 1m (faster repeat)\n")
|
||||
fmt.Println()
|
||||
|
||||
fmt.Println("2. Monitor Vacuum Tasks:")
|
||||
fmt.Println(" Visit: http://localhost:23646/maintenance")
|
||||
fmt.Println(" Watch for vacuum tasks to appear in the queue")
|
||||
fmt.Println()
|
||||
|
||||
fmt.Println("3. Manual Vacuum (Optional):")
|
||||
fmt.Println(" curl -X POST 'http://localhost:9333/vol/vacuum?garbageThreshold=0.20'")
|
||||
fmt.Println(" (Note: Master API still uses 0.0-1.0 decimal format)")
|
||||
fmt.Println()
|
||||
|
||||
fmt.Println("4. Check Logs:")
|
||||
fmt.Println(" Look for messages like:")
|
||||
fmt.Println(" - 'Vacuum detector found X volumes needing vacuum'")
|
||||
fmt.Println(" - 'Applied vacuum configuration'")
|
||||
fmt.Println(" - 'Worker executing task: vacuum'")
|
||||
fmt.Println()
|
||||
|
||||
fmt.Println("5. Verify Results:")
|
||||
fmt.Println(" Re-run this script with -files=0 to check volume status")
|
||||
fmt.Println(" Garbage ratios should decrease after vacuum operations")
|
||||
fmt.Println()
|
||||
|
||||
fmt.Printf("🚀 Quick test command:\n")
|
||||
fmt.Printf(" go run create_vacuum_test_data.go -files=0\n")
|
||||
fmt.Println()
|
||||
}
|
||||
105
docker/admin_integration/demo_vacuum_testing.sh
Executable file
105
docker/admin_integration/demo_vacuum_testing.sh
Executable file
@@ -0,0 +1,105 @@
|
||||
#!/bin/sh
|
||||
|
||||
echo "🧪 SeaweedFS Vacuum Task Testing Demo"
|
||||
echo "======================================"
|
||||
echo ""
|
||||
|
||||
# Check if SeaweedFS is running
|
||||
echo "📋 Checking SeaweedFS status..."
|
||||
MASTER_URL="${MASTER_HOST:-master:9333}"
|
||||
ADMIN_URL="${ADMIN_HOST:-admin:23646}"
|
||||
|
||||
if ! curl -s http://$MASTER_URL/cluster/status > /dev/null; then
|
||||
echo "❌ SeaweedFS master not running at $MASTER_URL"
|
||||
echo " Please ensure Docker cluster is running: make start"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! curl -s http://volume1:8080/status > /dev/null; then
|
||||
echo "❌ SeaweedFS volume servers not running"
|
||||
echo " Please ensure Docker cluster is running: make start"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! curl -s http://$ADMIN_URL/ > /dev/null; then
|
||||
echo "❌ SeaweedFS admin server not running at $ADMIN_URL"
|
||||
echo " Please ensure Docker cluster is running: make start"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "✅ All SeaweedFS components are running"
|
||||
echo ""
|
||||
|
||||
# Phase 1: Create test data
|
||||
echo "📁 Phase 1: Creating test data with garbage..."
|
||||
go run create_vacuum_test_data.go -master=$MASTER_URL -files=15 -delete=0.5 -size=150
|
||||
echo ""
|
||||
|
||||
# Phase 2: Check initial status
|
||||
echo "📊 Phase 2: Checking initial volume status..."
|
||||
go run create_vacuum_test_data.go -master=$MASTER_URL -files=0
|
||||
echo ""
|
||||
|
||||
# Phase 3: Configure vacuum
|
||||
echo "⚙️ Phase 3: Vacuum configuration instructions..."
|
||||
echo " 1. Visit: http://localhost:23646/maintenance/config/vacuum"
|
||||
echo " 2. Set these values for testing:"
|
||||
echo " - Enable Vacuum Tasks: ✅ Checked"
|
||||
echo " - Garbage Threshold: 0.30"
|
||||
echo " - Scan Interval: [30] [Seconds]"
|
||||
echo " - Min Volume Age: [0] [Minutes]"
|
||||
echo " - Max Concurrent: 2"
|
||||
echo " 3. Click 'Save Configuration'"
|
||||
echo ""
|
||||
|
||||
read -p " Press ENTER after configuring vacuum settings..."
|
||||
echo ""
|
||||
|
||||
# Phase 4: Monitor tasks
|
||||
echo "🎯 Phase 4: Monitoring vacuum tasks..."
|
||||
echo " Visit: http://localhost:23646/maintenance"
|
||||
echo " You should see vacuum tasks appear within 30 seconds"
|
||||
echo ""
|
||||
|
||||
echo " Waiting 60 seconds for vacuum detection and execution..."
|
||||
for i in {60..1}; do
|
||||
printf "\r Countdown: %02d seconds" $i
|
||||
sleep 1
|
||||
done
|
||||
echo ""
|
||||
echo ""
|
||||
|
||||
# Phase 5: Check results
|
||||
echo "📈 Phase 5: Checking results after vacuum..."
|
||||
go run create_vacuum_test_data.go -master=$MASTER_URL -files=0
|
||||
echo ""
|
||||
|
||||
# Phase 6: Create more garbage for continuous testing
|
||||
echo "🔄 Phase 6: Creating additional garbage for continuous testing..."
|
||||
echo " Running 3 rounds of garbage creation..."
|
||||
|
||||
for round in {1..3}; do
|
||||
echo " Round $round: Creating garbage..."
|
||||
go run create_vacuum_test_data.go -master=$MASTER_URL -files=8 -delete=0.6 -size=100
|
||||
echo " Waiting 30 seconds before next round..."
|
||||
sleep 30
|
||||
done
|
||||
|
||||
echo ""
|
||||
echo "📊 Final volume status:"
|
||||
go run create_vacuum_test_data.go -master=$MASTER_URL -files=0
|
||||
echo ""
|
||||
|
||||
echo "🎉 Demo Complete!"
|
||||
echo ""
|
||||
echo "🔍 Things to check:"
|
||||
echo " 1. Maintenance Queue: http://localhost:23646/maintenance"
|
||||
echo " 2. Volume Status: http://localhost:9333/vol/status"
|
||||
echo " 3. Admin Dashboard: http://localhost:23646"
|
||||
echo ""
|
||||
echo "💡 Next Steps:"
|
||||
echo " - Try different garbage thresholds (0.10, 0.50, 0.80)"
|
||||
echo " - Adjust scan intervals (10s, 1m, 5m)"
|
||||
echo " - Monitor logs for vacuum operations"
|
||||
echo " - Test with multiple volumes"
|
||||
echo ""
|
||||
240
docker/admin_integration/docker-compose-ec-test.yml
Normal file
240
docker/admin_integration/docker-compose-ec-test.yml
Normal file
@@ -0,0 +1,240 @@
|
||||
name: admin_integration
|
||||
|
||||
networks:
|
||||
seaweed_net:
|
||||
driver: bridge
|
||||
|
||||
services:
|
||||
master:
|
||||
image: chrislusf/seaweedfs:local
|
||||
ports:
|
||||
- "9333:9333"
|
||||
- "19333:19333"
|
||||
command: "master -ip=master -mdir=/data -volumeSizeLimitMB=50"
|
||||
environment:
|
||||
- WEED_MASTER_VOLUME_GROWTH_COPY_1=1
|
||||
- WEED_MASTER_VOLUME_GROWTH_COPY_2=2
|
||||
- WEED_MASTER_VOLUME_GROWTH_COPY_OTHER=1
|
||||
volumes:
|
||||
- ./data/master:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
|
||||
volume1:
|
||||
image: chrislusf/seaweedfs:local
|
||||
ports:
|
||||
- "8080:8080"
|
||||
- "18080:18080"
|
||||
command: "volume -mserver=master:9333 -ip=volume1 -dir=/data -max=10"
|
||||
depends_on:
|
||||
- master
|
||||
volumes:
|
||||
- ./data/volume1:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
|
||||
volume2:
|
||||
image: chrislusf/seaweedfs:local
|
||||
ports:
|
||||
- "8081:8080"
|
||||
- "18081:18080"
|
||||
command: "volume -mserver=master:9333 -ip=volume2 -dir=/data -max=10"
|
||||
depends_on:
|
||||
- master
|
||||
volumes:
|
||||
- ./data/volume2:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
|
||||
volume3:
|
||||
image: chrislusf/seaweedfs:local
|
||||
ports:
|
||||
- "8082:8080"
|
||||
- "18082:18080"
|
||||
command: "volume -mserver=master:9333 -ip=volume3 -dir=/data -max=10"
|
||||
depends_on:
|
||||
- master
|
||||
volumes:
|
||||
- ./data/volume3:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
|
||||
volume4:
|
||||
image: chrislusf/seaweedfs:local
|
||||
ports:
|
||||
- "8083:8080"
|
||||
- "18083:18080"
|
||||
command: "volume -mserver=master:9333 -ip=volume4 -dir=/data -max=10"
|
||||
depends_on:
|
||||
- master
|
||||
volumes:
|
||||
- ./data/volume4:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
|
||||
volume5:
|
||||
image: chrislusf/seaweedfs:local
|
||||
ports:
|
||||
- "8084:8080"
|
||||
- "18084:18080"
|
||||
command: "volume -mserver=master:9333 -ip=volume5 -dir=/data -max=10"
|
||||
depends_on:
|
||||
- master
|
||||
volumes:
|
||||
- ./data/volume5:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
|
||||
volume6:
|
||||
image: chrislusf/seaweedfs:local
|
||||
ports:
|
||||
- "8085:8080"
|
||||
- "18085:18080"
|
||||
command: "volume -mserver=master:9333 -ip=volume6 -dir=/data -max=10"
|
||||
depends_on:
|
||||
- master
|
||||
volumes:
|
||||
- ./data/volume6:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
|
||||
filer:
|
||||
image: chrislusf/seaweedfs:local
|
||||
ports:
|
||||
- "8888:8888"
|
||||
- "18888:18888"
|
||||
command: "filer -master=master:9333 -ip=filer"
|
||||
depends_on:
|
||||
- master
|
||||
volumes:
|
||||
- ./data/filer:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
|
||||
admin:
|
||||
image: chrislusf/seaweedfs:local
|
||||
ports:
|
||||
- "23646:23646" # HTTP admin interface (default port)
|
||||
- "33646:33646" # gRPC worker communication (23646 + 10000)
|
||||
command: "admin -port=23646 -masters=master:9333 -dataDir=/data"
|
||||
depends_on:
|
||||
- master
|
||||
- filer
|
||||
volumes:
|
||||
- ./data/admin:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
|
||||
worker1:
|
||||
image: chrislusf/seaweedfs:local
|
||||
command: "-v=2 worker -admin=admin:23646 -capabilities=erasure_coding,vacuum -maxConcurrent=2"
|
||||
depends_on:
|
||||
- admin
|
||||
volumes:
|
||||
- ./data/worker1:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
environment:
|
||||
- WORKER_ID=worker-1
|
||||
|
||||
worker2:
|
||||
image: chrislusf/seaweedfs:local
|
||||
command: "-v=2 worker -admin=admin:23646 -capabilities=erasure_coding,vacuum -maxConcurrent=2"
|
||||
depends_on:
|
||||
- admin
|
||||
volumes:
|
||||
- ./data/worker2:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
environment:
|
||||
- WORKER_ID=worker-2
|
||||
|
||||
worker3:
|
||||
image: chrislusf/seaweedfs:local
|
||||
command: "-v=2 worker -admin=admin:23646 -capabilities=erasure_coding,vacuum -maxConcurrent=2"
|
||||
depends_on:
|
||||
- admin
|
||||
volumes:
|
||||
- ./data/worker3:/data
|
||||
networks:
|
||||
- seaweed_net
|
||||
environment:
|
||||
- WORKER_ID=worker-3
|
||||
|
||||
load_generator:
|
||||
image: chrislusf/seaweedfs:local
|
||||
entrypoint: ["/bin/sh"]
|
||||
command: >
|
||||
-c "
|
||||
echo 'Starting load generator...';
|
||||
sleep 30;
|
||||
echo 'Generating continuous load with 50MB volume limit...';
|
||||
while true; do
|
||||
echo 'Writing test files...';
|
||||
echo 'Test file content at $(date)' | /usr/bin/weed upload -server=master:9333;
|
||||
sleep 5;
|
||||
echo 'Deleting some files...';
|
||||
/usr/bin/weed shell -master=master:9333 <<< 'fs.rm /test_file_*' || true;
|
||||
sleep 10;
|
||||
done
|
||||
"
|
||||
depends_on:
|
||||
- master
|
||||
- filer
|
||||
- admin
|
||||
networks:
|
||||
- seaweed_net
|
||||
|
||||
monitor:
|
||||
image: alpine:latest
|
||||
entrypoint: ["/bin/sh"]
|
||||
command: >
|
||||
-c "
|
||||
apk add --no-cache curl jq;
|
||||
echo 'Starting cluster monitor...';
|
||||
sleep 30;
|
||||
while true; do
|
||||
echo '=== Cluster Status $(date) ===';
|
||||
echo 'Master status:';
|
||||
curl -s http://master:9333/cluster/status | jq '.IsLeader, .Peers' || echo 'Master not ready';
|
||||
echo;
|
||||
echo 'Admin status:';
|
||||
curl -s http://admin:23646/ | grep -o 'Admin.*Interface' || echo 'Admin not ready';
|
||||
echo;
|
||||
echo 'Volume count by server:';
|
||||
curl -s http://master:9333/vol/status | jq '.Volumes | length' || echo 'Volumes not ready';
|
||||
echo;
|
||||
sleep 60;
|
||||
done
|
||||
"
|
||||
depends_on:
|
||||
- master
|
||||
- admin
|
||||
- filer
|
||||
networks:
|
||||
- seaweed_net
|
||||
|
||||
vacuum-tester:
|
||||
image: chrislusf/seaweedfs:local
|
||||
entrypoint: ["/bin/sh"]
|
||||
command: >
|
||||
-c "
|
||||
echo 'Installing dependencies for vacuum testing...';
|
||||
apk add --no-cache jq curl go bash;
|
||||
echo 'Vacuum tester ready...';
|
||||
echo 'Use: docker-compose exec vacuum-tester sh';
|
||||
echo 'Available commands: go, weed, curl, jq, bash, sh';
|
||||
sleep infinity
|
||||
"
|
||||
depends_on:
|
||||
- master
|
||||
- admin
|
||||
- filer
|
||||
volumes:
|
||||
- .:/testing
|
||||
working_dir: /testing
|
||||
networks:
|
||||
- seaweed_net
|
||||
environment:
|
||||
- MASTER_HOST=master:9333
|
||||
- ADMIN_HOST=admin:23646
|
||||
73
docker/admin_integration/test-integration.sh
Executable file
73
docker/admin_integration/test-integration.sh
Executable file
@@ -0,0 +1,73 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
|
||||
echo "🧪 Testing SeaweedFS Admin-Worker Integration"
|
||||
echo "============================================="
|
||||
|
||||
# Colors for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
cd "$(dirname "$0")"
|
||||
|
||||
echo -e "${BLUE}1. Validating docker-compose configuration...${NC}"
|
||||
if docker-compose -f docker-compose-ec-test.yml config > /dev/null; then
|
||||
echo -e "${GREEN}✅ Docker compose configuration is valid${NC}"
|
||||
else
|
||||
echo -e "${RED}❌ Docker compose configuration is invalid${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo -e "${BLUE}2. Checking if required ports are available...${NC}"
|
||||
for port in 9333 8080 8081 8082 8083 8084 8085 8888 23646; do
|
||||
if lsof -i :$port > /dev/null 2>&1; then
|
||||
echo -e "${YELLOW}⚠️ Port $port is in use${NC}"
|
||||
else
|
||||
echo -e "${GREEN}✅ Port $port is available${NC}"
|
||||
fi
|
||||
done
|
||||
|
||||
echo -e "${BLUE}3. Testing worker command syntax...${NC}"
|
||||
# Test that the worker command in docker-compose has correct syntax
|
||||
if docker-compose -f docker-compose-ec-test.yml config | grep -q "workingDir=/work"; then
|
||||
echo -e "${GREEN}✅ Worker working directory option is properly configured${NC}"
|
||||
else
|
||||
echo -e "${RED}❌ Worker working directory option is missing${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo -e "${BLUE}4. Verifying admin server configuration...${NC}"
|
||||
if docker-compose -f docker-compose-ec-test.yml config | grep -q "admin:23646"; then
|
||||
echo -e "${GREEN}✅ Admin server port configuration is correct${NC}"
|
||||
else
|
||||
echo -e "${RED}❌ Admin server port configuration is incorrect${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo -e "${BLUE}5. Checking service dependencies...${NC}"
|
||||
if docker-compose -f docker-compose-ec-test.yml config | grep -q "depends_on"; then
|
||||
echo -e "${GREEN}✅ Service dependencies are configured${NC}"
|
||||
else
|
||||
echo -e "${YELLOW}⚠️ Service dependencies may not be configured${NC}"
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo -e "${GREEN}🎉 Integration test configuration is ready!${NC}"
|
||||
echo ""
|
||||
echo -e "${BLUE}To start the integration test:${NC}"
|
||||
echo " make start # Start all services"
|
||||
echo " make health # Check service health"
|
||||
echo " make logs # View logs"
|
||||
echo " make stop # Stop all services"
|
||||
echo ""
|
||||
echo -e "${BLUE}Key features verified:${NC}"
|
||||
echo " ✅ Official SeaweedFS images are used"
|
||||
echo " ✅ Worker working directories are configured"
|
||||
echo " ✅ Admin-worker communication on correct ports"
|
||||
echo " ✅ Task-specific directories will be created"
|
||||
echo " ✅ Load generator will trigger EC tasks"
|
||||
echo " ✅ Monitor will track progress"
|
||||
Reference in New Issue
Block a user