SeaweedFS Telemetry System

A privacy-respecting telemetry system for SeaweedFS that collects cluster-level usage statistics and provides visualization through Prometheus and Grafana.

Features

  • Privacy-First Design: Uses in-memory cluster IDs (regenerated on restart) and collects no personal data
  • Prometheus Integration: Native Prometheus metrics for monitoring and alerting
  • Grafana Dashboards: Pre-built dashboards for data visualization
  • Protocol Buffers: Efficient binary data transmission for optimal performance
  • Opt-in Only: Disabled by default, requires explicit configuration
  • Docker Compose: Complete monitoring stack deployment
  • Automatic Cleanup: Configurable data retention policies

Architecture

SeaweedFS Cluster → Telemetry Client → Telemetry Server → Prometheus → Grafana
                       (protobuf)         (metrics)      (queries)

Data Transmission

The telemetry system uses Protocol Buffers exclusively for efficient binary data transmission:

  • Compact Format: 30-50% smaller than JSON
  • Fast Serialization: Better performance than text-based formats
  • Type Safety: Strong typing with generated Go structs
  • Schema Evolution: Built-in versioning support

Protobuf Schema

message TelemetryData {
  string cluster_id = 1;           // In-memory generated UUID
  string version = 2;              // SeaweedFS version
  string os = 3;                   // Operating system
  // Field 4 reserved (was features)
  // Field 5 reserved (was deployment)
  int32 volume_server_count = 6;   // Number of volume servers
  uint64 total_disk_bytes = 7;     // Total disk usage
  int32 total_volume_count = 8;    // Total volume count
  int32 filer_count = 9;           // Number of filer servers
  int32 broker_count = 10;         // Number of broker servers
  int64 timestamp = 11;            // Collection timestamp
}

Privacy Approach

  • No Personal Data: No hostnames, IP addresses, or user information
  • In-Memory IDs: Cluster IDs are generated in-memory and change on restart
  • Aggregated Data: Only cluster-level statistics, no individual file/user data
  • Opt-in Only: Telemetry is disabled by default
  • Transparent: Open source implementation, clear data collection policy
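The in-memory ID can be pictured as a random v4 UUID built from the standard library alone. This is an illustrative sketch, and the actual SeaweedFS generation code may differ:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newClusterID returns a random v4 UUID that is kept only in memory; a restart
// produces a fresh ID, so installations cannot be tracked across restarts.
// (Sketch only; the real SeaweedFS generation code may differ.)
func newClusterID() string {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		panic(err)
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

func main() {
	fmt.Println(newClusterID())
}
```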

Collected Data

Field                 Description                           Example
cluster_id            In-memory UUID (changes on restart)   a1b2c3d4-...
version               SeaweedFS version                     3.45
os                    Operating system and architecture     linux/amd64
volume_server_count   Number of volume servers              5
total_disk_bytes      Total disk usage across cluster       1073741824
total_volume_count    Total number of volumes               120
filer_count           Number of filer servers               2
broker_count          Number of broker servers              1
timestamp             When data was collected               1640995200

Quick Start

1. Deploy Telemetry Server

# Clone and start the complete monitoring stack
git clone https://github.com/seaweedfs/seaweedfs.git
cd seaweedfs
docker compose -f telemetry/docker-compose.yml up -d

# Or run the server directly
cd telemetry/server
go run . -port=8080 -dashboard=true

2. Configure SeaweedFS

# Enable telemetry in SeaweedFS master (uses default telemetry.seaweedfs.com)
weed master -telemetry=true

# Or in server mode
weed server -telemetry=true

# Or specify custom telemetry server
weed master -telemetry=true -telemetry.url=http://localhost:8080/api/collect

3. Access Dashboards

With the Docker Compose stack running, open Grafana at http://localhost:3000 (admin/admin, as set in the compose file), Prometheus at http://localhost:9090, and the telemetry server's built-in dashboard at http://localhost:8080.

Configuration

SeaweedFS Master/Server

# Enable telemetry
-telemetry=true

# Set custom telemetry server URL (optional, defaults to telemetry.seaweedfs.com)
-telemetry.url=http://your-telemetry-server:8080/api/collect

Telemetry Server

# Server configuration
-port=8080                    # Server port
-dashboard=true               # Enable built-in dashboard
-cleanup=24h                  # Cleanup interval
-max-age=720h                 # Maximum data retention (30 days)

# Example
./telemetry-server -port=8080 -dashboard=true -cleanup=24h -max-age=720h

Prometheus Metrics

The telemetry server exposes these Prometheus metrics:

Cluster Metrics

  • seaweedfs_telemetry_total_clusters: Unique clusters seen in the last 30 days
  • seaweedfs_telemetry_active_clusters: Clusters that reported in the last 7 days

Per-Cluster Metrics

  • seaweedfs_telemetry_volume_servers{cluster_id, version, os}: Volume servers per cluster
  • seaweedfs_telemetry_disk_bytes{cluster_id, version, os}: Disk usage per cluster
  • seaweedfs_telemetry_volume_count{cluster_id, version, os}: Volume count per cluster
  • seaweedfs_telemetry_filer_count{cluster_id, version, os}: Filer servers per cluster
  • seaweedfs_telemetry_broker_count{cluster_id, version, os}: Broker servers per cluster
  • seaweedfs_telemetry_cluster_info{cluster_id, version, os}: Cluster metadata

Server Metrics

  • seaweedfs_telemetry_reports_received_total: Total telemetry reports received

API Endpoints

Data Collection

# Submit telemetry data (protobuf only)
POST /api/collect
Content-Type: application/x-protobuf
[TelemetryRequest protobuf data]

Statistics (JSON for dashboard/debugging)

# Get aggregated statistics
GET /api/stats

# Get recent cluster instances
GET /api/instances?limit=100

# Get metrics over time
GET /api/metrics?days=30

Monitoring

# Prometheus metrics
GET /metrics

Docker Deployment

# docker-compose.yml
version: '3.8'
services:
  telemetry-server:
    build:
      context: ../
      dockerfile: telemetry/server/Dockerfile
    ports:
      - "8080:8080"
    command: ["-port=8080", "-dashboard=true", "-cleanup=24h"]
    
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - ./grafana-provisioning:/etc/grafana/provisioning
      - ./grafana-dashboard.json:/var/lib/grafana/dashboards/seaweedfs.json

# Deploy the stack
docker compose -f telemetry/docker-compose.yml up -d

# Scale telemetry server if needed
docker compose -f telemetry/docker-compose.yml up -d --scale telemetry-server=3
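The compose file above mounts a ./prometheus.yml that is not shown. A minimal scrape configuration for the telemetry server's /metrics endpoint might look like the following (job name and interval are illustrative; adjust to your deployment):

```yaml
# prometheus.yml (illustrative sketch)
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: seaweedfs-telemetry
    metrics_path: /metrics
    static_configs:
      - targets: ["telemetry-server:8080"]
```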

Server Only

# Build and run telemetry server (build from repo root to include all sources)
docker build -t seaweedfs-telemetry -f telemetry/server/Dockerfile .
docker run -p 8080:8080 seaweedfs-telemetry -port=8080 -dashboard=true

Development

Protocol Buffer Development

# Generate protobuf code
cd telemetry
protoc --go_out=. --go_opt=paths=source_relative proto/telemetry.proto

# The generated code is already included in the repository

Build from Source

# Build telemetry server
cd telemetry/server
go build -o telemetry-server .

# Build SeaweedFS with telemetry support
cd ../..
go build -o weed ./weed

Testing

# Test telemetry server
cd telemetry/server
go test ./...

# Test protobuf communication (requires protobuf tools)
# See telemetry client code for examples

Grafana Dashboard

The included Grafana dashboard provides:

  • Overview: Total and active clusters, version distribution
  • Resource Usage: Volume servers and disk usage over time
  • Infrastructure: Operating system distribution and server counts
  • Growth Trends: Historical growth patterns

Custom Queries

# Total active clusters
seaweedfs_telemetry_active_clusters

# Disk usage by version
sum by (version) (seaweedfs_telemetry_disk_bytes)

# Volume servers by operating system
sum by (os) (seaweedfs_telemetry_volume_servers)

# Filer servers by version
sum by (version) (seaweedfs_telemetry_filer_count)

# Broker servers across all clusters
sum(seaweedfs_telemetry_broker_count)

# Growth rate (weekly)
increase(seaweedfs_telemetry_total_clusters[7d])

Security Considerations

  • Network Security: Use HTTPS in production environments
  • Access Control: Implement authentication for Grafana and Prometheus
  • Data Retention: Configure appropriate retention policies
  • Monitoring: Monitor the telemetry infrastructure itself

Troubleshooting

Common Issues

SeaweedFS not sending data:

# Check telemetry configuration
weed master -h | grep telemetry

# Verify connectivity
curl -v http://your-telemetry-server:8080/api/collect

Server not receiving data:

# Check server logs
docker compose -f telemetry/docker-compose.yml logs telemetry-server

# Verify metrics endpoint
curl http://localhost:8080/metrics

Prometheus not scraping:

# Check Prometheus targets
curl http://localhost:9090/api/v1/targets

# Verify configuration
docker compose -f telemetry/docker-compose.yml logs prometheus

Debugging

# Enable verbose logging in SeaweedFS
weed master -v=2 -telemetry=true

# Check telemetry server metrics
curl http://localhost:8080/metrics | grep seaweedfs_telemetry

# Test data flow
curl http://localhost:8080/api/stats

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

License

This telemetry system is part of SeaweedFS and follows the same Apache 2.0 license.