seaweedfs/weed/mq/kafka/protocol
chrislu 690754acb7 fix: Increase fetch batch sizes to utilize available maxBytes capacity
PROBLEM:
Consumer throughput was only 36.80 msgs/sec vs. 50.21 msgs/sec on the producer side.
The test shows messages being consumed at ~73% of the production rate.

ROOT CAUSE:
FetchMultipleBatches was hardcoded to fetch only:
  - 10 records per batch (5.1 KB per batch with 512-byte messages)
  - 10 batches max per fetch (~51 KB total per fetch)

But clients request 10 MB per fetch!
  - Utilization: 0.5% of requested capacity
  - Massive inefficiency causing slow consumer throughput

Analysis:
  - Client requests: 10 MB per fetch (FetchSize: 10e6)
  - Server returns: ~51 KB per fetch (200x less!)
  - Batches: 10 records each (way too small)
  - Result: Consumer falls behind producer by ~27%
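The utilization figure in the analysis above follows directly from the numbers; a minimal sketch of the arithmetic (values taken from this commit message, not measured here):

```go
package main

import "fmt"

func main() {
	// Numbers from the analysis above.
	requested := 10_000_000.0 // client FetchSize: 10e6 bytes (10 MB)
	returned := 51_000.0      // ~10 batches x 10 records x ~512 bytes

	// Fraction of the requested fetch capacity actually used.
	fmt.Printf("utilization: %.1f%%\n", returned/requested*100) // prints "utilization: 0.5%"
}
```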

FIX:
Calculate optimal batch size based on maxBytes:
  - recordsPerBatch = (maxBytes - overhead) / estimatedMsgSize
  - Start with (10 MB - overhead) ≈ 9.8 MB / 1024 bytes ≈ 9,600 records per fetch
  - Min 100 records, max 10,000 records per batch
  - Scale max batches based on available space
  - Adaptive sizing for remaining bytes
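The sizing rule above can be sketched as follows. This is an illustrative reconstruction, not the actual SeaweedFS code: the function name, the overhead constant, and the clamp bounds are assumptions taken from the description in this commit message.

```go
package main

import "fmt"

// Hypothetical constants mirroring the fix described above; the exact
// overhead value used by the real implementation is not given here.
const (
	batchOverhead      = 61     // assumed per-batch header overhead in bytes
	minRecordsPerBatch = 100    // floor from the FIX list above
	maxRecordsPerBatch = 10_000 // ceiling from the FIX list above
)

// recordsPerBatch estimates how many records fit in maxBytes given an
// estimated per-message size, clamped to [min, max] as described above.
func recordsPerBatch(maxBytes, estimatedMsgSize int) int {
	n := (maxBytes - batchOverhead) / estimatedMsgSize
	if n < minRecordsPerBatch {
		n = minRecordsPerBatch
	}
	if n > maxRecordsPerBatch {
		n = maxRecordsPerBatch
	}
	return n
}

func main() {
	// Client requests 10 MB; messages estimated at ~1 KiB each.
	fmt.Println(recordsPerBatch(10_000_000, 1024)) // prints 9765
}
```

With a 1 KiB estimate the result stays under the 10,000-record ceiling; very small maxBytes values hit the 100-record floor, and very large ones are capped, matching the min/max bounds listed above.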

EXPECTED IMPACT:
  - Consumer throughput: 36.80 → ~48+ msgs/sec (match producer)
  - Fetch efficiency: 0.5% → ~98% of maxBytes
  - Message loss: 45% → near 0%

This is critical for matching Kafka semantics, where clients
specify fetch sizes and the broker is expected to honor them.
2025-10-16 23:07:00 -07:00