Loki chunk sizes
Loki’s Azure “blobs” are really just the compressed chunks of log entries that the Ingester flushes to your `<container>/chunks/<tenant-ID>/…` path. The size of each blob is controlled by Loki itself, not by Azure:

- `chunk_target_size` (default ~1 MiB) tells Loki roughly how big to let a chunk grow before flushing.
- `max_chunk_age` (default 1 h) forces a flush once a chunk has been open for that long, even if it hasn’t hit `chunk_target_size`.
- `chunk_idle_period` (default 1 h) flushes any stream that hasn’t received new entries in that window.

You tune these under your Ingester config (`ingester.chunk_target_size`, `ingester.max_chunk_age`, etc.) in `loki.yaml` or via Helm values (medium.com).
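A minimal `loki.yaml` sketch of these ingester settings (the key names are from Loki’s ingester config; the values shown are illustrative, so verify the defaults for your Loki version):

```yaml
ingester:
  chunk_target_size: 1048576   # bytes; flush once a chunk grows to roughly this size
  max_chunk_age: 1h            # flush even if the target size was never reached
  chunk_idle_period: 1h        # flush streams that received no new entries in this window
```

Lowering `chunk_target_size` produces more, smaller blobs in Azure; raising it produces fewer, larger ones, at the cost of more memory held open in the Ingester.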
What Azure does under the covers
When Loki hands off a chunk file to Azure, it uses the Block Blob API. Azure will automatically break any upload into blocks (up to 50,000 blocks per blob) and stitch them together:

- The max block size and max blob size are governed by the Storage service version—e.g. service 2019-12-12+ supports up to 4000 MiB per block and ~190 TiB per blob (learn.microsoft.com).
- Loki’s client-side buffer controls (e.g. `azure.upload_buffer_size`, default 256 KiB) simply govern how much of the chunk is read/streamed per block upload; they do not change the logical size of your chunk blob (techdocs.akamai.com).
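To make those limits concrete, here is a quick back-of-the-envelope calculation using the figures above (the 256 KiB buffer size is Loki’s documented default, not an Azure constraint):

```python
# Azure Block Blob limits for Storage service version 2019-12-12+.
MAX_BLOCKS_PER_BLOB = 50_000
MAX_BLOCK_SIZE_MIB = 4000  # MiB per staged block

# Max blob size: 50,000 blocks x 4000 MiB each ~= 190.7 TiB.
max_blob_tib = MAX_BLOCKS_PER_BLOB * MAX_BLOCK_SIZE_MIB / (1024 * 1024)
print(f"max blob size ~= {max_blob_tib:.1f} TiB")

# A ~1 MiB Loki chunk streamed with the default 256 KiB upload buffer
# is staged as 4 blocks, then committed into a single blob.
chunk_bytes = 1 * 1024 * 1024
buffer_bytes = 256 * 1024
blocks_needed = -(-chunk_bytes // buffer_bytes)  # ceiling division
print(f"blocks per 1 MiB chunk: {blocks_needed}")
```

In other words, a typical Loki chunk is nowhere near Azure’s limits; the block-splitting is an invisible transport detail.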
Bottom line
- To change blob sizes, adjust Loki’s chunk-flush settings (`chunk_target_size`, `max_chunk_age`, etc.).
- You cannot alter Azure’s internal block-splitting behavior via Loki; it’s handled by the Azure SDK/service.
- The only other knobs are the SDK buffer sizes (`upload_buffer_size`, `download_buffer_size`), which affect upload/download performance but not the chunk-blob’s overall size.
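For completeness, those buffer knobs live under the Azure client section of the storage config; a hedged sketch (key names per Loki’s `azure_storage_config`, values illustrative):

```yaml
storage_config:
  azure:
    upload_buffer_size: 262144    # 256 KiB read/streamed per block upload
    download_buffer_size: 262144  # 256 KiB per read; tune for throughput, not blob size
```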