3

If we need to limit the total size of a Chronicle Queue to, say, 1 GB or 10 GB, what would be the best way to do it?

We store buffers as bytes, so I was trying to calculate the total size just by summing buffer sizes, but it looks like there is no easy way to correlate that with the actual size of the queue. One way is to calculate the total directory size using file utils every 5 minutes or so, but that would be error prone if a huge amount of data arrives within the interval, and the queue could overflow the limit.
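
For what it's worth, here is a minimal sketch of the periodic check described above, using only the JDK. The queue directory, the 5-minute interval and the 1 GB limit are placeholders, and the apparent file lengths can be larger than the disk space actually used because the roll files are pre-allocated:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;

public class QueueSizeMonitor {

    // Placeholder values, adjust to your setup.
    private static final Path QUEUE_DIR = Paths.get("tmp");
    private static final long MAX_BYTES = 1L << 30; // 1 GB

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            long total = directorySize(QUEUE_DIR);
            if (total > MAX_BYTES) {
                // react here, e.g. stop appending or delete the oldest roll files
                System.out.println("Queue over limit: " + total + " bytes");
            }
        }, 0, 5, TimeUnit.MINUTES);
    }

    // Sums the apparent length of every regular file under the queue directory.
    static long directorySize(Path dir) {
        try (Stream<Path> files = Files.walk(dir)) {
            return files.filter(Files::isRegularFile)
                        .mapToLong(p -> p.toFile().length())
                        .sum();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

As the question notes, this only bounds the size at the polling granularity; a burst between checks can still overshoot the limit.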

Chandra
  • Are you using a resource-limited device (this is one use case)? However, if you are using a server, 3 GB of SSD costs about $1. – Peter Lawrey Sep 08 '18 at 09:46
  • I need it for the case where I have limited resources and there is a chance that data can come in bursts; I need to discard data if it is more than the limit. – Chandra Sep 12 '18 at 20:06
  • It's only the disk size that should cause a problem. A queue can be many, many times main memory, e.g. you can have 100 TB on a machine with 16 GB. – Peter Lawrey Sep 13 '18 at 06:18
  • @Chandra Did you implement this requirement? What is the final approach that you took? – Sam Jan 17 '21 at 20:00

2 Answers

1

You would have to add up the size of each .cq4 file.

The writePosition gives you the length of each .cq4 file in bytes:

import java.io.FileNotFoundException;

import net.openhft.chronicle.queue.DumpQueueMain;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;
import net.openhft.chronicle.wire.DocumentContext;

public class Example {

    public static void main(String[] args) throws FileNotFoundException {
        // create (or open) a queue under the "tmp" directory
        SingleChronicleQueue q = SingleChronicleQueueBuilder.builder().path("tmp").build();

        ExcerptAppender appender = q.createAppender();

        // write a single excerpt; writePosition() reports how far into the
        // memory-mapped roll file we are at this point
        try (DocumentContext dc = appender.writingDocument()) {
            long l = dc.wire().bytes().writePosition();
            dc.wire().write().text("lastx");
        }

        // dump the queue files in a human-readable form
        DumpQueueMain.dump(q.fileAbsolutePath());
    }
}

This outputs the following:

--- !!meta-data #binary
header: !SCQStore {
  writePosition: [
    131328,
    564049465049088
  ],
  indexing: !SCQSIndexing {
    indexCount: !short 8192,
    indexSpacing: 64,
    index2Index: 184,
    lastIndex: 64
  }
}

# position: 184, header: -1
--- !!meta-data #binary
index2index: [
  # length: 8192, used: 1
  65760 # truncated trailing zeros
]
# position: 65760, header: -1
--- !!meta-data #binary
index: [
  # length: 8192, used: 1
  131328 # truncated trailing zeros
]
# position: 131328, header: 0
--- !!data #binary
"": lastx

...
# 83754737 bytes remaining

--- !!meta-data #binary
header: !STStore {
  wireType: !WireType BINARY_LIGHT,
  recovery: !TimedStoreRecovery {
    timeStamp: 0
  },
  metadata: !SCQMeta {
    roll: !SCQSRoll { length: !int 86400000, format: yyyyMMdd, epoch: 0 },
    deltaCheckpointInterval: 64,
    sourceId: 0
  }
}

# position: 225, header: 0
--- !!data #binary
listing.highestCycle: 17780

# position: 264, header: 1
--- !!data #binary
listing.lowestCycle: 17780

# position: 304, header: 2
--- !!data #binary
listing.modCount: 1

# position: 336, header: 3
--- !!data #binary
chronicle.write.lock: -9223372036854775808

# position: 376, header: 4
--- !!data #binary
chronicle.lastIndexReplicated: -1

# position: 432, header: 5
--- !!data #binary
chronicle.lastAcknowledgedIndexReplicated: -1

...
# 65044 bytes remaining

where the length of the .cq4 file is given by

writePosition: [
    131328,
    ....
  ],

in other words, 131328 bytes.
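
If you want that number programmatically instead of reading it from the dump, here is a minimal sketch reusing the same writePosition() call as the example above; the 1 GB per-file limit and the reaction to it are purely illustrative:

import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;
import net.openhft.chronicle.wire.DocumentContext;

public class WritePositionCheck {

    // Purely illustrative per-roll-file limit.
    private static final long MAX_BYTES_PER_FILE = 1L << 30; // 1 GB

    public static void main(String[] args) throws Exception {
        try (SingleChronicleQueue q = SingleChronicleQueueBuilder.builder().path("tmp").build()) {
            ExcerptAppender appender = q.createAppender();

            try (DocumentContext dc = appender.writingDocument()) {
                dc.wire().write().text("payload");
                // how far into the current memory-mapped roll file we have written
                long writtenSoFar = dc.wire().bytes().writePosition();
                if (writtenSoFar > MAX_BYTES_PER_FILE)
                    System.out.println("current roll file is over the limit: " + writtenSoFar + " bytes");
            }
        }
    }
}

Keep in mind (per the comments below) that this is only accurate to within the 4 KB page size, and the pre-allocated .cq4 file on disk will appear larger than the write position.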

Rob Austin
  • Thanks Rob, that helps; it is closer to what I need. The .cq4 file size is still bigger than the write position. It looks like Chronicle Queue pre-allocates and resizes based on chunk size. – Chandra Sep 06 '18 at 18:18
  • @Chandra Chronicle Queue uses virtual memory, and the extent of the file is not the disk space actually used on Linux. Using `du` will give you the size actually used. – Peter Lawrey Sep 08 '18 at 09:42
  • @Chandra You can reduce the block size to reduce the extent; however, this will slow performance. – Peter Lawrey Sep 08 '18 at 09:43
  • Thanks @PeterLawrey. Yeah, I can calculate the size of the directory, but as it is expensive I don't want to do it for every write. It would be nice to get the size of the current .cq4 file. – Chandra Sep 12 '18 at 20:09
  • @Chandra The write position gives you the extent of the file to within 4 KB (the page size). – Peter Lawrey Sep 13 '18 at 06:20
  • @PeterLawrey I wanted a similar feature, where I want to restrict the size of the Chronicle Queue due to a conscious design decision. We want to store only X GB worth of messages for a particular queue. Are we saying we could use appender.writePosition() to get an idea of the current size of the queue? What if I delete a rolled file in a listener? Is getting the writePosition for every write a costly operation? Let me know the advised way to achieve this functionality. – Sam Jan 17 '21 at 19:58
  • @Sam What you could do is look at the file size on a roll and retain a list of these. When the total exceeds a threshold, you could delete older roll files until the total is acceptable. – Peter Lawrey Jan 19 '21 at 08:31 (a sketch of this approach follows below)
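
One way to implement the approach suggested in that last comment, sketched under the assumption that the builder exposes a storeFileListener hook (the listener interface and its exact signature may differ between Chronicle Queue versions, and the 10 GB budget is a placeholder):

import java.io.File;
import java.util.ArrayDeque;
import java.util.Deque;

import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.impl.StoreFileListener;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;
import net.openhft.chronicle.wire.DocumentContext;

public class SizeCappedQueue {

    // Placeholder budget for all retained roll files.
    private static final long MAX_BYTES = 10L << 30; // 10 GB

    public static void main(String[] args) throws Exception {
        Deque<File> rolledFiles = new ArrayDeque<>();

        // Invoked when the queue releases a roll file (e.g. after rolling to the
        // next cycle): remember it, then delete the oldest files while the total
        // apparent size is over budget.
        StoreFileListener listener = (cycle, file) -> {
            rolledFiles.addLast(file);
            long total = rolledFiles.stream().mapToLong(File::length).sum();
            while (total > MAX_BYTES && rolledFiles.size() > 1) {
                File oldest = rolledFiles.pollFirst();
                total -= oldest.length();
                if (!oldest.delete())
                    System.err.println("could not delete " + oldest);
            }
        };

        try (SingleChronicleQueue q = SingleChronicleQueueBuilder.builder()
                .path("tmp")
                .storeFileListener(listener)
                .build()) {
            ExcerptAppender appender = q.createAppender();
            try (DocumentContext dc = appender.writingDocument()) {
                dc.wire().write().text("payload");
            }
        }
    }
}

Because the check only runs when a file rolls, the cap is enforced at roll-cycle granularity (hourly in the case above), so a burst within a single cycle can still exceed the budget.
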
0

The Chronicle Queue files roll based on time, not file size. Why do you wish to limit the size of the .cq4 file?

If you wish, you could roll the queue file every minute by changing the roll cycle.
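
For example, a minimal sketch of configuring a minutely roll on the builder used in the other answer (RollCycles.MINUTELY is assumed to be available in your version; the path is just an illustration):

import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.RollCycles;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;
import net.openhft.chronicle.wire.DocumentContext;

public class MinutelyRollExample {

    public static void main(String[] args) throws Exception {
        // roll to a new .cq4 file every minute instead of the default daily cycle
        try (SingleChronicleQueue q = SingleChronicleQueueBuilder.builder()
                .path("tmp")
                .rollCycle(RollCycles.MINUTELY)
                .build()) {
            ExcerptAppender appender = q.createAppender();
            try (DocumentContext dc = appender.writingDocument()) {
                dc.wire().write().text("rolled every minute");
            }
        }
    }
}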

Rob Austin
  • I don't want to limit the .cq4 file size but want to limit the total queue size, given that I want to roll every hour. Is there a way to determine the file size of the current .cq4 file? – Chandra Sep 06 '18 at 00:15