
Each instance of Chronicle Map needs to be given an "entries" size in the builder, but I don't understand the impact this value has on a persisted map:

    ChronicleMap
        .of(Long.class, Point.class)
        .averageValueSize(8)
        .valueMarshaller(PointSerializer.getInstance())
        .entries(999)
        .createOrRecoverPersistedTo(new File("my-map"));
  1. What happens when I insert more than 999 entries into this map?
  2. Is this number defining the number of entries that Chronicle, or the memory-mapped file, should be holding in memory for me?
trafalgar

1 Answer


If the configured number of entries is too small, the map will have to resize once you exceed it, which comes at a small cost, and the data won't be arranged as efficiently.

In general, it is better to oversize the map so you don't have to worry about this. On Linux systems it uses sparse files so you might not even end up using more disk space.
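As a rough illustration of oversizing, here is a minimal sketch that sizes a persisted map well above the expected load. The key/value types, entry count, and file name are illustrative assumptions, not the question's actual Point/PointSerializer setup:

    import net.openhft.chronicle.map.ChronicleMap;
    import java.io.File;

    public class OversizedMapSketch {
        public static void main(String[] args) throws Exception {
            // Oversize entries() up front; on Linux the backing file is sparse,
            // so the unused capacity need not cost real disk space.
            try (ChronicleMap<Long, Double> map = ChronicleMap
                    .of(Long.class, Double.class)    // fixed-size types, no averageValueSize needed
                    .entries(1_000_000)              // generous rather than 999
                    .createOrRecoverPersistedTo(new File("my-map"))) {
                for (long i = 0; i < 10_000; i++) {
                    map.put(i, Math.sqrt(i));        // stays well below the configured entries
                }
                System.out.println("size = " + map.size());
            }
        }
    }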

Peter Lawrey
  • Thanks. How does a persisted map actually deliver performance? Is it because it's writing to a memory-mapped file, so every write is actually a write into a block of memory, and your reads are kept fast by the OS paging in the most frequently accessed parts of the file? – trafalgar Jan 29 '21 at 15:33
  • @trafalgar The software writes/reads native memory, so while the data is in memory you get in-memory speed; however, the OS will also read in and flush out dirty pages, so even if the process dies the data isn't lost (if the OS dies it could lose data). So you get near in-memory speed for data sets larger than main memory, and persistence even if the process dies. – Peter Lawrey Jan 30 '21 at 21:46
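
To make the comment above concrete, here is a small sketch (the file name and types are assumptions) showing that data written through one map instance is still there when the same file is reopened, since the values live in a memory-mapped file rather than on the Java heap:

    import net.openhft.chronicle.map.ChronicleMap;
    import java.io.File;

    public class PersistenceSketch {
        public static void main(String[] args) throws Exception {
            File file = new File("persisted-map");

            // "First process": writes go straight into the memory-mapped file.
            try (ChronicleMap<Long, Long> map = ChronicleMap
                    .of(Long.class, Long.class)
                    .entries(100_000)
                    .createOrRecoverPersistedTo(file)) {
                map.put(42L, 123L);
            } // the OS flushes dirty pages, so the data survives the process dying

            // "Second process": reopen the same file and read the data back.
            try (ChronicleMap<Long, Long> map = ChronicleMap
                    .of(Long.class, Long.class)
                    .entries(100_000)
                    .createOrRecoverPersistedTo(file)) {
                System.out.println(map.get(42L)); // prints 123
            }
        }
    }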