
I have a large array that can be reordered and is stored in a NoSQL database (Firebase Realtime Database). I use Ionic 4's ion-reorder-group to sort the items.

I am currently saving the data as follows:

{
  "Data": {
    "0": { "N1": "", "N2": "", "N3": false },
    "1": { "N1": "", "N2": "", "N3": false },
    ...
  }
}

But when I add an object at the beginning, I have to update the entire array on the server. The problem: if I have 1'000 objects and add 10 new ones at the beginning, I end up writing 10'000 objects to the server.

So what is the best way to structure a large reorderable list? My idea was to work with pushIds, but I have not found a structure where the problem goes away. In the code below you can see my idea, but the problem is the same: everything still has to be updated, and the many pushIds need more memory...

{
  "Data": {
    "[pushId]": {
      "P": 0,   // position in the array
      "N1": "",
      "N2": "",
      "N3": false
    },
    ...
  }
}
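To make the fan-out concrete, here is a self-contained TypeScript sketch of what a front insert costs with dense positions. The item shape and the pushId strings are invented for illustration; in the real app the resulting map would be passed to the Realtime Database's multi-location `update()`.

```typescript
interface Item { key: string; P: number; }

// 1000 existing items, with dense positions 0..999 (hypothetical data).
const items: Item[] = Array.from({ length: 1000 }, (_, i) => ({
  key: `push${i}`,
  P: i,
}));

// Inserting one new item at position 0 forces every existing P to shift,
// so the multi-location update touches every single item.
function frontInsertUpdates(existing: Item[], newKey: string) {
  const updates: { [path: string]: number } = {};
  updates[`Data/${newKey}/P`] = 0;
  for (const item of existing) {
    updates[`Data/${item.key}/P`] = item.P + 1;
  }
  return updates;
}

const updates = frontInsertUpdates(items, 'pushNew');
console.log(Object.keys(updates).length); // 1001 writes for one logical insert
```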
Alex
  • I highly recommend reading Dan's answer here: https://stackoverflow.com/questions/46720997/leaderboard-ranking-with-firebase While written for Firestore, most of the considerations are the same for the Realtime Database. – Frank van Puffelen Dec 28 '19 at 16:33

1 Answer


The problem of assigning ordered labels to the items of a list as they are inserted/deleted/moved is well known and referred to as "online list labeling". It's a strict version of the "order maintenance problem", which has a Wikipedia page here: https://en.wikipedia.org/wiki/Order-maintenance_problem

The best you can do is amortized O(log N) relabelings per insertion, so if you have a list of 1000 objects and you insert 10 things, you will have to update around 100 items -- not too bad.

It's not too hard to achieve that bound. Perhaps the simplest algorithm is the first one described in the paper "Two Simplified Algorithms for Maintaining Order in a List" by Bender, Cole, Demaine, et al., which you can get for free here: https://erikdemaine.org/papers/DietzSleator_ESA2002/paper.pdf

For a list of up to N items, it requires having roughly N^2 available keys. You would use 64-bit integers for up to 2^32 items, for example.

Here's a simple incarnation:

  • We will assign 64-bit labels for up to 2^32 items.
  • For each k-bit prefix of a label (consisting of the high-order bits), there are therefore 2^(64-k) available labels with that prefix.
  • When we insert an item between labels a and b, we can choose any label between them. If they are adjacent, we have to relabel some items first.
  • To relabel, find the longest prefix with a low enough density (used labels / available labels), and evenly space all the labels with that prefix.

A prefix with m available labels has low enough density if fewer than sqrt(m) labels are used. A prefix of length k, then, has low enough density if fewer than 2^(32-k/2) used labels share that prefix.
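The steps above can be sketched in TypeScript, using BigInt for the 64-bit labels. This is a minimal in-memory version, not from the paper: the `OrderedLabels` class, its method names, and the sorted array of labels are our own scaffolding (a real app would store each label alongside its Firebase record and only rewrite the items whose labels change).

```typescript
const BITS = 64n; // label width: 64-bit labels for up to 2^32 items

class OrderedLabels {
  labels: bigint[] = []; // sorted, strictly increasing, all in (0, 2^64)

  // Insert a new item at list position i (0 <= i <= labels.length)
  // and return the label assigned to it.
  insertAt(i: number): bigint {
    let lo = i > 0 ? this.labels[i - 1] : 0n;
    let hi = i < this.labels.length ? this.labels[i] : 1n << BITS;
    if (hi - lo < 2n) {
      // Neighboring labels are adjacent: relabel first, then retry.
      this.relabelAround(i);
      lo = i > 0 ? this.labels[i - 1] : 0n;
      hi = i < this.labels.length ? this.labels[i] : 1n << BITS;
    }
    const label = lo + (hi - lo) / 2n; // midpoint of the gap
    this.labels.splice(i, 0, label);
    return label;
  }

  // Walk prefixes from longest (k = 63) to shortest (k = 0); stop at the
  // first one with low enough density -- used < sqrt(available), i.e.
  // used^2 < 2^(64-k) -- and evenly space all labels with that prefix.
  private relabelAround(i: number): void {
    const anchor = i > 0 ? this.labels[i - 1] : this.labels[i];
    for (let k = 63; k >= 0; k--) {
      const width = 1n << (BITS - BigInt(k)); // labels sharing this prefix
      const start = (anchor / width) * width;
      const end = start + width;
      // Count used labels in [start, end) -- use binary search in production.
      const first = this.labels.findIndex(l => l >= start);
      let count = 0;
      for (let j = first; j >= 0 && j < this.labels.length && this.labels[j] < end; j++) {
        count++;
      }
      // Density check (the extra count+1 < width guard keeps gaps >= 2).
      if (BigInt(count) * BigInt(count) < width && BigInt(count + 1) < width) {
        const step = width / BigInt(count + 1);
        for (let j = 0; j < count; j++) {
          this.labels[first + j] = start + step * BigInt(j + 1);
        }
        return;
      }
    }
    throw new Error('label space exhausted');
  }
}
```

Repeatedly inserting at the front exercises the relabeling path: each insert halves the smallest gap until two labels become adjacent, at which point one prefix is spread out and insertion continues.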

Refer to the linked paper for the proof that this scheme provides the claimed amortized complexity.

Matt Timmermans