The Problem
I need to read and write a large number of records (about 1000). The example below takes as long as 20 minutes to write 1000 records, and as long as 12 seconds to read them (when doing my "read" tests, I comment out the line do create_notes()).
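For reference, I take the timings with a simple stopwatch helper along these lines (a sketch; time_it is just an illustrative name, and it assumes Date.now and Date.in_milliseconds from the stdlib date API):

// sketch: time a void-returning function and log the elapsed milliseconds
// (assumes Date.now / Date.in_milliseconds exist in the stdlib date API)
time_it(label:string, f) =
  t0 = Date.in_milliseconds(Date.now())
  do f()
  t1 = Date.in_milliseconds(Date.now())
  Debug.alert("{label}: {t1 - t0} ms")

// usage: do time_it("write 1000 notes", create_notes)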
The Source
This is a complete example (that builds and runs). It only prints output to the console (not to the browser).
type User.t =
  { id : int
  ; notes : list(int) // a list of note ids
  }

type Note.t =
  { id : int
  ; uid : int // id of the user this note belongs to
  ; content : string
  }

db /user : intmap(User.t)
db /note : intmap(Note.t)

get_notes(uid:int) : list(Note.t) =
  noteids = /user[uid]/notes
  // look up each stored id; skip any that no longer resolve to a note
  List.fold(
    (h, acc ->
      match ?/note[h] with
      | {none} -> acc
      | {some = note} -> [note | acc]
    ), noteids, [])

create_user() =
  // create user 0 once; do nothing if it already exists
  match ?/user[0] with
  | {none} -> /user[0] <- {id=0 notes=[]}
  | _ -> void

create_note() =
  key = Db.fresh_key(@/note)
  do /note[key] <- {id = key uid = 0 content = "note"}
  // prepend the new note's id to user 0's list of notes
  noteids = /user[0]/notes
  /user[0]/notes <- [key | noteids]

create_notes() =
  repeat(1000, create_note)

page() =
  do create_user()
  do create_notes()
  do Debug.alert("{get_notes(0)}")
  <>Notes</>

server = one_page_server("Notes", page)
One More Thing
I also tried getting notes via a transaction (shown below). It looks like a Db.transaction might be the right tool, but I haven't found a way to successfully employ it. I've found this get_notes_via_transaction method to be exactly as slow as get_notes.
get_notes_via_transaction(uid:int) : list(Note.t) =
  result = Db.transaction( ->
    noteids = /user[uid]/notes
    List.fold(
      (h, acc ->
        match ?/note[h] with
        | {none} -> acc
        | {some = note} -> [note | acc]
      ), noteids, [])
  )
  // Db.transaction returns none if the transaction aborted
  match result with
  | {none} -> []
  | ~{some} -> some
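The write path could presumably be wrapped the same way. A sketch of what that would look like (create_notes_in_transaction is just an illustrative name; I haven't measured whether this helps):

// sketch: wrap all 1000 inserts in a single transaction
create_notes_in_transaction() =
  result = Db.transaction( ->
    repeat(1000, create_note)
  )
  match result with
  | {none} -> Debug.alert("transaction aborted")
  | {some = _} -> void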
Thanks for your help.
Edit: More Details
A little extra info that might be useful:
After more testing, I've noticed that the first 100 records take only 5 seconds to write, and each record takes longer to write than the previous one. By the 500th record, each individual write takes 5 seconds.
If I interrupt the program (when it starts feeling slow) and start it again (without clearing the database), it writes records at the same (slow) pace it was writing when I interrupted it.
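Those per-record numbers come from instrumenting the write loop roughly like this (a sketch; create_notes_timed is an illustrative name, and it assumes Date.in_milliseconds and mod from the stdlib):

// sketch: log the elapsed time of every 100th write
// (assumes Date.in_milliseconds and mod exist in the stdlib)
create_notes_timed() =
  rec aux(i:int) =
    if i >= 1000 then void
    else
      t0 = Date.in_milliseconds(Date.now())
      do create_note()
      t1 = Date.in_milliseconds(Date.now())
      do if mod(i, 100) == 0 then Debug.alert("note {i}: {t1 - t0} ms") else void
      aux(i + 1)
  aux(0)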
Does that get us closer to a solution?