I am looking to improve the performance of a project that uses WPF.
At the moment it gets a huge chunk of char data (several GB) from a database and binds it directly to an ItemsControl whose items panel is a WrapPanel. (Note that I cannot change the fact it gets GBs of data from the DB, e.g. by paging it in at the database level.)
The performance is... awful. I have looked into various virtualization techniques, but to me there seems to be a simpler solution:
"Don't add as many controls as there is data."
My plan is to take just the chunk of data that would be visible and update a fixed number of controls (e.g. a TilePanel of controls that fills the screen, with a slight buffer either side). As the user scrolls, the data in the existing controls would be replaced, instead of controls being added and removed.
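To make that concrete, here is a minimal sketch of what I have in mind. The names (TileViewModel, TileHostViewModel, LoadWindow) are made up, not real WPF types; the ItemsControl would bind to VisibleTiles and the scroll handler would call LoadWindow with whatever slice should be on screen:

```csharp
using System.Collections.ObjectModel;
using System.ComponentModel;

public class TileViewModel : INotifyPropertyChanged
{
    private string _text = string.Empty;

    public string Text
    {
        get { return _text; }
        set
        {
            _text = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Text"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

public class TileHostViewModel
{
    // Fixed pool of tiles: enough to fill the screen plus a small buffer either side.
    public ObservableCollection<TileViewModel> VisibleTiles { get; private set; }

    public TileHostViewModel(int poolSize)
    {
        VisibleTiles = new ObservableCollection<TileViewModel>();
        for (int i = 0; i < poolSize; i++)
            VisibleTiles.Add(new TileViewModel());
    }

    // Called from the scroll handler: rebind the existing tiles to a new slice of
    // data rather than creating and destroying controls.
    public void LoadWindow(string[] windowData, int firstIndex)
    {
        for (int i = 0; i < VisibleTiles.Count; i++)
        {
            int src = firstIndex + i;
            VisibleTiles[i].Text = (src >= 0 && src < windowData.Length)
                ? windowData[src]
                : string.Empty;
        }
    }
}
```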
Question 1: does this seem like a good idea?
As the data is so large, it seems sensible not even to keep it all in RAM, but instead to dump it to a local file and then read that file in chunks as and when I need them (possibly via a background paging thread). Would this make a good or bad difference to usability?
Question 2: any tips on this? I was planning on taking the basic setup from this question and answer: Seeking and writing files bigger than 2GB in C#
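Roughly what I had in mind for the file side, relying on the fact that FileStream.Seek takes a long offset (so files over 2 GB are fine). FilePager is a made-up name, and I'm assuming single-byte (ASCII) char data for simplicity:

```csharp
using System.IO;
using System.Text;
using System.Threading.Tasks;

public class FilePager
{
    private readonly string _path;

    public FilePager(string path)
    {
        _path = path;
    }

    // Reads 'byteCount' bytes starting at 'byteOffset' on a background thread.
    // FileStream.Seek takes a long, so offsets beyond 2 GB are not a problem.
    public Task<string> ReadChunkAsync(long byteOffset, int byteCount)
    {
        return Task.Run(() =>
        {
            using (var fs = new FileStream(_path, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                fs.Seek(byteOffset, SeekOrigin.Begin);
                var buffer = new byte[byteCount];
                int read = fs.Read(buffer, 0, buffer.Length);

                // Assumes the char data is single-byte (ASCII); a different encoding
                // would need the byte offset translated accordingly.
                return Encoding.ASCII.GetString(buffer, 0, read);
            }
        });
    }
}
```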
I can't shake the feeling that if this were indeed the right approach, I would have stumbled across it already instead of the seemingly more complex virtualization techniques.
Clarification: I will most likely have three tiers of data: the file (the whole data set), an array in RAM (partial data, say 10,000 elements), and the "visual data", the array I use to update the controls on screen.
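This is how I picture the three tiers connecting, building on the FilePager and TileHostViewModel sketches above. WindowSize, ElementLength and the fixed-length-element assumption are all placeholders I'd swap out for whatever the real data looks like:

```csharp
using System;
using System.Threading.Tasks;

public class ScrollCoordinator
{
    private const int WindowSize = 10000;   // tier 2 size: elements kept in RAM
    private const int ElementLength = 100;  // assumed fixed chars per element

    private readonly FilePager _pager;           // tier 1: the whole data set on disk
    private readonly TileHostViewModel _tiles;   // tier 3: the fixed pool of on-screen controls
    private string[] _ramWindow = new string[0]; // tier 2: the in-RAM slice
    private int _ramWindowStart;                 // index of the first element in the RAM window

    public ScrollCoordinator(FilePager pager, TileHostViewModel tiles)
    {
        _pager = pager;
        _tiles = tiles;
    }

    // Call from the scroll handler on the UI thread; the await resumes on the UI
    // thread, so rebinding the tiles afterwards is safe.
    public async Task OnScrolledToAsync(int firstVisibleIndex)
    {
        int visibleCount = _tiles.VisibleTiles.Count;

        // If the visible range falls outside the current RAM window,
        // page a new window in from the file on a background thread.
        if (firstVisibleIndex < _ramWindowStart ||
            firstVisibleIndex + visibleCount > _ramWindowStart + _ramWindow.Length)
        {
            _ramWindowStart = Math.Max(0, firstVisibleIndex - WindowSize / 2);
            string raw = await _pager.ReadChunkAsync(
                (long)_ramWindowStart * ElementLength, WindowSize * ElementLength);

            // Split the raw text into fixed-length elements (placeholder logic).
            _ramWindow = new string[raw.Length / ElementLength];
            for (int i = 0; i < _ramWindow.Length; i++)
                _ramWindow[i] = raw.Substring(i * ElementLength, ElementLength);
        }

        // Rebind the fixed pool of tiles to the visible slice of the RAM window.
        _tiles.LoadWindow(_ramWindow, firstVisibleIndex - _ramWindowStart);
    }
}
```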