
I am writing large files from a MySQL database to disk. A single file may reach several GB. I have tried writing the file in several ways, but the results are the same: it takes a very long time to finish.

Producing a file of 0.2 GB (200,000 KB) takes almost two and a half hours to complete. I have tried fetching the data with either TSQLQuery or TClientDataSet, but both give the same result (the same time taken to finish writing the file).

Meanwhile, for the writing itself, I have used TFileStream and also the Windows API directly, and there is no difference between them. I expected some difference, since they work in different ways.

I have also tried the method posted before at (Buffered files (for faster disk access)), but it still didn't improve the performance.

This situation only occurs with large files. When I produce small files, it is quite fast.

I've run the same code in D7, where the writing process takes only a few minutes. The reason I use XE5 is that I want to avoid using the BDE components.

Is there any other way to boost the performance of writing large files to disk?

  • Are you sure the file writing is the bottleneck? How long does it take to fetch all the data from the DB without writing it to a file? – JimmyB Jul 20 '15 at 11:28
  • you first need to identify where your performance bottleneck is. I bet the problem is not writing files but getting the data from the DB... – whosrdaddy Jul 20 '15 at 11:29
  • `TFileStream` et al. is as fast as you will get. If you don't mess it up really good, all file IO routines are much faster than the actual disk operation, so there's nothing to be gained on that side. – JimmyB Jul 20 '15 at 11:30
  • The time taken to collect all the data was only a few minutes; the rest of the time is the loop writing the files. – SyafiqBear Jul 20 '15 at 11:31
  • Suggestion: 1) make a simple test by writing e.g. 2GB of random data, 0x00 or whatever, to a file and note how long that takes, 2) fetch all the data you need from the database without writing anything to any file. Note how long that takes. – JimmyB Jul 20 '15 at 11:32
  • Is there any chance that it may be caused by the component itself, i.e. that TClientDataSet in XE5 can't perform as fast as TClientDataSet in D7? – SyafiqBear Jul 20 '15 at 11:34
  • 1) I've tried several large data sets (1 GB); most of them take almost 4 hours to complete. – SyafiqBear Jul 20 '15 at 11:36
  • 2) I did check the time taken for ClientDataSet.Open; it takes 2-3 minutes to complete. – SyafiqBear Jul 20 '15 at 11:37
  • 3
    Without a MVCE there's really nothing we can say except throw questions and guesses in your direction. You need to put the actual slow code in the question, and you also need to tell us something about your system (RAM, type of disk). Of course it could be something as simple as bad sectors on the disk in which case all bets are off, but without the code there's nothing sensible we can contribute. – Johan Jul 20 '15 at 11:41
  • @Johan My code is complex and calls several functions to do one process. This code was initially a D7 version and was fast at that time, but once uplifted to XE5 the process became too slow. I'll try to make a simple program to illustrate the process. My PC has 8GB RAM and runs 64-bit; the app itself was built as 32-bit. – SyafiqBear Jul 20 '15 at 12:03
  • 1
    Quite why you hope for help with code that you don't to show is beyond me. Please reconsider. – David Heffernan Jul 20 '15 at 13:09
  • Grab a freeware profiler, run the example through it and see where most of the time is being spent. You may need to do this with Debug DCUs enabled. This will at least narrow down which part is slow – Matt Allwood Jul 20 '15 at 13:10
  • Far simpler to set the program running, break into it and see where the program is. Do that a couple of times and you'll identify the bottleneck. – David Heffernan Jul 20 '15 at 14:03
  • a) Is the MySql server and its database files on your local PC or on a separate server? b) Do you get the same slow writes if you simply write a 200Mb file of dummy data generated by your app to the same location as you're writing to currently? – MartynA Jul 20 '15 at 19:17
  • I fail to see why you cannot disable the database part and prove whether it is the writing of the file that is slow. As someone said, it's debugging 101. If it is not the file saving, then disconnect events on the data-aware components and see if it's code in the events or the data fetch. – Rohit Gupta Jul 21 '15 at 20:42
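The isolation test suggested in the comments can be sketched like this (a rough Delphi sketch; the procedure name, the 200 MB size and the 1 MB buffer are illustrative, not from the question):

```delphi
uses
  System.Classes, System.SysUtils, System.Diagnostics;

// Time a raw 200 MB write with no database involved, to see whether
// disk IO alone accounts for the slowdown.
procedure TimeRawWrite(const FileName: string);
var
  FS: TFileStream;
  Buffer: TBytes;
  SW: TStopwatch;
  I: Integer;
begin
  SetLength(Buffer, 1024 * 1024);  // 1 MB of zeroes
  SW := TStopwatch.StartNew;
  FS := TFileStream.Create(FileName, fmCreate);
  try
    for I := 1 to 200 do           // 200 MB total
      FS.WriteBuffer(Buffer[0], Length(Buffer));
  finally
    FS.Free;
  end;
  Writeln(Format('Wrote 200 MB in %d ms', [SW.ElapsedMilliseconds]));
end;
```

If this finishes in seconds while the real export takes hours, the bottleneck is not the disk or TFileStream but the per-record work in the loop.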

1 Answer


Try calling `DisableControls` on the TClientDataSet before looping through it; it will dramatically increase the speed at which the dataset scrolls through the records.
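A minimal sketch of this pattern (the export routine, dataset and file name are hypothetical; the `DisableControls`/`EnableControls` pairing and the buffered `TStreamWriter` are standard Delphi idioms, not code from the question):

```delphi
uses
  System.Classes, System.SysUtils, Datasnap.DBClient;

// Scroll a TClientDataSet with data-aware control updates suspended
// and write each row through a buffered stream writer.
procedure ExportDataSet(CDS: TClientDataSet; const FileName: string);
var
  Writer: TStreamWriter;
begin
  // TStreamWriter buffers output (64 KB here), avoiding one small
  // disk write per record.
  Writer := TStreamWriter.Create(FileName, False, TEncoding.UTF8, 64 * 1024);
  try
    CDS.DisableControls;  // stop controls repainting on every Next
    try
      CDS.First;
      while not CDS.Eof do
      begin
        Writer.WriteLine(CDS.Fields[0].AsString);  // build the line as needed
        CDS.Next;
      end;
    finally
      CDS.EnableControls;  // always re-enable, even on exceptions
    end;
  finally
    Writer.Free;
  end;
end;
```

Without `DisableControls`, every `Next` notifies attached data-aware controls, which can dominate the runtime when iterating millions of records.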

Tom