3

I have a data logging system running on an STM32F7 which is storing data using FatFs by ChaN to an SD card: http://elm-chan.org/fsw/ff/00index_e.html

Each new set of data is stored in a separate file within a directory. During post-processing on the device, each file is read and then deleted. After testing the open/read/delete sequence in a directory with 5000 files, I found that the further through the directory I scanned, the slower it got.

At the beginning this loop would take around 100-200 ms; 2000 files in, it now takes 700 ms. Is there a quicker way of storing, reading, and deleting the data, or of configuring FatFs?

Edit: Sorry, I should have specified that I am using FAT32 as the FAT file system.

DIR directory;
FILINFO fInfo;
FIL file;
char path[64];
BYTE rBuf[512];
UINT btr = sizeof(rBuf), br;

f_opendir(&directory, "log");
while(1) {
    if(f_readdir(&directory, &fInfo) != FR_OK || fInfo.fname[0] == 0) {
      //error or end of the directory
      break;
    }

    if(fInfo.fname[0] == '.') {
      //ignore the dot entries
      continue;
    }

    if(fInfo.fattrib & AM_DIR) {
      //it's a directory (shouldn't be here), ignore it
      continue;
    }

    sprintf(path, "log/%s", fInfo.fname);
    if(f_open(&file, path, FA_READ) == FR_OK) {
      f_read(&file, rBuf, btr, &br);
      f_close(&file);

      //process data...

      f_unlink(path); //delete after processing
    }
}
f_closedir(&directory);
  • 2
    FAT as in the old DOS filesystem? IIRC, its directory tables are just unsorted arrays of entries, so, yeah, the more files in a directory, the slower it is to search through that table. Modern filesystems use better data structures like b-trees for more efficient lookups of entries. – Shawn Oct 15 '18 at 18:51
  • 1
    This is a well-known problem with FAT filesystems. (I once worked with an acquisition system which stored tens of thousands of files in a single directory, and boy, was it painful.) Really the only solution is "Don't do that" (or, if you can, use a better filesystem). – Steve Summit Oct 15 '18 at 19:00

1 Answer

4

You can keep the directory chains shorter by splitting your files across more than one directory (simply create a new subdirectory for every 500 files or so). This can make access to a specific file quite a bit faster, as the chains to walk become shorter on average. (This assumes you are not searching for files with a specific name, but rather processing files in the order they were created; in that case the lookup can be pretty straightforward.)

Other than that, there is not much hope to get a simple FAT file system any faster. This is a principal problem of the old FAT technology.

tofro