
I need to repeatedly write 8 bytes of data (a uint64_t counter) to a UBIFS partition of size 256 MB. I am concerned about flash wear due to the repeated writes.

Looking at the ubinfo -a output for the partition, the minimum I/O unit size is 2048 bytes. So my first attempt was to do circular writes to a file of size 2048 bytes, i.e. write 8 bytes 256 times, then seek back to the beginning, ad infinitum.

I set up a program to test this theory, and after two weeks I notice the current maximum erase counter value, a.k.a. max_ec, has risen to about 20,000 after about 1.8 billion writes. That is far more than I'd expect with perfectly even wear across all erase blocks. My next approach would be to try a file size closer to 124 KiB, i.e. the size of a logical erase block (LEB), and see if it makes any difference.

I see three options:

  • Try different file sizes for comparison
  • Read ubifs driver code
  • Rebuild kernel with debug enabled and get more ubifs debug logs

Is there a better way?

Here is the little C program that does repeated writes:

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/stat.h>

int main(int argc, char *argv[]){
    int fd;
    char *ptr;
    uint64_t writeops, filesize, index, i;

    if (argc != 3){
        printf("Usage: %s filesize(uint64_t) writeops(uint64_t)\n", argv[0]);
        return 1;
    }

    // Sanitise args
    filesize = strtoull(argv[1], &ptr, 10);
    writeops = strtoull(argv[2], &ptr, 10);

    if (filesize % 8 != 0){
        printf("Filesize must be a multiple of 8\n");
        return 2;
    }

    // Open a file to write; O_DSYNC forces each write out to the medium
    fd = open("/mnt/user/magicnumber", O_CREAT|O_RDWR|O_DSYNC|O_LARGEFILE, S_IRUSR|S_IWUSR);
    if (fd < 0){
        perror("open");
        return 3;
    }

    // Grow the file to be as big as the filesize arg
    lseek(fd, filesize - 1, SEEK_SET);
    write(fd, "\0", 1);

    // Begin actual testing: circular 8-byte writes
    for (i = 1, index = 0; i <= writeops; i++, index += 8){
        if (index == filesize){
            index = 0;
        }
        lseek(fd, index, SEEK_SET);
        write(fd, &i, sizeof(i));
    }
    close(fd);
    return 0;
}
Lokesh
  • Rather than "*optimise [sic] wear leveling*" (whatever that is supposed to mean) I see your program exacerbating wear by performing a logical sector write to the NAND flash for every 8 bytes. Your concern "*about the flash wear due to the repeated writes*" seems insincere since you're intentionally inducing as many erase & write cycles as possible per **write()** syscall. – sawdust Jul 14 '23 at 06:18
  • Hm I think I see your point. If I understand you correctly, even by writing 8 bytes to flash at a time, I am forcing a 2KB (minimum I/O unit size) write operation to the flash, causing too much wear. Back to the drawing board... FWIW the abstract problem is to have a `uint64_t` counter that persists across any accidental power cuts etc. The counter increases quite rapidly so I need to write it at least once every 10ms. – Lokesh Jul 16 '23 at 09:29
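If losing a small window of counts at power-cut is acceptable, one common mitigation for this kind of persistent counter (not from this thread; the BATCH size is hypothetical) is to persist only every Nth value and round the stored value up to the next multiple of N on recovery. The recovered counter then never goes backwards, while the flash write rate drops by a factor of N. A minimal sketch:

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

#define BATCH 1024  /* hypothetical: persist once per 1024 increments */

/* Persist the counter only when it crosses a BATCH boundary. */
static void persist_if_needed(int fd, uint64_t counter) {
    if (counter % BATCH == 0) {
        pwrite(fd, &counter, sizeof counter, 0);
        fdatasync(fd);
    }
}

/* On boot, round the stored value up to the next BATCH multiple, so the
 * counter stays monotonic even if up to BATCH-1 increments were lost. */
static uint64_t recover(int fd) {
    uint64_t stored = 0;
    if (pread(fd, &stored, sizeof stored, 0) != sizeof stored)
        return 0;
    return (stored / BATCH + 1) * BATCH;
}
```

At one increment per 10 ms (~100/s), BATCH = 1024 would turn ~100 synchronous writes per second into roughly one every 10 seconds.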

0 Answers