
I have a disk with a sequential write speed of 1572 Mb/s.

(Benchmark screenshot: disk sequential and random write and read speeds)

I have 4 cameras running at 60 fps. Each frame is uncompressed and is a 3.7 Mb (let's say 4 Mb) image, so the write speed I need is 4 * 4 * 60 = 960 Mb/s.

PointGrey provides examples with code like this:

for (unsigned int uiCamera = 0; uiCamera < numCameras; uiCamera++)
{
    error = retrieveImage(ppCameras[uiCamera], &image);
    // error = ppCameras[uiCamera]->RetrieveBuffer(&image);
    if (error != PGRERROR_OK)
    {
        PrintError(error);
        cout << "Press Enter to exit." << endl;
        cin.ignore();
        return -1;
    }

    // Get the size of the buffer associated with the image, in bytes.
    iImageSize = image.GetDataSize();

    // Write the frame to the file
    ardwBytesWritten[uiCamera] = writeFrameToFile(m_ALLCAMS, image);
}

where:

size_t writeFrameToFile(FILE* f, Image image)
{
    return fwrite(image.GetData(),
        1,
        image.GetCols() * image.GetRows(),
        f);
}

Unfortunately, I only get around 370-500 Mb/s write speed in release mode (according to the Windows profiler).

writeFrameToFile is the slowest operation; it takes 12-13 ms in debug mode.

Does it make sense to parallelize the file writing, or can an SSD not write from parallel threads? Should I use multiple SSDs? Thank you.

hagor
  • Since you are not accessing the SSD directly but going through the OS, you cannot rely on the rated sequential write speed. Your operating system will write each file block to whichever block on the SSD it pleases. It might be sequential, it might not be, and with multiple files being written at the same time, the writes will be all over the place. – Sam Varshavchik Oct 01 '18 at 13:20
  • Can you write each camera to its own file? – NathanOliver Oct 01 '18 at 13:22
  • What is your OS? – Jabberwocky Oct 01 '18 at 13:23
  • How long are you filming, and what's the use case? If it has to be continuous for a long time, you might consider sending the file(s) over the network to dedicated machines whose only job is to write them to disk. If your data comes in bursts, you might be able to write part of it and keep the rest in RAM. – AlexG Oct 01 '18 at 13:28
  • @SamVarshavchik, the OS is Windows, but I can switch to Linux if needed. How can I record directly, bypassing the OS's SSD management? – hagor Oct 01 '18 at 13:44
  • @NathanOliver, I don't want to have different files for each camera, because I want to do sequential recording. I will do the parsing after recording is finished. – hagor Oct 01 '18 at 13:46
  • @AlexG, the recording will be 10-20 minutes long, so it is not possible to store 20 GB in RAM. The use case: record a session, then do analytics and stereo reconstruction afterwards. – hagor Oct 01 '18 at 13:46
  • @Jabberwocky, Windows, but I can switch to Linux, if needed. – hagor Oct 01 '18 at 13:47
  • @hagor maybe using raw Windows `CreateFile`/`WriteFile` rather than `fopen`/`fwrite` helps. – Jabberwocky Oct 01 '18 at 13:49
  • @Jabberwocky, thank you, I will try. – hagor Oct 01 '18 at 13:50
  • @hagor `CreateFile` has multiple modes (e.g. a bufferless mode). Play around with them too, but carefully read the `CreateFile` documentation; these special modes are quite tricky, e.g. memory must be aligned and you can only write in multiples of 1024 bytes or similar (see the sketch after these comments). – Jabberwocky Oct 01 '18 at 13:52
  • Generally you cannot control how the OS goes about its business of allocating file blocks, either Windows or Linux. You'll have to write your own custom filesystem for this. – Sam Varshavchik Oct 01 '18 at 14:03
  • Please use MB/s (Megabytes/s) https://en.wikipedia.org/wiki/Megabyte when talking about programming. Only hardware folk use Mb/s (Mbits/s) https://en.wikipedia.org/wiki/Megabit – Mark Setchell Oct 01 '18 at 14:37
  • The most likely way of achieving high performance is probably by multi-threading your app. – Mark Setchell Oct 01 '18 at 14:38
  • Related: https://stackoverflow.com/a/11564931/3871028 – Ripi2 Oct 01 '18 at 18:26
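
A minimal sketch of the unbuffered `CreateFile`/`WriteFile` approach suggested in the comments above; the file name, sector size and frame size are assumptions rather than values from the question:

#include <windows.h>
#include <malloc.h>     // _aligned_malloc / _aligned_free (MSVC)
#include <cstring>

int main()
{
    const DWORD sectorSize = 4096;               // assumption; query the real value, e.g. with GetDiskFreeSpaceA
    const DWORD frameBytes = 4 * 1024 * 1024;    // ~4 MB frame, already a multiple of the sector size

    HANDLE h = CreateFileA("capture.raw", GENERIC_WRITE, 0, nullptr, CREATE_ALWAYS,
                           FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH, nullptr);
    if (h == INVALID_HANDLE_VALUE) return -1;

    // With FILE_FLAG_NO_BUFFERING both the buffer address and the write size
    // must be multiples of the sector size.
    void* buf = _aligned_malloc(frameBytes, sectorSize);
    if (buf == nullptr) return -1;
    std::memset(buf, 0, frameBytes);             // stands in for image.GetData()

    DWORD written = 0;
    BOOL ok = WriteFile(h, buf, frameBytes, &written, nullptr);

    _aligned_free(buf);
    CloseHandle(h);
    return ok ? 0 : -1;
}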

2 Answers

1

If you are on a Linux system, consider mapping the file into memory.

Knowing the file size in advance, you can instruct the OS to set up an explicit memory mapping using functions like mmap(). The entire contents of the file then get mapped to the virtual memory space of your program, and writing to the file becomes as simple as copying the data between two memory buffers.

Of course, the entire file need not fit in memory while it is being written. That is where the power of this approach lies. Because the OS manages the mapping, the same machinery it uses for page caching and prefetching flushes the written pages to the actual file on disk asynchronously. This way, your program can keep on writing while the OS itself schedules where and when the actual I/O happens (usually on other CPU cores, without suspending your program's execution). Furthermore, the chunk sizes are also chosen appropriately by the OS.

TL;DR:

  • File mapping gives you powerful asynchronous I/O on Linux.
  • You don't need to worry about buffer sizes. The OS makes the correct calls for you.
  • It may be more complicated to set up, but once done, you can just elegantly write into the file the same way you would write into a memory buffer.

For more info, check out the manual page and this example on Gist.
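
As a minimal sketch of the idea (the file name, frame size and recording length here are placeholders, not values from the question):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>
#include <vector>

int main()
{
    const size_t frameSize = 4ul * 1024 * 1024;       // ~4 MB per frame (assumption)
    const size_t totalSize = frameSize * 4 * 60;      // one second of data from 4 cameras at 60 fps (assumption)

    int fd = open("capture.raw", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;
    if (ftruncate(fd, static_cast<off_t>(totalSize)) != 0) return -1;   // reserve the full file size up front

    void* mapping = mmap(nullptr, totalSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mapping == MAP_FAILED) return -1;
    char* base = static_cast<char*>(mapping);

    std::vector<char> frame(frameSize);               // stands in for image.GetData()
    size_t offset = 0;
    while (offset + frameSize <= totalSize)
    {
        std::memcpy(base + offset, frame.data(), frameSize);   // the kernel writes dirty pages back asynchronously
        offset += frameSize;
    }

    msync(base, totalSize, MS_SYNC);                  // optional: force the remaining pages to disk
    munmap(base, totalSize);
    close(fd);
    return 0;
}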

Edit: From the comments, I see now that you're on Windows. The same thing can be done there as well using CreateFileMappingA(). More information can be found in this article.
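
A similar sketch on Windows (again, the file name and size are placeholders; a 32-bit process may not have enough address space for one large view and would need to map smaller sliding views instead):

#include <windows.h>
#include <cstring>

int main()
{
    const unsigned long long totalSize = 960ull * 1024 * 1024;   // e.g. one second of data (assumption)

    HANDLE hFile = CreateFileA("capture.raw", GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (hFile == INVALID_HANDLE_VALUE) return -1;

    // For PAGE_READWRITE mappings, CreateFileMapping grows the file to the requested size.
    HANDLE hMap = CreateFileMappingA(hFile, nullptr, PAGE_READWRITE,
                                     static_cast<DWORD>(totalSize >> 32),
                                     static_cast<DWORD>(totalSize & 0xFFFFFFFFull),
                                     nullptr);
    if (hMap == nullptr) return -1;

    char* base = static_cast<char*>(MapViewOfFile(hMap, FILE_MAP_WRITE, 0, 0, 0));  // 0 = map the whole file
    if (base == nullptr) return -1;

    std::memset(base, 0, static_cast<size_t>(totalSize));   // stands in for copying frames into the view

    FlushViewOfFile(base, 0);     // optional: ask the OS to start writing dirty pages back
    UnmapViewOfFile(base);
    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}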

Petr Mánek
1

If nothing else works, your last resort would be to bypass the OS and filesystem by writing directly to the disk itself.

On Linux, you would do that by writing to the appropriate device, probably somewhere in /dev/disk/by-id/...; on Windows I don't even know where to start, but I am fairly sure it is possible.

Note that this would trash anything already on the disk, so you need a disk dedicated to this purpose; if you accidentally write to the wrong device, you could trash your entire OS installation.

So this is fairly advanced and risky stuff, and I don't know enough about it to help much further, other than to say it might be worth looking into.
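
For illustration only, here is a heavily simplified sketch of what a raw-device write looks like on Linux. "/dev/sdX" is a placeholder for a dedicated, empty disk; running this against the wrong device destroys its contents:

#include <fcntl.h>      // O_DIRECT is available when _GNU_SOURCE is defined (g++ defines it by default)
#include <unistd.h>
#include <cstdlib>
#include <cstring>

int main()
{
    const size_t blockSize = 1u << 20;                 // 1 MiB writes, a multiple of the sector size

    int fd = open("/dev/sdX", O_WRONLY | O_DIRECT);    // O_DIRECT bypasses the page cache
    if (fd < 0) return -1;

    void* buf = nullptr;
    if (posix_memalign(&buf, 4096, blockSize) != 0) return -1;   // O_DIRECT requires aligned memory
    std::memset(buf, 0, blockSize);                    // stands in for frame data

    ssize_t n = write(fd, buf, blockSize);             // lands at the current offset on the raw device

    free(buf);
    close(fd);
    return n == static_cast<ssize_t>(blockSize) ? 0 : -1;
}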