I have to read a binary file containing 1300 images of 320×256 uint16 pixels and convert it to a 3D array. The data is saved in little-endian mode, so I read 2 bytes into a char buffer and convert them to a double (to do some operations with them afterwards). The data is laid out in the following pattern:
Main header / Frame header1 / Frame1 / Frame header2 / Frame2 / etc.
Sorry, I can't share the file. My current C++ code is:
EDIT: new version which takes 4 seconds instead of 21.
#include "stdafx.h"
#include <iostream>
#include "stdio.h"
#include <fstream>
#include <stdint.h>
using namespace std;
// Decode a little-endian byte sequence into an integer of any size
template<class T>
T my_ntoh_little(const unsigned char* buf) {
    T value = 0;
    for (unsigned i = 0; i < sizeof(T); i++)
        value |= static_cast<T>(buf[i]) << (CHAR_BIT * i); // cast so the shift also works for types wider than int
    return value;
}
int main()
{
ifstream is("Filename", ifstream::binary);
if (is) {
        // Image geometry and file layout; the frame-header size is part of
        // the file format but its value is not shown here
        const unsigned width = 320, height = 256, count_frames = 1299;
        const size_t main_header_size = 3000;
        const size_t frame_header_size = /* value omitted here */;
        // Create 3D array to store the images
        double*** data = new double**[count_frames];
        for (unsigned i = 0; i < count_frames; i++) {
            data[i] = new double*[height];
            for (unsigned j = 0; j < height; j++)
                data[i][j] = new double[width];
        }
        // One contiguous buffer for everything after the main header
        const size_t length = count_frames * (frame_header_size + size_t(width) * height * 2);
        std::vector<unsigned char> raw(length);
        unsigned char* data_char = raw.data();
        // Skip the main header, then read all remaining data in one call
        is.seekg(main_header_size, is.beg);
        is.read(reinterpret_cast<char*>(data_char), length);
// Convert pixel by pixel from uchar to double
        const size_t buffer_image = width * height * 2; // bytes of pixel data per frame
        size_t indice;
        for (unsigned count = 0; count < count_frames; count++) {
            // data_char starts just after the main header (see seekg above),
            // so the main header must not be counted again in this offset
            indice = count * (frame_header_size + buffer_image) + frame_header_size;
for (unsigned i = 0; i < 256; i++){
for (unsigned j = 0; j < 320; j++) {
data[count][i][j] = my_ntoh_little<uint16_t>(data_char + indice);
indice += 2;
}
}
}
// De-Allocate memory to prevent memory leak
for (int i = 0; i < 1299; ++i) {
for (int j = 0; j < 256; ++j)
delete[] data[i][j];
delete[] data[i];
}
delete[] data;
}
return 0;
}
I have already created a memory map in Python to read this file, but I can't find how to achieve the same thing in C++. I'm trying this because reading the file currently takes 21 seconds, which is far too long; in Python, reading with a memory map takes under 0.1 seconds. I am looking for an equivalent solution, just faster than my current approach, which is much too slow.
Python code:
dt = [('headers', np.void, frame_header_size), ('frames', '<u2', (height, width))]
mmap = np.memmap(nom_fichier, dt, offset=main_header_size)
array2 = mmap['frames']
Thanks in advance for any advice or help.