
I have a C++ program that writes some debug output to a file:

#include <fstream>

std::ofstream file;
std::ios_base::openmode mode = std::ios::out | std::ios::trunc | std::ios::binary;
file.open(path, mode);  // path is supplied elsewhere in the program
file << "Very large debug output";
file.close();

This code is executed from a Python script by calling

subprocess.call(cmd, shell=True, executable="/bin/bash")

In a later step, the Python script analyzes the debug output. Since the output is very large, writing it to disk and reading it back takes a lot of time. Is there a way to intercept the call to open and write the data into a memory-mapped file that would then be quickly accessible from Python, ideally with as few changes as possible to the C++ part of the code?

radschapur
  • I think it would make sense to write your own [stream_buf](https://timsong-cpp.github.io/cppwp/n4659/stream.buffers) and then set the file buffer using `rdbuf`. Your stream_buf will actually be very simple, since your buffer would be the entire mapped file. – Oliv Jan 23 '18 at 22:40
  • You may be running into some of the performance issues of C++ streams. See here: https://stackoverflow.com/questions/17468088/performance-difference-between-c-and-c-style-file-io – stark Jan 23 '18 at 22:48

0 Answers