This is usually done with pipes, and pipes seem to work fine in the environments I've been testing (and getting a lot of work done in!) on Windows 10, specifically:
- Git bash
- MSYS2 bash
I have found that if I have a large file or binary stream, I can pipe it accurately through the tools that come installed (`cat largefile.JPG | wc -c` reports the correct byte count). But whenever I write my own image-processing programs in C++, whatever method I use to read stdin (whether old-school C `cstdio` calls in a plain C program, or C++ `cin` iostreams), I get only a small fraction of the stream before it ends. The truncated length seems to be deterministic: the same file always produces the same result. A minimal reproducer is sketched below.
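To make the failure concrete, here is a minimal sketch of the kind of program I mean (the file name `count_stdin.cpp` is just illustrative). It's the iostreams analogue of `wc -c`: count every byte on standard input and print the total.

```cpp
// count_stdin.cpp -- illustrative minimal reproducer: count the bytes
// arriving on stdin, i.e. a bare-bones C++ analogue of `wc -c`.
#include <cstddef>
#include <iostream>

int main() {
    std::size_t n = 0;
    char c;
    // Unformatted single-byte reads; get() returns the stream itself,
    // which converts to false once EOF (or an error) is reached.
    while (std::cin.get(c)) {
        ++n;
    }
    std::cout << n << '\n';
}
```

On OS X or Linux, `cat largefile.JPG | ./count_stdin` prints the true file size; under Git bash or MSYS2 on Windows 10, it stops far short of that, at the same count every time.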
Testing the same code on OS X or Linux works as expected: the stdin stream comes through at its full, correct length, which makes pipes a practical way on those platforms to pass data between programs without hitting the disk. I've been honing my bash-fu for a decade now, so this comes pretty naturally.
Of course other methods must exist that I could leverage, but I can't quickly come up with one I'd expect to rely on. What are some things I could try to troubleshoot this? I really like the set of Unix tools I can install with pacman inside MSYS2, including this compiler:
g++.exe (Rev2, Built by MSYS2 project) 7.1.0
Copyright (C) 2017 Free Software Foundation, Inc.
But this is my one big stumbling block so far. My simplest program compiled with this compiler is unable to slurp up a useful amount of data from the standard input stream. Why is that? If it's some limitation of the operating system, or of the POSIX layer and all that black magic, then why does `wc` work perfectly?
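One hypothesis I could test, for what it's worth: on Windows the Microsoft C runtime opens stdin in text mode by default, which translates CR/LF pairs and treats a 0x1A (Ctrl-Z) byte as end-of-file, and binary data like a JPEG tends to contain a 0x1A byte very early. That would fit the deterministic truncation. Below is a sketch of that experiment, assuming the MSYS2-installed g++ targets the Microsoft runtime, where `_setmode`, `_fileno`, and `_O_BINARY` come from `<io.h>` and `<fcntl.h>`:

```cpp
// binary_stdin.cpp -- same byte counter, but stdin is forced into binary
// mode first. _setmode/_fileno/_O_BINARY are Microsoft-runtime specifics
// (this is a troubleshooting sketch, not a confirmed fix).
#include <cstddef>
#include <cstdio>    // stdin
#include <iostream>

#ifdef _WIN32
#include <fcntl.h>   // _O_BINARY
#include <io.h>      // _setmode, _fileno
#endif

int main() {
#ifdef _WIN32
    // Stop the CRT from translating CR/LF and from treating a 0x1A
    // byte in the stream as end-of-file.
    _setmode(_fileno(stdin), _O_BINARY);
#endif
    std::size_t n = 0;
    char c;
    while (std::cin.get(c)) {
        ++n;
    }
    std::cout << n << '\n';
}
```

If this version counts the full file while the plain one truncates, the culprit would be the text-mode stdin of the C runtime rather than the pipe itself; it would also explain why `wc` is unaffected, since it's presumably built against MSYS2's POSIX emulation layer, which doesn't do text-mode translation on pipes.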