Here is my problem: I have to read "binary" files, that is, files with varying "record" sizes which may contain binary data as well as UTF-8-encoded text fields.
Reading a given number of bytes from an input file is trivial, but I was wondering if there are functions to easily read a given number of characters (not bytes) from a file. For example, if I know I need to read a 10-character field encoded in UTF-8, it will be at least 10 bytes long, but could go up to 40 bytes if we're talking "high" codepoints (4 bytes per character).
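Just to make that spread concrete, here is a quick check (I'm using Python purely for illustration; the language isn't the point of the question):

```python
# Each UTF-8 character takes 1 to 4 bytes, so a 10-character
# field can occupy anywhere from 10 to 40 bytes on disk.
for ch in ("a", "é", "€", "𐍈"):          # 1-, 2-, 3- and 4-byte codepoints
    print(ch, len(ch.encode("utf-8")))   # -> 1, 2, 3, 4
```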
I emphasize that I'm reading a "mixed" file, that is, I cannot process the whole file as UTF-8, because the binary fields must be read without being interpreted as UTF-8 characters.
So, while doing it by hand is pretty straightforward (the byte-by-byte, naïve approach isn't hard to implement, even though I'm dubious about its efficiency; a sketch of what I mean is below), I'm wondering if there are better alternatives out there. If possible, in the standard library, but I'm open to third-party code too, if my organization validates its use.
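For reference, this is roughly the kind of naïve reader I have in mind (again sketched in Python; the file name and field layout are made up for the example):

```python
def read_utf8_chars(f, n_chars):
    """Read exactly n_chars UTF-8 characters from a binary stream,
    consuming only the bytes that belong to those characters."""
    chunks = []
    for _ in range(n_chars):
        first = f.read(1)
        if not first:
            raise EOFError("stream ended mid-field")
        b = first[0]
        # Work out how many continuation bytes follow the lead byte.
        if b < 0x80:
            extra = 0            # ASCII, single byte
        elif b >> 5 == 0b110:
            extra = 1            # 2-byte sequence
        elif b >> 4 == 0b1110:
            extra = 2            # 3-byte sequence
        elif b >> 3 == 0b11110:
            extra = 3            # 4-byte sequence
        else:
            raise ValueError("invalid UTF-8 lead byte: 0x%02x" % b)
        rest = f.read(extra)
        if len(rest) != extra:
            raise EOFError("stream ended mid-character")
        chunks.append(first + rest)
    return b"".join(chunks).decode("utf-8")

# Binary fields are read with plain f.read(size); text fields
# are read character by character with the helper above.
with open("mixed.dat", "rb") as f:     # hypothetical file name
    header = f.read(4)                 # e.g. a 4-byte binary field
    name = read_utf8_chars(f, 10)      # a 10-character UTF-8 field
```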