
This code works on Windows to enable UTF-16 output in the console (so that I can write Unicode characters); it relies on io.h:

_setmode(_fileno(stdout), _O_U16TEXT);
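
For reference, a minimal self-contained version of that Windows snippet would look roughly like this (my understanding is that `_setmode` comes from io.h, `_fileno` from stdio.h, `_O_U16TEXT` from fcntl.h, and that only wide output functions may be used after the mode switch):

    #include <io.h>      // _setmode
    #include <fcntl.h>   // _O_U16TEXT
    #include <cstdio>    // _fileno, stdout
    #include <iostream>

    int main() {
        _setmode(_fileno(stdout), _O_U16TEXT);  // stdout now expects UTF-16
        std::wcout << L"\u2588" << std::endl;   // wide output only, after the switch
    }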

On Linux I have tried including sys/io.h to make it work, but it still gives me errors:

  • _fileno was not declared in this scope

  • _O_U16TEXT was not declared in this scope

  • _setmode was not declared in this scope

I have searched the internet and wasn't able to find a solution; maybe you could help me with this!

Do you need more information?

  • OS: Windows 10, and a Raspberry Pi 2B running Raspbian with no GUI

  • Compiler: g++

Thanks

Henry Le Berre
  • Modern Linux distributions are already natively UTF-8. Just write UTF-8 to `std::cout` and call it a day. None of the MS-Windows-specific silliness is required. – Sam Varshavchik Dec 09 '18 at 16:50
  • So how would I write this Unicode character, 0x2588? Thanks – Henry Le Berre Dec 09 '18 at 16:51
  • It will be down to each individual console how to configure its encoding. You could check the manual for the particular console you are using. – Galik Dec 09 '18 at 16:52
  • 3
    Would you believe `std::cout << "█" << std::endl;`? If your C++ text editor uses UTF-8 encoding, that's all she wrote. Take any Unicode character, grab it's UTF-8 encoding, and use that. – Sam Varshavchik Dec 09 '18 at 16:53
  • Here is a way to convert to and from `UTF-16/UTF-8` https://stackoverflow.com/questions/52703630/convert-c-stdstring-to-utf-16-le-encoded-string/52703954#52703954 though it may be better to just use `wstring`, which will compile to `UTF-16/UTF-8` on Windows and `UTF-32/UTF-8` on `Unix`. https://stackoverflow.com/questions/45565566/c-how-to-read-from-unicode-files-by-ignoring-first-character-of-each-line/45566786#45566786 – Galik Dec 09 '18 at 16:59
  • You will need to write your own 16 bit `wchar_t` to whatever-encoding-is-used-by-terminal-emulator sink. Note that on g++ you will need to build everything with `-fshort-wchar` option to ensure that `wchar_t` has appropriate size. – user7860670 Dec 09 '18 at 17:07
  • Conclusion: just use UTF-8. Wide characters are just a bunch of unneeded extra work, including locale initialization, and all that ends up happening is that libstdc++ will have to go through the paces of converting `wchar_t` to UTF-8 anyway, since that's the native Linux encoding these days, as I mentioned. Cut out the middleman, and just dump UTF-8 to `std::cout`. Why put yourself through such unneeded pain? – Sam Varshavchik Dec 09 '18 at 17:13
  • Your operating system is unclear. You have `Linux` tag, and you say your OS is Windows + Raspberry Pi. Pick one operating system. – Barmak Shemirani Dec 09 '18 at 17:27
  • Linux uses 32-bit Unicode, not 16-bit Unicode like Windows. You can verify by printing `sizeof(wchar_t)`. You probably won't find built-in support for 16-bit Unicode on Linux. – jww Dec 09 '18 at 18:30
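
As the last comment suggests, the width of `wchar_t` differs between the two platforms; a quick check (a minimal sketch, assuming g++ on Raspbian and a typical Windows toolchain on the PC) makes the difference visible:

    #include <iostream>

    int main() {
        // Typically prints 4 on Linux/g++ (UTF-32 wchar_t) and 2 on Windows (UTF-16 wchar_t).
        std::cout << sizeof(wchar_t) << '\n';
    }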

1 Answer


Would you believe std::cout << "█" << std::endl;? If your C++ text editor uses UTF-8 encoding, that's all she wrote. Take any Unicode character, grab its UTF-8 encoding, and use that. – Sam Varshavchik
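
A minimal sketch of that suggestion on Linux (assuming the Raspbian terminal uses UTF-8, which is the default) is to write the character's UTF-8 bytes directly; U+2588 FULL BLOCK encodes as 0xE2 0x96 0x88:

    #include <iostream>

    int main() {
        // U+2588 FULL BLOCK as explicit UTF-8 bytes; a UTF-8 terminal renders it as █.
        std::cout << "\xE2\x96\x88" << std::endl;
        // Equivalent, if this source file is saved as UTF-8:
        // std::cout << "█" << std::endl;
    }

No _setmode equivalent or locale setup is needed here; the terminal interprets the raw UTF-8 byte sequence.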

Henry Le Berre