I'm making a command-line interpreter for a programming language, and by the interpreter's nature there are a number of purely cosmetic Unicode characters to be printed to the screen.
It's occurred to me that perhaps I should accommodate those whose terminals (line printers?) don't support Unicode, or those whose font lacks glyphs for some of the characters.
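(One way to detect that situation, rather than relying on a flag alone, might be to test whether the active output encoding can even represent the characters; `terminal_supports` below is just a name I made up for this sketch.)

```python
import sys

def terminal_supports(text):
    """Best-effort check: can stdout's encoding represent `text`?"""
    encoding = getattr(sys.stdout, "encoding", None) or "ascii"
    try:
        text.encode(encoding)
        return True
    except UnicodeEncodeError:
        return False
```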
The way I thought I'd implement this without rewriting a lot of existing printing code is to add a command-line flag (say, `--no-unicode-out`) and then do something like the following:
```python
import sys

class myStdout(object):
    """File-like stand-in for stdout that downgrades Unicode to ASCII."""

    def write(self, *args, **kwds):
        # Swap each offending character for an ASCII replacement
        # before handing the text to the real stdout.
        return sys.__stdout__.write(" ".join(args).replace("µ", "micro"))

    def flush(self, *args, **kwds):
        return sys.__stdout__.flush()

# Crude stand-in for real argument parsing: any extra argument enables the mode.
NO_UNICODE_OUT = bool(len(sys.argv) - 1)
if NO_UNICODE_OUT:
    print("stdout switcheroo")
    sys.stdout = myStdout()

print(input("> "))
```
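(In the real interpreter there's a whole set of such characters, not just `µ`, so I'd presumably grow the single `replace` into a translation table; the entries below are invented for the sketch.)

```python
# Hypothetical fallback table; the real character set is larger.
ASCII_FALLBACKS = str.maketrans({
    "µ": "micro",
    "→": "->",
    "λ": "lambda",
})

def to_ascii(text):
    # One pass over the string replaces every mapped character.
    return text.translate(ASCII_FALLBACKS)
```

With that, `write` would call `to_ascii` instead of chaining `.replace()` calls.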
This feels kinda messy, kinda hacky. Now, that's not always a bad thing, but does this kind of solution make any sense at all, and if not, what would a better solution look like?
If someone wants to nitpick: by "practical" I mean sensible, efficient, readable, idiomatic, whatever.