I'm not sure this is quite the right sort of question for SO.
However, the answer is "it's up to you". Projects do it both ways, and there are pros and cons to each. A lot of it depends on (a) how many contributors you have and (b) how easy you want to make it for other people to build directly from your Git repository.
If you check in your derived files, then other people don't need any of these extra tools installed on their systems to build the software (but see a caveat below). However, if you have multiple contributors and they don't all have identical versions of the tools, you can get warring spurious changes as the derived files are regenerated with slight differences and then committed.
If you don't check in your derived files, then your repository is much cleaner and, abstractly, more "correct" (just as you wouldn't expect people to check in the object files generated during compilation). However, anyone who wants to build the code will need the full suite of tools installed themselves. On GNU/Linux systems this is pretty trivial, but not so on some others. Generally, people who make this choice provide a shell script that will "prep" the source directory.
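Such a prep script is often named `autogen.sh` or `bootstrap` by convention (the name and the exact tool list below are illustrative, not mandated). A minimal sketch for an autotools-based project might look like:

```shell
#!/bin/sh
# autogen.sh: hypothetical "prep" script for building from a raw Git
# checkout of an autotools-based project. Tool names are illustrative;
# adjust for whatever generators your project actually uses.

# Report whether a required build tool is on PATH.
require() {
    command -v "$1" >/dev/null 2>&1 || {
        echo "autogen.sh: missing required tool: $1" >&2
        return 1
    }
}

if [ ! -f configure.ac ]; then
    echo "autogen.sh: run this from the top-level source directory" >&2
elif require autoconf && require automake; then
    # Regenerate configure, Makefile.in, aclocal.m4, etc.
    autoreconf --install --force
fi
```

Checking for the tools up front and failing with a readable message is kinder to contributors than letting `autoreconf` die halfway through.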
One caveat with the first method: even if you do check in all derived files, you will probably still need to write and run some sort of "prep" script, assuming you are using make and writing makefiles to keep your gettext files up to date. Git doesn't preserve timestamps, so if you want to avoid rebuilding derived files after they've been cloned or checked out, you need to run a little script that touches the files in the proper order to ensure make knows they're up to date.
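That timestamp-fixing script can be very small: touch the hand-written sources first, then each generation stage in dependency order, with a pause between stages so the mtimes are strictly ordered. The file names here are illustrative; list your own project's files:

```shell
#!/bin/sh
# fix-timestamps.sh: after `git clone`, make derived files strictly
# newer than the sources they were generated from, so make won't try
# to regenerate them. File names below are examples only.
set -e

touch configure.ac Makefile.am        # hand-written inputs first
sleep 1                               # guarantee strictly newer mtimes
touch aclocal.m4 configure Makefile.in
sleep 1
if [ -d po ]; then
    # Compiled gettext catalogs, if the project ships any.
    find po -name '*.gmo' -exec touch {} +
fi
```

The `sleep` calls matter on filesystems with coarse timestamp resolution; without them, make may still consider some targets out of date.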
FWIW, I personally never check in any derived files into my repositories. I acknowledge this makes life difficult for some users but IMO it's the right way to do things.