The warnings about not keeping per-process global state in a web backend app (you'll have the very same issue with Django or any WSGI app) only apply to state that you expect to be shared between requests AND processes.
If it's ok for you to have per-process state (a db connection, for example, is typically per-process state), then it's not an issue. With respect to connection pooling, you may (or may not) decide that having a distinct pool per server process is acceptable.
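To illustrate the per-process point, here is a minimal sketch (the names `get_pool` and `worker` are mine, and the "pool" is just a string standing in for a real resource such as a SQLAlchemy engine or a psycopg2 connection pool):

```python
import os
from multiprocessing import Process, Queue

# Module-level state: each OS process gets its own copy of this variable,
# so each worker process ends up with its own "pool".
_pool = None

def get_pool():
    """Return this process's pool, creating it on first use (and after a fork)."""
    global _pool
    if _pool is None or _pool[0] != os.getpid():
        _pool = (os.getpid(), "connections-for-pid-%d" % os.getpid())
    return _pool[1]

def worker(q):
    # Each worker process reports the pool it sees.
    q.put(get_pool())

if __name__ == "__main__":
    q = Queue()
    procs = [Process(target=worker, args=(q,)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # The two workers print two distinct pool names, one per pid.
    print(q.get())
    print(q.get())
```

Within a single process `get_pool()` always returns the same object, which is exactly the behavior you rely on for per-process db connections.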
For anything else - any state that needs to be shared amongst processes - this is usually handled by some external database or cache process. So if you want one single connection pool shared by all your Flask processes, you will indeed have to use a distinct server process to maintain the pool.
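As a self-contained sketch of that idea, `multiprocessing.Manager` can stand in for an external store (in production this would typically be Redis, memcached, or a dedicated pooler): the manager runs a separate server process that actually holds the state, and workers only talk to it through proxies. The `record_hit` helper is hypothetical:

```python
import os
from multiprocessing import Manager, Process

def record_hit(shared_hits):
    # The list proxy forwards each append to the manager's server process,
    # which is the single place the shared state actually lives.
    shared_hits.append(os.getpid())

if __name__ == "__main__":
    with Manager() as manager:
        hits = manager.list()  # held by a separate server process
        workers = [Process(target=record_hit, args=(hits,)) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(len(hits))  # 4 hits recorded, one per worker process
```

The design point is the same as with an external database: no worker owns the state, so it survives individual workers and is visible to all of them.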
Also note that:
> multiple processes of the same application (which, as I understand, can be spawned in the case of large production flask servers)
Actually this has nothing to do with being "large". With a traditional "blocking" server, you can only handle concurrent requests by using either multithreading or multiprocessing. The Unix philosophy traditionally favors multiprocessing (the "prefork" model) for various reasons, and Python's multithreading is of limited use here anyway (the GIL prevents threads from executing Python code in parallel), so you don't have much choice if you hope to serve more than one request at a time.
To make a long story short: just about any production setup for a WSGI app will run multiple processes in the background, period.
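As a concrete example, assuming Gunicorn (a common WSGI server) as the deployment target, the worker count is right there on the command line; `myapp:app` is a placeholder for your module and Flask application object:

```shell
# Gunicorn preforks 4 worker processes up front; each one imports the
# app separately, so any module-level state exists once per worker.
gunicorn --workers 4 --bind 127.0.0.1:8000 myapp:app
```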