I have a Python program, but it seems I cannot really scale it because of the lack of multiprocessing. We have added threading, but since everything still runs on one core we cannot scale enough.
I saw here that it is possible to embed Python in C++ programs. So I thought I could do the multiprocessing in C++ and, from each of those processes, call the one Python function we cannot convert to C++.
If I do it this way:
1: Is my thinking correct — would we then be able to make full use of all the server's cores?
2: Will the Python code be interpreted once when the program is started, or will it need to be re-interpreted every time the function is called? In other words, will the function still be as fast as it is now?
EDIT:
It seems I wasn't clear.
In my understanding, Python offers both multithreading and multiprocessing. Multithreading uses the same core (because of the GIL) but the threads can share memory space [1]. Multiprocessing can use multiple cores, but the processes cannot share memory space with each other.
I have 3 main functions which all receive websocket data and place it in memory.
Then, on events, another function is called which needs to access this memory.
However, both the rate at which this function is called and the frequency of the websocket feed (messages/second) are growing fast. One CPU core cannot handle this anymore.
I have to say I have no experience with C++, but I thought C++ could distribute the workload over multiple cores/CPUs while keeping shared access to the memory, so we could scale by adding more cores/CPUs.