
I have a Python program, but it seems I cannot really scale it because of the lack of multiprocessing. We have added threading, but since it still runs on one core we cannot scale enough.

I saw here that it is possible to embed Python in C++ programs. So I thought to do the multiprocessing in C++ and, in those processes, call a Python function which we cannot convert to C++.

If I do it this way:

1: Is my thinking correct, that we can make full use of the server then?

2: Will the Python code be interpreted once when the program is started, or will it need to be interpreted every time the function is called? In other words, will the function still be as fast as it is now? (See the sketch below for what I mean.)
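
For question 2, this is roughly the embedding pattern I have in mind, as a minimal sketch; the module name `worker` and the function name `handle_event` are placeholders for our real code. My understanding is that the interpreter is started and the module imported (and compiled to bytecode) only once per process, and every later call just reuses that:

```cpp
#include <Python.h>

int main() {
    // Start the embedded interpreter once for the lifetime of this process.
    Py_Initialize();

    // Import the module once; CPython compiles it to bytecode at import time.
    PyObject* module = PyImport_ImportModule("worker");          // placeholder name
    if (!module) { PyErr_Print(); return 1; }

    PyObject* func = PyObject_GetAttrString(module, "handle_event");  // placeholder name
    if (!func || !PyCallable_Check(func)) { PyErr_Print(); return 1; }

    // Call the already-imported function as often as needed;
    // no re-parsing of the Python source happens here.
    for (int i = 0; i < 1000; ++i) {
        PyObject* result = PyObject_CallObject(func, nullptr);
        Py_XDECREF(result);
    }

    Py_DECREF(func);
    Py_DECREF(module);
    Py_FinalizeEx();
    return 0;
}
```

If that is right, the function itself should stay about as fast as it is now, since the interpreter start-up and the import cost are paid once per process rather than per call.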

EDIT:

It seems I wasn't clear.

In my understanding, Python offers both multithreading and multiprocessing. Multithreading will use the same core and can share memory space [1], while multiprocessing can use multiple cores but cannot share memory between the processes.

I have 3 main functions which all receive websocket data and place it in memory.

Then, on events, one function is called which needs to access this memory.

However, the number of times this function is called and the frequency of the websocket feed (messages/second) are growing fast. One CPU core cannot handle this.

I have to say I have no experience with C++, but I thought C++ can distribute the workload over multiple cores/CPUs while keeping access to the same memory, so we can scale by adding more cores/CPUs.
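
To make that last paragraph concrete, this is the kind of thing I imagine on the C++ side, as a minimal sketch with made-up names (`store`, `feed_worker`, `on_event`): several threads can be scheduled on different cores while they all read and write the same in-memory data, protected by a mutex.

```cpp
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

// Shared in-memory store, visible to all threads (names are made up).
std::unordered_map<std::string, std::string> store;
std::mutex store_mutex;

// Each feed thread writes incoming messages into the shared store.
void feed_worker(int id) {
    for (int i = 0; i < 100; ++i) {
        std::lock_guard<std::mutex> lock(store_mutex);
        store["feed" + std::to_string(id) + ":" + std::to_string(i)] = "message";
    }
}

// The event handler reads the same store from another thread.
void on_event() {
    std::lock_guard<std::mutex> lock(store_mutex);
    // ... look up whatever the event needs in `store` ...
}

int main() {
    std::vector<std::thread> feeds;
    for (int id = 0; id < 3; ++id)          // 3 feeds, as described above
        feeds.emplace_back(feed_worker, id);

    on_event();

    for (auto& t : feeds) t.join();
    return 0;
}
```

The mutex is just the simplest way to keep the shared map consistent; the point is that all threads see the same memory while the OS can schedule them on different cores.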

user3605780
  • You're doing something wrong. Do you mean multiprocessing instead of multithreading? You should be more clear. Give some details. – Onur Tuna Jan 31 '18 at 10:40
  • "*the lack of multiprocessing*". What's wrong with https://docs.python.org/2/library/multiprocessing.html? You can easily change the `threading`-based code to `multiprocessing`. – CristiFati Jan 31 '18 at 10:43
  • As @CristiFati mentioned, I also think Python provides enough flexibility. I can't think of any scenario where you cannot use multiprocessing. Just one note: if you want true concurrency, make sure how kernel threads are assigned to user-level threads; I assume there should be an interface for that in multiprocessing. You may also find this post useful: https://medium.com/@bfortuner/python-multithreading-vs-multiprocessing-73072ce5600b, especially the part about the `concurrent.futures` library. – Novin Shahroudi Jan 31 '18 at 11:01
  • I added some extra information, please let me know if my thought process is wrong. – user3605780 Jan 31 '18 at 11:05
  • _“I thought C++ can distribute the workload over multiple cores/cpu while keeping access to the memory.”_ This clearly looks like an [XY problem](http://xyproblem.info/). _“Multithreading will use same core”_ No, it is widely used for running a single application on several cores simultaneously. – Melebius Jan 31 '18 at 11:21
  • @Melebius, I closed the question; it seems I indeed need to read more into the subject. – user3605780 Jan 31 '18 at 11:33
  • I don't doubt that you *could* use C++ this way, but I don't think you'd end up with a design that you'd be happy with. If you're going for a multi-process design, you can write just your critical process in C++. You would have to look into a cross-language way to do inter-process communication and probably inter-process memory sharing too. – Humphrey Winnebago Jan 31 '18 at 21:03

0 Answers