
Essentially I have two "while True:" loops in my code. Both of the loops are right at the end. However, when I run the code, only the first while True: loop gets run, and the second one gets ignored.

For example:

while True:
    print "hi"

while True:
    print "bye"

Here, it will continuously print hi, but won't print bye at all (the actual code has a tracer.execute() for one loop, and the other is listening on a port, and they both work on their own).


Is there any way to get both loops to work at the same time independently?

  • What do you want your output as? Please post a sample of your expected output – yash Dec 03 '17 at 01:24
  • Also, it's only printing `hi` because it's in an infinite while loop - `while True`. It will run as long as the condition is `True` which it is in this case – yash Dec 03 '17 at 01:24
  • ah i see. ok so after your comment i tried a fix, and it worked for this, but not for my actual code - so im gonna post a new question with the actual code (and the expected output as yklsga said above) - but thnx you both helped :) – doodspav Dec 03 '17 at 09:39
  • it’s just me on both comments. Also, glad i could help. yw @doodspav – yash Dec 03 '17 at 09:55

1 Answer


Yes.
A way to get both loops to work at the same time independently:

Your initial surprise is related to the nature of how Finite-State-Automata actually work:

[0]: any-processing-will-always-<START>-here
[1]: Read the next instruction
[2]: Execute the instruction
[3]: GO TO [1]

The stream of abstract instructions is executed in a pure-[SERIAL] manner, one after another. There has been no other way inside a CPU since uncle Turing's days.

Your desire to have more streams-of-instructions run at the same time, independently of one another, is called [CONCURRENT] process-scheduling.


You have several tools for achieving the wanted modus operandi:

Read first about the weaker form, using just thread-based concurrency. Due to the Python-specific GIL-locking it still executes on the physical hardware as a [CONCURRENT]-processing, not a truly parallel one: the GIL-interleaving ( which was knowingly implemented as a very cheap form of collision-avoidance for each and every case that this [CONCURRENCY] might introduce ) interleaves the ( now ) [CONCURRENT]-streams, so as to principally avoid colliding access to any Python object at the same time. If you are fine with this execute-just-one-instruction-stream-fragment-at-a-time mode ( with a round-robin of the actual order of GIL-stepped execution ), you can live in a safe and collision-free world.
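A minimal sketch of the thread-based approach, assuming the two print loops from the question stand in for the real tracer.execute() and port-listening work ( the names loop_hi / loop_bye are hypothetical, chosen only for this illustration ):

import threading
import time

def loop_hi():                  # hypothetical stand-in for the tracer.execute() loop
    while True:
        print("hi")
        time.sleep(1)           # yield for a moment, to keep the output readable

def loop_bye():                 # hypothetical stand-in for the port-listening loop
    while True:
        print("bye")
        time.sleep(1)

t_hi = threading.Thread(target=loop_hi)
t_bye = threading.Thread(target=loop_bye)
t_hi.daemon = True              # daemon threads will not block the interpreter's exit
t_bye.daemon = True
t_hi.start()
t_bye.start()

while True:                     # keep the main thread alive while both loops run, GIL-interleaved
    time.sleep(1)

Both loops now make progress, yet ( due to the GIL ) only one of them is actually executing Python bytecode at any given instant.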

Another tool Python may use is joblib.Parallel() ( together with joblib.delayed() ), where you will have to master a few more things: you now get a set of fully spawned subprocesses, each ( yes, each ) having a full copy of the Python state plus all variables ( read: a lot of time and memory is needed to spawn them ) and no mutual coordination.
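A minimal sketch of the subprocess-based approach, assuming the joblib package is installed and re-using the same hypothetical loop_hi / loop_bye placeholders:

from joblib import Parallel, delayed

def loop_hi():                  # hypothetical placeholder for the tracer.execute() loop
    while True:
        print("hi")

def loop_bye():                 # hypothetical placeholder for the port-listening loop
    while True:
        print("bye")

# each delayed() callable is executed inside its own fully spawned subprocess
Parallel(n_jobs=2)([delayed(loop_hi)(), delayed(loop_bye)()])

Note that, because neither loop ever returns, the Parallel() call itself blocks forever; in real code the workers would normally terminate on some condition so their results can be collected.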

So decide which form is just-enough for your kind of use-case, and better check the re-formulated Amdahl's Law carefully ( for its implications on the costs of going distributed or parallel ).
