
I was trying to replace a for-loop with coroutines to move the stars:

--fine
function _update()
 for c in all(boids) do
  move_boid(c)
 end
end

--broken
function _update()
 for c in all(boids) do
  coresume(cocreate(move_boid),c)
 end
end

Notice that a fixed number of stars freeze (I'm fairly sure the number is fixed):

(image: boids)

But why does this happen, and how can I handle it? The complete code is on itch.

knh190
  • Wrap the call to `coresume` with an `assert` to catch any possible runtime errors produced by coroutines. Btw, the way it is called now, there's no point in using coroutines, it's only a waste of time. – Vlad Jan 15 '19 at 07:36
  • @Vlad I checked it with `costatus` and `co!=null`; no errors. I need coroutines because each boid does some calculations that are logically wrong to run one by one. – knh190 Jan 15 '19 at 07:46
  • Running coroutine without `yield` somewhere in the middle is like calling a plain function, just takes some more resources. It's not a parallel thread, you won't save time by creating lots of coroutines. – Vlad Jan 15 '19 at 07:57
  • 2
    An interesting quote from lexaloffle's forum: `coroutines seem to yield automatically if PICO-8 runs out of cycles while running the routine. Not sure if this has been documented anywhere`. In other words, your system is overloaded. Try to reduce amount of snowflakes. – Egor Skriptunoff Jan 15 '19 at 07:59
  • Playing a bit with the sources. It feels like it's not about exceptions; your model for the boids makes them get stuck because of high boid density. If I reduce the number of boids, I sometimes see a few of them occasionally slow down to a freeze, but after a few seconds they continue to move. – Vlad Jan 15 '19 at 08:00
  • @EgorSkriptunoff Yeah, I'm aware that fewer boids are fine. I also tried increasing the number of boids, and the number of frozen boids increased (still a fixed proportion). @Vlad if you replace it with `move_boid(c)` it'll be slower. Surely I'm parallelizing it here. – knh190 Jan 15 '19 at 08:18
  • @EgorSkriptunoff Still no idea how to handle this. Create a table for the coroutines and loop over it in `_update`? That doesn't seem to work if there are a lot. (But really, 40 is not large.) – knh190 Jan 15 '19 at 08:32
  • You're not parallelizing it. If it's true that pico-8 yields coroutines on its own, then you skip some steps in your simulation, making it faster, but incomplete. Coroutines do not run in parallel. – Vlad Jan 15 '19 at 08:33
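
To illustrate Vlad's point: a coroutine whose body never calls `yield()` runs to completion inside the very first `coresume`, so it behaves exactly like a plain function call, only slower. A minimal sketch in PICO-8 Lua (`bump` is a made-up example function):

-- a function with no yield() anywhere
function bump(x)
 x+=1
end

local co=cocreate(bump)
coresume(co,1)               -- bump runs to completion right here
assert(costatus(co)=="dead") -- nothing is left to resume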

1 Answer


Thanks to @Vlad's and @Egor's comments. The problem seems to be coroutines that take too long to finish within one PICO-8 cycle, so PICO-8 force-yields them. My solution is to store unfinished coroutines in a table and resume them on later frames until they finish. Somehow the movement changes slightly, though, a bit like a dropped frame.

Here's my edited code:

function _init()
 -- code
 cors={}
end

function _update()
 -- start a fresh coroutine for each boid
 for i=1,#boids do
  local co=cocreate(move_boid)
  local c=boids[i]
  add(cors,co)
  coresume(co,c)
 end
 -- resume any coroutines that yielded (or were
 -- force-yielded by pico-8) on a previous frame
 for co in all(cors) do
  if co and costatus(co)!="dead" then
   coresume(co)
  else
   del(cors,co) -- all() tolerates deletion mid-loop
  end
 end
end

I also modified the calculation function, adding a `yield()` in the middle:

function move_boid(c)
 -- code
 yield()
 -- code
end

The point is to yield voluntarily before the coroutine completes, instead of letting PICO-8 force-yield it at an arbitrary point when it runs out of cycles.
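
As Vlad suggested in the first comment, wrapping `coresume` in an `assert` would have surfaced the problem sooner: when a coroutine crashes, `coresume` returns false plus an error string rather than raising, so errors die silently. A hedged sketch of the resume loop with that check added:

for co in all(cors) do
 if costatus(co)!="dead" then
  -- assert re-raises any error swallowed by the coroutine
  assert(coresume(co))
 else
  del(cors,co)
 end
end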


Update: another way to do it is to reuse the coroutines instead of recreating them every frame.

function _init()
 -- code
 -- create coroutines
 cors={}
 for i=1,#boids do
  local co=cocreate(move_boid)
  local c=boids[i]
  add(cors,co)
  coresume(co,c)
 end
end

function _update()
 -- resume every long-lived coroutine once per frame
 foreach(cors,coresume)
end

-- and wrap the move function in an endless loop
-- so its coroutine never finishes and dies
function move_boid(c)
 while true do
  -- code
  yield()
  -- code
  yield()
 end
end
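
One caveat with the reuse approach (my own note, not from the comments): if `move_boid` ever throws, its coroutine dies silently and that boid freezes for good. A defensive sketch using only PICO-8 builtins, relying on `coresume` returning false plus an error string on a crash:

function _update()
 for co in all(cors) do
  local ok,err=coresume(co)
  if not ok then
   printh(err)  -- log the error to the host console
   del(cors,co) -- drop the dead coroutine; all() tolerates this
  end
 end
end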
knh190