
This may seem pretty basic, but I'm really wondering: what's the best way to make sure a game has time to update all of its logic before rendering?

Take the NES, for example: if you know the cost of each instruction, you can calculate how much time your logic takes before the VBlank, so you can use most of the available processing time. Nowadays, that kind of thing is pretty much impossible on modern computers.

I just want to know what I can use to scale my game and to keep track of the cost of the features I'm implementing, so I can ensure a solid framerate.
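A common modern substitute for NES-style cycle counting is a fixed-timestep update loop with an accumulator, plus high-resolution timers around the update and render phases so the measured cost can be compared against the frame budget. Below is a minimal C++ sketch, assuming a 60 Hz logic rate; the `update()` and `render()` hooks are placeholders, not anything from the question:

```cpp
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

// Hypothetical game hooks -- substitute your own logic and drawing code.
void update(double dt) { /* advance the game state by dt seconds */ }
void render()          { /* draw the current state */ }

int main() {
    const double tickRate = 1.0 / 60.0;  // assumed fixed logic step of 60 Hz
    double accumulator = 0.0;
    auto previous = Clock::now();

    for (int frame = 0; frame < 300; ++frame) {  // a few frames, just for the sketch
        auto frameStart = Clock::now();
        std::chrono::duration<double> elapsed = frameStart - previous;
        previous = frameStart;
        accumulator += elapsed.count();

        // Run as many fixed logic steps as the elapsed real time requires.
        auto updateStart = Clock::now();
        while (accumulator >= tickRate) {
            update(tickRate);
            accumulator -= tickRate;
        }
        auto updateEnd = Clock::now();

        render();
        auto renderEnd = Clock::now();

        // Per-frame cost report: compare against the frame budget
        // (~16.6 ms at 60 FPS) to see which phase is eating the time.
        std::printf("update: %.3f ms, render: %.3f ms\n",
            std::chrono::duration<double, std::milli>(updateEnd - updateStart).count(),
            std::chrono::duration<double, std::milli>(renderEnd - updateEnd).count());
    }
}
```

The fixed step keeps the logic deterministic regardless of how fast rendering runs, and the printed timings give a rough per-frame profile of where the budget goes.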

HEYIMFUZZ
  • I do it so that update and render block each other (in the multithreaded case), and update measures the elapsed time, which is then used for the update itself (usually for d'Alembert integration). That way it doesn't matter if the update method is called at an inaccurate rate... I don't handle vertical sync, since double buffering does it for me automatically (of course that risks skipping some frames, but that isn't too important for a dynamic scene; you would need to know what to look for and look hard to even spot it). I use timers for the call generation, apart from the obvious events... (see the sketch after these comments) – Spektre May 28 '17 at 07:10
  • There are also other possibilities for timing/sync-demanding tasks like this: [Question about cycle counting accuracy when emulating a CPU](https://stackoverflow.com/a/33317202/2521214). Also, your assumptions about the NES are correct only as long as you are not using interrupts and HW waits... – Spektre May 28 '17 at 07:16
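A minimal sketch of the variable delta-time approach described in the first comment (measure the elapsed time each frame and feed it into the integration step), assuming a trivial 1D body and semi-implicit Euler integration; the `Body` struct and the dt clamp value are illustrative, not taken from the original:

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

// Illustrative 1D body; in practice this would be your simulation state.
struct Body { double position = 0.0, velocity = 0.0, acceleration = -9.81; };

// Advance the state by the measured real elapsed time (semi-implicit Euler).
void integrate(Body& b, double dt) {
    b.velocity += b.acceleration * dt;
    b.position += b.velocity * dt;
}

int main() {
    Body body;
    auto previous = Clock::now();

    for (int frame = 0; frame < 600; ++frame) {
        auto now = Clock::now();
        double dt = std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Clamp dt so a long stall (debugger break, window drag) doesn't
        // blow up the integration step.
        if (dt > 0.1) dt = 0.1;

        integrate(body, dt);  // update scaled by real elapsed time
        // render(body);      // presentation timing left to double buffering / vsync
    }
}
```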

0 Answers