2

I want to write an emulator for a particularly slow CPU that runs at roughly 600 kilohertz. If I were to write an emulator for the CPU in the naïve way (i.e. emulating one instruction at a time without doing anything else), the emulation would run much faster than 600 kilohertz.

How do I program an emulator to emulate a CPU at the correct speed, regardless of the host's speed? What technique do real-world emulators usually use for this? How do I avoid jitter slowing down the emulation?

fuz
  • What kind of emulator exactly are you talking about? qemu, FPGA, etc.? I mean, which programming environment are you going to use for this emulator? – Sam Protsenko Apr 11 '15 at 12:17
  • The typical technique (using a slow timer to emulate 6k cycles a hundred times per second) only works "on average" and has lots of jitter, so I assume it is not good enough here? – harold Apr 11 '15 at 12:18
  • @SamProtsenko My concrete use case is to write an emulator for an 8 bit CPU (think home computer) that runs on a POSIX operating system. – fuz Apr 11 '15 at 12:18
  • @harold I'm not sure what the standard technique is and research didn't turn up anything interesting. Would you mind elaborating on the standard technique in an answer so I can upvote it and potentially award a bounty? – fuz Apr 11 '15 at 12:19
  • @FUZxxl so it's going to be just your own user-space application written in C and using the POSIX API, correct? Which OS are you going to use (e.g. Linux, FreeBSD)? And which kernel version? – Sam Protsenko Apr 11 '15 at 12:22
  • @SamProtsenko Correct. I'm currently using Linux with the intent to migrate to FreeBSD soon, but I'm interested in portable solutions with respect to POSIX. – fuz Apr 11 '15 at 12:23

1 Answer

3

On a typical platform, the only available "periodic events" are inaccurate and low-frequency, certainly nothing like 0.6 MHz. But using a "slow" timer (maybe 100 Hz or so) you can "run many short sprints", with enough time "resting" in between that on average you emulate the right number of cycles per second. Time can usually be measured fairly accurately, so you can emulate exactly the right number of cycles in every "sprint".

At a high level, that could look something like this:

int cycle_budget = 0;
time last_sprint = something;

// on timer fire:
// elapsed wall-clock time times the emulated clock rate
// gives the number of cycles owed since the last sprint
cycle_budget += (current_time - last_sprint) * clock_rate;
last_sprint = current_time;
while (cycle_budget >= slowest_instruction)
    tick(); // emulates one instruction, subtracts its cycle cost from cycle_budget

There are some obvious variations: for example, you can let the budget go negative instead of testing whether there is enough left to run a slow instruction, or you can decode the instruction first and then test whether there is enough budget to run it. This all assumes an instruction won't take arbitrarily long, but as far as I know that is never a problem (even the Z80's block instructions effectively loop by branching back and re-executing themselves).
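For concreteness, here is a minimal self-contained sketch of that loop on POSIX, using clock_gettime to measure elapsed time and nanosleep as the "slow" timer, and following the let-the-budget-go-negative variant. CLOCK_RATE, REST_NS and the stub tick() are illustrative placeholders, not part of the pseudocode above:

#include <stdint.h>
#include <time.h>

#define CLOCK_RATE 600000           /* emulated CPU speed in Hz */
#define NS_PER_SEC 1000000000L
#define REST_NS    10000000L        /* rest ~10 ms between sprints (100 Hz) */

static int64_t tick(void)
{
    /* stub: a real emulator would decode and execute one instruction */
    return 4;                       /* pretend every instruction costs 4 cycles */
}

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * NS_PER_SEC + ts.tv_nsec;
}

int main(void)
{
    int64_t budget = 0;             /* cycles we may still run */
    int64_t last_sprint = now_ns();

    for (;;) {
        int64_t now = now_ns();
        /* elapsed wall-clock time, converted to emulated cycles */
        budget += (now - last_sprint) * CLOCK_RATE / NS_PER_SEC;
        last_sprint = now;

        /* negative-budget variant: overshoot by at most one instruction */
        while (budget > 0)
            budget -= tick();

        /* rest; oversleeping merely enlarges the next budget,
           so timer jitter does not accumulate into slowdown */
        struct timespec rest = { 0, REST_NS };
        nanosleep(&rest, NULL);
    }
}

The integer division drops the fractional cycles of each sprint (at most about 100 cycles per second of drift at 100 Hz); if that matters, accumulate elapsed nanoseconds and convert once.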

harold
  • That sounds sensible. Some systems use precisely timed interrupts to play sound; I understand that this is not going to be reproducible with the emulation approach you describe. In my use case I can emulate the part that does sound in a different way, but is there any trick to get things that depend on precise timing right? – fuz Apr 11 '15 at 12:40
  • @FUZxxl it gets trickier there. One approach I've seen labels "external events" with the precise time at which they would have occurred (based on a cycle counter), and then processes them as if they happened at that time (see the sketch after these comments). But that doesn't necessarily work for everything – harold Apr 11 '15 at 12:48
  • @harold What about preemption? Should your method take it into account? I mean, if we know the [time slice](http://en.wikipedia.org/wiki/Preemption_%28computing%29#Time_slice) value of the scheduler (which we can get with `sched_rr_get_interval()`), do we need to use this value in your calculations, so that one cycle of the emulated CPU doesn't get interrupted by the scheduler? – Sam Protsenko Apr 11 '15 at 12:48
  • 2
    @SamProtsenko the elapsed time will reflect that and allocate a bigger budget; it does that with all fluctuations in timing (and timing is usually inaccurate anyway) – harold Apr 11 '15 at 12:49
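As a rough, self-contained illustration of the event-labelling idea from the comments above (the queue contents and all names below are invented for the example):

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: external events are stamped with the emulated
   cycle at which they should take effect, then delivered in order. */
struct event { uint64_t cycle; const char *what; };

static struct event queue[] = {
    { 100, "sound register write" },
    { 250, "timer interrupt" },
};
static size_t next_ev = 0;
static const size_t n_ev = sizeof queue / sizeof queue[0];

int main(void)
{
    uint64_t cycle_count = 0;

    while (cycle_count < 300) {
        /* deliver every event whose timestamp has been reached */
        while (next_ev < n_ev && queue[next_ev].cycle <= cycle_count) {
            printf("cycle %llu: %s\n",
                   (unsigned long long)cycle_count, queue[next_ev].what);
            next_ev++;
        }
        cycle_count += 4;   /* stand-in for one instruction's cycle cost */
    }
    return 0;
}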