
I'm writing an emulator as a side project right now, and I would like to emulate the machine I've chosen at the same rate as the original hardware. My system should be powerful enough that the time to execute a single instruction is negligible, so if I just have a function, say tick, that performs a single instruction, it will run far too quickly.

I was wondering if there is any way, in C, to call a function at some given frequency (on the order of MHz). For context, I'm writing this on a Mac, so anything POSIX or in the OS X SDK would work (I looked through libdispatch but couldn't see anything suitable).

Would it be best to simply run a loop and calculate the time delta since the last iteration? This seems rather inefficient (and preemption might become a factor here). What would be some other ways of doing this? Thanks.

beeselmane

4 Answers


Using clock_gettime() and nanosleep() is the way to go. Any other mechanism that calls your function periodically will definitely be slower. You might even consider looping and counting cycles instead of using nanosleep(). Consider some numbers:

At 1 MHz, your function has 1 microsecond to run. At 10 MHz, your function has 100 nanoseconds to run.

Some experimental data on timings of system calls and context switches: https://blog.tsunanet.net/2010/11/how-long-does-it-take-to-make-context.html

So it looks like over 50 nanoseconds for a system call and over a microsecond for a context switch. Anything other than your own code in the same process calling your function will probably take "too long".
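For illustration, here is a minimal sketch of that approach, assuming your emulator exposes a tick() function (hypothetical name) that executes one instruction. It paces the loop with clock_gettime() on CLOCK_MONOTONIC and nanosleep(); accuracy is ultimately limited by the scheduler and the clock resolution:

```c
#include <time.h>

/* Hypothetical: executes one emulated instruction. */
extern void tick(void);

/* Call tick() at roughly `hz` calls per second. Accuracy is limited
 * by the scheduler and by the clock resolution (see clock_getres()). */
static void run_at(long hz)
{
    const long period_ns = 1000000000L / hz;
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        tick();

        /* Advance the deadline by one period, normalizing tv_nsec. */
        next.tv_nsec += period_ns;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }

        /* Sleep until the deadline; if we're behind, skip the sleep. */
        struct timespec now, delta;
        clock_gettime(CLOCK_MONOTONIC, &now);
        delta.tv_sec  = next.tv_sec - now.tv_sec;
        delta.tv_nsec = next.tv_nsec - now.tv_nsec;
        if (delta.tv_nsec < 0) {
            delta.tv_nsec += 1000000000L;
            delta.tv_sec  -= 1;
        }
        if (delta.tv_sec >= 0)
            nanosleep(&delta, NULL);
    }
}
```

At 1 MHz this asks nanosleep() for sub-microsecond pauses, which is exactly where the system-call numbers above start to dominate; batching several emulated instructions per wakeup is the usual workaround.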

Pierce
  • FYI: the macOS `clock_gettime()` has a resolution of one microsecond (see `clock_getres()`). Thus, for frequencies greater than 1 MHz, you can't really use those facilities — or the behaviour you get will be erratic. – Jonathan Leffler Jan 19 '19 at 08:16

Timing on a non-RTOS is quite erratic, because other tasks compete in the system scheduler. Imagine opening a web browser window during your emulation: all your timings will be thrown off.

So maybe you could take a different approach: don't rely on your system's time alone, but also use an emulated time.

Since you are also emulating the core and processing every instruction, you know how many instructions you have executed, so you can count the actual ticks your emulated system has run. You can then use these ticks to calculate the time on your emulated system, update any hardware timer interrupt you may emulate, and so on.

Will one particular function run faster than in real life? It doesn't matter: your system will know that XX ticks have gone by and will execute any interrupts (or anything else) based on that.

With this approach, a real second will not equal a simulated second, but your emulation will always behave the same, independently of other applications or system scheduling issues.

Also, if you want to stay in sync with real time, you can, from time to time (e.g. after executing any return from a function or from an exception), synchronize your ticks with real time simply by holding off execution of the next instruction.
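A minimal sketch of this tick-counting approach, assuming a 1 MHz emulated clock and a hypothetical step_one_instruction() that returns the cycle cost of the instruction it just executed:

```c
#include <stdint.h>
#include <time.h>

#define EMU_HZ     1000000LL   /* assumed emulated clock: 1 MHz */
#define SYNC_BATCH 10000ULL    /* ticks between real-time syncs */

static uint64_t emu_ticks;     /* cycles the emulated CPU has run */
static struct timespec start;  /* real time when emulation began */

/* Hypothetical: run one instruction, return its cycle cost. */
extern unsigned step_one_instruction(void);

/* Stall until real time catches up with emulated time. */
static void sync_with_real_time(void)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);

    int64_t real_ns = (int64_t)(now.tv_sec - start.tv_sec) * 1000000000LL
                    + (now.tv_nsec - start.tv_nsec);
    /* Note: this product overflows after a few hours of emulated
     * time at 1 MHz; rescale for long-running sessions. */
    int64_t emu_ns  = (int64_t)emu_ticks * 1000000000LL / EMU_HZ;

    if (emu_ns > real_ns) {    /* we are ahead of the wall clock */
        struct timespec pause = {
            .tv_sec  = (emu_ns - real_ns) / 1000000000LL,
            .tv_nsec = (emu_ns - real_ns) % 1000000000LL,
        };
        nanosleep(&pause, NULL);
    }
}

void run(void)
{
    uint64_t next_sync = SYNC_BATCH;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (;;) {
        emu_ticks += step_one_instruction();
        if (emu_ticks >= next_sync) {  /* coarse, cheap rate limiting */
            sync_with_real_time();
            next_sync += SYNC_BATCH;
        }
    }
}
```

Syncing in coarse batches keeps the clock_gettime()/nanosleep() overhead off the per-instruction hot path.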

Finally, take a look at this question, since it gives a lot of information on emulation that is worth reading. In particular, my approach described above corresponds to the "interpretation" approach described at that link.

LoPiTaL

This will be difficult to get accurate, because OS X is not a real-time system. If absolute accuracy isn't required, then I would use an interval timer that expires every 1/n seconds, i.e. at a rate of n per second, and then execute tick in the expiration handler.

A starting point would be POSIX setitimer(), calling tick in the signal handler.
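A rough sketch of that, with the caveat that a signal handler should only touch async-signal-safe state, so this version just sets a flag and runs the work in the main loop (tick() is the asker's hypothetical function). Note that setitimer() granularity is nowhere near MHz, so each expiration would have to run a batch of emulated instructions:

```c
#include <signal.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t expired;  /* set by the SIGALRM handler */

static void on_alarm(int signo)
{
    (void)signo;
    expired = 1;  /* async-signal-safe bookkeeping only */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    /* Expire every 10 ms (100 Hz); run a batch of instructions each time. */
    struct itimerval it = {
        .it_interval = { .tv_sec = 0, .tv_usec = 10000 },
        .it_value    = { .tv_sec = 0, .tv_usec = 10000 },
    };
    setitimer(ITIMER_REAL, &it, NULL);

    for (;;) {
        pause();            /* returns after the handler runs */
        if (expired) {
            expired = 0;
            /* tick(); -- execute one batch of emulated instructions */
        }
    }
}
```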

Stefan Becker

I'm afraid this is very difficult (or even impossible) to achieve on a hosted system if the OS is not a real-time one. OS X is not a real-time system, so your timings will be rather "random": your app is given execution time by the system scheduler and does not control CPU execution.

Context-switch timing and latency will not be the main issues here.

If you emulate the target system's behavior, you need to associate timings with the target system's instructions and, during emulator execution, adjust the speed at which the target instructions are emulated according to the elapsed time.
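For example (a hypothetical fragment, with made-up opcodes and cycle counts), the emulator can carry a per-opcode cycle-cost table and use the accumulated cost to decide how far ahead of real time it is:

```c
#include <stdint.h>

/* Hypothetical cycle costs for the emulated ISA: target clock cycles
 * each opcode consumed on the original hardware. */
static const unsigned cycle_cost[256] = {
    [0x00] = 4,    /* e.g. NOP              */
    [0xC3] = 10,   /* e.g. an absolute jump */
    /* ... one entry per opcode ... */
};

static uint64_t elapsed_target_cycles;

static void account(uint8_t opcode)
{
    /* Accumulate target time; a throttling loop elsewhere compares
     * this against real elapsed time and slows emulation as needed. */
    elapsed_target_cycles += cycle_cost[opcode];
}
```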

0___________