On a typical platform, the only available "periodic events" are inaccurate and low-frequency, certainly nothing like 0.6MHz. But using a "slow" timer (maybe 100Hz or so) you can "run many short sprints", with enough time "resting" in between that on average you're emulating the right number of cycles per second. Time can usually be measured fairly accurately, so you can emulate exactly the right number of cycles in every "sprint".
At a high level, that could look something like this:
double cycle_budget = 0;        // double, so fractional cycles aren't lost
double last_sprint = now();     // now(): current time in seconds

// on timer fire
double current_time = now();
cycle_budget += (current_time - last_sprint) * clock_rate;
last_sprint = current_time;
while (cycle_budget >= slowest_instruction)
    tick(); // emulates one instruction, subtracts its cost from cycle_budget
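Fleshed out, a minimal runnable sketch might look like the following. It assumes a POSIX environment (clock_gettime and nanosleep), a hypothetical tick() that executes one instruction and returns its cycle cost, and made-up constants; a plain sleep loop stands in for a real platform timer:

#include <time.h>

#define CLOCK_RATE 600000.0       /* 0.6MHz target clock */
#define SLOWEST_INSTRUCTION 23.0  /* made-up worst-case cycle cost */

/* Hypothetical: run one instruction, return how many cycles it took. */
extern double tick(void);

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    double cycle_budget = 0;
    double last_sprint = now();
    struct timespec rest = {0, 10 * 1000 * 1000}; /* ~100Hz sprints */

    for (;;) {
        nanosleep(&rest, NULL); /* stand-in for a real timer callback */
        double current_time = now();
        cycle_budget += (current_time - last_sprint) * CLOCK_RATE;
        last_sprint = current_time;
        while (cycle_budget >= SLOWEST_INSTRUCTION)
            cycle_budget -= tick();
    }
}

Note that the budget is computed from measured elapsed time after the sleep, not from the nominal 10ms; that's what makes the average rate come out right even when the timer is sloppy.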
There are some obvious variations: for example, you can let the budget go negative instead of testing whether there is enough left to run the slowest instruction, or you can decode the instruction first and then test whether there is enough budget to run that specific one (a sketch of the first variation follows below). This all assumes an instruction can't take arbitrarily long, but as far as I know that's never a problem; even something like the Z80's string instructions actually loop by branching back and re-executing themselves, so each iteration is a bounded-cost instruction.
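For the negative-budget variation, a minimal sketch (reusing the hypothetical tick() from above) could be:

/* Let the budget go negative instead of checking against the slowest
   instruction's cost. The overshoot is at most one instruction, and the
   deficit is automatically repaid because the next sprint only adds the
   cycles that elapsed time has earned. */
void run_sprint(double *cycle_budget) {
    while (*cycle_budget > 0)
        *cycle_budget -= tick();
}

This version has the small advantage that you don't need to know the worst-case instruction cost up front.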