
This might sound like a weird question, but how can you program time without using the API of a programming language? Time is such an abstract concept; how can you write a program without using a predefined time function?

I was thinking it would have to be calculated from a count of processor computations, but if every computer performs at a different speed, how would you write code to keep track of time?

Assuming the language the program is written in doesn't matter, how would you do this?

EDIT: I should also say: not using system time, or any pre-generated version of time from the system.

Arian Faurtosh
  • Time is an abstract concept if you are in a philosophical debate over the universe and space travel. In programming it is counted as seconds and microseconds. – Rottingham Sep 13 '13 at 18:43
  • In C++, there are things like `std::chrono::high_resolution_clock` or `std::chrono::system_clock` or `std::chrono::steady_clock`, etc. used for measuring time. The implementation is provided by the standard library, so the user doesn't need to bother with the TICK_PER_CYCLE, and so on. – Nemanja Boric Sep 13 '13 at 18:43
  • But can you do it without using those C++ methods is what I am asking... No use of system time from the OS. – Arian Faurtosh Sep 13 '13 at 18:55
  • So you want to measure time, but don't want to use anything relating to the built-in system clock? ... are you high? – Sammitch Sep 13 '13 at 19:00
  • I want to know if it is possible to do just using written code and no hardware output for time... and no I am not high. – Arian Faurtosh Sep 13 '13 at 19:02
  • Are you sure? Because the only way a computer has *any* concept of time is based on the hardware clock which is basically the same as what's in your wristwatch. Without that hardware clock and the system calls that poll it your computer wouldn't even be able to tell you what its clock speed is. All it could tell you was "I have completed X cycles since the last arbitrary poll". – Sammitch Sep 13 '13 at 19:07
  • Actually time is not an abstract concept. Time is defined as what a clock measures. – ggb667 Oct 11 '13 at 15:31

7 Answers

8

Typically, time is provided to the language runtime by the OS layer. So, if you're running a C/C++ program compiled in a Windows environment, it's asking the Windows OS for the time. If you're running a Java program executing in the JVM on a Linux box, the Java program gets the time from the JVM, which in turn gets it from Linux. If you're running JavaScript in a browser, the JavaScript runtime gets the time from the browser, which gets its time from the OS, etc...
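
For illustration, here is a minimal Java sketch of that layering. The method shown is a standard JDK call; the comments just restate the delegation chain described above, they aren't part of the API:

    public class WallClockDemo {
        public static void main(String[] args) {
            // The JVM implements this call natively: it asks the operating system
            // for the current wall-clock time, and the OS in turn reads the value
            // it maintains from its clock hardware.
            long millisSinceEpoch = System.currentTimeMillis();
            System.out.println("Milliseconds since 1970-01-01 UTC: " + millisSinceEpoch);
        }
    }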

At the lower levels, I believe the time the OS has is based on elapsed clock cycles in the hardware layer, compared against some root time that you set in the BIOS or OS.

Updated with some more geek-detail:

Going even more abstract: if your computer is 1 GHz, that means its CPU changes "state" every one-billionth (10^-9) of a second (the period of a single transition from +voltage to -voltage and back). EVERYTHING in a computer is based on these transitions, so there are hardware timers on the motherboard that make sure these transitions happen with a consistent frequency. Since those hardware timers are so precise, they are the basis for "counting" time for the calendar-time abstraction that we use.
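
As a rough illustration of that arithmetic, here is a small sketch that converts a cycle count into elapsed time, assuming a fixed 1 GHz clock. This is only the idealized picture described above (real CPUs vary their frequency), and the cycle count is a made-up input:

    public class CycleArithmetic {
        public static void main(String[] args) {
            final double clockHz = 1_000_000_000.0; // assumed fixed 1 GHz clock
            long elapsedCycles = 2_500_000_000L;    // hypothetical cycle count obtained elsewhere

            // At a constant frequency, elapsed time is simply cycles / frequency.
            double elapsedSeconds = elapsedCycles / clockHz;
            System.out.println("Elapsed: " + elapsedSeconds + " s"); // 2.5 s for this example
        }
    }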

I'm not a hardware expert, but this is my best understanding from computer architecture classes and building basic circuits in school.

Clarifying based on your edit:

A program doesn't inherently "know" anything about how slow or fast it's running, so on its own it has no way to accurately track the passage of time. Some languages can access information like "cycle count" and "processor speed" from the OS, so you could approximate some representation of the passage of time based on that without having to use a time API. But even that is sort of cheating, given the constraints of your question.
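
To make the "cheating" concrete, here is a toy Java sketch of the idea: spin a loop, count iterations, and divide by an iterations-per-second figure obtained some other way. The calibration constant here is invented, and in reality it drifts with CPU load, frequency scaling, and JIT compilation, which is exactly why this only approximates time:

    public class LoopClock {
        // Hypothetical calibration: how many loop iterations this machine runs per
        // second, measured once against a real clock. The number is made up and
        // will drift with CPU load, frequency scaling, and JIT compilation.
        static final double ITERATIONS_PER_SECOND = 150_000_000.0;

        public static void main(String[] args) {
            long iterations = 0;
            // Busy-wait for what we *estimate* to be about 2 seconds.
            while (iterations / ITERATIONS_PER_SECOND < 2.0) {
                iterations++;
            }
            System.out.println("Roughly 2 seconds elapsed after " + iterations + " iterations (approximate!)");
        }
    }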

greatwolf
reblace
  • Windows will also occasionally synchronize itself with a time server over the internet. I'd assume Linux and Mac do as well. – Mooing Duck Sep 13 '13 at 18:49
  • Yes! Good point, I forgot to mention that. That's how you can deal with the fact that not every computer has an atomic clock in it to keep perfect timing. – reblace Sep 13 '13 at 18:54
  • So basically there is really no way, unless you use the hardware system that calculates time. – Arian Faurtosh Sep 13 '13 at 18:58
  • Does server time calculate time the same way as the OS, or is it more accurate? – Arian Faurtosh Sep 13 '13 at 18:59
  • Yeah, that's the bottom line. It makes sense too, if you think about it, because a program doesn't know anything other than the information built into it at compile time, or what it is told by an external source at runtime. Server time is usually provided as a service where the hardware serves a lot of other computers, and can therefore justify more expensive/precise components due to economies of scale. Also, if really precise timing is required, you can use something like this: http://en.wikipedia.org/wiki/Atomic_clock – reblace Sep 13 '13 at 19:00
  • Not all computers have clocks; some are asynchronous and use less power, but don't have known constant run times. So you can't say "all computers", you can say all synchronous computers. – ggb667 Sep 13 '13 at 19:58
5

Simply put, you can't. There's no pure-software way of telling time.

Every computer has a number of different hardware timers. These fire off interrupts once triggered, which is how the processor itself can keep track of time. Without these or some other external source, you cannot keep track of time.
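
As a conceptual sketch of what that interrupt-driven bookkeeping looks like (this is not real interrupt-handling code, since Java can't register a hardware interrupt handler; it's just a model of the tick counter an OS kernel maintains, with an assumed tick rate):

    public class TickCounterModel {
        // Assumed timer frequency: the hardware timer fires an interrupt this many
        // times per second (a common order of magnitude; the real value is OS- and
        // hardware-specific).
        static final long TICKS_PER_SECOND = 1000;

        private long ticks = 0;

        // In a real kernel this would be called from the timer interrupt handler.
        void onTimerInterrupt() {
            ticks++;
        }

        double elapsedSeconds() {
            return ticks / (double) TICKS_PER_SECOND;
        }

        public static void main(String[] args) {
            TickCounterModel clock = new TickCounterModel();
            // Pretend the timer fired 5000 times.
            for (int i = 0; i < 5000; i++) clock.onTimerInterrupt();
            System.out.println("Elapsed: " + clock.elapsedSeconds() + " s"); // 5.0
        }
    }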

devicenull
  • "There's no pure-software way of telling time" -- That isn't quite 100% correct. You can query an NTP server or something similar, if you have network access. Yes, I know the server uses hardware, but if your (embedded?) client doesn't have time hardware it could be a possible work-around. – Michael J Sep 13 '13 at 18:59
  • That's not really a solution. It gives you the time at that exact moment in time, but you still have no way to increment your time as time goes by. Unless you rely on the remote server to keep sending you packets with updated time as fast as possible. – devicenull Sep 13 '13 at 20:46
  • All computers have some form of internal clock; they cannot easily operate without one. Some internal clocks are, however, very inaccurate. If a computer does not maintain time of day when switched off, then on startup it has no idea what time it is. That can possibly be solved by querying an NTP server. The internal clock can be used to maintain (possibly inaccurate) time of day, with periodic NTP queries to keep it from getting too far out of synch. Note, though, that I was just splitting hairs. I agree that a modern computer will very likely rely on clock hardware. – Michael J Sep 14 '13 at 00:51
5

The hardware clock in your motherboard contains a precisely tuned quartz crystal that vibrates at a frequency of 32,768 Hz [2^15] when a precise current is passed through it. The clock counts these vibrations to mark the passage of time. System calls reference the hardware clock for the time, and without the hardware clock your PC wouldn't have the faintest idea if, between two arbitrary points in execution, a second, a day, or a year had passed.
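
As a quick illustration of that counting (the 32,768 Hz rate is the crystal frequency mentioned above; the tick count is just an example figure):

    public class RtcTickArithmetic {
        public static void main(String[] args) {
            final long ticksPerSecond = 32_768;      // 2^15 oscillations per second
            long ticksCounted = 2_831_155_200L;      // example: ticks accumulated by the clock

            // The clock "tells time" purely by dividing its count by the crystal frequency.
            long seconds = ticksCounted / ticksPerSecond;
            System.out.println(seconds + " seconds elapsed");           // 86400
            System.out.println((seconds / 86_400.0) + " days elapsed"); // 1.0
        }
    }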

This is what the system calls reference, and trying to use anything else is just an exercise in futility, because everything else in the computer is designed to simply function as fast as possible based on the voltage it happens to be receiving at the time.

You could try counting CPU clock cycles, but the CPU clock is simply designed to vibrate as fast as possible based on the input voltage and can vary based on load requirements and how stable the voltage your power supply delivers is. This makes it wholly unreliable as a method to measure time because if you get a program to monitor the clock speed in real time you will notice that it usually fluctuates constantly by +/- a few MHz.

Even hardware clocks are unreliable as the voltage applied to the crystal, while tightly controlled, is still variable. If you turned off the NTP services and/or disconnected it from the internet the time would probably drift a few minutes per month or even per week. The NTP servers reference atomic clocks that measure fundamental properties of physics, namely the oscillations of cesium atoms. One second is currently defined as:

the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.

oh... and that's measured by a computer as well.
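
Just to put a number on that definition (the period count per second below is the SI definition quoted above; the elapsed-period figure is an example):

    public class CesiumArithmetic {
        public static void main(String[] args) {
            final long periodsPerSecond = 9_192_631_770L; // SI definition of one second
            long periodsCounted = 27_577_895_310L;        // example: periods counted by the clock

            double seconds = (double) periodsCounted / periodsPerSecond;
            System.out.println(seconds + " seconds"); // 3.0
        }
    }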

Sammitch
2

In Java, you will notice that the easiest way to get the time is

System.currentTimeMillis();

which is implemented as

public static native long currentTimeMillis();

That is a native method, which is implemented in native code, C in all likelihood. Your computer's CPU has an internal clock that can adjust itself. The native call is an OS call to the hardware to retrieve that value, possibly doing some software transformation somewhere.
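
For example, a common way to use it (both calls below are standard JDK methods that ultimately ask the OS, which asks the hardware):

    public class ElapsedDemo {
        public static void main(String[] args) throws InterruptedException {
            long start = System.currentTimeMillis(); // wall-clock time from the OS
            Thread.sleep(500);                       // do some "work"
            long end = System.currentTimeMillis();
            System.out.println("Elapsed: " + (end - start) + " ms");

            // System.nanoTime() is the other native timer: a monotonic tick counter
            // better suited to measuring intervals than wall-clock time.
            long t0 = System.nanoTime();
            Thread.sleep(500);
            long t1 = System.nanoTime();
            System.out.println("Elapsed: " + (t1 - t0) / 1_000_000 + " ms");
        }
    }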

Sotirios Delimanolis
2

Without a clock or a reference to the OS you can't measure anything relative to the outside world. However, you can of course measure internally, knowing that the task is 1/3 of the way done or whatever. But depending on system load, CPU throttling from thermal requirements, other programs running, etc., the last 1/3 might take as long as the first 2/3rds, or longer.

You can apply heuristics to load-balance long-running tasks against themselves (only), so that, for instance, things will be smooth if the number of tasks relative to threads varies, to achieve a desired performance characteristic. But the PC has to get its time from somewhere.

Really cheap clocks get their time from the fact that mains power is 60 Hz, so every 60 cycles a second goes by. But the actual number of Hz varies a bit and is likely to constantly vary in a single direction, so clocks like that get out of sync pretty fast: seconds per day, or more.

I guess with a camera and a hole in a box you could determine when the sun was at a particular position in the sky and determine time that way, but we're getting pretty far afield here.
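
A small sketch of that "internal measurement" idea (the work loop and unit counts are invented; this tracks relative progress only and says nothing about how much wall-clock time any of it takes):

    public class ProgressOnly {
        public static void main(String[] args) {
            final int totalUnits = 3_000;  // hypothetical amount of work
            int doneUnits = 0;

            for (int i = 0; i < totalUnits; i++) {
                // ... do one unit of work here ...
                doneUnits++;

                // The program knows it is 1/3, 2/3, ... of the way through its own
                // work, but has no idea how many seconds that corresponds to.
                if (doneUnits % (totalUnits / 3) == 0) {
                    System.out.println("Progress: " + doneUnits + "/" + totalUnits);
                }
            }
        }
    }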

ggb667
1

I think you kind of answered your own question in a way: "Time is such an abstract concept". So I would argue it depends on what exactly you're trying to measure. If it's algorithmic efficiency, we don't care about a concrete measure of time, simply how much longer the algorithm takes with respect to the number of inputs (big O notation). If it's how long it takes for the earth to spin on its axis, or some fraction of that, then obviously you need something external to the computer to tell it when one iteration started and when it ended; thereafter, ignoring CPU clock drift, the computer should do a good job of telling you what the time of day is.
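
If the thing being measured really is algorithmic cost rather than seconds, counting operations needs no clock at all. A toy sketch (the counter and the search routine are illustrative, not something from the question):

    public class OperationCount {
        // Count comparisons instead of seconds: a clock-free measure of "how long".
        static long comparisons = 0;

        static int linearSearch(int[] a, int target) {
            for (int i = 0; i < a.length; i++) {
                comparisons++;
                if (a[i] == target) return i;
            }
            return -1;
        }

        public static void main(String[] args) {
            int[] data = new int[10_000];
            for (int i = 0; i < data.length; i++) data[i] = i;

            linearSearch(data, 9_999);
            // Grows linearly with input size -- O(n) -- with no notion of wall-clock time needed.
            System.out.println("Comparisons: " + comparisons);
        }
    }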

rainkinz
1

It is possible, however, I suspect it would take quite some time.

You could use the probability that your computer will be hit by a cosmic ray.

Reference: Cosmic Rays: what is the probability they will affect a program?

You would need to create a program that manipulates large amounts of data in the computer's memory, thus making it susceptible to cosmic ray intrusion. Such data would become corrupted at a certain point in time. The program should be able to check the integrity of the data and mark the moment when its data becomes partially corrupted. When this happens, the program should also be able to generate another frame of reference, for example how many times a given function runs between two cosmic ray hits.

Then, these intervals should be recorded in a database and averaged after a few billion/trillion/zillion occurrences, thereby reducing the projected randomness of a cosmic ray hit occurring.

From that point on, the computer would be able to tell time by a cosmic ray average hit coefficient.
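
A toy sketch of the mechanism being described; everything here is hypothetical, from the buffer size and the integrity check down to the (very optimistic) assumption that any detected bit flip was caused by a cosmic ray:

    import java.util.Arrays;

    public class CosmicRayClock {
        public static void main(String[] args) {
            // Large buffer of known data, acting as the "detector".
            byte[] detector = new byte[64 * 1024 * 1024];
            Arrays.fill(detector, (byte) 0x55);

            long iterationsSinceLastHit = 0;
            long totalIterations = 0;
            long hits = 0;

            while (true) {
                iterationsSinceLastHit++;
                totalIterations++;

                // Integrity check: any byte that no longer matches the known
                // pattern is treated as a cosmic ray hit.
                boolean corrupted = false;
                for (byte b : detector) {
                    if (b != (byte) 0x55) { corrupted = true; break; }
                }

                if (corrupted) {
                    hits++;
                    // The "clock": the average iteration count between hits is our unit of time.
                    double avgIterationsPerHit = (double) totalIterations / hits;
                    System.out.println("Hit #" + hits + " after " + iterationsSinceLastHit
                            + " iterations; average interval = " + avgIterationsPerHit);
                    Arrays.fill(detector, (byte) 0x55); // repair the detector
                    iterationsSinceLastHit = 0;
                }
            }
        }
    }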

Of course, this is an oversimplified solution. I am quite certain the hardware would fail during this time, the cosmic rays could hit runnable code zones of memory instead of raw data, the rate of cosmic ray occurrences might change due to our solar system's continuous motion through the galaxy, etc.

However, it is indeed possible...

Florin Mircea
  • I like this solution in theory, and with appropriate space-rated hardware (CPUs that are fault tolerant and OSes that are deterministic even when randomly corrupted), it could be done in practice, but the comments below about power stability, thermal equilibrium, etc., still require this to basically be a clock, just one where the ticks are long. – ggb667 Oct 11 '13 at 15:22