Photograph of the blue and gold face of an astronomical clock in Prague

It’s perhaps easiest to think of JavaScript time as existing in four parts, each sketched in code after this list:

  1. Microsecond time from the internal system clock, used in the Web Performance API.
  2. The “system-level” time of milliseconds and Unix time, on which the others ultimately depend.
  3. Timers: setTimeout and setInterval, which typically set delays of milliseconds to seconds.
  4. “Human-readable” date formats, such as “January 1, 2015”.
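
To get a rough feel for each of these, here’s a minimal sketch you can paste into a browser console. The output values in the comments are only examples; yours will differ:

// 1. High-resolution time from the Web Performance API, in milliseconds
// with fractional (microsecond) precision
console.log(performance.now());          // e.g. 5287.32000001

// 2. “System-level” Unix time, in whole milliseconds
console.log(Date.now());                 // e.g. 1413067397613

// 3. A timer: run some code after a delay expressed in milliseconds
setTimeout(function() {
    console.log("One second later");
}, 1000);

// 4. A human-readable date
console.log(new Date().toDateString());  // e.g. "Thu Jan 01 2015"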

JavaScript doesn’t make these distinctions itself: with the exception of the Web Performance API, all time in JavaScript is measured from a single reference point known as the Unix epoch.

JavaScript time conventions

Almost every expression of time on a digital device is derived from the microsecond “tick-tock” of a chip, which keeps count of time since the so-called Unix epoch: midnight, January 1, 1970 UTC. In JavaScript, this count is expressed as the number of milliseconds that have passed since that moment. You can see this number for yourself in a browser: open up a console window and type:

Date.now()

The number you’ll get back after hitting Enter will look something like this:

> 1413067397613

This ever-increasing number, also known as the Unix timestamp, is the basis on which most computer time is measured. Everything else – dates in the future or past, and the difference between two dates – is resolved by converting to and from this format, after taking into account factors such as time zones and daylight saving time.
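
As a rough illustration, feeding the timestamp above back into the Date constructor produces a human-readable date in your local time zone, and subtracting one Date from another yields the difference between them in milliseconds:

var then = new Date(1413067397613);           // the timestamp shown above
console.log(then.toString());                 // e.g. "Sat Oct 11 2014 …", in your local time zone

var elapsed = new Date() - then;              // difference between two dates, in milliseconds
console.log(elapsed / (1000 * 60 * 60 * 24)); // roughly how many days have passed since then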

High-resolution timing, accurate to a millionth of a second, is also available via the Web Performance API, although that degree of precision is excessive for most purposes.
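
As a quick taste, performance.now() returns the number of milliseconds since the page started loading, with fractional (sub-millisecond) precision, which makes it handy for timing short operations:

var start = performance.now();
for (var i = 0; i < 1000000; i++) {}          // something trivial to measure
var finish = performance.now();
console.log(finish - start);                  // e.g. 2.435000000245 (milliseconds)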

This series will explore JavaScript time from the top down, starting with the human-readable formats and working down to the micro level. We’ll begin by making a simple clock, before progressing to more advanced examples.

Photograph by Juan Pedro Asencio Flores, used under a Creative Commons Attribution 3.0 license.