1. Overview

Most computer systems in the world use 1 January 1970 as their epoch. In this tutorial, we’ll learn why the world agreed on this particular date.

2. Epoch

An epoch, by definition, is a point in time that we use as a reference for measuring time. The date itself doesn’t need to have any special meaning, as long as everyone agrees on it.

There are many epoch dates throughout history. Let’s look at a few notable ones in the computing world:

  • 1 January 1900 – Network Time Protocol (NTP), a protocol to coordinate time between computers, uses this date
  • 1 January 1901 – the Ada programming language uses this date; in the first version of the language (1980), the representable date range was 1901–2099. Later versions kept the lower bound to maintain backwards compatibility and raised the upper bound to 2399
  • 1 January 1970 – this date is also known as the Unix epoch or POSIX time, and is used by many Unix and Unix-like systems, as well as programming languages such as C/C++, Java, JavaScript, Perl, PHP, Python, Ruby, and many more

3. The Unix Epoch

Unix was developed in 1969 and first released in 1971. Initially, Unix didn’t use 1 January 1970 as the epoch.

3.1. 1 January 1971

In early versions, Unix measured system time in 60 Hz intervals (60 ticks per second), and the system used a 32-bit unsigned integer to store the tick count. At that rate, the data type can only represent a span of less than 829 days, or roughly two and a quarter years. Because of this, the time represented by 0 (the epoch) had to be set in the very recent past, and 1 January 1971 was selected.
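
To see where that limit comes from, here’s a minimal C sketch of the arithmetic; the only inputs are the 32-bit range and the 60 Hz tick rate:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Maximum number of ticks a 32-bit unsigned counter can hold */
        uint64_t max_ticks = UINT32_MAX;      /* 2^32 - 1 = 4,294,967,295 */

        /* Early Unix incremented the counter 60 times per second */
        uint64_t seconds = max_ticks / 60;    /* about 71,582,788 seconds */
        double days = seconds / 86400.0;      /* about 828.5 days */

        printf("%llu seconds, about %.1f days\n",
               (unsigned long long)seconds, days);
        return 0;
    }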

3.2. 1 January 1970

In later versions in the early 1970s, Unix system time was changed to increment once per second. This increased the span of time a 32-bit unsigned integer could represent to around 136 years.
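
The same back-of-the-envelope arithmetic with one-second ticks confirms the roughly 136-year span. As a small C illustration, the standard time() function still reports Unix time as whole seconds since the epoch:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        /* A 32-bit unsigned counter of seconds spans about 136 years */
        double years = UINT32_MAX / (365.25 * 86400.0);
        printf("32-bit unsigned range: about %.0f years\n", years);

        /* time() returns the current Unix time: seconds since 1 Jan 1970 UTC */
        time_t now = time(NULL);
        printf("Seconds since the Unix epoch: %lld\n", (long long)now);
        return 0;
    }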

After that, according to Dennis Ritchie, Unix engineers arbitrarily selected 1 January 1970 00:00:00 UTC as the epoch because it was considered a convenient date to work with.

4. Limitations

Even though most systems in the world use Unix time, it has some limitations.

4.1. Greater Precision

Since Unix time increments only once per second, we need a different data type to represent timestamps with sub-second precision.
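
For instance, POSIX systems expose sub-second resolution through clock_gettime(), which pairs the whole seconds since the epoch with a separate nanosecond field; a minimal sketch:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* struct timespec carries seconds since the epoch plus nanoseconds */
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);

        printf("Seconds since the epoch: %lld\n", (long long)ts.tv_sec);
        printf("Nanosecond part:         %ld\n", ts.tv_nsec);
        return 0;
    }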

4.2. Year 2038 Problem

Many Unix programs still use a signed 32-bit integer to store the time, which can only represent integer values from −2³¹ to 2³¹ − 1 inclusive. Consequently, the latest time we can store is 2³¹ − 1 (2,147,483,647) seconds after the epoch, which is 03:14:07 UTC on Tuesday, 19 January 2038. A system that attempts to increment this value by one more second will cause an integer overflow.

Using a 64-bit integer to store Unix time solves this issue, as the range of dates it can represent spans over 584 billion years.
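
A short C sketch makes the limit concrete; it converts the largest 32-bit value with the standard gmtime() and strftime() functions, so the conversion itself doesn’t overflow:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* The largest value a signed 32-bit counter can hold: 2^31 - 1 */
        time_t limit = 2147483647;

        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&limit));
        printf("2^31 - 1 seconds after the epoch: %s\n", buf);
        /* Prints 2038-01-19 03:14:07 UTC; a 32-bit counter overflows one second later */
        return 0;
    }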

4.3. Pre-1970 Timestamps

Unix time 0 corresponds to 00:00:00 UTC on 1 January 1970, so the system uses a negative number to represent any timestamp before that date.
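
Whether negative values convert cleanly depends on the C library, but on common implementations such as glibc the usual conversion functions handle them; a quick sketch:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* One day (86,400 seconds) before the epoch */
        time_t before_epoch = -86400;

        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&before_epoch));
        printf("%lld -> %s\n", (long long)before_epoch, buf);
        /* Prints 1969-12-31 00:00:00 UTC */
        return 0;
    }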

4.4. Leap Seconds

In Unix time, every day is exactly 86,400 seconds long, so a leap second cannot be represented. Unlike UTC, which inserts the extra second explicitly, or TAI, which simply keeps counting, the Unix clock has to be adjusted when a leap second occurs, typically by repeating or smearing a second.
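
That fixed-length-day assumption is easy to see in the arithmetic: the day number and the time of day come from simply dividing by 86,400, with no leap-second table involved. A small sketch:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);

        /* Unix time assumes every day has exactly 86,400 seconds */
        long long days_since_epoch = (long long)now / 86400;
        long long seconds_of_day   = (long long)now % 86400;

        printf("Days since the epoch: %lld\n", days_since_epoch);
        printf("Seconds into today:   %lld\n", seconds_of_day);
        return 0;
    }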

5. Conclusion

In this article, we learned about some notable epoch dates in the computing world, a brief history of Unix time and its limitations, and why Unix engineers selected 1 January 1970 as the epoch.
