Y2K and Linux
In the Beginning...
January 1, 1999
The root cause of the Y2K problem was economics, pure and simple. Back in the bad old days of computing, computer storage, both RAM and disk space, was unimaginably expensive by today's standards. (Talk to enough long-time programmers, and you'll hear stories about mainframe programs designed to run in just a few thousand, or even a few hundred, bytes of memory.) Many programmers therefore had little choice but to conserve every byte possible, and one common trick was to store just the last two digits of the year in dates, so 1960 was shortened to 60. Programs that used this trick "knew" to add 1900 to the year, so they worked properly, and everyone was happy.
Of course, things changed, as they always do in this industry; storage got cheaper, and the need to resort to such tricks eventually went the way of the buggy whip and the 8-inch floppy disk. So why are we still facing the Y2K problem? There's no single answer. In some cases programmers were just lazy and used the old techniques. In others, they had no choice, such as when writing a new program that had to deal with these shortened dates in huge databases, and their employers weren't able or willing to spend the time and money to update the database and all the programs that used it. This highlights one of the reasons why Y2K is so difficult and expensive to fix--most large computing environments depend on a lot of heavily interdependent programs and data, so even a single low-level change like adding those leading digits to years will require changes throughout a lot of other software and data.