The Y2K Bug - Was It a Real Danger?


In computing, a bug is an error in source code that causes a program to produce unexpected results or to crash outright. Because bugs can degrade an application’s performance, developers need to correct them before the software is sold to customers. Back when mainframe computers were still state-of-the-art, some programmers kept getting wrong results from a program; when they checked under the hood, they discovered that a moth had gotten into the circuitry, causing errors in its computations. That incident is popularly credited with giving programming errors the name “bugs.”



What causes bugs in software?

A flaw or failure in a software program can occur for any of the following reasons.

  1. Errors that programmers introduce while coding the application. These can be logical errors, syntax errors, or semantic errors.
  2. Insufficient testing, whether because of limited time or a lack of skilled testers to examine the application thoroughly for issues and defects.
  3. Frequent changes in requirements, and miscommunication among clients, business analysts, developers, and testers.
Unlike viruses, bugs are errors in code and do not tend to be transmitted from one computer to the next. Most bugs come from mistakes in a program's design or from incorrect code. Software bugs can, however, cause programs to behave in ways the software manufacturer never intended. The Y2K bug famously caused wrong dates to be displayed, because the affected programs were not designed to handle dates after the year 1999.
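To make that failure mode concrete, here is a minimal sketch in Python of the kind of logic error involved (the function name and values are illustrative assumptions, not taken from any real system): many legacy programs stored only the last two digits of the year, so 2000 looked like "00" and compared as earlier than "99".

```python
# Illustrative sketch of a Y2K-style logic error: the year is stored
# as two digits, so arithmetic breaks when the century rolls over.

def years_elapsed_two_digit(start_yy: int, end_yy: int) -> int:
    """Buggy: assumes two-digit years only ever increase."""
    return end_yy - start_yy

# An account opened in 1999 ("99") and checked in 2000 ("00"):
print(years_elapsed_two_digit(99, 0))  # prints -99 instead of 1
```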


The term Year 2000 bug, also known as the millennium bug and abbreviated as Y2K, referred to potential computer problems that might arise when the dates used in computer systems rolled over from the year 1999 to the year 2000. Over two decades ago, the industrialized world panicked over the so-called Y2K bug. Some feared that all computers would crash, that jetliners would fall from the sky, that hospital equipment would stop working, and that the global financial system would grind to a halt after the New Year's Eve that rang in the year 2000. It was a genuinely scary time that many have since forgotten.

Y2K was both a software and a hardware problem. Software refers to the electronic programs used to tell the computer what to do; hardware is the machinery of the computer itself. Software and hardware companies raced to fix the bug and provided "Y2K compliant" programs to help. The simplest solution was the best: the year was simply expanded to a four-digit number. Governments, especially in the United States and the United Kingdom, worked to address the problem.
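As a rough illustration of that fix (a sketch under assumed details, not any vendor's actual patch): once years are handled as four digits, the ordering problem disappears. For legacy records that still held only two digits, a widely used companion technique called "windowing" inferred the century from a pivot year.

```python
# Sketch of the remediation: work with full four-digit years. The
# "windowing" pivot of 50 below is an illustrative assumption; real
# systems chose pivots to suit their own data.

def expand_year(yy: int, pivot: int = 50) -> int:
    """Map a two-digit year onto a four-digit year using a pivot window."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(99))                   # 1999
print(expand_year(0))                    # 2000
print(expand_year(0) - expand_year(99))  # 1 -- the correct elapsed years
```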

In the end, there were very few problems. A nuclear energy facility in Ishikawa, Japan, had some of its radiation equipment fail, but backup facilities ensured there was no threat to the public. The U.S. detected missile launches in Russia and attributed them to the Y2K bug, but the launches had been planned ahead of time as part of Russia's conflict in its republic of Chechnya; there was no computer malfunction. Countries such as Italy, Russia, and South Korea had done little to prepare for Y2K, yet they had no more technological problems than countries, like the U.S., that spent millions of dollars to combat the problem.

Because so few problems materialized, many people dismissed the Y2K bug as a hoax.




What Cybersecurity Lessons Can We Learn From Y2K?

The Y2K event was unique in human history and can provide rare insights into how computer systems and microprocessor-based devices function under unusual and unpredictable stress. And that should be instructive for cybersecurity professionals.




  1. Fixing a vulnerability may create a new vulnerability. Many of the problems that did occur came from the patches and fixes for the Y2K bug, not from the bug itself. While testing for Y2K problems was thorough, testing of the fixes was sometimes less so. Always test the fixes thoroughly.
  2. Fixing your own vulnerabilities also improves cybersecurity for connected systems. With the Y2K bug, the patches applied in the United States to global systems controlling finance, for example, protected countries that took far less action to prepare for Y2K. Likewise, the cybersecurity fixes applied by a supplier may also help protect you, and vice versa. Take a big-tent approach to cybersecurity and make sure everybody is doing their part.
  3. Don’t expect everyone to give you credit for averting disaster. Cybersecurity people are in an unhappy position, and it’s just part of the job. If you fail to avert disaster, many will blame you for the failure. But if you succeed, they may blame you for being alarmist, for spending too much time and money on the problem, and for misrepresenting the threat. The best you can do is communicate clearly the risks, the remedies, and, after the fact, the benefits of the crises you averted.
  4. The biggest risks come from not one, but multiple points of failure or vulnerability. It’s easy to form tunnel vision about vulnerabilities. But most major cybersecurity failures result from multiple points of failure — a lack of employee training combined with inadequate tools, for example. Think holistically.
  5. Testing is everything. During Y2K, a regulation that mandated testing enabled the fixes that prevented the most serious problems. Red-team exercises and their many variants are valuable for figuring out in advance where the vulnerabilities lie. Be obsessive about testing (see the sketch after this list).
  6. Investment to prevent catastrophe is expensive but often money-saving in the long run. Most of the damage caused by cyberattacks is, in the end, expressed in financial terms, but preventing or minimizing those attacks costs money too. Make sure the cost-benefit analysis of cybersecurity investment is clearly stated in dollars and cents (as well as in other terms, such as reputational damage). While cybersecurity tools, programs, and staff cost money, breaches and attacks can cost far more.
  7. Old systems can create new problems. Legacy systems and programming languages that had fallen out of vogue meant that the people charged with fixing the problem often did not understand how those systems worked. That was certainly the case with the Y2K bug; companies brought programmers out of retirement to help fix it. While it’s easy to ignore or overlook legacy systems that have been churning away for many years, always consider how they might contribute to new problems in the future.
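Lessons 1 and 5 both come down to testing boundary conditions. As a minimal sketch (reusing the illustrative expand_year helper from the earlier example; the test cases are assumptions, not a real Y2K test suite), even a one-line fix deserves tests on both sides of every boundary it touches:

```python
# Boundary tests for the illustrative windowing fix. The pivot edge is
# exactly where a fix can introduce a new bug (lesson 1).
import unittest

def expand_year(yy: int, pivot: int = 50) -> int:
    return 2000 + yy if yy < pivot else 1900 + yy

class ExpandYearTests(unittest.TestCase):
    def test_century_rollover(self):
        self.assertEqual(expand_year(99), 1999)
        self.assertEqual(expand_year(0), 2000)

    def test_pivot_edges(self):
        self.assertEqual(expand_year(49), 2049)
        self.assertEqual(expand_year(50), 1950)

if __name__ == "__main__":
    unittest.main()
```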









