The Internet Was Designed To Survive a Nuclear Strike

With so much business conducted over the internet, a significant outage could cause real problems. Supermarkets, for example, rely on timely and frequent deliveries; if their in-store systems were unable to tell their distribution systems what stock they needed, shelves would rapidly empty. It is therefore wise to keep in mind how reliable the internet actually is and whether sufficient backup procedures are in place.

There is a commonly quoted piece of general knowledge that the internet was designed to be capable of surviving a nuclear strike. In this article I’ll be looking at whether this is fact or instead an urban myth. I’ll start by delivering a quick history lesson before looking at the reality of the internet today and finally giving my answer to the question in the title.

The History Lesson

During the 1960s DARPA (the Defense Advanced Research Projects Agency) in America started doing research into packet-switched networks. At the time DARPA were funding projects at several universities and labs, most of which involved a computer. In order for DARPA to have access to the information on these machines, a terminal connected to each machine would be installed in a terminal room in DARPA’s offices in the Pentagon.

They realised that networking these computers would allow for greater data sharing between their projects; it would also mean that a room full of different terminals could be replaced with a single terminal. “Communicating with that community from the terminal room next to Taylor's office was a tedious process. The equipment was state of the art, but having a room cluttered with assorted computer terminals was like having a den cluttered with several television sets, each dedicated to a different channel. ‘It became obvious,’ Taylor said many years later, ‘that we ought to find a way to connect all these different machines.’” (Reference 2, quoting Robert Taylor, Director of ARPA’s Information Processing Techniques Office).

To this end DARPA created ARPANET to connect these computers. ARPANET relied on sending data across telephone lines between nodes, with the computers and terminals plugged into those nodes. As telephone lines were far from reliable, it was realised early on that the network would need redundant connections to allow data to be routed around a disconnected link. This is where the packet-switching element of ARPANET came in – the data to be sent would be cut up into packets, each packet traversing the network independently of the others and being reassembled at the other end. Each packet would be passed from node to node until it eventually reached its destination.
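The cut-up-and-reassemble idea can be sketched in a few lines of Python. This is only an illustration of the principle described above, not any real protocol: the function names and the tiny packet size are my own inventions for the demo.

```python
# A minimal sketch of packet switching: the message is cut into numbered
# packets, each packet may take a different route (and so arrive in a
# different order), and the receiver reassembles them by sequence number.
import random

PACKET_SIZE = 8  # bytes of payload per packet (arbitrarily small for the demo)

def packetise(message: bytes) -> list[tuple[int, bytes]]:
    """Cut a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + PACKET_SIZE])
            for i in range(0, len(message), PACKET_SIZE)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Rebuild the message regardless of the order packets arrived in."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Each packet traverses the network independently."
packets = packetise(message)
random.shuffle(packets)  # simulate packets taking different routes
assert reassemble(packets) == message
```

The key point is that no single packet – and no single route – matters: as long as every packet arrives by some path, the message survives.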

The Internet Today

The internet as we know it today grew organically out of ARPANET. Slowly, other computer networks joined ARPANET, becoming the internet. This organic growth maintained the concept of redundant connections, as the lines themselves remained the weakest link in the communication chain. That redundancy has proved to be useful even with the significantly more reliable data lines of today. I’m going to share just a few stories of incidents which have affected internet connectivity; several more can be found with a search engine.
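Before the stories, here is a toy demonstration of why that redundancy matters. The four-node topology and the `find_route` helper are invented for illustration – real internet routing (BGP and friends) is far more involved – but the principle is the same: with two independent routes between A and D, losing one link does not lose connectivity.

```python
# A small sketch of routing around a failed link in a redundant network.
from collections import deque

def find_route(links: set, start: str, end: str):
    """Breadth-first search for any path from start to end, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == end:
            return path
        for link in links:
            if node in link:
                (other,) = link - {node}
                if other not in seen:
                    seen.add(other)
                    queue.append(path + [other])
    return None

# Two independent routes between A and D: A-B-D and A-C-D.
links = {frozenset(p) for p in [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]}
assert find_route(links, "A", "D") is not None
links.discard(frozenset(("B", "D")))       # a "cable cut" on one route
assert find_route(links, "A", "D") == ["A", "C", "D"]  # traffic re-routes via C
```

Each of the incidents below is, in effect, a real-world version of that `discard` line – the interesting question in each case is whether a second route existed.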

On 21st July 2010, LINX (the London Internet Exchange) showed that the internet is managed by humans. Technicians upgrading a port managed to mess it up, and as a result the UK was effectively off the internet for 20 minutes. Many ISPs (Internet Service Providers) have peering arrangements with each other, but most of these go through LINX, so the remaining links became busy – especially the links of ISPs which have their own connections out of the UK. (Reference 3.)


Image 1 - The amount of data flowing through LINX 21-22 July 2010.

During March 2010, BT (British Telecom) experienced a flood which caused an electrical fire at a key exchange in London. The resulting outage left 437 local exchanges and 37,500 data circuits unusable for data, and due to the flood and fire it was several days before BT were able to restore normal service. Consumers as far away as Birmingham were affected by an outage which would only have caused a slowdown had BT’s data network been designed to be as redundant as the rest of the internet. (Reference 4.)

In January 2008, a ship’s anchor severed several submarine data cables off the coast of Egypt. Thanks to the amount of redundancy on the internet, local traffic (and traffic accessing local sites) was significantly slowed by the re-routing through bottleneck links, but globally the internet didn’t even notice. (Reference 5.)

Conclusion

Due to the internet’s organic growth it has numerous redundant routes between the nodes of the network, and this is what allows it to be such a reliable data carrier. This growth means that I have every confidence that if a nuclear blast were to happen, the internet would function as if nothing had happened except locally – and those local to the blast would have bigger issues than the internet being unavailable (assuming that emergency response plans in the area are up to scratch). It should be noted, however, that this is not something which was designed in – we got it by accident. It is in fact an urban myth that the internet was designed to survive a nuclear strike. This conclusion is confirmed by Charles Herzfeld (Director of DARPA, 1965–1967): “The potential military applications (including the potential for robust communications) were well in our minds, but they were not our primary responsibility. In fact there existed a significant Air Force program devoted to Strategic Command and Control, and related pieces of work were done under that aegis. My involvement was modest; I had to approve the program, and did so enthusiastically. As time went on, I became one of its strong supporters and explicators, especially before Congress.” (Reference 6.)

Wikipedia has this to say about the creation of the urban myth – “It was from the RAND study that the false rumor started, claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET, only the unrelated RAND study on secure voice considered nuclear war. However, the later work on Internetting did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.” (Reference 1.).

It is my feeling that over the next few decades we will let accountants decide that the internet has too much redundancy and remove it (or fail to restore it as links permanently fail or are repurposed to increase throughput). This will go unnoticed by the average internet user until part of the internet fails – a failure which today would be routed around without anyone noticing, but which will become headline news because we will have lost the redundancy and therefore the capability to route around it.

Bibliography and References

1. ARPANET, Wikipedia. http://en.wikipedia.org/wiki/ARPANET (Accessed 23 November 2010).

2. Katie Hafner & Matthew Lyon. Where wizards stay up late. Page 13. (Formatting adjusted to emphasize the words of Taylor).

3. LINX outage caused by upgrade, The Register. http://www.theregister.co.uk/2010/07/22/linx_downtime/ (Accessed 24 November 2010).

4. Flood, fire at BT Paddington node causes widespread problems, The Register. http://www.theregister.co.uk/2010/03/31/burne_house_burns/ (Accessed 24 November 2010).

5. World Tech Podcast 182 (Available from http://64.71.145.108/pod/tech/podcast182.mp3) (Accessed 4 January 2008; not reaccessed for the purposes of this article).

6. Charles Herzfeld on ARPAnet and Computers, About.com. http://inventors.about.com/library/inventors/bl_Charles_Herzfeld.htm (Accessed 11 December 2010).
