HISTORY OF THE PRE-WEB INTERNET

COMMUNICATING COMPUTERS

The first computers were machines used to compute numerical data in binary form. In the 1960s, a way to create text on computers was devised. Letters of the alphabet were assigned numerical equivalents. This system, known as ASCII (American Standard Code for Information Interchange), added a new dimension to the computers' function. These machines became tools for creating and processing not only numerical data but also textual information. The information was created and manipulated with the help of a 'keyboard', displayed on a 'monitor' or 'terminal', and finally sent to a printer. Keyboards, monitors, terminals, and printers are known as peripherals and are connected to the computer with cables. So there was communication and information exchange between the peripherals and the computer through these cables. That is, the keyboard sent signals to the computer, and the monitor received instructions from the computer. Hypothetically, can a computer in New York be connected to a printer in Los Angeles? Yes, with a long enough cable. But scientists found that there is a greater probability of text and numbers getting garbled when they travel long distances through wires. If there is no confidence that a message will be received exactly as it was sent, communication becomes problematic. There was therefore a practical limit to the distance between a computer and its peripherals. Computer-to-computer and computer-to-peripheral communication remained limited in scope until a new technique of sending data through wires was developed. This new technique, known as packet switching, enabled computers to exchange information reliably, without errors corrupting data in transit.
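
To make the idea of character codes concrete, here is a minimal Python sketch (the language and the sample word are our own additions, not part of the original text) showing how letters map to the numbers, and ultimately the binary digits, that a computer actually stores.

    # A minimal sketch of the ASCII idea: every character is stored as a number.
    text = "HELLO"

    # ord() gives the numeric code of a character; chr() converts a code back.
    codes = [ord(ch) for ch in text]
    print(codes)                                  # [72, 69, 76, 76, 79]

    # The same codes in binary, the form the hardware actually works with.
    print([format(c, "08b") for c in codes])

    # Converting the numbers back recovers the original text.
    print("".join(chr(c) for c in codes))         # HELLO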

PACKET SWITCHING

Packet switching is a technical process of delivering data from one point in a telecommunication network to another. To understand packet switching, let us start with one of the earlier methods of data transmission – circuit switching. A wired connection is established between two points A and B. Data flows through this channel from point A to point B. It is that simple. Telephone companies have used this type of switching to connect telephone users. When one subscriber requests a connection to another, the telephone exchange sets up a wired connection between the two. If it is a long-distance call, several telephone exchanges may be involved in setting up the 'circuit' or connection between the two people. After the connection has been established, the conversation flows back and forth through this circuit. This method worked well for computer-to-computer data exchange if the computers were within a mile of each other. As the distance between the computers increased, the chances of data getting corrupted during transmission increased. For reliable transmission of data between computers, a better system had to be developed. Paul Baran and his associates developed a new system called packet switching. Data is broken up into small packets, which are numbered and addressed before being sent out onto the network. Because each packet carries the destination address, it can travel through any available route. Eventually all the packets reach the destination computer, where they are reassembled. Error-checking mechanisms are built into each packet, so the receiving computer can check whether any packet has been corrupted or lost. If a packet is lost or corrupted, a request to resend that packet can be made.
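
The Python sketch below is a rough illustration of that idea (our own toy example, not the actual ARPANet implementation): a message is broken into numbered packets, each carrying a checksum; the packets are shuffled to simulate arriving out of order; and the receiver reorders them, verifies each checksum, and reassembles the original message.

    import hashlib
    import random

    def to_packets(message: str, size: int = 8):
        """Break a message into numbered packets, each carrying a checksum."""
        data = message.encode("ascii")
        packets = []
        for seq, start in enumerate(range(0, len(data), size)):
            chunk = data[start:start + size]
            packets.append({"seq": seq,
                            "data": chunk,
                            "checksum": hashlib.md5(chunk).hexdigest()})
        return packets

    def reassemble(packets):
        """Put packets back in order and verify that each one arrived intact."""
        ordered = sorted(packets, key=lambda p: p["seq"])
        for p in ordered:
            if hashlib.md5(p["data"]).hexdigest() != p["checksum"]:
                raise ValueError(f"packet {p['seq']} corrupted; request a resend")
        return b"".join(p["data"] for p in ordered).decode("ascii")

    packets = to_packets("Packets can take any available route to the destination.")
    random.shuffle(packets)        # simulate packets arriving out of order
    print(reassemble(packets))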

TCP/IP

TCP/IP, or Transmission Control Protocol/Internet Protocol, is the communication protocol developed for ARPA by Vinton Cerf and Robert Kahn to conduct computer-to-computer communication. A protocol is essentially a set of rules that guides communication. If two parties want to communicate, they should agree on a common protocol; for instance, both should speak the same language. Similarly, before two computers can communicate with each other, there should be an agreement on the protocol to be used. In the early 70s, even among the few mainframe computers in existence, each manufacturer came up with its own operating system and communication protocol that were not compatible with computers from other manufacturers. ARPA recognized that this incompatibility would hinder communication on the ARPANet. Therefore, ARPA adopted TCP/IP as the protocol for the ARPANet. Any computer connected to the network had to use TCP/IP to communicate on it.
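
As a present-day illustration of two programs agreeing on TCP/IP (a hypothetical sketch using Python's standard socket library; the port number and message are made up), the example below starts a tiny echo server and then connects to it as a client over a local TCP connection.

    import socket
    import threading
    import time

    def echo_server(port: int):
        """A tiny TCP server: accept one connection and echo back what it receives."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("127.0.0.1", port))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                conn.sendall(conn.recv(1024))

    # Start the server in the background, then connect to it as a client.
    threading.Thread(target=echo_server, args=(50007,), daemon=True).start()
    time.sleep(0.5)                                # give the server a moment to start

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("127.0.0.1", 50007))       # TCP connection over IP
        client.sendall(b"Both ends speak TCP/IP, so they understand each other.")
        print(client.recv(1024).decode())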

MODEMS

The word modem comes from two words – modulation and demodulation. These are the two functions of a modem: to modulate and to demodulate. Let us try to understand the two processes. When a computer is connected to another computer, the cable used for the connection should be capable of carrying digital data, because that is what a computer generates and processes. Building a separate network capable of transporting digital data would have been time-consuming and costly. Instead, engineers at AT&T worked on a solution to transport data through the telephone network that reached almost every home in the U.S. The problem, however, was this: the telephone network was designed to carry the analog electrical signals generated by the human voice, not digital computer data. AT&T engineers developed a device to convert digital computer data into an analog electrical signal so that it could travel through the telephone network. When the signal reaches the receiving computer, another such device converts the analog signal back into digital computer data. They called this device a modem because of its dual function: first, converting digital data into an analog signal (modulation) and second, converting the analog signal back into digital data (demodulation). This small device expanded the reach of the computer network.
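
As a toy illustration of the idea (not the actual Bell System design), the Python sketch below 'modulates' bits into one of two tone frequencies and then 'demodulates' them back by checking which tone each slice of the signal contains; the sample rate and frequencies are arbitrary choices made for this example.

    import math

    RATE = 8000          # samples per second on our imaginary telephone line
    BIT_SAMPLES = 400    # samples used to carry one bit
    F0, F1 = 1000, 2000  # tone (Hz) used for a 0 bit and for a 1 bit

    def modulate(bits):
        """Turn digital bits into an analog-style waveform (a list of samples)."""
        signal = []
        for bit in bits:
            freq = F1 if bit else F0
            signal.extend(math.sin(2 * math.pi * freq * n / RATE)
                          for n in range(BIT_SAMPLES))
        return signal

    def demodulate(signal):
        """Recover the bits by checking which tone each slice correlates with."""
        bits = []
        for start in range(0, len(signal), BIT_SAMPLES):
            chunk = signal[start:start + BIT_SAMPLES]
            def power(freq):
                return abs(sum(s * math.sin(2 * math.pi * freq * n / RATE)
                               for n, s in enumerate(chunk)))
            bits.append(1 if power(F1) > power(F0) else 0)
        return bits

    original = [0, 1, 1, 0, 1, 0, 0, 1]
    print(demodulate(modulate(original)) == original)   # True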

ARPA

The first computer network was the ARPANet, established in 1969 by ARPA – the Advanced Research Projects Agency of the Department of Defense. Why did ARPA invest in a computer network? To enable researchers at U.S. universities working on ARPA projects to share research findings with each other. ARPA paid to install the connections between the university computers. Thus the first network was born. When ARPA built the ARPANet, it could have connected all the university nodes to the ARPA headquarters in Washington DC so that it could control and monitor the exchange of information on the network. Then it would have been a 'centralized network' – a network with a 'center' to which all nodes are connected. An example of a 'centralized network' is a local telephone exchange. All the telephones – up to 10,000 of them – are connected to a central exchange. In such a telephone network, the path to any other telephone is always through the central exchange. Adding new subscribers is as easy as running a wire from the new subscriber's premises to the exchange. However, the main drawback of a 'centralized network' is that if the center gets damaged, the whole network goes down. In the 1960s and 70s, at the peak of the Cold War, when a third world war between the U.S. and the USSR was a real possibility, ARPA decided to make its computer network a 'decentralized network'. This means there is no center to the network: there are multiple paths from any node to any other node. ARPA was just one of the nodes on the network, so that even if the USSR destroyed the Pentagon, the network would remain operational and could be used for military communication.

ARPANET – THE DECENTRALIZED NETWORK

Now we know that the ARPANet was the first computer network, developed by ARPA to enable researchers to share research documents with each other. One of the unique features of the ARPANet was that it was a de-centralized network. What is a de-centralized network? It is the opposite of a centralized network. A centralized network has a central node to which every other node is connected, and that central node, as you can imagine, is more important to the network than any other. An example of a centralized network is a local telephone exchange. Every telephone in a locality – yours, your neighbor's, and so on – is connected to the same switchboard at the telephone exchange. All communication between the network nodes goes through the central node. For instance, when you telephone your neighbor, a request for connection goes to the switchboard, and the switchboard establishes the connection between your phone and your neighbor's. A centralized network is easier to maintain and expand. For instance, if a new house is built in your neighborhood and your new neighbor wants a telephone connection, all that needs to be done is to run a wire from the new house to the telephone exchange; the new subscriber then becomes part of the network and can communicate with anyone on it. The biggest disadvantage of a centralized network is this: if the central node breaks down – let us say the telephone exchange burns down – the whole network goes down.

Some people suggest that the reason the ARPANet was designed as a de-centralized network – a network without a central node – was precisely this. The ARPANet was developed in the late 1960s, at the peak of the Cold War. Everyone expected a third world war between the two superpowers – the U.S. and the USSR – both armed to the teeth with nuclear weapons. In the event of a war, whoever struck first would have the advantage. If the ARPANet were to be used as a communication system for the Department of Defense and if it were a centralized network, the USSR could attack the central node and disable the whole communication system. This, some people argue, was the reason for making the ARPANet a decentralized network. The network did not have one single central node, and this feature became the most distinguishing feature of the Internet later on. The Internet does not have a single center, which makes it a very robust network. If any one of the nodes is broken, communication on the network is not affected, because there are other paths to your destination. Such redundancy of paths makes it very difficult for governments or agencies to block the flow of information on the Internet: there are so many computers and so many paths to them that blocking the flow of information between them is difficult.
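
To illustrate the redundancy argument, here is a small Python sketch (our own example; the links between nodes are made up, though the node names borrow from the first four ARPANet sites): a breadth-first search finds a path between two nodes, and it still finds one after another node has been knocked out.

    from collections import deque

    # A toy decentralized network: each node links to several others.
    network = {
        "UCLA": {"SRI", "Utah"},
        "SRI":  {"UCLA", "UCSB", "Utah"},
        "UCSB": {"SRI", "Utah"},
        "Utah": {"UCLA", "SRI", "UCSB"},
    }

    def find_path(net, start, goal, removed=frozenset()):
        """Breadth-first search for any route from start to goal, skipping removed nodes."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for neighbor in net.get(path[-1], ()):
                if neighbor not in seen and neighbor not in removed:
                    seen.add(neighbor)
                    queue.append(path + [neighbor])
        return None

    print(find_path(network, "UCLA", "UCSB"))                  # a route exists
    print(find_path(network, "UCLA", "UCSB", removed={"SRI"})) # still reachable via Utah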

By 1980, there were 213 computers connected to the ARPANet. As more and more computers were connected to the network, researchers and scholars started using it to exchange personal messages (emails) with each other. Increasingly, the network was being used by faculty members and researchers not only to exchange documents pertaining to defense-related research but also for other academic and personal communication.

INTERNET

By the mid 1980s there were several hundred nodes on the ARPANet belonging to universities and research institutions. Many of these institutions were not engaged in ARPA research but were using the network for academic work. A computer network that facilitated communication between researchers and scholars was a great tool, but ARPA felt that maintaining an academic network was not what it should be doing with its budget. So the network was split in two. First, ARPA formed a network of all military-related computers and called it MILNET; second, it linked together the rest of the civilian computers (universities and research institutions) and handed the new network over to the National Science Foundation (NSF). NSF called it NSFNet, but the name didn't stick; instead, this civilian part of the original ARPANet came to be known as the Internet.

INTERNET SERVICE PROVIDERS (ISP) AND BACKBONE PROVIDERS

An Internet Service Provider is the company that provides the physical connection to the Internet. The ISP serves as the customer's gateway to the Internet, just as the local telephone company provides the physical connection to the telephone network: with that connection, you can reach any telephone subscriber anywhere in the world. A backbone provider is the company or agency that builds and maintains the high-capacity connections between the main nodes, which enable long-distance networking. For instance, ARPA and then NSF were the backbone providers of the network; they took care of the long-distance traffic between different computer nodes. This is similar to the long-distance network of AT&T, which allowed telephone subscribers connected to local telephone exchanges to talk to anyone in the country.

[Map: the main nodes of NSFNet in 1991]

WORLD WIDE WEB

In the late 1980s, it was possible to go from your university computer to other university computers and access documents, download software, etc. But it was not at all a user-friendly experience. You had to type in the appropriate text commands without typos. You had to know exactly where a document was located. Overall, it required a great deal of computer proficiency to navigate the Internet. In the early 1990s, Tim Berners-Lee developed a new interface that simplified the process. He called this new interface the World Wide Web. His Web enabled Internet documents to be linked to other documents so that users could go from one document to another with the click of a mouse. The communication protocol HTTP (HyperText Transfer Protocol) and the coding scheme HTML (HyperText Markup Language) are the core technologies of the World Wide Web. Using HTML, you can format a document and give links to other documents. Web servers deliver HTML documents using HTTP; as a result, a click on a link in an HTML document will take you directly to another document hosted on another computer, perhaps in another country. The World Wide Web made Internet navigation user-friendly. As a result, people started buying computers to access the Internet, leading to what we call the 'Internet revolution'. People who created Internet documents began, almost invariably, to format them using the HTML coding scheme. This collection of HTML-formatted documents on computers all over the world is what we know today as the World Wide Web – and there are billions of them. However, the Web and the Internet are not the same thing. The Web is a subset of the Internet, consisting of the more recent information (roughly the last 15 years) that is formatted with the HTML coding scheme and interlinked with similar documents. The Internet is much larger, because documents placed on computer servers between 1970 and 1990 are still there, and they too are part of the Internet.

The term 'Hypertext' appears in the paragraph above. What exactly is hypertext? In 1945, Dr. Vannevar Bush wrote an article in the Atlantic Monthly in which he challenged scientists to develop a new technology that would make it easier for scientists and scholars to consult knowledge on a specific topic. Even when scientists work in teams, they are not aware of all the work being done by other scientists elsewhere on related topics. So much new knowledge was being developed after the industrial revolution that it was not humanly possible for a scientist to keep abreast of all that was going on in his or her field. Hence, Dr. Bush argued that if we did not come up with an appropriate technology to distribute knowledge, scientists would end up reinventing the wheel several times over. He proposed that we develop a system of interconnected knowledge modules that would allow us to consult all work done in an area by moving from one document to the next related document. Dr. Bush did not coin the term 'Hypertext'; Ted Nelson did, to describe such a system of interconnected knowledge modules.
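
Returning to the two core Web technologies, the Python sketch below (our own present-day example; example.com is a domain reserved for documentation) shows HTTP and HTML working together: the program makes an HTTP request and prints the beginning of the HTML document that comes back, which is essentially what a browser does before rendering a page.

    from urllib.request import urlopen

    # Fetch a page over HTTP.
    with urlopen("http://example.com/") as response:
        print(response.status)               # 200 means the HTTP request succeeded
        html = response.read().decode("utf-8")

    print(html[:60])                         # the document itself is HTML markup
    print("<a href=" in html)                # True: an HTML link points to another document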

BROWSER

Tim Berners-Lee developed the protocol to navigate from one document to another effortlessly and a coding scheme to link documents to one another. The Web as we experience it today is a multimedia platform. Berners-Lee's basic protocol and coding scheme became the basis for many new developments, the first of which was the browser. A browser is a computer program that receives the code (HTML code) of a webpage, interprets the commands, and creates the webpage on a computer screen. The first widely used browser, Mosaic, was created by a team of graduate students at the University of Illinois at Urbana-Champaign. Later, one of those students, Marc Andreessen, developed the first commercial browser, Netscape. Bill Gates got into the browser business later and released Microsoft Internet Explorer.
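
To give a sense of what a browser does with HTML (a deliberately simplified sketch, not how Mosaic or Netscape were actually written), the Python example below uses the standard html.parser module to read a small, made-up HTML page and collect the link targets a user could click on.

    from html.parser import HTMLParser

    PAGE = """
    <html><body>
      <h1>A tiny web page</h1>
      <p>Read about <a href="http://example.com/arpanet.html">the ARPANet</a>
      or <a href="http://example.com/www.html">the World Wide Web</a>.</p>
    </body></html>
    """

    class LinkFinder(HTMLParser):
        """A toy 'browser' step: interpret HTML tags and collect the hyperlinks."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(value for name, value in attrs if name == "href")

    finder = LinkFinder()
    finder.feed(PAGE)
    print(finder.links)   # the destinations a click would take the reader to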

WEB AND THE INTERNET: ARE THEY THE SAME THING?

People often use these two terms interchangeably, as if they were the same thing. In fact, they are not. The Internet has been around since the early 1970s. For the last 40 years, researchers and scholars have used this network to share knowledge and communicate with each other. Of course, the methods used for sharing and communicating were crude by today's standards, but they served a valuable purpose. As soon as the Web appeared on the scene, content providers started using the Web coding scheme to format their documents. Hence, the more recent information on the Internet, formatted with Berners-Lee's coding scheme, constitutes the Web. But the content posted to networked computers before 1993 is still there. No one has time to go back and reformat the old information, but it remains on the network. So the Web is the newer information on the Internet – information that has been formatted according to Web standards so that documents can be linked to one another. In short, the Web is not the Internet; rather, the Web is a subset, or part, of the Internet.

CURRENT STATE OF THE INTERNET

After the development of the World Wide Web, the Internet caught on among the general public. In the ten years from 1996 to 2006, the number of Internet users went from 20 million to 600 million worldwide. By 1995, NSF recognized that the Internet had lost its identity as an academic network; e-commerce had become the latest fad. Hence, NSF decided to hand over the responsibility for sustaining the long-distance data network to private telecommunication companies in the U.S. Today, Internet data traffic and connections are entirely in the hands of private telecommunication companies such as MCI, Sprint, UUNet, and DataXchange.

NETWORK ACCESS POINTS

It is true that multiple telecommunication companies provide the backbone service, but these networks are interconnected at Network Access Points (NAPs), through which they exchange data traffic. There were four original NAPs, funded by the NSF (National Science Foundation) when it decided to privatize the Internet backbone service. NAPs are the Internet exchanges where traffic is handed off between backbones. This is similar to the telephone situation. Your long-distance carrier may be AT&T, and the party you are calling may subscribe to Sprint. As a customer, you don't have to worry about the route your call takes or where the AT&T and Sprint networks connect and exchange your conversation. You experience a seamless telephone network. It is the same situation with the Internet.

WEB’S POPULARITY

Dot-com businesses mushroomed in the late 1990s, but the bubble burst by the turn of the century. Many of the early predictions of radical transformation in the way we do business have not come true, but the Internet and the World Wide Web have certainly affected the way businesses conduct their operations, institutions communicate with their constituents, and media institutions produce and distribute news and entertainment. Tim Berners-Lee's Web protocol has emerged as the de facto network communication standard. Even corporate intranets have adopted the same Web interface and protocol, for a couple of reasons: first, it is a user-friendly interface; second, employees are already familiar with it and hence do not require training. However, we have to remember that even though a majority of computer networks use the technical standards developed by Berners-Lee, only 5% of the information on these computers is available to the public. The other 95% is restricted and open only to those who pay. Bear in mind that when we speak of the 'Information Superhighway', most of it consists of 'private highways' or 'toll roads'.

VANNEVAR BUSH'S VISION AND TODAY'S INTERNET

Dr. Bush, and even those who actually developed the first computer network technologies, could never have imagined the scope and size of the global communication network that started out as the Internet. However, the predictions by early Internet futurists like Howard Rheingold that the Internet would lay the foundations for a radically different social structure have not come true. They predicted that Internet-based communication would further the formation and development of communities founded on shared interests rather than geographic proximity, and that such virtual communities would become the dominant form of social organization. There were such virtual communities in the early 1990s, but as the Internet and the Web became an integral part of the workplace and home, real-life uses of the network became more dominant. Early Internet aficionados also predicted that, because of the decentralized architecture of the Internet, it would further freedom all over the world: dictators and censors would be unable to block the free flow of information on the network, citizens of repressive nations would discover that they could circumvent information blockades and access information on the Net, democracy would flourish all over the world, and the power of governments would diminish. This prediction has not really come true either. Governments, especially those that controlled the flow of information in their countries, found technological solutions to control the flow of information on the networks within their borders and blocked their citizens' access to sites they did not approve of. Global Internet companies realized that they are subject to local laws and can be taken to court in any country whose laws they violate. For instance, a few years ago Yahoo was sued in a French court by a citizen of France for violating a French law that prohibits the distribution of pro-Nazi literature; the French citizen had found such material on Yahoo's site. Although Yahoo argued that it could not be expected to obey French law, it eventually had to concede and agree to abide by the law if it wanted to operate in France. Similarly, both Yahoo and Google have agreed to abide by the stipulations of the Chinese government in exchange for being allowed to operate in China. Hence, real governments have shown that they can mark the boundaries of cyberspace. The Internet is no longer a virtual world where free spirits congregate, cut loose from the structural constraints of the real world; it is increasingly bound and tamed by the harsh realities of the real world.

BANDWIDTH PROBLEM

In the late 1990s, as the Web increasingly became a source of information and entertainment for more and more people, one of the features of the original network became a handicap. Remember our discussion of modems – how modems expanded the reach of computer-mediated communication? Instead of building a new network for computer-to-computer communication, modems allowed us to use the existing telephone network. This worked well in the early years, when the most popular application on the network was email. When people started sharing and accessing high-resolution graphics, music, and video, it became apparent that the telephone lines did not have enough capacity, or bandwidth, to transport all that data. Some people started calling the World Wide Web the 'World Wide Wait'. The frustration caused by slow connections threatened to limit the potential of the Internet. A couple of new technologies have been developed in the last decade to address the problem of bandwidth. Below we discuss two of the popular 'broadband' services that provide faster speeds on the Internet.
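
A quick back-of-the-envelope calculation (our own, with assumed file sizes and line speeds) shows why dial-up felt slow once pages carried images and video. The Python sketch below compares transfer times over a 56 kbps modem and an 8 Mbps broadband line.

    def download_time(size_megabytes: float, speed_kilobits_per_sec: float) -> float:
        """Return the transfer time in seconds, ignoring protocol overhead."""
        bits = size_megabytes * 8 * 1_000_000
        return bits / (speed_kilobits_per_sec * 1_000)

    # Assumed sizes: a short email, a photo, and a small video clip.
    for label, size_mb in [("email", 0.01), ("photo", 2), ("video clip", 50)]:
        dialup = download_time(size_mb, 56)         # 56 kbps dial-up modem
        broadband = download_time(size_mb, 8_000)   # 8 Mbps broadband line
        print(f"{label:10s}  dial-up: {dialup:8.1f} s   broadband: {broadband:6.2f} s")

At these assumed speeds, the 50 MB clip takes roughly two hours over dial-up but under a minute over broadband, which is the gap the services described below were built to close.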

DSL (DIGITAL SUBSCRIBER LINE)

This is the broadband service created by telephone companies to provide digital data transmission over the wires of a local telephone network. DSL allows the subscriber to use the existing telephone wires for Internet communication and voice communication simultaneously.

CABLE MODEM

This is the broadband service developed by cable companies. Bandwidth has never been an issue for cable, because cable lines carry hundreds of video signals without any problem. There was, however, one glitch. In the early years of cable, the cable architecture was a one-way transmission system, whereas the Internet requires two-way communication. In the 1990s, cable companies developed hybrid fiber/coaxial (HFC) networks that use both fiber optic and coaxial cables. This new network could relay data both to and from users, allowing cable companies to become ISPs.

Short Answer Questions

  1. What is the process of delivering data from one point in a telecommunication network to another called?
  2. Who developed the Transmission Control Protocol/Internet Protocol (TCP/IP)?
  3. What are the main functions of a modem?
  4. What does ARPA stand for?
  5. What is the purpose of an Internet service provider (ISP)?
  6. Who developed a new user-friendly interface to the Internet?
  7. Who coined the term Hypertext?
  8. What is the difference between the Web and the Internet?
  9. How many Network Access Points (NAPs) are there?
  10. What does a Digital Subscriber Line (DSL) do?
  11. Assignment: Format the above document with the HTML tags that you have learned (use at least 20 formatting tags). Create an ordered list of 10 short answer questions you would ask your students if you were the instructor. Save the HTML document as history.html and upload it to your WMU account.