UNIT 12 BASICS OF INTERNET
Structure
12.0 Objectives
12.1 Introduction
12.5.2 Internet 3
12.10.5 Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP)
12.10.6 Z39.50
12.10.7 Dublin Core or Z39.85
12.11 Organisation of Internet
12.11.1 Internet Addressing
12.11.2 IP Addresses
12.11.3 Domain Name
12.12.9 Extranet
12.13 Summary
12.15 Keywords
12.0 OBJECTIVES
After reading this Unit, you will be able to acquire knowledge on the following components of the Internet:
• Internet history, growth and development and its management;
• Internet architecture, methods of accessing the Internet, and the software and hardware requirements for accessing the Internet;
• Internet standards and protocols and organisation of the Internet; and
• Internet security.
12.1 INTRODUCTION
The Internet has emerged as one of the most powerful media of communication. The Internet and associated technologies have created a global environment that has transformed the world into a global village, defying the limitations of geographical boundaries. About two decades ago, most of the world knew very little or nothing about the Internet. Originating from a network owned by a US Defence project, the Internet, till the 1970s, connected a limited number of computers accessible only to computer scientists and researchers in the USA and allied countries for the purpose of defence research. Initially, it became accessible to universities and research institutions to facilitate easier and faster communication between scientists and researchers. With the winding up of operations of ARPANET in 1990, the combined infrastructure of ARPANET and NSFNET became available commercially. The Internet, since then, has been growing by leaps and bounds around the world, and broadband is rapidly becoming a large part of the growth. The number of hosts is considered an accurate measure of the size of the Internet. The Domain Survey, sponsored by the Internet Systems Consortium, which discovers every server on the Internet, reports that the Internet host count has reached over 233 million and has grown by 35 percent in the past year. InternetWorldStats.com reports that there are more than 745 million Internet users worldwide, with an average of 3.2 users per host. With the high quality of service in the United States, there are approximately 2.4 Internet users per host, whereas in some developing countries such as China and India, there are more than 100 Internet users per host. The Internet statistics can be used as a barometer of the global economy. For example, in the case of India, the total number of users increased from 5 million in the year 2000 to 16.58 million in the beginning of 2004. This 231.6% increase in the number of users in four years is an indication of Internet-based economic activity in India as well as overall economic changes in the country.
The Internet has completely revolutionised the modes and methods of the computer and
communications world like never before. The Internet is an integration of several functions
and services rolled into one. It is a world-wide broadcasting facility, a mechanism for
information storage and dissemination, a medium for electronic publishing of scholarly
literature and a medium for collaboration and interaction between individuals and their
computers without barriers of geographic location. The Internet represents one of the
most successful examples of the benefits of sustained investment and commitment to
research and development in information infrastructure. Beginning with the early research
in packet switching, the government, industry and academia have been partners in evolving
and deploying this exciting new technology.
The research effort was known as the Internetting project, and the system of networks that emerged from it became known as the 'Internet'. The system of protocols developed over the course of this research effort became known as the TCP/IP Protocol Suite, after the two initial protocols developed: Transmission Control Protocol (TCP) and Internet Protocol (IP). The operational management of the emerging Internet was handed over to the Defence Communication Agency (DCA) in 1975. The Unix to Unix Copy Program (UUCP) was developed at Bell Labs (AT&T) in the year 1976. The year 1977 witnessed the development of mail specifications (RFC 733). Usenet was established in 1979 using UUCP between Duke University and the University of North Carolina (UNC). DARPA also established the Internet Configuration Control Board (ICCB) in 1979.
In 1981, CSNET (Computer Science Network) was built with the collaboration of a number
of universities and industries in USA. The National Science Foundation gave financial
support to CSNET to provide networking services. CSNET used the Phonenet MMDF protocol for telephone-based electronic mail relaying and, in addition, pioneered the first use of TCP/IP over X.25 using commercial public data networks. The CSNET server provided an early example of a white pages directory service and this software is still in use at numerous sites. At its peak, CSNET had approximately 200 participating sites and international connections to approximately fifteen countries. Another important development in the same year was the creation of BITNET (Because It's Time Network). BITNET was started as a cooperative network at the City University of New York, with the first connection to Yale University. BITNET adopted the IBM RSCS protocol suite that connected participating sites through leased lines. Most of the original BITNET connections linked IBM mainframes in university data centres. From the beginning, BITNET has been multi-disciplinary in nature with users in all academic areas. It has also provided a number of unique services to its users (e.g., LISTSERV). Today, BITNET and its parallel networks in other parts of the world (e.g., EARN in Europe) have several thousand participating sites. In recent years, BITNET has established a backbone which uses the TCP/IP protocols, with RSCS-based applications running above TCP.
The year 1982 was of great significance in the growth and development of the Internet. The Defence Communication Agency (DCA) and DARPA adopted the Transmission Control Protocol (TCP) and Internet Protocol (IP) suite (commonly known as TCP/IP) as the official protocol suite for ARPANET. This led to one of the first definitions of the Internet as a connected set of networks using TCP/IP. In the same year, EUnet (European UNIX Network) was created to provide e-mail and Usenet services in Europe. The Exterior Gateway Protocol (EGP) was also developed in the same year; it defines protocols for connecting networks that are not based on TCP/IP with the Internet. The University of Wisconsin developed the 'Name Server' in 1982, which facilitated the translation of names into strings of numbers. This development led to the practice of assigning domain names to sites that continues even now. Another significant development was the splitting of ARPANET into ARPANET and MILNET in 1983; MILNET was later integrated with the Defence Data Network created in 1981.

The launch of desktop computers led to a major shift from having a single, large mainframe computer connected to the Internet at each site to entire local area networks being connected to the Internet. In 1983, the Internet Activities Board (IAB) replaced the ICCB, with a primary mission to guide the evolution of the TCP/IP protocol suite and to provide research advice to the Internet community.
In 1986, the U.S. National Science Foundation (NSF) initiated the development of the NSFNET, which today provides a major backbone communication service for the Internet. The National Aeronautics and Space Administration (NASA) and the U.S. Department of Energy contributed additional backbone facilities in the form of the NSINET and ESNET, respectively. The Network News Transfer Protocol (NNTP) was designed to enhance news performance over TCP/IP.
In 1987, the NSF signed a cooperative agreement with Merit Networks, Inc. to manage the NSFNET backbone. Merit, IBM and MCI later founded Advanced Network and Services, Inc. (ANS). BITNET and CSNET subsequently merged, in 1989, to form the Corporation for Research and Educational Networking (CREN). In the fall of 1991, the CSNET service was discontinued, having fulfilled its important early role in the provision of academic networking services. A key feature of CREN is that its operational costs were fully met through dues paid by its member organisations.
A computer virus for the first time affected approximately 6,000 of the total 60,000 hosts on the Internet in the year 1988. The vulnerability of the Internet and the need for more security was realised for the first time. DARPA formed the Computer Emergency Response Team (CERT) in response. In the same year, the Department of Defence adopted Open Systems Interconnection (OSI).
The total number of hosts on the Internet rose to 100,000 in 1989. The year also witnessed the first relays between a commercial electronic mail carrier and the Internet: MCI Mail connected through the Corporation for National Research Initiatives (CNRI) and CompuServe connected through Ohio State University. The Corporation for Research and Educational Networking (CREN) was formed with the merger of CSNET and BITNET. The Internet Engineering Task Force (IETF) and the Internet Research Task Force (IRTF) also came into existence under the IAB in the year 1989. In the same year, several other countries got connected to the NSFNET, including Australia, Germany, Israel, Italy, Japan, Mexico, the Netherlands, New Zealand, Puerto Rico and the United Kingdom. In Europe, major international backbones such as NORDUNET and others provide connectivity to over one hundred thousand computers on a large number of networks. During the course of its evolution, particularly after 1989, the Internet system began to integrate support for other protocol suites into its basic networking architecture. The present emphasis in the system is on multi-protocol internetworking, and in particular, on the integration of the Open Systems Interconnection (OSI) protocols into the architecture.
During the early 1990s, OSI protocol implementations also became available and, by the end of 1991, the Internet had grown to include some 5,000 networks in over three dozen countries, serving over 700,000 host computers used by over 4,000,000 people. The ARPANET ceased to exist in 1990. Commercial network providers in the U.S. and Europe began to offer Internet backbone and access support on a competitive basis to interested parties. Access to the Internet was first offered on a commercial basis by 'The World' (world.std.com), which thus became the first Internet Service Provider (ISP) of Internet dial-up access. Several other countries got connected to the Internet in 1990, including Argentina, Austria, Belgium, Brazil, Chile, Greece, India, Ireland, South Korea, Spain and Switzerland.
Wide Area Information Servers (WAIS) were invented in 1991 by Brewster Kahle and released by the Thinking Machines Corporation. These servers became the basis of indices to information available on the Internet. The indexing and search techniques implemented by these engines allow Internet users to find information using keywords across the vast resources available on the net.
The most significant development in the history of the Internet was the invention of the World Wide Web (WWW) by Tim Berners-Lee at the CERN Laboratory in 1991. 'Mosaic', the first graphical web browser to gain wide popularity, was released in 1993 and took the Internet by storm. Several other countries got connected to the Internet in the year 1993. The InterNIC was created in 1993 to provide specific Internet services, including i) directory and database services; ii) registration services; and iii) information services.
In 1994, the Internet (ARPANET) celebrated its 25th anniversary. Internet shopping and e-commerce commenced operation on the net. Growth in Internet traffic became geometric; NSFNET traffic passed 10 trillion bytes per month during 1994. WWW became the second most popular service on the net (behind FTP), leaving Telnet in third place. In March 1995, the WWW surpassed FTP as the service with the greatest traffic on NSFNET based on packet count.

Several traditional dial-up systems in the USA, including CompuServe, America Online and Prodigy, began to provide Internet access for services other than e-mail, i.e., WWW, Gopher, FTP and so on.
The technologies of the decade were the WWW and search engines. New technologies emerged in the late 1990s, including client-based code loaded from web servers such as Java, JavaScript and ActiveX. The research and development on the Internet and related technologies continues even today.
A great deal of support for the Internet community has come from the U.S. Federal
Government, since the Internet was originally part of a federally-funded research program
and, subsequently, has become a major part of the U.S. research infrastructure. During
the late 1980s, however, the population of Internet users and network constituents expanded
internationally and began to include commercial facilities. Indeed, the bulk of the system
today is made up of private networking facilities in educational and research institutions,
businesses and in government organisations across the globe.
Self Check Exercise

1) What role did DARPA and NSF play in the growth and development of the Internet?
The Internet has functioned as a collaborative effort among cooperating parties. The key function of this collaborative effort is to develop and evolve specifications for the TCP/IP protocol suite that was originally developed in the DARPA research program mentioned above. In the last five or six years, this work has been undertaken on a wider basis with support from government agencies in many countries, industry and the academic community.

The Internet Activities Board (IAB) was created in 1983 to guide the evolution of the TCP/IP Protocol Suite and to provide research advice to the Internet community.

During the course of its existence, the IAB has been reorganised several times. It now has two primary components: the Internet Engineering Task Force (IETF) and the Internet Research Task Force (IRTF). The IETF is primarily responsible for the further evolution of the TCP/IP protocol suite, its standardisation with the concurrence of the IAB, and the integration of other protocols into Internet operation (e.g., the Open Systems Interconnection protocols). The IRTF continues to organise and explore advanced concepts in networking under the guidance of the Internet Activities Board and with support from various government agencies.
The Internet Activities Board and the Internet Engineering Task Force have a secretariat to manage their day-to-day functions. Two other functions that are critical to IAB operation are the publication of documents describing the Internet and the assignment and recording of various identifiers needed for protocol operation. Throughout the development of the Internet, its protocols and other aspects of its operation have been documented, first in a series of documents called Internet Experiment Notes and, later, in a series of documents called Requests for Comments (RFCs). The latter were used initially to document the protocols of the first packet switching network developed by DARPA, the ARPANET, beginning in 1969, and have become the principal archive of information about the Internet. At present, the publication function is provided by an RFC editor.
The recording of identifiers is provided by the Internet Assigned Numbers Authority (IANA), which has delegated one part of this responsibility to an Internet Registry that acts as a central repository for Internet information and provides central allocation of network and autonomous system identifiers, in some cases to subsidiary registries located in various countries. The Internet Registry (IR) also provides central maintenance of the Domain Name System (DNS) root database, which points to subsidiary distributed DNS servers replicated throughout the Internet. The DNS distributed database is used, inter alia, to associate host and network names with their Internet addresses and is critical to the operation of the higher level TCP/IP protocols, including electronic mail.
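The name-to-address association provided by the DNS can be seen from any networked machine. The short Python sketch below resolves a host name through the resolver's normal DNS lookup; the host name used is only an illustrative assumption:

    import socket

    # Resolve a host name to an IPv4 address. Behind this single call,
    # the resolver consults the distributed DNS database described above.
    host = "www.example.com"  # illustrative host name
    address = socket.gethostbyname(host)
    print(f"{host} resolves to {address}")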
There are a number of Network Information Centres (NICs) located throughout the Internet to serve its users with documentation, guidance, advice and assistance. As the Internet continues to grow internationally, the need for high quality NIC functions increases. Although the initial community of users of the Internet was drawn from the ranks of computer science and engineering, its users now come from a wide range of disciplines in the sciences, arts, business, military and government administration.
[Figure: Organisational structure of the Internet, showing the Internet Society (ISOC) and the Internet Corporation for Assigned Names and Numbers (ICANN), with the Internet Engineering Task Force (IETF), the Internet Research Task Force (IRTF), the Internet Assigned Numbers Authority (IANA) with the domain database and root servers, and the accredited registrars beneath them.]
12.4 DEFINITIONS OF INTERNET
The term Internet has been coined from two terms, i.e., interconnection and network. A network is simply a group of computers that are connected together for sharing information and resources. Several such networks have been joined together across the world to form what is called the Internet. The Internet is thus a network of networks. It refers to the vast collection of interconnected networks that use the TCP/IP protocols and that evolved from the ARPANET of the late 60's and early 70's (http://1001resources.com/hosting/glossary.html).
The Internet is the world's largest computer network that enables computers of all kinds to share services and communicate directly with each other, as if they were part of one giant, seamless, global computing machine. It is a vast and sprawling network reaching into computer sites world-wide. The Internet comprises thousands of local area networks and groups of computers, including government supercomputers, campus-wide information systems, local area networks and individual workstations. Each of these different computers, connected to the Internet and running on different platforms or operating systems, follows certain standards or rules of communication called protocols. The standard protocol used for Internet communication is called Transmission Control Protocol/Internet Protocol or TCP/IP. Standardised communication protocols allow similar, dissimilar, near and distant computers to communicate with one another.
The Federal Networking Council (FNC) in 1995 defined the Internet as a global information system that: (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein. It may be seen that the FNC has described the Internet as a global information system. The definition not only includes the underlying concepts of communications technology, but also higher-level protocols and end-user applications, the associated data structures and the means by which the information may be processed, manifested, or otherwise used. In many ways, this definition supports the characterisation of the Internet as an 'Information Superhighway'.
The Internet Society (ISOC) defines the Internet as a 'global network of networks' enabling computers of all kinds 'to directly and transparently communicate and share services throughout the world' using a common communication protocol. It should not be seen as merely a collection of networks and computers. The Internet is an architecture that provides for both communications capabilities and information services. Because the Internet is an enormously valuable, enabling facility for so many people and organisations, it also constitutes a shared global resource of information, knowledge and means of collaboration and cooperation among countless diverse communities.
Self Check Exercise
3) Define Internet. How are different platforms or operating systems connected to the
Internet?
Note: i) Write your answer in the space given below.
ii) Check your answer with the answers given at the end of the Unit.
......................................................................................................................
......................................................................................................................
......................................................................................................................
......................................................................................................................
12.5 GROWTH OF INTERNET
The Internet was initially set up with 4 hosts. In the early 1980s there were only 213 registered hosts on the Internet. By 1986, the number had risen to 5,089 hosts connected throughout the world. By 1989, the number of networks connected to the Internet rose to five hundred. The Defence Data Network Information Centre found 2,218 networks connected as of January 1990. By June 1991, the National Science Foundation Network Information Centre pegged it at close to four thousand. The number of networks connected to the Internet by the end of 2003 was more than 60,000.
The number of hosts accessible on the Internet is considered a fair measure of its growth. Since the early 80s, when the US government began to share its network technology with the world, there has been growth on a scale that is hard to imagine. To put it into better perspective, the number of hosts on the Internet grew from 4 in 1969 to 213 in 1981 and 233 million in early 2004. Table 1 provides the growth in the number of servers from 1969 to 2004.
According to the Internet Society, a non-profit society that studies and promotes the use of the Internet, 134 countries had full Internet connection and an additional 5 countries had limited access in 1996. Surveys performed by International Data Corporation and Matrix Information and Directory Services found that as of September 1997 there were between 53 and 57 million users of the Internet worldwide. By January 1999 there were about 50 million Internet connections worldwide, and the number of users grew to 200 million in 200 countries and territories by the year 2001.
The technologies of the past two decades are the WWW and search engines. While the number of web pages available on the Net reached 800 million, the number of search engines grew to several hundred.
[Figure: Internet Domain Survey host count, showing growth to over 200 million hosts. Source: Internet Software Consortium (www.isc.org)]
12.5.1 Internet 2
Building on the tremendous success of the Internet and associated technologies for academic needs, the university community has joined together with government and industry to accelerate the next stage of Internet development in academia, called Internet2 or I2. The transition from government-supported backbones to a totally privatised system in the U.S. has led to the development of a new system of backbones called Internet2, the Next Generation Internet. Proponents of Internet2 believe that privatisation of the Internet has shifted the focus of Internet development more towards business profits and less towards academic, research and teaching needs.
Internet2 is a research and development consortium led by over 206 US universities working in partnership with industry and government to develop and deploy advanced network applications and technologies, accelerating the creation of tomorrow's Internet. Internet2 is recreating the partnership among academia, industry and government that fostered today's Internet in its infancy.

In addition to university members, the Internet2 community includes over 70 companies and more than 40 affiliated organisations, including US government research laboratories. Internet2 members are working with more than 30 other similar research and education networking organisations in countries around the world. Supported by a core central staff, Internet2 activities are led by its members. Internet2 members work in concert with national, state and regional initiatives in the United States, and are coordinated with international organisations such as the Internet Engineering Task Force. Internet2 efforts are focused on the following activities:
• Advanced network applications to enable collaboration among people and provide interactive access to information and resources in ways not possible on today's commercial Internet. Interactive distance learning, remote access to unique scientific instruments, real-time access to large databases, and streaming high-definition video are all possible with high-performance networks.

• New network capabilities such as quality of service and multicasting are being aggressively tested and deployed in the networks used by Internet2 members. These capabilities support advanced network applications today, and will enable tomorrow's commercial Internet to provide the reliable performance that advanced applications require.

• Middleware, the behind-the-scenes software, is providing security, directories and other services required by advanced network applications. In today's Internet, applications usually have to provide these services themselves, which leads to competing and incompatible standards. By promoting standardisation and interoperability, middleware will make advanced network applications much easier to use.
• High-performance networks are linking the campuses and laboratories of over 206 Internet2 member institutions. The high-performance networks participating in the Internet2 project provide the environment in which new network applications and capabilities can be deployed and tested.
The greatest benefit of Internet2 is its large bandwidth. Internet2 is already operational. It connects 206 participating universities and institutions in the US, enabling the transfer of large volumes of video and audio and other applications used for research purposes.
12.5.2 Internet 3
Similar to the origin of the Internet, the roots of the emerging Internet 3 also lie with the US government and academics. These include the US government's Next Generation Internet (NGI) initiative (http://www.ngi.gov/), the National Science Foundation (NSF) and the very high-speed Backbone Network Service (vBNS). As computer and communication corporate giants such as IBM, Cisco and Intel will eventually benefit from the development of Internet 3, they too are active participants in this new Internet project.
Initiated in October 1996, NGI aimed to foster partnership between academia, industry and government to develop technologies that will be essential to sustain the USA's technological leadership in computing and communications and enhance the country's economic competitiveness. The NGI aimed to demonstrate the new inter-network with a capacity of 1 terabit per second (Tbps) and over 10 advanced applications to leverage this bandwidth. Internet 3 promised a large number of new applications on a very high-speed network. The Next Generation Internet (NGI) Program has been successfully completed and the Federal agencies are currently coordinating advanced networking research programs under the Large Scale Networking (LSN) Coordinating Group. The NGI Program met all of its goals except for its goal of terabit per second networking in 2002, which is expected to be met by the current LSN research activities.
12.6 "INTERNETARCIDTECTURE
The Internet uses the client/server model. A server is a computer system that is accessed by other computers and/or workstations at remote locations. Usually, a server contains data, datasets, databases and programs. Server computers are also called 'hosts', since these computers are configured to host datasets, files and databases, receive requests for them from client machines and serve them. The term 'host' means any computer that has full two-way access to other computers on the Internet. All computers that host web sites are host computers or servers since they 'host' information and 'serve' client machines. For example, the computer that hosts the website of Google (http://www.google.com/) is a host or server computer. There are millions of host computers linked on the Internet for communicating with each other. The connectivity from one computer to another is provided using standard modes of linkage called Internet protocols. A protocol can be defined as a special set of rules governing connectivity for telecommunication connections. Protocols may exist at several levels, and in order to communicate, both end points must recognise and observe standard protocols. Peer-to-peer and client/server are two popular systems of communication.
12.6.1 Peer to Peer Communication
Peer-to-peer is a communications model in which each party has the same capabilities and
either party can initiate a communication session. In some cases, peer-to-peer
communication is implemented by giving each communication node both server and client
capabilities.
On the Internet, peer-to-peer (referred to as P2P) is a type of transient Internet network
that allows a group of computer users with the same networking program to connect with
each other and directly access files from one another's hard drives. Napster and Gnutella
are examples of this kind of peer-to-peer software. Corporations are looking at the
advantages of using P2P as a way for employees to share files without the expense
involved in maintaining a centralized server and as a way for businesses to exchange
information with each other directly. These are usually operated in small offices. IBM's Advanced Peer-to-Peer Networking (APPN) and Gnutellanet are examples of products that support the peer-to-peer communication model.
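In a peer-to-peer arrangement, each node carries both of the roles just described. The Python sketch below is a simplified illustration of this idea, not the protocol of Napster, Gnutella or any other real product; the port number and the commented-out peer address are illustrative assumptions:

    import socket
    import threading

    def serve(port):
        # Server role: accept connections initiated by other peers.
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("0.0.0.0", port))
        listener.listen(5)
        while True:
            conn, addr = listener.accept()
            print(f"peer {addr} sent: {conn.recv(1024).decode()}")
            conn.sendall(b"ack")
            conn.close()

    def talk_to_peer(host, port, message):
        # Client role: initiate a session with another peer.
        with socket.create_connection((host, port)) as conn:
            conn.sendall(message.encode())
            print("reply:", conn.recv(1024).decode())

    # A peer runs both roles at once, which is what distinguishes it
    # from a dedicated client or a dedicated server.
    threading.Thread(target=serve, args=(9001,), daemon=True).start()
    # talk_to_peer("192.0.2.10", 9001, "hello")  # example peer address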
12.6.2 Client Server Architecture
The client-server architecture is based on the principle whereby a 'client' program installed on the user's computer (called the client) communicates with the 'server' program installed on the host computer to exchange information through the network. The client-server model involves two separate but related programs, i.e., client and server. The client program is loaded on the PCs of users hooked to the Internet, whereas the server program is loaded on to the 'host' (usually a PC with large storage capacity and RAM, a mini-computer or a mainframe computer) that may be located at a remote place. The concept of client/server computing has particular importance on the Internet because most of the programs are built using this design. A server is a program that 'serves' (or delivers) something, usually information, to a client program. A server usually runs on a computer that is connected to a network. The size of that network is not important in the client/server concept: it could be a small local area network or the global Internet. The advantage of this type of design is that a server has to store the information in only one format, which can be accessed by various clients working on multiple platforms and located at different places. In the client/server model, multiple client programs share the services of a common server program. Both client programs and server programs are often part of a larger program or application. For example, Internet Explorer (web browser) is a part of the Windows operating system, Internet Information Server (IIS) is a part of Windows 2000/Windows NT, and 'Apache' (web server) comes integrated with the Linux operating system.

In the case of World Wide Web (WWW) services, the web browser (Internet Explorer or Netscape Navigator) is a client program that resides on the PC of a user. The web browser requests services from a web server loaded on to a host machine. The 'server' program is designed to interact with 'client' programs so that a user can determine whether the information they want is available on a server, and if so, the server is programmed to serve it to the client.
Software tools in a client-server environment work in pairs. For every application in a client-server environment, there is a client program that is responsible for facilitating users to interact with the 'server' program and explore information hosted on it. The client application works as an interface between the user and the host, collecting information about the requirement of the user, translating the request into the agreed language of communication between the client and the server, and sending it to the relevant server computer. The server program is responsible for hosting the data and accompanying programs, receiving requests from clients, finding the information requested by a client and returning it to the client. A 'server' is generally programmed to organise information stored on it, create indices and search information. The responses or data sent by the server machines are received by the client machines, which decode them and convert them for appropriate display on the user's machine.
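The request-response cycle described above can be made concrete with a minimal client-server pair. The Python sketch below is a toy request-response service on a local port, an illustrative assumption rather than any specific Internet service:

    import socket
    import threading
    import time

    HOST, PORT = "127.0.0.1", 8000  # illustrative address and port

    def server():
        # Server program: hosts the information and answers client requests.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        request = conn.recv(1024).decode()                    # receive the request
        conn.sendall(f"server reply to: {request}".encode())  # serve the response
        conn.close()
        srv.close()

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.5)  # give the server a moment to start listening

    # Client program: translates the user's requirement into a request,
    # sends it to the server, then decodes and displays the response.
    with socket.create_connection((HOST, PORT)) as cli:
        cli.sendall(b"GET document")
        print(cli.recv(1024).decode())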
[Figure: Client-server architecture, showing client software and server software exchanging requests and responses over a network (Internet or intranet) using a communication protocol.]
Self Check Exercise
......................................................................................................................
......................................................................................................................
......................................................................................................................
......................................................................................................................
......................................................................................................................
......................................................................................................................
The most common method of accessing the Internet for most users is through telephone lines. A user dials the service provider from his/her residence or office, and the service provider puts them on to the Internet service network. This arrangement allows a user to connect through telephones almost anywhere in the world. Now that more and more service providers have started offering faster and faster options for Internet connections, users looking for a faster-than-dialup connection have several options to choose from.
Connections to the Internet fall under two basic categories: dial-up access and direct or dedicated access. There are two categories of dial-up access, i.e., analogue and digital. Regular telephone lines support the analogue mode of data transmission, which uses a continuous wave form to transmit data. Analogue connections use modems to convert digital signals to analogue signals and then the analogue signals back to digital signals. Digital transmissions, such as fibre optic devices, pass data along using discrete, on/off pulses. Unlike analogue connections, digital connections do not require a modem at each end of the connection.
Dial-up connections are less expensive compared to dedicated leased connections. It is the least expensive means of accessing the Internet. In India, a dial-up connection from a typical Internet Service Provider costs anywhere between Rs. 300 and Rs. 1,000 per month for 100 to 500 hours. A dial-up connection requires very modest and low-cost hardware and software.

Although least expensive, dial-up connections to the Internet have certain disadvantages. A dial-up connection suffers from the disadvantages of low speed and lower reliability. The speed of an analogue dial-up connection is determined by the speed of the modem. Regular telephone lines used for accessing the Internet may be slow, unreliable and busy during peak hours. There are two types of accounts that one can establish with an analogue dial-up connection: SLIP/PPP and shell accounts.
Hardware Requirements
The hardware requirements common to all types of connections for the user (client) and server are dealt with separately. A dial-up connection to the Internet specifically requires a modem (Modulator/Demodulator).

A modem is a device that enables a computer to transmit data over telephone lines. The modulator converts the discrete stream of digital 'on-off' electric pulses used by computers into the analogue wave patterns used for transmission of the human voice. The demodulator recovers the digital data from the transmitted analogue signal. A modem can be fixed internally into a PC or bought as an external device.

A reliable and high-quality modem is a critical requirement to ensure the quality and reliability of transmitted information. The modem should incorporate error-correction protocols and should be supported by local telecommunication facilities. It should work in asynchronous mode with a speed of 33.6 to 56 kbps.
Software Requirements

The software requirements common to all types of connections for the user (client) and server are dealt with separately. A dial-up connection specifically requires communication software.
The DSL technology has several variants, such as ADSL, SDSL, VDSL, HDSL, SHDSL and RADSL. A brief description of these variants is given below:
• Asymmetric DSL (ADSL): This technology facilitates use of the whole bandwidth of standard telephone copper cabling. It allows a subscriber to receive data (download) at speeds of up to 1.544 megabits per second, and to send (upload) data at speeds of 128 kilobits per second. Thus, the speeds of upload and download are 'asymmetric'. While the POTS (Plain Old Telephone Service) uses the frequencies between 300 Hz and 3,100 Hz, the higher frequencies normally remain unused. ADSL uses the frequencies between 30 kHz and 1.1 MHz to transport data, leaving the telephone connection as crystal clear as ever. ADSL can be easily and seamlessly combined with existing ISDN lines.
• Symmetric DSL (SDSL): This connection, used mainly by small businesses, does not allow simultaneous use of the telephone, but the speed of receiving and sending data is the same.
• VDSL (Very high bit-rate DSL): This is a fast connection, but works only over a
short distance.
• HDSL (High data rate DSL): HDSL is the earliest variation of DSL. The main
characteristic of HDSL is that it is symmetrical or in other words an equal amount of
bandwidth is available in both directions.
• Rate Adaptive DSL (RADSL): This is a variation of ADSL, but the modem can
adjust the speed of the connection depending on the length and quality of the line.
• The speed is much higher than a regular modem (1.5 Mbps vs. 56 kbps).
• DSL does not necessarily require new wiring; it can work on the existing phone
lines.
• The companies that offer DSL usually provide the modem as part of the installation.
• A DSL connection works better when it is closer to the provider's central office.
• The connection is faster for receiving data than it is for sending data over the Internet.
Hardware Requirements

The hardware requirements common to all types of connections for the user (client) and server are dealt with separately later. A DSL connection to the Internet specifically requires a DSL modem or network terminator.

DSL modems or network terminators are digital devices that are used to connect a computer or network to a larger network via telephone wiring using DSL techniques. 'Modem' is a misnomer in this case since there is no conversion from digital to analogue. The DSL technologies use sophisticated modulation schemes to pack data onto copper wires. They are sometimes referred to as last-mile technologies because they are used only for connections from a telephone switching station to a home or office, not between switching stations.

Most DSL devices connect to a USB port on a desktop or notebook computer and do not require an additional network interface card. Most DSL devices support multiple operating systems.
Software Requirements

The software requirements common to all types of connections for the user (client) and server are dealt with separately later. A DSL connection to the Internet specifically requires DSL installation software. However, since both telephone and ADSL service are simultaneously available on the same copper pair, a central splitter or distributed filters are required at the exchange for decoupling the ADSL and telephone signals.

Since the Internet connection over DSL is virtually always available, a DSL connection does not require a communication package.
Dedicated leased lines have many advantages. The major advantages are high speed and better reliability. With a single dedicated leased line, an organisation can have many users on a local area network connected to the Internet. Being a dedicated leased line, users need not dial to connect to the Internet. All computers on a local area network using a dedicated leased line are always connected to the Internet. This type of connection is appropriate for organisations that transfer large amounts of data and have many users and workstations that must be connected to the Internet. This option requires that dedicated lines be leased through a network provider (such as the Department of Telecommunications or VSNL in India) and that special network hardware be installed on site, making this a complicated operation. The main disadvantages of a dedicated leased line are the high cost of communication and the difficulties involved in its maintenance.
• A single connection can support voice, data and images. An ISDN subscriber can establish two simultaneous independent calls, which could be voice, data, image or a combination of any two, whereas only one call is possible on ordinary telephone lines.

• High quality services, being digital right from the premises of subscribers (end to end), are available.
The hardware requirements common to all types of connections for the user (client) and server are dealt with separately later. The hardware required for an ISDN connection to the Internet is given below:

The ISDN telephone line is terminated on a common box called the Network Termination (NT) unit that is installed at the subscriber's premises. The Network Termination unit along with accessories is generally provided by the Internet Service Provider or can be procured by the subscriber. The terminal equipment has to be procured by the subscriber.

ISDN supports voice, data and image transmission over the telephone line. As such, its application is not restricted to Internet access. The hardware required for an ISDN connection would therefore depend on the applications that a user wants to run on the ISDN connection. Some of the ISDN equipment may be:

• ISDN feature phone: the simplest type of ISDN phone, which has an LCD display and some additional keys

• Terminal adapter

• G4 fax
The hardware requirements common to all types of connections for the user (client) and server are dealt with separately later. A cable connection to the Internet specifically requires a cable modem.

A cable modem is an external device that hooks up to the PC. It interacts with a Cable Modem Termination System (CMTS) installed at a central location. Cable modems use various technologies like the TDMA-based DOCSIS standard or the more robust and modern SCDMA-based TERAYON proprietary technology.

Installing cable Internet also involves modification and upgradation of the existing cable TV network to handle two-way data. The process involves the addition of signal amplifiers and coaxial cable by the local cable provider.
Other mobile telephone services in India, like Tata Mobile, also offer such features.
[Figure: Methods of accessing the Internet, showing dial-up, DSL and cable connections from client machines through the gateway (VSNL and ERNET).]
Shell Account

In the case of a shell account, a user logs on to an intermediary computer (host) to access the Internet. The host computer that is connected to the Internet provides connectivity to the user. There may be several shell accounts on a host computer. The primary disadvantage of a shell account is that it limits access to the Internet applications running on the service provider's computer. A user also has to learn to use the operating commands of the host computer. Moreover, shell accounts support only text-based access to the Internet. Transferring information to or from the Internet using a shell account is a two-step process. In the first step, the file is transferred from a remote machine on the Internet to the host machine, and then from the host machine to the personal computer of the user in the second step. Shell accounts are cheaper compared to TCP/IP accounts. With the cheaper availability of Internet connections and the popularity of graphical browsers, most users do not prefer shell accounts.
TCP/IP Account

A TCP/IP account enables a user to configure his/her system as a host machine. It supports a graphical interface for surfing the net. TCP/IP accounts cost more than plain shell accounts.

An Internet Service Provider (ISP) gives software, specialised hardware and technical help to connect to the Internet. Many ISPs provide electronic-mail accounts, host customers' web pages, and offer other services as a package deal to their customers.
Pacific Internet www.pacific.net.in
Pioneer Online www.pol.net.in
Pionet Online www.pionetindia.com
Reliance Infocom Ltd. www.onlysmart.com
Roltanet www.roltanet.com
Sampark Online www.samparkonline.com
Satyam Online www.satyamonline.com
Sigma Online www.sigmaonline.com
Southern Online www.sol.net.in
Sify Ltd. www.sifycorp.com
Spectra Net Limited www.spectranet.com
Software Technology Parks www.stpi.soft.net
SITI Cable Network Ltd. www.zeenext.com
Tata Internet www.tatanova.com
VSNL www.vsnl.net.in
W3C www.w3c.com
Weikfield www.wmi.net.in
ZeeNext www.zeenet.com
Input Devices

Image-based web sites require input devices like scanners, digital cameras, video cameras and PhotoCD systems. A large range of choices is available for these image capturing devices. Scanners are available in all sizes and shapes. Flatbed scanners or digital cameras mounted on a book cradle are more suitable for libraries.
Storage Devices

Servers hosting large web sites may require large amounts of storage; particular attention needs to be given to the storage solution. Intelligent storage networks and snap servers are now available in which the physical storage devices are intelligently controlled and made available to a number of servers. Although hard disc (fixed and removable) solutions are increasingly available at an affordable cost, optical storage devices including WORM, CD-R, CD-ROM, DVD-ROM or opto-magnetic devices, in stand-alone or networked mode, are attractive alternatives for long-term storage of digital information. Optical drives record information by writing data onto the disc with a laser beam. The media offers enormous storage capabilities. A number of RAID (Redundant Array of Inexpensive Disks) models are also available for greater security and performance. The RAID technology distributes the data across a number of disks in such a way that even if one or more disks fail, the system will still function while the failed component is replaced.
Web Servers
Setting up a web server requires a web server program. Many server programs are
available for different platforms, each with different features and cost varying from free
to very expensive. Some of the important web server programs are listed below:
Apache http://www.apache.org/
These computers can further run on Windows, Unix, Linux or other such operating systems. All these popular operating systems now have built-in support for connecting to the Internet. In order to access data from server computers, a large number of client software packages are available to suit various operating systems.

The hardware devices attached to the client computer also play a role in providing proper Internet connectivity. These can be either a modem or a network connection. In order to connect on a Local Area Network, normally used in offices or universities/colleges, there is a need to have a Network Interface Card (NIC). These cards are designed to handle different speeds and network architectures. In order to connect from home or a small office, a modem is connected to the computer. A modem can be an external device or fitted inside the computer, i.e., an internal modem. Modems come with different speeds, i.e., 14,400 bits per second (bps), 28,800 bps or 57,600 bps, etc. A modem provides connectivity to the external world through various types of communication lines.
A user also requires a multimedia PC (or Macintosh) equipped with an Internet browser like Internet Explorer or Netscape Navigator to access the Internet and its services. Web browsers are computer programs that enable a user to access the World Wide Web. They provide a graphical interface that allows users to click buttons, icons and menu options to view and navigate web pages. By using web browsers, a user can locate servers on the Internet, send a query, process the query results and display them. A web browser, as a client application, is designed for a particular computing platform (for example, Windows, Macintosh, UNIX) to take advantage of the strengths of the platform. Netscape Navigator and Microsoft Internet Explorer are popular web browsers. The client-side PCs may also require the following software packages (plug-ins) to download format-specific deliverables from the Internet:

Table 12.5: Format Specific Deliverables
Internet protocols are sets of rules or standard procedures that are followed to interconnect and communicate between computers in a network. The Internet Protocol allows dissimilar hosts to connect to each other through the Internet and transports information across the Internet in packets of data using the Transmission Control Protocol (TCP). This protocol also determines how to move messages and handle errors. The protocol allows the creation of standards independent of the hardware system. Data on the Internet is transmitted from one computer to another using standard protocols.
TCP/IP is a two-layer program. The higher layer, Transmission Control Protocol, manages the disassembling of a message or file into smaller packets that are transmitted over the Internet and received by a TCP layer that reassembles the packets into the original message. The lower layer, Internet Protocol, handles the address part of each packet so that it gets to the right destination. Each gateway computer on the network checks this address to see where to forward the message. Even though some packets from the same message are routed differently than others, they will be reassembled at the destination.
The Internet Protocol (IP) is the method or protocol by which data is sent from one computer to another on the Internet. Each computer (known as a host) on the Internet has at least one IP address that uniquely identifies it from all other computers on the Internet. When a user sends or receives data (for example, an e-mail note or a web page), the message gets divided into little chunks called packets. Each of these packets contains both the sender's Internet address and the receiver's address. Packets are sent first to a gateway computer that understands a small part of the Internet. The gateway computer reads the destination address and forwards the packet to an adjacent gateway that in turn reads the destination address, and so forth across the Internet, until one gateway recognises the packet as belonging to a computer within its immediate neighbourhood or domain. That gateway then forwards the packet directly to the computer whose address is specified.
Because a message is divided into a number of packets, each packet can, if necessary, be sent by a different route across the Internet. Packets can arrive in a different order than the order they were sent in. The Internet Protocol just delivers them. It is up to another protocol, the Transmission Control Protocol (TCP), to put them back in the right order. Another advantage of TCP/IP is that it is not bound in any way to the physical medium. Whether it is wireless, token-ring, an ordinary phone line, a LAN or another network, one can transmit data using TCP/IP.
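The disassembly into sequenced packets and the ordered reassembly described above can be sketched in a few lines of Python. This is a simplified simulation of the idea only; real TCP uses byte-level sequence numbers, acknowledgements and retransmission:

    import random

    MESSAGE = b"TCP/IP splits this message into packets and reassembles it."
    PACKET_SIZE = 8  # illustrative payload size in bytes

    # TCP sender side: disassemble the message into sequenced packets.
    packets = [
        (seq, MESSAGE[offset:offset + PACKET_SIZE])
        for seq, offset in enumerate(range(0, len(MESSAGE), PACKET_SIZE))
    ]

    # The network: packets may travel by different routes,
    # so they can arrive in a different order than they were sent.
    random.shuffle(packets)

    # TCP receiver side: use the sequence numbers to put the packets
    # back in the right order and rebuild the original message.
    reassembled = b"".join(payload for _, payload in sorted(packets))
    assert reassembled == MESSAGE
    print(reassembled.decode())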
The Hypertext Transfer Protocol (HTTP) is a set of rules for exchanging files (text, graphic images, sound, video and other multimedia files) on the World Wide Web. As its name implies, the essential concept of HTTP is the idea that files can contain links or references to other files, whose selection leads to the transfer of requests from one file to another. Any web server machine contains, in addition to the HTML and other files it can serve, an HTTP daemon, a program that is designed to wait for HTTP requests and handle them when they arrive. The web browser is an HTTP client, sending requests to server machines. When a user requests a file through the browser, by either 'opening' a web file (typing in a Uniform Resource Locator) or clicking on a hypertext link, the browser builds an HTTP request and sends it to the Internet Protocol address indicated by the URL. The HTTP daemon in the destination server machine receives the request and, after any necessary processing, the requested file is transmitted.
A web page (also called a document) consists of objects. An object may be an HTML file, an image, a Java applet, a video clip, etc., that is addressable by a single URL. Most web pages consist of an HTML file and several referenced objects or links.
HTTP defines how browsers request web pages from servers and how servers transfer web pages to clients; in essence, it defines the interaction between the web client and the web server. When a user requests a web page (for example, clicks on a hyperlink), the browser sends HTTP request messages for the objects in the page to the server. The server receives the requests and responds with HTTP response messages that contain the objects.
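This request-response exchange can be reproduced with Python's standard http.client module. A minimal sketch, with www.example.com and the path /index.html as illustrative targets:

    import http.client

    # Build and send an HTTP GET request, much as a browser does after
    # parsing the URL http://www.example.com/index.html
    conn = http.client.HTTPConnection("www.example.com", 80)
    conn.request("GET", "/index.html")

    # The HTTP daemon on the server answers with a response message:
    # a status line, headers and the requested object (an HTML file).
    response = conn.getresponse()
    print(response.status, response.reason)  # e.g., 200 OK
    print(response.read()[:80])              # first bytes of the object
    conn.close()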
Fig. 12.6: HTTP Protocol used in the Browser to Browse Web Pages
......................................................................................................................
......................................................................................................................
......................................................................................................................
......................................................................................................................
.......................................................................................................................
......................................................................................................................
[Figure: WS_FTP client window, showing local and remote file listings and the mode of transfer (ASCII, Binary and Auto).]

Fig. 12.7: WS_FTP Client: Windows Client used for Transferring Files from Client to Server and from Server to Client
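A graphical client such as WS_FTP wraps the same file-transfer operations that Python's standard ftplib module exposes. A minimal sketch; the host, the anonymous login and the file name are illustrative assumptions:

    from ftplib import FTP

    ftp = FTP("ftp.example.com")  # connect to an FTP server
    ftp.login()                   # anonymous login; real accounts pass credentials

    ftp.cwd("/pub")               # change to a remote directory
    print(ftp.nlst())             # list the remote files

    # Binary-mode download, equivalent to selecting 'Binary'
    # as the mode of transfer in the WS_FTP window.
    with open("readme.txt", "wb") as fh:
        ftp.retrbinary("RETR readme.txt", fh.write)

    ftp.quit()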
SLIP is a TCP/IP protocol used for communication between two machines that have been previously configured to communicate with each other. For example, your Internet service provider may provide you with a SLIP connection so that the provider's server can respond to your requests, pass them on to the Internet, and forward your requested Internet responses back to you. Your dial-up connection to the server is typically on a slower serial line, rather than on parallel or multiplexed lines such as a line of the network you are hooking up to.
PPP provides a layer 2 (data-link layer) service. Essentially, it packages your computer's TCP/IP packets and forwards them to the server, where they can actually be put on the Internet. PPP is a full-duplex protocol that can be used on various physical media, including twisted pair or fibre optic lines or satellite transmission. It uses a variation of High-level Data Link Control (HDLC) for packet encapsulation.
PPP is usually preferred over the earlier de facto standard Serial Line Internet Protocol
(SLIP) because it can handle synchronous as well as asynchronous communication.
PPP can share a line with other users and it has error detection that SLIP lacks. Where
a choice is possible, PPP is preferred.
12.10.6 Z39.50
Z39.50 is an American National Standard for Information Retrieval (IR). Prepared by
the National Information Standards Organisation (NISO), Z39.50 defines how one system
can cooperate with other systems for the purpose of searching databases and receiving
records. ANSI/NISO Z39.50-1995 (ISO 23950) is one of a set of standards produced to
facilitate the interconnection of computer systems. As a network protocol, the Z39.50
standard provides a set of rules that govern the formats and procedures used by computers
to interact with one another. The standard establishes the permissible sequences of events
at each of the two computers and specifies the content and structure of the information
parcels that are exchanged between systems.
The standard specifies formats and procedures governing the exchange of messages
between a client and a server, enabling the user to search remote databases, identify
records which meet specified criteria, and retrieve some or all of the identified records.
It is concerned, in particular, with the search and retrieval of information in databases.
This protocol is not used by Internet search engines (they use HTTP); it is more complex,
comprehensive and powerful than HTTP.
One of the major advantages of using Z39.50 is that it enables uniform access to a large
number of diverse and heterogeneous information sources. Z39.50 offers one true
interface to a variety of databases. Some of these products are starting to offer
Unicode support, which will become increasingly important as we move into multi-
lingual record displays. They are functionally rich because this kind of product can
support simultaneous searching of multiple databases. This is a very valuable feature in
that it greatly compresses the amount of time required to sequentially query multiple
databases.
The name Z39 came from the ANSI committee on libraries, publishing and information
services, which was named Z39. NISO standards are numbered sequentially, and Z39.50
is the 50th standard developed by NISO. The current version of Z39.50 was adopted in
1995, superseding earlier versions adopted in 1992 and 1988.
11) What is the Z39.50 protocol? What application does it have in a library environment?
Note: i) Write your answer in the space given below.
ii) Check your answer with the answers given at the end of the Unit.
....................................................................................................................
•• 0 ••••••••••••••••••••••••••••••••••••••••••••••••••••••••• 01 ••••••••••••••••••••••••••••••••••••••••••••••••••••••
.......................................................................... , .
....................................................................................................................
....................................................................................................................
....................................................................................................................
The same sort of decision-making is made for all packets that traverse the Internet. Each
time a packet reaches a router, its address is examined and the packet is forwarded either
to another router nearer its ultimate destination or to that destination if the router is the
final router on the path. The destination computer is the one that unpacks and merges all
the packets, strips away the distribution and routing addresses, and passes on the data.
Domain name: Each computer must have a unique name, such as www.iitd.ac.in.
12.11.2 IP Addresses
Every host on the Internet is assigned a unique identifier called an IP address or Internet
Protocol address. The IP address is a numerical address consisting of four numbers
separated by periods. An IP address looks like this: 202.54.26.82, and is read as "202 dot
54 dot 26 dot 82."
The IP address is a set of numbers that expresses the exact physical connection between
a computer and the network on the Internet. IP addresses are unique and can be equated
to telephone numbers in a way: a phone number uniquely describes a user's connection to
the telephone network. IP addresses work somewhat similarly but are more complex than
phone numbers, because there are literally millions of possible network connections and
because IP addresses are intended for use by computers rather than people.
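As an illustration, the following minimal Python sketch (using the standard ipaddress module on the example address above) shows that a dotted IP address is simply a 32-bit number written in a human-friendly form:

import ipaddress

addr = ipaddress.IPv4Address("202.54.26.82")   # the example address above
print(int(addr))      # the same address as a single 32-bit integer: 3392543314
print(addr.packed)    # its four bytes: 202, 54, 26, 82
print(ipaddress.IPv4Address(int(addr)))        # back to dotted form '202.54.26.82'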
Domain names have been designed in such a way that they broadly describe
organisational or geographic realities. They indicate what country the network connection
is in, what kind of organisation owns it, and sometimes further details. Servers or host
computers have special names for each country. All countries in the world have a country
suffix, except the USA. The UAE uses .ae, New Zealand uses .nz, while Canada's is .ca.
The domain name of a host computer typically consists of:
• Organisation name
• Type of organisation
• Country name
The IETF, which designed the addressing system, planned a system of names that look
like words. These words roughly map to a parallel system of numerical addresses called IP
addresses. Every computer on the Internet has both a domain name and an IP address,
and when you use a domain name, the computers translate that name into the corresponding
IP address.
In actuality, a host computer uses only numbers, turning all domain name addresses into
numbers. This translation process is taken care of behind the scenes by software. The
reason domain names exist in the first place is that names are more convenient for
people to use and easier to remember than numbers. Domain names, rather than IP
addresses, are therefore used for addressing hosts.
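This name-to-number translation can be observed with a short sketch using Python's standard socket module; www.iitd.ac.in is the example host used in this Unit, and the address returned depends on the live DNS data at the time of the call.

import socket

# Translate the human-friendly domain name into its numerical IP address.
ip = socket.gethostbyname("www.iitd.ac.in")
print(ip)    # the dotted address currently registered for this host

# The reverse translation (number to name) is also possible,
# although not every address has a reverse mapping registered.
try:
    host, aliases, addresses = socket.gethostbyaddr(ip)
    print(host)
except socket.herror:
    pass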
A third level can be defined to identify a particular host server at the Internet address. In
our example, 'www' is the name of the server that handles Internet requests. (A second
server might be called 'www2'.) A third level of domain name is not required; for
example, the fully qualified domain name could have been 'totalbaseball.com', with the
server assumed.
Second-level domain names must be unique on the Internet and registered with one of the
ICANN-accredited registrars for the COM, NET and ORG top-level domains. Where
appropriate, a top-level domain name can be geographic. (Currently, most non-U.S. domain
names use a top-level domain name based on the country the server is in.) To register a
U.S. geographic domain name or a domain name under a country code, contact an
appropriate registrar.
More than one domain name can be mapped to the same Internet address. This allows
multiple individuals, businesses, and organisations to have separate Internet identities while
sharing the same Internet server.
Top-level Domain Names: On the Internet, a top-level domain (TLD) identifies the
most general part of the domain name in an Internet address. A TLD is either a generic
top-level domain (gTLD), such as 'com' for 'commercial' and 'edu' for 'educational',
or a country code top-level domain (ccTLD), such as 'fr' for France or 'is'
for Iceland.
A second-level domain (SLD) is the portion of a Uniform Resource Locator (URL) that
identifies the specific and unique administrative owner associated with an IP address.
The second-level domain name includes the top-level domain name. For example, in
whatis.com, 'whatis' is the second-level domain, and 'whatis.com' is the second-level domain
name (which includes the top-level domain name 'com'). Second-level domains can be
divided into further domain levels. These subdomains sometimes represent different
computer servers within different departments. More than one second-level domain name
can be used for the same IP address.
Top-level domain names based on country names are known as geographic domains,
while those based on the type of organisation are known as non-geographic domains. The
geographically based top-level domains use two-letter country designations. For example,
.us is used for the United States, .ca for Canada (not California), .uk for the United
Kingdom or Great Britain, and .il for Israel. Each domain has a number of hosts. A few
more examples are given in the following table.
Abbreviation   Meaning
.au            Australia
.be            Belgium
.de            Germany
.jp            Japan
.mx            Mexico
.nz            New Zealand
.uk            United Kingdom
Non-Geographic Domains
There are seven common top-level domain types that are non-geographical:
.com for commercial organisations, such as netcom.com, apple.com, sun.com, etc.
.net for network organisations, such as internic.net
.gov for parts of governments within the United States, such as nasa.gov, oklahoma.gov, etc.
.edu for organisations of higher education, such as harvard.edu, ucdavis.edu, mit.edu, etc.
.mil for non-classified military networks, such as army.mil, etc. (the classified networks are not connected to the wider Internet)
.org for organisations that do not otherwise fit the commercial or educational designations
.int for international organisations
The lowest level in the domain name system is the host name, which identifies a particular
computer on the Internet. For example, in www.cse.iitd.ac.in, 'www' is the name of the
computer (host) in the cse subdomain of the iitd domain, which in turn falls under the
ac.in domain.
In some instances, second-level domains are delegated to organisations such as K-12
schools, community colleges, private schools, libraries, museums, as well as city and
county governments. Examples of such second-level domain labels are shown here:
CC - Community colleges
TEC - Technical colleges
LIB - Libraries
K12 - Kindergarten through 12th grade schools and districts
STATE - State government
MUS - Museums
Consider, for example, the URL
http://www.iitd.ac.in/acad/library/index.html
which describes a web page that can be accessed with an HTTP (web browser)
application located on a computer named www.iitd.ac.in. The specific file is in the
directory named /acad and the subdirectory /library, and is the default page in that
directory (which, on this computer, is named index.html).
An HTTP URL can be for any web page (not just a home page) or any individual file.
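The parts of such a URL can also be separated programmatically. The following is a minimal sketch using Python's standard urllib.parse module on the example URL above:

from urllib.parse import urlparse

parts = urlparse("http://www.iitd.ac.in/acad/library/index.html")
print(parts.scheme)   # 'http' - the protocol to use
print(parts.netloc)   # 'www.iitd.ac.in' - the host (domain) name
print(parts.path)     # '/acad/library/index.html' - directory path and file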
12.12 INTERNET SECURITY
The security of computers and of data transmitted on the Internet is recognised as a major
concern. In fact, the threat to security is the biggest hurdle to the expansion of e-commerce
on the Internet. Internet security refers to the methods used by an organisation to protect its
institutional network from intrusion. A system administrator has to ensure that intruders or
hackers do not reach and manipulate data kept on the servers. The best way to keep an
intruder from entering the network is to provide a security wall between the intruder and
the institutional network. Most often, an intruder enters the network through a software
program (such as a virus, trojan horse or worm) or through a direct connection. Methods
such as firewalls, data encryption and user authentication are used to prevent a hacker from
entering the network.
Personal computer firewalls - BlackICE Agent, eSafe Desktop, McAfee Internet
Guard Dog, Norton Internet Security and ZoneAlarm.
Office firewalls - D-Link Residential Gateway, Linksys EtherFast Cable/DSL Router,
Netgear and SonicWall.
Corporate firewalls - Check Point, Cisco Secure PIX Firewall, eSoft Interceptor
and SonicWall Pro.
A proxy server receives a request for an Internet service (such as a web page request)
from a user. If the request passes filtering requirements, the proxy server, assuming it is also
a cache server, looks in its local cache of previously downloaded web pages. If it finds the
page, it returns it to the user without forwarding the request to the Internet. If the page is
not in the cache, the proxy server, acting as a client on behalf of the user, uses one of its
own IP addresses to request the page from the server out on the Internet. When the page is
returned, the proxy server relates it to the original request and forwards it on to the user.
When a firewall is used to stop company workers from accessing the Internet, a proxy
server can be used to provide controlled access. It also acts as a security device by providing
a buffer between inside and outside (Internet) computers. The steps in the functioning of a
typical proxy server are given below (a short code sketch follows the list):
i) The user's request for a file (such as a web page) arrives at the proxy server;
ii) The proxy server contacts the web server to get the file;
iii) The proxy server keeps a copy of the file in its cache; and
iv) The proxy server forwards the file to the user.
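A highly simplified Python sketch of these four steps is given below. The in-memory dictionary standing in for the cache and the helper fetch_from_origin() are illustrative assumptions; a real proxy server would also apply the filtering requirements described above.

import urllib.request

cache = {}   # stands in for the proxy's local cache of downloaded pages

def fetch_from_origin(url):
    # Step ii: contact the web server on the user's behalf to get the file.
    # (Illustrative helper; a real proxy speaks HTTP to the origin server.)
    with urllib.request.urlopen(url) as response:
        return response.read()

def proxy_request(url):
    # Step i: the user's request arrives at the proxy server.
    if url in cache:
        return cache[url]            # found in the cache: no trip to the Internet
    body = fetch_from_origin(url)    # step ii: get the file from the web server
    cache[url] = body                # step iii: keep a copy in the cache
    return body                      # step iv: forward the file to the user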
The functions of proxy server, firewall and caching can be separated into distinct
programs or combined in a single package, and these programs can be hosted on different
servers or on a single server. For example, a proxy server may be on the same machine
as a firewall server, or it may be on a separate server and forward requests through the
firewall. To the user, the proxy server is invisible; all Internet requests and returned
responses appear to come directly from the addressed Internet server.
12.12.5 Digital Certification
A digital certificate is an electronic 'credit card' that establishes one's credentials for doing
business or other transactions on the net. Digital certificates are issued by a trusted agency
called a certification authority (CA). A certificate contains the user's name, a serial number,
an expiration date, a copy of the certificate holder's public key and the digital signature of
the certificate-issuing authority. Digital certificates are similar to watermarks on a bank
note. They not only substantiate the authenticity of a message and its sender
but also alert the recipient if the message was altered while in transit.
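As an illustration only, the fields listed above can be seen in a live certificate using Python's standard ssl module; the host name is an arbitrary example, and the field names follow the ssl module's representation of a certificate.

import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        cert = tls.getpeercert()     # the server's digital certificate

print(cert["subject"])       # the certificate holder's name
print(cert["serialNumber"])  # serial number
print(cert["notAfter"])      # expiration date
print(cert["issuer"])        # the issuing certification authority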
A computer virus is a piece of programming code, usually disguised as something else,
which causes unexpected and usually undesirable events. A virus is often designed so
that it automatically spreads to other computer users. Viruses can be transmitted as
attachments to e-mail notes, as downloads, or on a diskette or CD. The source of the
e-mail note, downloaded file or diskette is often unaware that it carries a virus.
Some viruses wreak their effect as soon as their code is executed; others lie dormant
until circumstances cause their code to be executed by the computer. Some viruses are
playful in intent and effect ("Happy Birthday, Ludwig!") while others can be quite harmful,
erasing data or damaging the hard disk so badly that it requires reformatting.
Viruses remain inactive until an infected application is executed. A virus can also be
activated when a computer is booted from a floppy disk infected by a boot sector virus.
There are three main classes of virus:
File Infectors: Some file infector viruses attach themselves to program files, usually
selected .COM or .EXE files. Some can infect any program for which execution is
requested, including .SYS, .OVL, .PRG, and .MNU files. When the program is loaded,
the virus is loaded as well.
System or boot-record infectors: These viruses infect executable code found in certain
system areas on a disk. They attach to the DOS boot sector on diskettes or the Master
Boot Record on hard disks. A typical scenario is to receive a diskette from an innocent
source that contains a boot disk virus. While the operating system is running, files on the
diskette can be read without triggering the boot disk virus. However, if the diskette is left
in the drive and the computer is then turned off and restarted, the computer will look
first in the A drive, find the diskette with its boot virus, load it, and make it temporarily
impossible to use the hard disk.
Macro Viruses: These are among the most common viruses, and they tend to do the
least damage. Macro viruses infect the Microsoft Word application and typically insert
unwanted words or phrases.
The best protection against a virus is to know the origin of each program or file that is
loaded into the computer or opened from the e-mail program. Since this is difficult, anti-
virus software can be bought that screens e-mail attachments, checks all files
periodically, and removes any viruses that are found.
Anti-Virus Software
Along with commercial products, there are a few free and shareware products available
too. Windows and DOS users can check out the virus products offered by McAfee by
either going to their FTP site at ftp.mcafee.com or their web site at http://
www.mcafee.com. Another product that offers protection against viruses is called
F-Prot, which is available for Windows 3.x, Windows 95/NT, DOS, Netware and
OS/2 (16-bit and 32-bit). Grisoft also offers its anti-virus programs free of charge from
its site http://www.grisoft.com.
Virus protection software should be updated periodically to protect against the new
viruses that keep appearing. Most anti-virus programs release their updates from their
web sites.
If floppies are shared with others or files are downloaded from online services, there
is no guarantee of protection against viruses. To help reduce the risk of your computer
being infected, follow these tips:
i) Run an anti-virus program and keep it updated often.
ii) Scan your system regularly with the full scanning engine.
iii) Write-protect floppies by sliding the little tab to expose the hole on 3.5-inch disks.
iv) Never boot the computer from an unknown diskette. If it happens by accident and a
virus is suspected on the diskette, shut down the computer, boot up from a clean system
diskette and check the system with an anti-virus program.
v) Utilise your anti-virus program's memory-resident scanner to check all files as
they are accessed, even from the Internet.
12.12.9 Extranet
The Extranet can be defined as "a network that links business partners to one another
over the Internet by tying together their corporate intranets". Extranets may be used to
allow inventory database searches, for example, or to transmit information on the status
of an order. They are being used by businesses of all types, such as banks, airlines, railways
and large corporate offices having several branches.
12.13 SUMMARY
The Unit introduces the Internet as one of the most powerful media of communication,
one that has completely revolutionised the modes and methods of computing and
communication. It traces the history of the Internet from 1957, when the erstwhile Soviet
Union launched its first satellite, Sputnik I, prompting US President Dwight Eisenhower
to launch the Defence Advanced Research Projects Agency (DARPA). With the winding
up of ARPANET operations in 1990, the Internet became available commercially to
anyone for the asking. The Internet is defined as a network of networks that use the
TCP/IP protocols and that evolved from the ARPANET. The Unit elaborates upon the
management of the Internet through well-known organisations, i.e., the Internet Activities
Board (IAB) and its components, which work together in a relatively well-structured and
roughly democratic environment to collectively participate in the research, development
and management of the Internet.
The Unit traces the growth of the Internet in terms of the number of hosts accessible on
the Internet, a fair measure of its growth. The Internet grew from 4 hosts in 1969 to 233
million hosts in early 2004, spread over 200 countries, and the number of Internet users
worldwide has grown to more than 745 million. The Unit also highlights developments in
the Internet 2 and Internet 3 projects.
The Unit explains the Internet architecture as a client-server model, in contrast to the
terminal-host model, and discusses peer-to-peer communications vis-a-vis the client-server
architecture. The client-server model and the functions of the client and the server as
pieces of interdependent software are described. Taking examples from various
applications, the Unit reiterates that software tools in a client-server environment work
in pairs: for every application, there is a client program responsible for enabling users to
interact with server programs, and a server program responsible for hosting data and
serving the client.
The Unit describes six methods for accessing the Internet, including dial-up access. It
describes two types of dial-up accounts, i.e., the shell account vis-a-vis the IP account,
and their advantages and disadvantages. With graphics-based applications arriving in a
big way, shell accounts are no longer preferred.
The Unit describes Internet Service Providers (ISPs) as companies that provide access to
the Internet, and surveys the ISPs in India, where around 200 ISPs are providing services
to 2.5 million users. Important ISPs in India are listed. The Unit also details the hardware
and software requirements at both the server end and the client's end.
Protocols are described as formal descriptions of the formats and rules that two or
more computers must follow to exchange data. The Unit elaborates upon a number of
protocols, including TCP/IP, HTTP, FTP, SLIP, PPP, Z39.50 and Z39.85.
12.14 ANSWERS TO SELF CHECK EXERCISES
2) The Internet does not have a central authority to control its activities. There are,
however, several well-known organisations that work together in a relatively well-
structured and roughly democratic environment to collectively participate in the
research, development and management of the Internet. The Internet Activities
Board (IAB) was created in 1983 to guide the evolution of the TCP/IP protocol
suite and to provide research advice to the Internet community. The IAB has two
primary components: the Internet Engineering Task Force (IETF) and the Internet
Research Task Force (IRTF). While the IETF is primarily responsible for the further
evolution of the TCP/IP protocol suite, the IRTF continues to organise and explore
advanced concepts in networking under the guidance of the Internet Activities Board.
The Internet Assigned Numbers Authority (IANA) and the Internet Registry (IR)
facilitate central allocation of network and autonomous system identifiers. The Internet
Registry also provides central maintenance of the Domain Name System (DNS) root
database, which points to subsidiary distributed DNS servers replicated throughout
the Internet. Besides, there are a number of Network Information Centres (NICs)
located throughout the Internet to serve its users with documentation, guidance, advice
and assistance.
3) The term Internet has been coined from two terms, i.e., interconnection and network.
A network is simply a group of computers that are connected together for sharing
information and resources. Several such networks have been joined together across
the world to form what is called the Internet. The Internet is thus a network of
networks. It refers to the vast collection of interconnected networks that use the
TCP/IP protocols and that evolved from the ARPANET of the late 1960s and early
1970s. Use of the common protocol suite called TCP/IP makes it possible for different
platforms and operating systems to be part of the Internet.
4) The client-server model distributes the processing of a computer application between
two computers, the client and the server. The client is normally a PC. The application
program accesses data and performs processing on the server, and further processing
tasks are performed at the client using the data obtained via the server. In a host-
terminal model, a server computer does all the work and terminals give each user
access to the contents on the server. The advantage of the host-terminal model is that
all the maintenance is performed at one place, i.e., on the server. As computers
became more powerful and readily available, the client-server model became popular.
9) TCP/IP (Transmission Control Protocol / Internet Protocol): This is the suite of
protocols that defines the Internet. TCP/IP is a standard format for transmitting data
from one computer to another.
12.15 KEYWORDS
DNS : Short for Domain Name Server, used to map domain names to IP
addresses and vice versa. Domain Name Servers maintain central
lists of domain names and their IP addresses.
DSL : Short for Digital Subscriber Line, a method for moving data over
regular phone lines. A DSL circuit is much faster than a regular
phone connection, and the wires coming into the subscriber's
premises are the same (copper) wires used for regular phone
service.
Data Encryption : A security procedure that encodes data so that it cannot easily
be understood. To be usable, the data must be decrypted into its
original form by reversing the procedure that was used to encrypt it.
IDS : Stands for Intrusion Detection System, designed to protect a specific
portal, volume or area using technologies that sense movement,
sound or a specific act such as opening a door. Such a security
alarm system consists of various types of sensors (vibration,
capacitance, volumetric, etc.) to detect unauthorised intrusion
into a facility. Typical systems include ultrasonic, infrared and
microwave sensors, and door switches. IDS systems can be local
or connected to a central station.
Dawson, A. (1997). The Internet for Library and Information Professionals. London:
Library Association Publishing.
Dern, Daniel (1994). The Internet Guide for New Users. New York: McGraw Hill.
Ellsworth, Jill and Barron, Billy [et al.] (1997). The Internet 1997. Indianapolis: Sams.net
Publishing.
Hahn, Harley (1997). Internet: Complete Reference. 2nd ed. New Delhi: Tata McGraw
Hill.
Johnson, Dave (1998). Internet Explorer 4: Browsing and Beyond. New Delhi: Tata
McGraw Hill.
Kane, Kevin. Choosing your ISP and Internet Connection Type. (http://
www.arts.uwaterloo.ca/ACO/newsletters/s01/articles/isp_and_connection_type.html).
Kumar, P.S.G. and Vashishth, C.P. (1999). CALIBER-99: Academic Libraries in
Internet Era. Paper presented at the Sixth National Convention for Automation of
Libraries in Education and Research, Nagpur, 18-20 Feb., 1999. Ahmedabad:
INFLIBNET.
McBride, P.K. (1999). Internet Made Simple. 2nd ed. Oxford: Butterworth-Heinemann.
Mehta, Subhash (1996). Understanding and Using Internet. Delhi: Global Business
Press.
Nair, R. Raman (2002). Accessing Information through Internet. New Delhi: Ess Ess
Publications.
Randall, Neil (2002). Teach Yourself the Internet in a Week. New Delhi: Prentice Hall
of India.
Turpen, Aaron. Different Internet Connection Types and their Pros and Cons.
Teachnology, Inc. (http://www.teach-nology.com/tutorials/connections/)