Client Server
Berners-Lee's design specified three core technologies:
a system of globally unique identifiers for resources on the Web and elsewhere,
the universal document identifier (UDI), later known as the uniform resource
locator (URL);
the publishing language HyperText Markup Language (HTML);
the HyperText Transfer Protocol (HTTP).
With help from Cailliau he published a more formal proposal on 12 November 1990
to build a "hypertext project" called World Wide Web (abbreviated "W3") as a "web"
of "hypertext documents" to be viewed by "browsers" using a client–server
architecture.
The proposal was modelled after the Standard Generalized Markup
Language (SGML) reader DynaText by Electronic Book Technologies, a spin-off from
the Institute for Research in Information and Scholarship at Brown University. The
DynaText system, licensed by CERN, was considered too expensive and to have an
inappropriate licensing policy for use in the general high-energy-physics community:
namely, a fee for each document and for each document alteration.
At this point HTML and HTTP had already been in development for about two
months and the first web server was about a month from completing its first
successful test. Berners-Lee's proposal estimated that a read-only Web would be
developed within three months and that it would take six months to achieve "the
creation of new links and new material by readers, [so that] authorship becomes
universal" as well as "the automatic notification of a reader when new material of
interest to him/her has become available".
By December 1990, Berners-Lee and his work team had built all the tools necessary
for a working Web: the HyperText Transfer Protocol (HTTP), the HyperText Markup
Language (HTML), the first web browser (named WorldWideWeb, which was also
a web editor), and the first web server (later known as CERN httpd). The first
website (http://info.cern.ch), containing the first web pages, which described the
project itself, was published on 20 December 1990.
Responsive Web Design: Make your site look good on everything, from
computers to phones.
Performance Optimization: Make your site fast so people don't get bored
waiting.
Security Measures: Keep your site safe from sneaky online troublemakers.
Content Management and SEO: Write good stuff and help people find it.
Search engine optimization (SEO) is the practice of orienting your website to rank
higher on a search engine results page (SERP) so that you receive more traffic.
The most common example of on-page SEO is optimizing a piece of content to a
specific keyword. For example, if you're publishing a blog post about making
your own ice cream, your keyword might be “homemade ice cream.” You'd
include that keyword in your post's title, slug, meta description, headers, and
body.
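For instance, generating a keyword-bearing slug from a post title might look like the following minimal Python sketch; the slugification rules here are a simplified assumption for illustration, not a standard algorithm.

```python
import re

def slugify(title: str) -> str:
    """Lowercase the title, strip punctuation, and join words with hyphens."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9\s-]", "", slug)   # drop anything but letters, digits, spaces, hyphens
    return re.sub(r"[\s-]+", "-", slug).strip("-")

print(slugify("How to Make Homemade Ice Cream!"))  # how-to-make-homemade-ice-cream
```

Note how the target keyword ("homemade ice cream") survives intact inside the slug, which is the on-page SEO goal described above.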
Protocol Stack
The HTTP method defines the action the client wants to take on the requested
resource at the given URI. HTTP request methods are usually referred to as verbs,
although some of them are nouns. Each HTTP method implements different semantics,
but some properties are shared: for example, a method can be safe,
idempotent, or cacheable.
Method and action:
GET: Retrieves information from the server. Should not modify data on the server. It can be cached and bookmarked, and it may remain in the browser history.
HEAD: Similar to GET, except it transfers the status line and headers only. Should not modify data on the server. It cannot be bookmarked and does not remain in the browser history.
POST: Sends data to the server, such as form fields, JSON strings, or file uploads. It cannot be cached or bookmarked, and it is not stored in the browser history.
PUT: Replaces or updates an existing resource on the server. May change server state. It cannot be cached or bookmarked, and it is not stored in the browser history.
PATCH: Partially modifies the specified resource on the server. It is faster and requires fewer resources than PUT. It cannot be cached or bookmarked, and it is not stored in the browser history.
DELETE: Deletes a resource from the server. May change server state. It cannot be cached or bookmarked, and it is not stored in the browser history.
OPTIONS: Describes the communication options available for the requested resource; used by browsers for CORS preflight requests. Does not change data on the server. It cannot be cached or bookmarked, and it is not stored in the browser history.
CONNECT: Establishes two-way communication with the server by creating an HTTP tunnel through a proxy server.
TRACE: Designed for diagnostic purposes: when used, the web server sends back to the client the exact request that was received.
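As an illustrative sketch (not part of the original notes), the following Python program uses only the standard library to start a throwaway local HTTP server and compare GET with HEAD: both return the same status and headers, but HEAD carries no body. The handler, response body, and loopback port choice are assumptions made for the demo.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def _send_headers(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        return body

    def do_GET(self):                 # status line + headers + body
        self.wfile.write(self._send_headers())

    def do_HEAD(self):                # status line + headers only
        self._send_headers()

    def log_message(self, *args):     # silence per-request logging
        pass

# Start a server on any free loopback port in a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
get_resp = conn.getresponse()
get_body = get_resp.read()

conn2 = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn2.request("HEAD", "/")
head_resp = conn2.getresponse()
head_body = head_resp.read()

print(get_resp.status, get_body)    # 200 b'hello'
print(head_resp.status, head_body)  # 200 b'' (headers only, no body)
server.shutdown()
```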
FTP (File Transfer Protocol) is used to upload files to a server and to download files from a server.
Internet Protocol (IP):
This mapping procedure is important because the lengths of IP and MAC
addresses differ, and a translation is needed so that the systems can recognize one
another. The most widely used IP version today is IP version 4 (IPv4), in which an IP
address is 32 bits long; MAC addresses, however, are 48 bits long. ARP translates
between the 32-bit address and the 48-bit address in both directions.
The MAC address operates at the data link layer, which establishes and
terminates a connection between two physically connected devices so that data
transfer can take place. The IP address belongs to the network layer, the
layer responsible for forwarding packets of data through different routers. ARP works
between these two layers.
A MAC address is a unique identifier of a network device at the data link layer. It is a
48-bit hardware number that works at the Media Access Control sublayer of the data
link layer.
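The 32-bit versus 48-bit difference can be checked with a short Python sketch; the sample IPv4 and MAC addresses below are made up for illustration.

```python
# An IPv4 address is four 8-bit octets (32 bits total);
# a MAC address is six 8-bit hexadecimal bytes (48 bits total).
ip = "192.168.1.10"
mac = "00:1A:2B:3C:4D:5E"

ip_bits = "".join(f"{int(octet):08b}" for octet in ip.split("."))
mac_bits = "".join(f"{int(byte, 16):08b}" for byte in mac.split(":"))

print(len(ip_bits))   # 32
print(len(mac_bits))  # 48
```

This length mismatch is exactly why ARP must translate between the two address formats rather than copy one into the other.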
Well-designed websites offer much more than just aesthetics. They attract visitors and
help people understand the product, company, and branding through a variety of
indicators, encompassing visuals, text, and interactions. That means every element of
your site needs to work towards a defined goal.
But how do you achieve that harmonious synthesis of elements? Through a holistic
web design process that takes both form and function into account.
1. Goal identification: I work with the client to determine what goals the
new website needs to fulfill, i.e., what its purpose is.
2. Scope definition: Once we know the site's goals, we can define the scope of
the project, i.e., what web pages and features the site requires to fulfill the
goal, and the timeline for building them out.
3. Sitemap and wireframe creation: With the scope well-defined, we can start
digging into the sitemap, defining how the content and features we defined in
scope definition will interrelate.
4. Content creation: Now that we have a bigger picture of the site in mind, we
can start creating content for the individual pages, always keeping search
engine optimization (SEO) in mind to help keep pages focused on a single
topic. It's vital that you have real content to work with for our next stage:
5. Visual elements: With the site architecture and some content in place, we can
start working on the visual brand. Depending on the client, this may already be
well-defined, but you might also be defining the visual style from the ground
up. Tools like style tiles, moodboards, and element collages can help with this
process.
6. Testing: By now, you've got all your pages and defined how they display to
the site visitor, so it's time to make sure it all works. Combine manual
browsing of the site on a variety of devices with automated site crawlers to
identify everything from user experience issues to simple broken links.
7. Launch: Once everything's working beautifully, it's time to plan and execute
your site launch! This should include planning both launch timing and
communication strategies — i.e., when will you launch and how will you let
the world know? After that, it's time to break out the bubbly.
Typically, client-server architecture is arranged in a way that clients are often situated
at workstations or on personal computers, while servers are located elsewhere on the
network, usually on more powerful machines. Such a model is especially beneficial
when the clients and the server perform routine tasks. For example, in hospital data
processing, a client computer can run an application program for entering
patient information while the server computer runs another program that
fetches and manages the database in which the information is permanently stored.
Here are some examples of the client-server model from daily life.
Mail servers
Email servers are used for sending and receiving emails; various software
packages handle email on the server side.
File servers
File servers act as a centralized location for files. A daily-life example is the
files we store in Google Docs. The cloud services for Microsoft Office and
Google Docs can be accessed from any of your devices, so a file saved from your
computer can be opened from your phone. The centrally stored files can thus be
accessed by multiple users.
Web servers
Web servers are high-performance computers that host websites. The client
requests the site's data from the server over a high-speed internet connection.
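The request-response pattern underlying all of these servers can be sketched with a minimal TCP server and client in Python. This is a toy illustration, not how a real web server is built: the loopback address, ephemeral port, and "echo: " reply prefix are all assumptions made for the demo.

```python
import socket
import threading

def serve_once(server_sock):
    """Accept one client, read its full request, and echo it back."""
    conn, _ = server_sock.accept()
    with conn:
        chunks = []
        while True:
            data = conn.recv(1024)
            if not data:              # client closed its sending side
                break
            chunks.append(data)
        conn.sendall(b"echo: " + b"".join(chunks))

# Server side: bind to any free loopback port and wait in a background thread.
server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=serve_once, args=(server_sock,), daemon=True).start()

# Client side: connect, send a request, and read the reply until EOF.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
client.shutdown(socket.SHUT_WR)       # signal "request finished"
reply = b""
while True:
    data = client.recv(1024)
    if not data:
        break
    reply += data
client.close()
print(reply)  # b'echo: hello'
```

The same shape (client initiates, server listens and responds) holds whether the payload is an email, a file, or a web page.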
Workstations
Servers
Networking devices
Now that we know the roles that workstations and servers play, let us learn
about what connects them: networking devices. Networking devices are the medium
that connects workstations and servers in a client-server architecture, and many
of them are used to perform various operations across the network. For
example, a hub is used to connect a server to multiple workstations, repeaters are
used to regenerate signals so data transfers reliably between two devices, and
bridges are used to isolate network segments.
1-tier architecture
2-tier architecture
In this architecture, the client side stores the user interface and the server houses
the database, while either the client side or the server side manages the database
logic and business logic.
The 2-tier architecture responds faster than the 1-tier architecture because there are
no intermediaries between the client and the server. Its primary application is to
eliminate client confusion; a popular instance is the online ticket reservation
system.
3-tier architecture
All three layers are controlled at different ends: the presentation layer is
controlled on the client's device, while the middleware and the server handle the
application layer and the database tier respectively. Because the third layer
provides data control, 3-tier architecture is more secure, hides the database
structure, and provides data integrity.
N-tier architecture
N-tier architecture is the scaled form of the other three types of architecture. It
provides for locating each function in an isolated layer, including presentation,
application processing, and management of data functionalities.
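As a toy illustration of the tiered separation described above, here is a Python sketch in which each tier talks only to the one beneath it; the class and method names are invented for the example and do not correspond to any real framework.

```python
class DataTier:
    """Database layer: owns storage and hides its structure from the client."""
    def __init__(self):
        self._rows = {}
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows.get(key)

class ApplicationTier:
    """Middleware: business logic and data control between client and database."""
    def __init__(self, data):
        self.data = data
    def register_user(self, name):
        if not name:
            raise ValueError("name required")   # validation enforces data integrity
        self.data.save(name, {"name": name})
        return name

class PresentationTier:
    """Client side: only talks to the application tier, never to the database."""
    def __init__(self, app):
        self.app = app
    def submit_form(self, name):
        return f"registered {self.app.register_user(name)}"

ui = PresentationTier(ApplicationTier(DataTier()))
print(ui.submit_form("alice"))  # registered alice
```

Because the presentation tier never touches `DataTier` directly, the database structure stays invisible to the client, which is the security and integrity benefit the notes attribute to 3-tier designs.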
Internet and its History:
In its infancy, the Internet was conceived by the U.S. Department of
Defense as a way to protect government communication systems in the event of a
military strike. The original network, dubbed ARPANET (for the Advanced Research
Projects Agency that developed it), evolved into a communication channel among
contractors, military personnel, and university researchers who were contributing to
ARPA projects.
The network employed a set of standard protocols to create an effective way for
these people to communicate and share data with each other. ARPANET's popularity
continued to spread among researchers, and in the 1980s the National Science Foundation
linked several high-speed computers and took charge of what came to be known as the
Internet. By the late 1980s, thousands of cooperating networks were participating in
the Internet. The NREN (National Research and Education Network) took up the initiative
to develop and maintain high-speed networks for research and education and to
investigate commercial uses of the Internet.
The Internet is a worldwide collection of networks. It provides a variety of tools
and services:
E-mail
Voice mail
FTP
WWW
E-Commerce
Chat
Search Engine
Electronic mail (email): Email messages can be sent
electronically over a network. To send or receive email, the user must have
an email address, which is given as:
username@location
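The username@location structure can be illustrated with a small Python sketch; the sample address is made up for the example.

```python
def parse_email(address: str) -> tuple[str, str]:
    """Split an address of the form username@location into its two parts."""
    username, location = address.split("@", 1)
    return username, location

print(parse_email("alice@example.com"))  # ('alice', 'example.com')
```

The part before the "@" identifies the user's mailbox, and the part after it identifies the mail server (location) responsible for that mailbox.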