
1) Explain RPC architecture with its components

RPC Architecture
RPC architecture has five main components:

Client
Client Stub
RPC Runtime
Server Stub
Server

Following steps take place during the RPC process:

Step 1) The client, the client stub, and one instance of RPC run time execute on the client
machine.

Step 2) The client invokes the client stub by passing parameters in the usual way. The client
stub, which resides in the client’s own address space, packs (marshals) the parameters into a
message and asks the local RPC Runtime to deliver it to the server stub.

Step 3) In this stage, the user accesses RPC as if making a regular local procedure call. The
RPC Runtime manages the transmission of messages across the network between client and
server. It also performs retransmission, acknowledgment, routing, and encryption.

Step 4) After the server procedure completes, it returns to the server stub, which packs
(marshals) the return values into a message. The server stub then passes the message to
the transport layer.

Step 5) In this step, the transport layer sends the result message back to the client transport
layer, which passes it to the client stub.
Step 6) In this stage, the client stub demarshals (unpacks) the return parameters from the
result message, and execution returns to the caller.
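The flow above can be sketched end-to-end with Python's built-in XML-RPC modules. This is an illustrative sketch, not part of the original notes; the procedure name "add" and the choice of XML-RPC as the runtime are arbitrary.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a procedure (the "server" and "server stub" roles).
# Port 0 lets the OS choose a free port.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy plays the client-stub role, marshaling the
# arguments into a request message and unmarshaling the reply.
proxy = ServerProxy(f"http://localhost:{port}")
result = proxy.add(2, 3)
print(result)  # prints 5
```

Note how the caller never deals with messages directly: the stub and runtime hide the marshaling and transport, exactly as in the steps above.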

2) Mention the advantages of a distributed computing environment over standalone
applications.
Ans:
Distributed System:
A distributed system is a collection of autonomous computer systems that are physically
separated but connected by a computer network equipped with distributed system
software. The autonomous computers communicate with each other by sharing resources
and files and performing the tasks assigned to them.
Advantages of Distributed System:
• Applications in Distributed Systems are Inherently Distributed Applications.
• Information in Distributed Systems is shared among geographically distributed
users.
• It has a better price performance ratio and flexibility.
• It has shorter response time and higher throughput.
• It has higher reliability and availability against component failure.
• It has extensibility, so the system can be extended to more remote locations and
grown incrementally.
Advantages of distributed computing environment over standalone applications:
1. Data Sharing:
Distributed systems allow many users to access a common database. Adding processors
increases the scale of the system, allowing it to serve more users and improving its
responsiveness.
2. Resource Sharing:
This is the ability to use any hardware, software, or data anywhere in the system.
In a distributed system, autonomous systems can share resources from remote locations.
Example: a peripheral device such as a color printer.
3. Communication:
Distributed computing systems enhance human-to-human communication.
Example: email, chat, etc.
4. Flexibility:
A distributed system spreads the workload over the available machines.

3) Discuss about Issues in designing Distributed Systems in detail.


Ans:
Distributed System:
The distributed information system is defined as “a number of interdependent
computers linked by a network for sharing information among them”. A distributed
information system consists of multiple autonomous computers that communicate or
exchange information through a computer network.
Design issues of distributed system –
1. Heterogeneity:
Heterogeneity applies to the network, computer hardware, operating systems, and the
implementations of different developers. A key component of a heterogeneous
distributed client-server environment is middleware: a set of services that enables
applications and end users to interact with each other across a heterogeneous
distributed system.

2. Openness:
The openness of the distributed system is determined primarily by the degree to
which new resource-sharing services can be made available to the users. Open
systems are characterized by the fact that their key interfaces are published. It is
based on a uniform communication mechanism and published interface for access to
shared resources. It can be constructed from heterogeneous hardware and software.

3. Scalability:
The system should remain efficient even with a significant increase in the number of
connected users and resources. Whether a program runs on 10 or 100 nodes,
performance should not degrade noticeably. Scaling a distributed system requires
consideration of several dimensions, including size, geography, and administration.

4. Security:
Security of an information system has three components: confidentiality, integrity, and
availability. Encryption protects shared resources and keeps sensitive information
secret when transmitted.

5. Failure Handling:
When faults occur in hardware or software, programs may produce incorrect results or
stop before completing the intended computation, so corrective measures should be
implemented to handle such cases. Failure handling is difficult in distributed systems
because failures are partial, i.e., some components fail while others continue to function.

6. Concurrency:
There is a possibility that several clients will attempt to access a shared resource at
the same time. Multiple users make requests on the same resources, i.e., read, write,
and update. Each resource must be safe in a concurrent environment. Any object that
represents a shared resource in a distributed system must ensure that it operates
correctly in a concurrent environment.

7. Transparency:
Transparency ensures that the distributed system is perceived by users or application
programmers as a single entity rather than as a collection of cooperating autonomous
systems. Users should be unaware of where services are located, and transferring
from a local machine to a remote one should be transparent.

4) Explain the Client-Server Model with the help of a diagram in distributed systems.
Ans:
Client Server Model:
The client-server model is a distributed application structure that partitions tasks or
workload between the providers of a resource or service, called servers, and service
requesters, called clients.
In client-server architecture, when the client computer sends a request for data to the
server through the internet, the server accepts the request, processes it, and delivers the
requested data packets back to the client.
Client:
A client is a program that runs on the local machine and requests a service from the server.
A client program is finite: it is started by the user and terminates when the service is
completed.
Server:
A server is a program that runs on the remote machine and provides services to clients.
When a client requests a service, the server accepts the incoming request, but it never
initiates the service itself.
A server program is infinite: once started, it runs indefinitely unless a problem arises. The
server waits for incoming requests from clients and responds when a request arrives.
Working of Client-Server Model:
A client interacts with a server in the following steps:
• The user enters the URL (Uniform Resource Locator) of the website or file. The
browser then queries the DNS (Domain Name System) server.
• The DNS server looks up the address of the web server.
• The DNS server responds with the IP address of the web server.
• The browser sends an HTTP/HTTPS request to the web server's IP address
(provided by the DNS server).
• The server sends back the necessary files of the website.
• The browser then renders the files and the website is displayed. This rendering is done
with the help of the DOM (Document Object Model) interpreter, CSS interpreter,
and JS engine, which includes a JIT (Just-in-Time) compiler.
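The request/response cycle in these steps can be sketched with a tiny local client and server. This is an illustrative sketch using raw sockets; DNS lookup and rendering are omitted, and the response body "hello" is arbitrary.

```python
import socket
import threading

# Server: binds, listens, and waits; it never initiates the exchange.
srv = socket.socket()
srv.bind(("localhost", 0))        # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_one():
    conn, _ = srv.accept()
    with conn:
        conn.recv(1024)           # read the client's request
        conn.sendall(b"HTTP/1.1 200 OK\r\n\r\nhello")

threading.Thread(target=serve_one, daemon=True).start()

# Client: initiates the request and terminates once it has the reply.
with socket.socket() as cli:
    cli.connect(("localhost", port))
    cli.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n")
    reply = cli.recv(1024)

body = reply.decode().split("\r\n\r\n")[1]
print(body)  # prints hello
```

The roles match the definitions above: the server is long-lived and only reacts, while the client is finite and drives the interaction.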

Advantages of Client-Server model:


• Centralized system with all data in a single place.
• Cost efficiency: requires less maintenance, and data recovery is possible.
• The capacity of the Client and Servers can be changed separately.
Disadvantages of Client-Server model:
• Clients are prone to viruses, Trojans, and worms if these are present on the server or
uploaded to it.
• Servers are prone to Denial of Service (DoS) attacks.
• Data packets may be spoofed or modified during transmission.

5) Message Passing:

• Message passing is how a message is sent from one end to the other, whether in a
client-server model or from one node to another.
• Message passing provides a mechanism that allows processes to communicate and to
synchronize their actions without sharing the same address space.
• The message-passing model allows multiple processes to read and write data to a message
queue without being directly connected to each other. Messages are stored in the queue until
the recipient retrieves them. Message queues are quite useful for interprocess communication
and are used by most operating systems.

In the above diagram, both the processes P1 and P2 can access the message queue and
store and retrieve data.
Fundamentals of Message Passing:
1. In message-passing systems, processors communicate with one another by sending
and receiving messages over a communication channel.
2. The pattern of connections provided by the channels is described by the network
topology.
3. The collection of channels is called a network.
4. By definition, a distributed system is a geographically distributed set of computers,
so one computer cannot always connect directly to every other node.
5. All channels in the message-passing model are private.
6. The sender decides what data has to be sent over the network. An example is making
a phone call.
Advantages of Message Passing:
1. Easier to implement.
2. Quite tolerant of high communication latencies.
3. Easier to build massively parallel hardware.
4. Message-passing libraries are fast and give high performance.

UNIT 2

1) Differentiate between Physical and Logical Clocks
Ans:
A physical clock measures real (wall-clock) time and must be kept synchronized across
machines, which is difficult because hardware clocks drift. A logical clock does not
measure real time at all: it assigns monotonically increasing numbers to events so that
their causal ("happened-before") order can be determined, as in Lamport's logical clocks.
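The contrast can be made concrete with a minimal Lamport logical clock, which counts events rather than wall-clock time. This is an illustrative sketch; the rule follows Lamport's algorithm: increment on each local event, and on message receipt take the maximum of the local and received times, plus one.

```python
class LamportClock:
    """A logical clock: counts events, not wall-clock time."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Local event (including sending a message): increment.
        self.time += 1
        return self.time

    def receive(self, sent_time):
        # Message receipt: jump past the sender's timestamp.
        self.time = max(self.time, sent_time) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
t_send = p1.tick()      # P1 has an event and sends timestamp 1
p2.receive(t_send)      # P2 advances to 2, preserving causal order
print(p2.time)          # prints 2
```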


2) Reasons for Migrating Code:

● Traditionally, code migration in distributed systems took place in the form of
process migration, in which an entire process was moved from one machine to
another. Moving a running process to a different machine is a costly and intricate
task, and there had better be a good reason for doing so. That reason has always
been performance. The basic idea is that overall system performance can be
improved if processes are moved from heavily loaded to lightly loaded machines.

● Load is often expressed in terms of the CPU queue length or CPU utilization, but
other performance indicators are used as well. Load distribution algorithms by
which decisions are made concerning the allocation and redistribution of tasks
with respect to a set of processors, play an important role in compute-intensive
systems. However, in many modern distributed systems, optimizing computing
capacity is less an issue than, for example, trying to minimize communication.
Moreover, due to the heterogeneity of the underlying platforms and computer
networks, performance improvement through code migration is often based on
qualitative reasoning instead of mathematical models.
● Consider, for example, a client-server system in which the server manages a huge
database. If a client application needs to do many database operations involving
large quantities of data, it may be better to ship part of the client application to the
server and send only the results across the network. Otherwise, the network may
be swamped with the transfer of data from the server to the client. In this case,
code migration is based on the assumption that it generally makes sense to process
data close to where those data reside.
● This same reason can be used for migrating parts of the server to the client. For
example, in many interactive database applications, clients need to fill in forms
that are subsequently translated into a series of database operations. Processing the
form at the client side, and sending only the completed form to the server, can
sometimes avoid that a relatively large number of small messages need to cross
the network. The result is that the client perceives better performance, while at the
same time the server spends less time on form processing and communication.
● Support for code migration can also help improve performance by exploiting
parallelism, but without the usual intricacies related to parallel programming. A
typical example is searching for information in the Web. It is relatively simple to
implement a search query in the form of a small mobile program that moves from
site to site. By making several copies of such a program, and sending each off to
different sites, we may be able to achieve a linear speed-up compared to using just
a single program instance.
3) Mutual Exclusion
When a process is accessing a shared variable, the process is said to be in a critical section
(CS). No two processes can be in the same critical section at the same time; this property is
called mutual exclusion.
Mutual exclusion ensures that no other process enters the critical section, i.e., uses the
already shared resources, at the same time.
E.g., when processes P1 and P2 ask the system for the same item at the same time, they are
contending for the critical section.
The algorithm below uses timestamps to provide distributed mutual exclusion.

Distributed Mutual Exclusion


Assume there is an agreement on how a resource is identified.
● Pass identifier with the requests.
● Create an algorithm to allow a process to obtain exclusive access to a resource.
At the sender side:
● When a process wants to access a shared resource, it builds a message containing the
name of the resource, its process number, and the current (logical) time.
● It then sends the message to all other processes, including itself.

At the receiver side:


● When a process receives a request message from another process, the action it takes
depends on its own state with respect to the resource named in the message.
At the receiver side there are three different cases. We will look at the cases one by one.
● If the receiver is not accessing the resource and does not want to access it, it simply
sends back an OK message to the sender.
● If the receiver is already accessing the shared resource, it does not reply. Instead,
it queues the requests of other processes.
● If the receiver wants to access the resource but has not yet done so, it compares the
timestamp of the incoming message with the one contained in the message it sent to
everyone.
● The lowest timestamp always wins.
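The receiver-side rule above can be sketched as a single decision function. This is an illustrative sketch in the style of the Ricart-Agrawala algorithm; the state names RELEASED, HELD, and WANTED are assumptions, not from the notes.

```python
def on_request(state, own_timestamp, incoming_timestamp):
    """Decide how a receiver answers an incoming resource request."""
    if state == "RELEASED":
        # Not using the resource and not wanting it: reply OK.
        return "OK"
    if state == "HELD":
        # Already using the resource: do not reply, queue the request.
        return "DEFER"
    # state == "WANTED": both want it, so the lowest timestamp wins.
    return "OK" if incoming_timestamp < own_timestamp else "DEFER"

print(on_request("RELEASED", None, 5))   # prints OK
print(on_request("HELD", 3, 5))          # prints DEFER
print(on_request("WANTED", 7, 5))        # prints OK (incoming is earlier)
```

A "DEFER" reply is sent as a delayed OK once the receiver leaves its critical section.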
The different algorithms based on message passing to implement mutual exclusion in
distributed system are:
● Centralized algorithm
● Token Ring algorithm

1) Centralized Algorithm
In a centralized algorithm, one process is elected as the coordinator.
Whenever a process wants to access a shared resource, it sends a request to the
coordinator to ask for permission.
The coordinator checks whether its queue is empty; if it is, the coordinator grants
permission immediately. Otherwise, it queues the request.
The coordinator handles one request at a time: while a process holds the resource, it does
not reply to permission requests from other processes.
The processes rely on the coordinator to serialize access.

Requirements of centralized mutual exclusion algorithms:

● The primary goal of centralized algorithms is to allow only one access to the
critical section at a time.
● Freedom from deadlocks.
● Freedom from starvation.
● Fairness.
● Fault tolerance.
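The coordinator's behavior described above can be sketched as a small class. This is a toy illustration; the class and method names are assumptions.

```python
from collections import deque

class Coordinator:
    """Grants one process at a time access to a shared resource."""
    def __init__(self):
        self.holder = None          # process currently holding the resource
        self.queue = deque()        # deferred requests, FIFO (fairness)

    def request(self, pid):
        if self.holder is None:     # resource free: grant immediately
            self.holder = pid
            return "GRANTED"
        self.queue.append(pid)      # busy: queue, reply when released
        return "QUEUED"

    def release(self, pid):
        assert self.holder == pid
        # Grant to the next queued process, if any.
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder

c = Coordinator()
print(c.request("P1"))   # prints GRANTED
print(c.request("P2"))   # prints QUEUED
print(c.release("P1"))   # prints P2  (P2 now holds the resource)
```

The FIFO queue gives fairness and freedom from starvation; deadlock cannot occur because only one lock exists, but the coordinator itself is a single point of failure.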

Define CORBA and its components in detail.

The Common Object Request Broker Architecture (CORBA) is a standard defined by
the Object Management Group (OMG) that enables software components written in
multiple computer languages and running on multiple computers to work together.
CORBA is a standard for distributing objects across networks so that operations on
those objects can be called remotely. CORBA is not associated with a particular
programming language, and any language with a CORBA binding can be used to
call and implement CORBA objects. Objects are described in a syntax called Interface
Definition Language (IDL). CORBA includes four components:
Object Request Broker (ORB): The ORB handles the communication, marshaling,
and unmarshaling of parameters so that parameter handling is transparent to
CORBA server and client applications.
CORBA server: The CORBA server creates CORBA objects and initializes them
with an ORB. The server places references to the CORBA objects inside a naming
service so that clients can access them.
Naming service: The naming service holds references to CORBA objects.
CORBA Request node: The CORBA Request node acts as a CORBA client.
The following diagram shows the layers of communication between
IBM® Integration Bus and CORBA.
How is an application developed using CORBA? Discuss.
Terminology: This section defines some of the basic terms used in CORBA
development.
Client: A client is an object, an application, or an applet that
makes a request of a server object. Remember that a client need
not be a Java application running on a workstation or a
network computer, nor an applet downloaded by a web
browser. A server object can be a client of another server object.
"Client" refers to a role in a requestor/server relationship, not
to a physical location or a type of computer system.
Marshaling: In distributed object computing, marshaling refers
to the process by which the ORB passes requests and data
between clients and server objects.
Object adapter: Each CORBA ORB implements an object
adapter (OA), which is the interface between the ORB and the
message-passing objects. CORBA 2.0 specifies that a basic
object adapter (BOA) must exist, but most of the details of its
interface are left up to individual CORBA vendors. Future
CORBA standards will require a vendor-neutral portable object
adapter (POA). Oracle intends to support a POA in a future
release.
Request: A request is a method invocation. Other names
sometimes used in its stead are method call and message.
Server object: A CORBA server object is a Java object activated
by the server, typically on a first request from a client.
Explain Overview of EJB S/W Architecture in detail.

ANS: JavaBeans incorporate a set of objects into one accessible object that
can be accessed easily from any application. This single accessible object
is maintainable, customizable, and reusable. Setter/getter methods and a
single public constructor govern that object: we can update and read the
value of any variable of any object by using the setter and getter,
respectively.
EJB stands for Enterprise JavaBeans, a server-based architecture that
follows the specifications and requirements of the enterprise
environment. EJB is conceptually based on the Java RMI (Remote Method
Invocation) specification. In EJB, the beans run in a container within a
four-tier architecture consisting of the client layer, web layer,
application layer, and data layer.
The EJB architecture has two main layers, the application server and the
EJB container, on which the architecture rests. A graphical representation
of the EJB architecture is given below.

The diagram above gives a logical representation of how EJBs are invoked
and deployed using RMI (Remote Method Invocation). EJB containers
cannot deploy themselves; deploying them requires an application server.
Define the cloud and write the benefits of the cloud from a business and IT
perspective.
ANS: In the simplest terms, cloud computing means storing and accessing data
and programs on remote servers hosted on the internet instead of on the
computer's hard drive or a local server. Cloud computing is also referred to as
Internet-based computing. Cloud Computing Architecture: Cloud computing
architecture refers to the components and sub-components required for cloud
computing. These components typically include:
1. Front end (fat client, thin client)
2. Back-end platforms (servers, storage)
3. Cloud-based delivery and a network (Internet, Intranet, Intercloud).
Hosting a cloud: There are three layers in cloud computing. Companies use these
layers based on the service they provide.
• Infrastructure
• Platform
• Application
Benefits of Cloud Hosting:
1. Scalability: With Cloud hosting, it is easy to grow and shrink the
number and size of servers based on the need. This is done by either
increasing or decreasing the resources in the cloud. This ability to alter
plans due to fluctuation in business size and needs is a superb benefit of
cloud computing, especially when experiencing a sudden growth in
demand.
2. Instant: Whatever you want is instantly available in the cloud.
3. Save Money: An advantage of cloud computing is the reduction in
hardware costs. Instead of purchasing in-house equipment, hardware
needs are left to the vendor. For companies that are growing rapidly,
new hardware can be large, expensive, and inconvenient. Cloud
computing alleviates these issues because resources can be acquired
quickly and easily. Even better, the cost of repairing or replacing
equipment is passed to the vendors. Along with purchase costs, off-site
hardware cuts internal power costs and saves space. Large data centers
can take up precious office space and produce a large amount of heat.
Moving to cloud applications or storage can help maximize space and
significantly cut energy expenditures.
4. Reliability: Rather than being hosted on a single physical server,
hosting is delivered on a virtual partition that draws its resources,
such as disk space, from an extensive network of underlying physical
servers. If one server goes offline, availability is unaffected, as the
virtual servers continue to pull resources from the remaining network
of servers.
5. Physical Security: The underlying physical servers are still housed
within data centers and so benefit from the security measures those
facilities implement to prevent people from accessing or disrupting
them on-site.

Explain about cloud and virtualization with an example.

Cloud: The definition of the cloud can seem murky, but essentially it is
a term used to describe a global network of servers, each with a unique
function. The cloud is not a physical entity, but a vast network of
remote servers around the globe which are hooked together and meant to
operate as a single ecosystem. These servers are designed to store and
manage data, run applications, or deliver content or a service such as
streaming video, web mail, office productivity software, or social
media. Instead of accessing files and data from a local or personal
computer, you access them online from any Internet-capable device: the
information is available anywhere you go and anytime you need it.


Cloud Examples: Amazon Web Services (AWS), Google Cloud
Platform (GCP), Microsoft Azure

Virtualization: Virtualization uses software to create an abstraction
layer over computer hardware that allows the hardware elements of a
single computer (processors, memory, storage, and more) to be divided
into multiple virtual computers, commonly called virtual machines (VMs).
Each VM runs its own operating system (OS) and behaves like an
independent computer, even though it is running on just a portion of the
actual underlying computer hardware.
Virtualization Examples: IBM CP/CMS
Write down the characteristics of cloud computing.
1. On-demand self-service:
Cloud computing services do not require any human administrators;
users themselves are able to provision, monitor, and manage
computing resources as needed.
2. Broad network access:
Computing services are generally provided over standard networks
to heterogeneous devices.
3. Rapid elasticity:
Computing services should have IT resources that can scale out and
in quickly on an as-needed basis. Resources are provided whenever
the user requires them and scaled in again as soon as the
requirement ends.
4. Resource pooling:
The IT resources present (e.g., networks, servers, storage,
applications, and services) are shared across multiple applications
and tenants in an uncommitted manner. Multiple clients are served
from the same physical resources.
5. Measured service:
Resource utilization is tracked for each application and tenant,
providing both the user and the resource provider with an account
of what has been used. This is done for various reasons, such as
monitoring, billing, and effective use of resources.

Discuss cloud infrastructure self-service.


Self-service provisioning in cloud computing is enabled by many
public cloud providers so that you can pay as you go for public
resources. Enterprises configure self-service provisioning by setting up
a user web portal, typically with a catalog of cloud computing resources
that have been pre-configured for use. The backend complexity and
accounting are taken care of by central IT.
Demand is growing for self-service in cloud environments, which
allows knowledge workers to do for themselves what once took weeks,
or even months, of coordinated activity: provisioning the IT resources
needed to complete their tasks.
Self-service platforms do more than allow end users to provision their
own resources; they also streamline both IT infrastructure and
operations. By default, a self-service portal must be backed by highly
effective automation and orchestration, which can even be augmented
with artificial intelligence and machine learning. Not only does this
produce a more fluid user experience, it cuts management overhead and
frees administrators to concentrate on high-value processes such as
managing the system architecture instead of managing the end-users. By
shifting the provisioning platform onto a public, private, or hybrid
cloud, organizations can take advantage of lower infrastructure costs by
using software-defined architectures built on commodity hardware.
But the benefits don’t end there. Self-service provides end-users with a
wealth of opportunities that cannot be supported by traditional IT
infrastructure, with ripple effects felt across a wide range of enterprise
functions. This not only improves efficiency and performance of today’s
digital environment, but unlocks new services and even new markets in
the emerging digital economy.
The popularity of self-service provisioning has gained momentum
because of agile delivery of software and services. DevOps engineers
need access to infrastructure on a continuous basis, so a self-service
option provides a much faster workflow than making requests to a
central IT service.
On-demand self-service in cloud computing can be configured in public
cloud environments to handle peak usage automatically. When the
computing power of resources running in the cloud needs to scale to
more capacity, resources can be provisioned for the extra demand. It is
important to monitor any on-demand self-service capability so that a
pay-as-you-go service does not end up costing much more than expected.

Write a note on dynamic cloud infrastructure.


ANS: Dynamic infrastructure is an information technology concept
related to the design of data centers, whereby the underlying hardware
and software can respond dynamically and more efficiently to changing
levels of demand. In other words, data center assets such as storage and
processing power can be provisioned (made available) to meet surges in
users' needs.
Dynamic infrastructures take advantage of intelligence gained across the
network. By design, every dynamic infrastructure is service-oriented
and focused on supporting and enabling the end users in a highly
responsive way. It can utilize alternative sourcing approaches, like cloud
computing to deliver new services with agility and speed.
Global organizations already have the foundation for a dynamic
infrastructure that will bring together the business and IT infrastructure
to create new possibilities.
For example:
1. Service management: This facility is provided to cloud IT services
by the cloud service providers. It includes the visibility, automation,
and control needed to deliver first-class IT services.
2. Asset management: The assets involved in providing the cloud
services are managed.
3. Virtualization and consolidation: Consolidation is an effort to
reduce the cost of a technology by improving its operating
efficiency and effectiveness. It means migrating from a large number
of resources to fewer ones, which is done by virtualization
technology.
4. Information infrastructure: It helps business organizations achieve
information compliance, availability, retention, and security
objectives.
5. Energy efficiency: The IT infrastructure is made sustainable,
meaning it is not likely to damage or affect anything else.
6. Security: The cloud infrastructure is responsible for risk
management, which refers to the risks involved in the services
provided by the cloud service providers.
