RPC Architecture
The RPC architecture has five main components:
Client
Client Stub
RPC Runtime
Server Stub
Server
Step 1) The client, the client stub, and one instance of the RPC Runtime execute on the client
machine.
Step 2) The client starts the client stub by passing parameters in the usual way. The client
stub packs (marshals) the parameters into a message within the client's own address space.
It then asks the local RPC Runtime to send the message to the server stub.
Step 3) In this stage, RPC is accessed by the user by making a regular Local Procedure Call.
The RPC Runtime manages the transmission of messages across the network between the client
and the server. It also performs retransmission, acknowledgment, routing, and encryption.
Step 4) After the server procedure completes, it returns to the server stub, which packs
(marshals) the return values into a message. The server stub then passes the message to
the transport layer.
Step 5) The transport layer sends the result message back to the client's transport
layer, which hands the message to the client stub.
Step 6) The client stub unpacks (demarshals) the return parameters from the result
packet, and execution returns to the caller.
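The marshalling round trip in the steps above can be simulated in a few lines of Python. This is only an illustrative sketch: the `add` procedure and the JSON encoding are assumptions, and the direct function call stands in for the RPC Runtime and transport layer.

```python
import json

# Hypothetical server procedure: the "remote" implementation.
def add_on_server(a, b):
    return a + b

def server_stub(message):
    """Unpacks the request, calls the real procedure, packs the result."""
    call = json.loads(message)
    result = add_on_server(*call["args"])
    return json.dumps({"result": result})

def client_stub(a, b):
    """Marshals parameters, 'sends' them, and demarshals the reply."""
    request = json.dumps({"proc": "add", "args": [a, b]})  # marshalling
    reply = server_stub(request)        # stands in for the RPC Runtime / network
    return json.loads(reply)["result"]  # demarshalling

print(client_stub(2, 3))  # the caller sees an ordinary local call -> 5
```

To the caller, `client_stub(2, 3)` looks exactly like a local procedure call; all the packing and unpacking is hidden in the stubs.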
2. Openness:
The openness of a distributed system is determined primarily by the degree to
which new resource-sharing services can be made available to users. Open
systems are characterized by the fact that their key interfaces are published.
Openness is based on a uniform communication mechanism and published interfaces
for access to shared resources, and an open system can be constructed from
heterogeneous hardware and software.
3. Scalability:
The system should remain efficient even with a significant increase in
the number of connected users and resources. Whether a program runs on
10 or 100 nodes, performance should not vary significantly. Scaling a
distributed system requires consideration of several dimensions, including
size, geography, and management.
4. Security:
Security of an information system has three components: confidentiality,
integrity, and availability. Encryption protects shared resources and keeps
sensitive information secret during transmission.
5. Failure Handling:
When faults occur in hardware or software, a program may produce
incorrect results or stop before completing the intended computation,
so corrective measures should be implemented to handle such cases.
Failure handling is difficult in distributed systems because failures are partial, i.e.,
some components fail while others continue to function.
6. Concurrency:
There is a possibility that several clients will attempt to access a shared resource at
the same time. Multiple users make requests on the same resource, e.g., to read, write,
and update it. Each resource must be safe in a concurrent environment: any object that
represents a shared resource in a distributed system must ensure that it operates
correctly under concurrent access.
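The requirement can be illustrated with a small sketch: a shared counter that stays correct under concurrent updates. The `Counter` class is an illustrative assumption, not part of any particular system; the lock is what makes concurrent read/write/update requests safe.

```python
import threading

class Counter:
    """A shared resource made safe for concurrent read/update access."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:      # only one client updates at a time
            self._value += 1

    def value(self):
        with self._lock:
            return self._value

counter = Counter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value())  # 4000 -- no updates lost
```

Without the lock, two threads could read the same old value and both write back the same new one, silently losing an update.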
7. Transparency:
Transparency ensures that the distributed system is perceived as a single
entity by users or application programmers, rather than as a collection of
cooperating autonomous systems. The user should be unaware of where
services are located, and moving a service from a local machine to a remote one
should be transparent.
4) Explain the Client-Server Model with the help of a diagram in distributed systems.
Ans:
Client Server Model:
The client-server model is a distributed application structure that partitions tasks or
workloads between the providers of a resource or service, called servers, and service
requesters, called clients.
In the client-server architecture, when the client computer sends a request for data to the
server through the internet, the server accepts the request, processes it, and delivers the
requested data packets back to the client.
Client:
A client is a program that runs on the local machine and requests a service from the server.
A client program is a finite program: it is started by the user and terminates when the
service is completed.
Server:
A server is a program that runs on the remote machine and provides services to clients.
When a client requests a service, the server opens the door to the incoming
request, but it never initiates a service itself.
A server program is an infinite program: once started, it runs indefinitely unless
a problem arises. The server waits for incoming requests from clients and, when a
request arrives, it responds to the request.
Working of Client-Server Model:
A client interacts with a server through the following steps:
• The user enters the URL (Uniform Resource Locator) of the website or file. The
browser then queries the DNS (Domain Name System) server.
• The DNS server looks up the address of the web server.
• The DNS server responds with the IP address of the web server.
• The browser sends an HTTP/HTTPS request to the web server's IP (provided
by the DNS server).
• The server sends back the necessary files of the website.
• The browser then renders the files and the website is displayed. This rendering is done
with the help of the DOM (Document Object Model) interpreter, the CSS interpreter,
and the JS engine, which includes a JIT (Just-In-Time) compiler.
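The request/response pattern above can be sketched with a minimal TCP server and client. This is an illustrative sketch rather than a real web server: the port number 9090 and the "served:" reply format are assumptions.

```python
import socket
import threading

ready = threading.Event()

def server():
    """A minimal server: waits for a request and responds; never initiates."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 9090))   # assumed free port for the sketch
    srv.listen(1)
    ready.set()                     # signal that the server is accepting
    conn, _ = srv.accept()
    request = conn.recv(1024).decode()
    conn.sendall(("served: " + request).encode())
    conn.close()
    srv.close()

threading.Thread(target=server, daemon=True).start()
ready.wait()

# The client is a finite program: it sends one request and terminates.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 9090))
cli.sendall(b"index.html")
response = cli.recv(1024).decode()
cli.close()
print(response)  # served: index.html
```

Note the asymmetry from the definitions above: the server only ever reacts to requests, while the client initiates the exchange and then exits.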
5) Message Passing:
• Message passing is the mechanism by which a message is sent from one end to the
other, whether between a client and a server or from one node to another.
• Message passing provides a mechanism that allows processes to communicate and to
synchronize their actions without sharing the same address space.
• The message-passing model allows multiple processes to read and write data to a message
queue without being directly connected to each other. Messages are stored on the queue until
their recipient retrieves them. Message queues are quite useful for interprocess communication
and are used by most operating systems.
In the above diagram, both the processes P1 and P2 can access the message queue and
store and retrieve data.
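The queue-based exchange between P1 and P2 can be sketched with Python's standard `queue.Queue`, using threads to stand in for the two processes:

```python
import queue
import threading

message_queue = queue.Queue()  # shared queue; the processes share no other state

def p1():
    """Sender: stores a message on the queue and continues."""
    message_queue.put("hello from P1")

def p2(results):
    """Receiver: blocks until a message is available, then retrieves it."""
    results.append(message_queue.get())

results = []
receiver = threading.Thread(target=p2, args=(results,))
receiver.start()                       # P2 can start waiting first...
threading.Thread(target=p1).start()    # ...the queue holds the message until it is read
receiver.join()
print(results)  # ['hello from P1']
```

Because the queue buffers the message, the sender and receiver never need to run at the same moment, which is exactly what decouples the two processes.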
Fundamentals of Message Passing:
1. In message-passing systems, processors communicate with one another by sending
and receiving messages over a communication channel.
2. The pattern of connection provided by the channels is described by the system's
topology.
3. The collection of channels is called a network.
4. By definition, a distributed system is a geographically distributed set of
computers, so it is not possible for every computer to be directly connected to
every other node.
5. Therefore, all channels in the message-passing model are private.
6. The sender decides what data has to be sent over the network; making a
phone call is an example.
Advantages of Message Passing:
1. Easier to implement.
2. Quite tolerant of high communication latencies.
3. Easier to build massively parallel hardware.
4. Message-passing libraries are fast and give high performance.
UNIT 2
● Load is often expressed in terms of the CPU queue length or CPU utilization, but
other performance indicators are used as well. Load distribution algorithms by
which decisions are made concerning the allocation and redistribution of tasks
with respect to a set of processors, play an important role in compute-intensive
systems. However, in many modern distributed systems, optimizing computing
capacity is less an issue than, for example, trying to minimize communication.
Moreover, due to the heterogeneity of the underlying platforms and computer
networks, performance improvement through code migration is often based on
qualitative reasoning instead of mathematical models.
● Consider, for example, a client-server system in which the server manages a huge
database. If a client application needs to do many database operations involving
large quantities of data, it may be better to ship part of the client application to the
server and send only the results across the network. Otherwise, the network may
be swamped with the transfer of data from the server to the client. In this case,
code migration is based on the assumption that it generally makes sense to process
data close to where those data reside.
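The tradeoff can be sketched as follows; the in-memory list and the `% 97` predicate are illustrative assumptions standing in for the database and the client's query.

```python
DATABASE = list(range(1_000_000))  # stands in for the server's huge database

def fetch_all_rows():
    """Without code migration: every row is shipped to the client."""
    return list(DATABASE)          # 1,000,000 rows cross the 'network'

def run_on_server(predicate):
    """With code migration: the predicate runs server-side; only results ship."""
    return [row for row in DATABASE if predicate(row)]

def wanted(row):
    return row % 97 == 0           # the client's query, sent as code

client_side = [row for row in fetch_all_rows() if wanted(row)]
server_side = run_on_server(wanted)    # only ~10,000 matching rows transferred
print(len(server_side))
```

Both approaches compute the same answer, but shipping the predicate moves roughly a hundred times less data across the network than shipping every row.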
● This same reason can be used for migrating parts of the server to the client. For
example, in many interactive database applications, clients need to fill in forms
that are subsequently translated into a series of database operations. Processing the
form at the client side, and sending only the completed form to the server, can
sometimes avoid that a relatively large number of small messages need to cross
the network. The result is that the client perceives better performance, while at the
same time the server spends less time on form processing and communication.
● Support for code migration can also help improve performance by exploiting
parallelism, but without the usual intricacies related to parallel programming. A
typical example is searching for information in the Web. It is relatively simple to
implement a search query in the form of a small mobile program that moves from
site to site. By making several copies of such a program, and sending each off to
different sites, we may be able to achieve a linear speed-up compared to using just
a single program instance.
3) Mutual Exclusion
When a process is accessing a shared variable, the process is said to be in a critical section
(CS). No two processes can be in the same critical section at the same time; this is called
mutual exclusion.
Mutual exclusion ensures that no other process enters the critical section, i.e., uses
the already shared resource, at the same time.
Eg: When both processes P1 and P2 ask the system for the same item at the same time,
they are contending for the critical section.
Timestamp-based algorithms also exist for distributed mutual exclusion.
1) Centralized Algorithm
In a centralized algorithm, one process is elected as the coordinator.
Whenever a process wants to access a shared resource, it sends a request to the
coordinator asking for permission.
The coordinator checks whether its queue is empty; if the queue is empty and the
resource is free, it sends a grant message immediately.
Otherwise, the coordinator queues the request.
The coordinator grants access to only one process at a time; while the resource is
held, it does not reply to permission requests from other processes.
All processes agree to abide by the coordinator's decisions.
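The coordinator's grant/queue behaviour can be sketched as a small class. The process names and the string replies are illustrative assumptions; a real implementation would exchange these as messages over the network.

```python
from collections import deque

class Coordinator:
    """Centralized mutual-exclusion coordinator (single-process sketch)."""
    def __init__(self):
        self.holder = None     # process currently in the critical section
        self.queue = deque()   # processes waiting for permission

    def request(self, process):
        if self.holder is None:        # resource free: grant immediately
            self.holder = process
            return "granted"
        self.queue.append(process)     # otherwise queue; no reply is sent yet
        return "queued"

    def release(self):
        # Hand the resource to the next waiting process, if any.
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder

coord = Coordinator()
print(coord.request("P1"))  # granted
print(coord.request("P2"))  # queued -- P1 still holds the resource
print(coord.release())      # P2 -- the queued process is granted next
```

Mutual exclusion holds because `holder` names at most one process at a time; the cost is that the coordinator is a single point of failure.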
ANS: JavaBeans incorporate a set of objects into one accessible object that
can be accessed easily from any application. This single accessible object
is maintainable, customizable, and reusable. Setter/getter methods
and a single public no-argument constructor are used to govern that single
accessible object: we can update and read the value of any property of the
object by using the setter and getter, respectively.
EJB stands for Enterprise JavaBeans, a server-based
architecture that follows the specifications and requirements of the
enterprise environment. EJB is conceptually based on the Java
RMI (Remote Method Invocation) specification. In EJB, the beans run
in a container within a four-tier architecture. This architecture consists of
four layers, i.e., the client layer, web layer, application layer, and data layer.
The EJB architecture has two main parts, i.e., the application
server and the EJB container, on which the architecture depends. The
graphical representation of the EJB architecture is given below.
The above diagram shows the logical representation of how EJBs are invoked
and deployed using RMI (Remote Method Invocation). EJB
containers cannot deploy themselves; deploying the containers
requires an application server.
Define the cloud and write the benefits of the cloud from business and IT
perspectives.
ANS: In the simplest terms, cloud computing means storing and accessing data
and programs on remote servers that are hosted on the internet, instead of on the
computer's hard drive or a local server. Cloud computing is also referred to as
Internet-based computing. Cloud Computing Architecture: Cloud computing
architecture refers to the components and sub-components required for cloud
computing. These components typically include:
1. Front end (fat client, thin client)
2. Back-end platforms (servers, storage)
3. Cloud-based delivery and a network (Internet, intranet, intercloud)
Hosting a cloud: There are three layers in cloud computing. Companies use these
layers based on the service they provide.
• Infrastructure
• Platform
• Application
Benefits of Cloud Hosting:
1. Scalability: With Cloud hosting, it is easy to grow and shrink the
number and size of servers based on the need. This is done by either
increasing or decreasing the resources in the cloud. This ability to alter
plans due to fluctuation in business size and needs is a superb benefit of
cloud computing, especially when experiencing a sudden growth in
demand.
2. Instant: Resources such as servers, storage, and software are available in the cloud almost instantly, without procurement delays.
3. Save Money: An advantage of cloud computing is the reduction in
hardware costs. Instead of purchasing in-house equipment, hardware
needs are left to the vendor. For companies that are growing rapidly,
new hardware can be large, expensive, and inconvenient. Cloud
computing alleviates these issues because resources can be acquired
quickly and easily. Even better, the cost of repairing or replacing
equipment is passed to the vendors. Along with purchase costs, off-site
hardware cuts internal power costs and saves space. Large data centers
can take up precious office space and produce a large amount of heat.
Moving to cloud applications or storage can help maximize space and
significantly cut energy expenditures.
4. Reliability: Rather than being hosted on a single instance of a physical
server, hosting is delivered on a virtual partition that draws its resources,
such as disk space, from an extensive network of underlying physical
servers. If one server goes offline, it has no effect on availability, as
the virtual servers will continue to pull resources from the remaining
network of servers.
5. Physical Security: The underlying physical servers are still housed
within data centers and so benefit from the security measures that those
facilities implement to prevent people from accessing or disrupting them
on-site.
6. Accessibility: Instead of storing information and programs on a
personal computer, you access them online from any Internet-capable
device; the information is available anywhere you go.