Discuss The Transaction Process Cycle

The transaction processing cycle consists of six main steps: 1) data entry, 2) input data validation, 3) transaction processing and validation of results, 4) file and database maintenance, 5) document and report generation, and 6) inquiry processing. The cycle begins with data entry from source documents which is then validated for accuracy before being processed. Validated transactions are used to update files and databases. Reports and documents are generated from the processed transactions. Inquiry processing allows users to check the status of information in the system. Together these steps form a continuous cycle to complete transaction requests.


1. Discuss the transaction process cycle.

Read the information below and work out the answer. Please ask me to explain anything
you do not understand.

Definition - What does Transaction Processing mean?


Transaction processing is the process of completing a task and/or a user or program
request either instantly or at runtime. It is the collection of interrelated tasks
and processes that must work in sync to finish an overall business transaction.

Techopedia explains Transaction Processing


Transaction processing relates to any real-time business transaction or process
performed by a transaction processing system (TPS) or other business information
system (BIS). The process occurs when a user requests the completion or fulfillment of
a process. Once a TPS or related system receives a request, it coordinates with
the respective systems for authorization, data requests or any other task essential
to completing the transaction.
For example, when a cash withdrawal request is made at an ATM, the
machine first verifies the user's credentials and balance status with the
back-end banking systems. Once the information is received, the ATM
processes the user request, i.e. the overall transaction. Moreover, a TPS can accept,
reject or halt a transaction based on environmental variables.
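
As a rough illustration of this authorise-then-process flow, the Python sketch below uses a made-up in-memory "back end" and invented names (accounts, atm_withdraw); it is not any real banking interface.

# Toy sketch of the ATM flow above: authorise first, then process.
# The in-memory "back end" and all names here are invented for illustration.
accounts = {"1234": {"pin": "9876", "balance": 300.0}}

def atm_withdraw(card, pin, amount):
    acct = accounts.get(card)
    if acct is None or acct["pin"] != pin:
        return "rejected: authorisation failed"      # credential check
    if amount > acct["balance"]:
        return "rejected: insufficient funds"        # balance inquiry/status
    acct["balance"] -= amount                        # process the transaction
    return f"dispense {amount:.2f}; new balance {acct['balance']:.2f}"

print(atm_withdraw("1234", "9876", 50))   # dispense 50.00; new balance 250.00
print(atm_withdraw("1234", "0000", 50))   # rejected: authorisation failed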

Transaction processing is a style of computing, typically performed by large server
computers, that supports interactive applications. In transaction processing, work is
divided into individual, indivisible operations, called transactions. By contrast, batch
processing is a style of computing in which one or more programs process a
series of records (a batch) with little or no action from the user or operator.
A transaction processing system allows application programmers to concentrate on
writing code that supports the business, by shielding application programs from the
details of transaction management:
 It manages the concurrent processing of transactions.
 It enables the sharing of data.
 It ensures the integrity of data.
 It manages the prioritization of transaction execution.
When a transaction starts processing, CICS runs a program that is associated with
the transaction. That program can transfer control to other programs in the course of
the transaction, making it possible to assemble modular applications consisting of
many CICS programs.
At any time, in a CICS system, many instances of a transaction can run at the same
time. A single instance of a running transaction is known as a task.
During the time that a task is running, it has exclusive use of (or holds a lock for)
each data resource that it changes, ensuring the isolation property of the transaction.
Other tasks that require access to the resources must wait until the lock is released.
To ensure overall responsiveness of the transaction processing system, you design
your CICS application programs so that they hold locks for the shortest possible
time. For example, you can design your transactions to be short-lived because CICS
releases locks when a task ends.
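
The locking behaviour described here is specific to CICS, but the general idea (a task holds a lock on a resource it changes, and other tasks must wait until the lock is released) can be sketched with SQLite, whose write locks behave analogously. The file name, table and figures below are illustrative only.

import sqlite3

# Two connections stand in for two concurrent tasks (names are illustrative).
conn_a = sqlite3.connect("bank.db", timeout=1.0)
conn_b = sqlite3.connect("bank.db", timeout=1.0)
conn_a.execute("CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn_a.execute("INSERT OR IGNORE INTO accounts VALUES (1, 100.0)")
conn_a.commit()

# Task A starts a transaction and updates a row; it now holds the write lock.
conn_a.execute("BEGIN IMMEDIATE")
conn_a.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")

# Task B needs the same resource and must wait; after 1 second it gives up.
try:
    conn_b.execute("BEGIN IMMEDIATE")
except sqlite3.OperationalError as exc:
    print("task B blocked:", exc)        # 'database is locked'

conn_a.commit()                          # lock released when task A's work ends
conn_b.execute("BEGIN IMMEDIATE")        # task B can now proceed
conn_b.rollback()
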
Subtopics
 ACID properties of transactions

In the context of transaction processing, the acronym ACID refers to the four key
properties of a transaction: atomicity, consistency, isolation, and durability.
 Commit and rollback

To assure the ACID properties of a transaction, any changes made to data in the
course of a transaction must be committed or rolled back; a small illustrative sketch
follows this list of subtopics.
 Units of work

The consistency property of a transaction ensures that data is in a consistent state
when a transaction starts and when it ends. The recoverable sequence of operations
performed by a transaction between two points of consistency is known in CICS as
a unit of work.
 Distributed transaction processing overview

These topics describe the basic concepts of CICS distributed transaction processing
(DTP) and what you must consider when designing DTP applications.
 Business transactions

Business transactions that are supported by CICS transactions often involve a
number of actions that sometimes take place over an extended period of days or
weeks. By using CICS business transaction services (BTS), you can control the
execution of complex business transactions.
 Recovery and restart

CICS continually records information about the state of the region and about the
state of each unit of work in the region. This information is preserved and used when
a region is restarted, thus enabling CICS to restart with no loss of data integrity.
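
Returning to the commit-and-rollback subtopic above, the idea can be sketched in Python with the standard sqlite3 module. This is a minimal sketch, not CICS syntax; the shop.db file, the stock table and the business rule are invented for illustration.

import sqlite3

conn = sqlite3.connect("shop.db", isolation_level=None)   # manage transactions by hand
conn.execute("CREATE TABLE IF NOT EXISTS stock (item TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT OR REPLACE INTO stock VALUES ('widget', 5)")

def ship(item, qty):
    """Either every change in this unit of work is committed, or none is."""
    conn.execute("BEGIN")
    try:
        conn.execute("UPDATE stock SET qty = qty - ? WHERE item = ?", (qty, item))
        (remaining,) = conn.execute("SELECT qty FROM stock WHERE item = ?", (item,)).fetchone()
        if remaining < 0:
            raise ValueError("insufficient stock")   # business rule violated
        conn.execute("COMMIT")                       # changes become durable
    except Exception:
        conn.execute("ROLLBACK")                     # data returns to its previous consistent state
        raise

ship("widget", 3)        # commits; 2 widgets remain
try:
    ship("widget", 4)    # fails the check, so the update is rolled back
except ValueError as exc:
    print("rolled back:", exc)
print(conn.execute("SELECT qty FROM stock WHERE item = 'widget'").fetchone())  # (2,)

Here the whole body of ship() is one unit of work: the stock update either becomes permanent at COMMIT or is undone by ROLLBACK.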

TRANSACTION PROCESSING CYCLE


Transaction processing is a basic activity in organizations. It is a routine and repetitive
activity that triggers other activities, such as updating databases and generating documents,
forming a cycle. The transaction processing cycle consists of six steps:
a. Data entry
b. Input data validation
c. Transaction processing and validation of results
d. File and database maintenance
e. Document and report generation, and
f. Inquiry processing
Transactions are measured in some convenient unit for recording, such as monetary units
for expenses or hours for labour. Data pertaining to the transaction must be entered into
the system. The source of this data is usually a document such as a sales order from a customer
or an invoice from a supplier; these are called source documents, and they provide the basic data for
the TPS. The data is entered using either traditional data entry methods or direct data entry
methods. In the former, source documents such as purchase orders are prepared and
usually accumulated into batches. The direct entry method uses automated systems for data
capture and recording. Point-of-sale terminals, optical scanners and MICR devices are used to
capture data and transfer it to computers in real time for transaction processing.
Input data validation is the next step in the TPS. It checks the accuracy and reliability of data
by comparing it with expected ranges, standards, etc. It involves error detection and error correction.
Checking for errors includes checking the data for the appropriate format, missing data and
inconsistent data. If a data value falls outside the normal range, it is invalid. For example, if a
firm orders materials in quantities ranging from 100 to 1000 kilograms, and that range is
accepted as the normal range for purchase orders, then the range is coded into the program for
validation checking. Whenever a purchase order is prepared, as soon as the quantity is entered
in the appropriate field the system checks whether the quantity is between 100 and
1000. Otherwise it gives an error message such as “Check the Quantity Entered, it is out of
range”, or whatever message is coded.
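
A minimal sketch of that range check, in Python, is shown below. The 100 to 1000 kilogram limits and the error message come from the example above; the function name and the return convention are invented.

# Hypothetical input-validation routine for the purchase-order quantity field.
MIN_QTY_KG, MAX_QTY_KG = 100, 1000

def validate_order_quantity(raw_value):
    """Return (quantity, None) if valid, or (None, error_message) if not."""
    if raw_value is None or str(raw_value).strip() == "":
        return None, "Quantity is missing"                     # missing data
    try:
        qty = int(str(raw_value).strip())                      # format check
    except ValueError:
        return None, "Quantity must be a whole number"         # inconsistent data
    if not MIN_QTY_KG <= qty <= MAX_QTY_KG:
        return None, "Check the Quantity Entered, it is out of range"
    return qty, None

print(validate_order_quantity("750"))    # (750, None)
print(validate_order_quantity("50"))     # (None, 'Check the Quantity Entered, it is out of range')
print(validate_order_quantity("ten"))    # (None, 'Quantity must be a whole number')
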
Processing of transaction data is the next step. This involves computation,
checking, comparing, etc. For instance, if it is a credit sale transaction, then the total value of
the transaction has to be computed, the system should check whether the value is within the
credit limit sanctioned to the customer, and it should check the availability of stock, the possible
delivery date, etc.
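
The credit-sale checks just described might be sketched as follows; the record layouts (credit_limit, balance, stock) and the accept/reject messages are illustrative assumptions, not taken from any particular system.

def process_credit_sale(customer, product, qty, unit_price):
    """Compute the transaction value, then check the credit limit and stock."""
    total = qty * unit_price                                  # computation step
    if customer["balance"] + total > customer["credit_limit"]:
        return "rejected: credit limit exceeded"
    if qty > product["stock"]:
        return "rejected: insufficient stock"
    customer["balance"] += total                              # post the sale
    product["stock"] -= qty
    return f"accepted: invoice value {total:.2f}"

customer = {"credit_limit": 5000.0, "balance": 1200.0}
product = {"stock": 40}
print(process_credit_sale(customer, product, qty=10, unit_price=25.0))
# accepted: invoice value 250.00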

Once the transaction is processed, certain output needs to be generated. The output may
be documents such as sales invoices or pay slips, or screen displays, or the output data may
be used to update related databases. Files and databases have to be updated with each transaction or
each batch of transactions. For applications that are not time-critical, the transactions may be
processed in batch mode. Certain systems, such as airline reservation systems, require updating with
every transaction to give status information in real time. Direct data entry devices have to be used
to capture transaction data and update related files and databases to provide current information
to users.
Inquiry processing is another activity of a transaction processing system. It involves
providing information on current status, such as inventory levels, a customer's credit limit, dues from a
particular customer, inbound supply, etc. The inquiry response is pre-planned, and the on-screen
display or output is formatted for the convenience of the requester.
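
A pre-planned inquiry of this kind might look like the following sketch; the inventory table, the data and the display format are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (item TEXT PRIMARY KEY, on_hand INTEGER)")
conn.executemany("INSERT INTO inventory VALUES (?, ?)",
                 [("bolts", 420), ("nuts", 515)])

def stock_inquiry(item):
    """Pre-planned inquiry: current stock level, formatted for on-screen display."""
    row = conn.execute("SELECT on_hand FROM inventory WHERE item = ?", (item,)).fetchone()
    return f"{item:10s} {row[0]:>6d} units on hand" if row else f"{item}: not found"

print(stock_inquiry("bolts"))   # bolts         420 units on hand
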
A business transaction with a customer involves a good or service that the customer wants
and the business provides for a price. The TPS supports the delivery process.
To complete a transaction with an external entity such as a customer or supplier, a series of
activities is involved, such as checking the account, the current inventory balance, the delivery time
and the price.
For instance, the transaction at a bank counter involves checking of account balance,
withdrawal of money or deposit of money. In the case of some transactions, an activity serves as
a trigger and a series of activities follow it. For example, a sales order from a customer is
followed by a number of activities; some of these are queries like:

 Checking whether the customer is an existing or a new customer,
 Checking the customer’s credit limit to know whether the transaction is within the permitted credit limit,
 Checking the inventory balance to know whether the order can be fulfilled within the time the customer needs it,
 Checking the production schedule to know how much will be added to finished stock at the end of a production period,
 Checking back orders to know how much stock will be left to meet this sales order.
Once this querying is over, and if adequate stock is available to meet the order, the sales order is approved and the transaction is processed. This involves:
 Debiting the customer account with the value of goods,
 Crediting the sales file with the value of goods,
 Updating the inventory file with the quantity of stock sold,
 Generating a packing list for the Dispatch department to assemble the order,
 Generating documents such as the sales invoice, bill of exchange, etc.,
 Packing the goods and handing them to the delivery staff, and
 Delivering the goods.
TPS actually tracks the physical workflows. Each operation of the workflow is recorded. At
each of these points in the sales order processing, the information about the state of the order is
recorded.
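
As a tiny illustration of this kind of workflow tracking, the sketch below records the state of a sales order after each operation; the order number, the states and the log structure are all hypothetical.

from datetime import datetime

order_log = []   # each entry records one operation performed on an order

def record(order_id, state):
    order_log.append({"order": order_id, "state": state, "at": datetime.now()})

for state in ("credit check passed", "stock reserved", "packing list issued",
              "invoice generated", "goods dispatched"):
    record("SO-1001", state)

for entry in order_log:
    print(entry["order"], entry["at"].isoformat(timespec="seconds"), entry["state"])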

2. Value chain
Starbucks As An Example Of The Value Chain Model. ... A value chain is a series
of activities or processes that aims at creating and adding value to an article
(product) at every step during the production process.

What is a Value Chain Analysis?


By Kayla Harrison, Business News Daily Contributing Writer, November 17, 2017, 10:15 am EST

Entering a new era of innovation, businesses are competing for unbeatable prices, fine
products, successful marketing strategies and customer loyalty. One of the most valuable
tools, the value chain analysis, allows businesses to gain an advantage over their
competition.

According to Smartsheet, a value chain analysis helps you recognize ways you can reduce
cost, optimize effort, eliminate waste and increase profitability. A business begins by
identifying each part of its production process, noting steps that can be eliminated and other
possible improvements.

In doing so, businesses can determine where the best value lies with customers, and expand
or improve said value, resulting in either cost savings or enhanced production. At the end of
the process, customers can enjoy high-quality products at lower costs.

What is a value chain?


A value chain is the full range of activities – including design, production, marketing and
distribution – businesses conduct to bring a product or service from conception to delivery.
For companies that produce goods, the value chain starts with the raw materials used to make
their products, and consists of everything added before the product is sold to consumers.

Value chain management is the process of organizing these activities in order to properly
analyze them. The goal is to establish communication between the leaders of each stage to
ensure the product is placed in the customers' hands as seamlessly as possible.

Porter's value chain


Harvard Business School's Michael E. Porter was the first to introduce the concept of a value
chain. Porter, who also developed the Five Forces Model to show businesses where they
rank in competition in the current marketplace, discussed the value chain concept in his book
"Competitive Advantage: Creating and Sustaining Superior Performance" (Free Press, 1998).

"Competitive advantage cannot be understood by looking at a firm as a whole," Porter wrote.


"It stems from the many discrete activities a firm performs in designing, producing,
marketing, delivering and supporting its product. Each of these activities can contribute to a
firm's relative cost position and create a basis for differentiation."
In his book, Porter splits a business's activities into two categories: primary and support.

Primary activities include the following:

 Inbound logistics are the receiving, storing and distributing of raw materials used in the
production process.
 Operations is the stage at which the raw materials are turned into the final product.
 Outbound logistics is the distribution of the final product to consumers.
 Marketing and sales involves advertising, promotions, sales-force organization, distribution
channels, pricing and managing the final product to ensure it is targeted to the appropriate
consumer groups.
 Service refers to the activities needed to maintain the product's performance after it has been
produced, and includes things like installation, training, maintenance, repair, warranty and after-
sale services.

The support activities help the primary functions and comprise the following:

 Procurement is how the raw materials for the product are obtained.
 Technology development can be used in the research and development stage, in how new
products are developed and designed, and in process automation.
 Human resource management includes the activities involved in hiring and retaining the proper
employees to help design, build and market the product.
 Firm infrastructure refers to an organization's structure and its management, planning,
accounting, finance, and quality-control mechanisms.

Conducting the analysis


According to an article on Strategic Management Insight, there are two different
approaches to the value chain analysis: cost and differentiation advantage.
Cost advantage: After identifying the primary and support activities, businesses should
identify the cost drivers for each activity. For a more labor-intensive activity, cost drivers
could include how fast work is completed, work hours, wage rates, etc. Businesses should
then identify links between activities, knowing that if costs are reduced in one area, they can
be reduced in another. Businesses can then identify opportunities to reduce costs.
Differentiation advantage: Identifying the activities that create the most value to
customers is the priority. These can include using relative marketing strategies, knowing
about products and systems, answering phones faster, and meeting customer expectations.
The next step is evaluating these strategies in order to improve the value. Focusing on
customer service, increasing options to customize products or services, offering incentives,
and adding product features are some of the ways to improve activity value. Lastly,
businesses should identify differentiation that can be maintained and adds the most value.
Free templates are available online to help businesses determine and analyze their value
chains.
Goals and outcomes
Ideally, value chain analysis will help identify areas that can be optimized for maximum
efficiency and profitability. It is important, along with the mechanics of it all, to keep
customers feeling confident and secure enough to remain loyal to the business. By analyzing
and evaluating product quality and effectiveness of services, along with cost, a business can
find and implement strategies to improve.

The Value Chain
The term 'Value Chain' was used by Michael Porter in his book "Competitive
Advantage: Creating and Sustaining Superior Performance" (1985). The value chain analysis
describes the activities the organization performs and links them to the organization's
competitive position. Value chain analysis describes the activities within and around an
organization, and relates them to an analysis of the competitive strength of the organization.
Therefore, it evaluates what value each particular activity adds to the organization's
products or services. This idea was built upon the insight that an organization is more than a
random compilation of machinery, equipment, people and money. Only if these things are
arranged into systems and systematic activities does it become possible to produce
something for which customers are willing to pay a price. Porter argues that the ability to
perform particular activities and to manage the linkages between these activities is a source
of competitive advantage. Porter distinguishes between primary activities and support
activities. Primary activities are directly concerned with the creation or delivery of a product
or service. They can be grouped into five main areas: inbound logistics, operations,
outbound logistics, marketing and sales, and service. Each of these primary activities is
linked to support activities which help to improve their effectiveness or efficiency. There are
four main areas of support activities: procurement, technology development (including
R&D), human resource management, and infrastructure (systems for planning, finance,
quality, information management etc.).
In the basic model of Porter's Value Chain, the term 'Margin' implies that
organizations realize a profit margin that depends on their ability to manage the linkages
between all activities in the value chain. In other words, the organization is able to deliver a
product or service for which the customer is willing to pay more than the sum of the costs of
all activities in the value chain.
Some thoughts about the linkages between activities: these linkages are crucial for corporate
success. The linkages are flows of information, goods and services, as well as systems and
processes for adjusting activities. Their importance is best illustrated with some simple
examples: only if the Marketing & Sales function delivers sales forecasts for the next period
to all other departments in time and with reliable accuracy will procurement be able to order
the necessary material for the correct date. And only if procurement does a good job and
forwards order information to inbound logistics will operations be able to
schedule production in a way that guarantees the delivery of products in a timely and
effective manner, as pre-determined by marketing. As a result, the linkages are about
seamless cooperation and information flow between the value chain activities. In most
industries, it is rather unusual that a single company performs all activities from product
design, production of components, and final assembly to delivery to the final user by itself.
Most often, organizations are elements of a value system or supply chain. Hence, value
chain analysis should cover the whole value system in which the organization operates.
Within the whole value system, there is only a certain amount of profit margin available. This is
the difference between the final price the customer pays and the sum of all costs incurred with the
production and delivery of the product/service (e.g. raw material, energy etc.). It depends
on the structure of the value system how this margin spreads across the suppliers,
producers, distributors, customers, and other elements of the value system. Each member
of the system will use its market position and negotiating power to get a higher proportion
of this margin. Nevertheless, members of a value system can cooperate to improve their
efficiency and to reduce their costs in order to achieve a higher total margin to the benefit of
all of them (e.g. by reducing stocks in a Just-In-Time system). A typical value chain analysis
can be performed in the following steps:
 Analysis of your own value chain – which costs are related to every single activity
 Analysis of customers' value chains – how does our product fit into their value chain
 Identification of potential cost advantages in comparison with competitors
 Identification of potential value added for the customer – how can our product add value to the customer's value chain (e.g. lower costs or higher performance), and where does the customer see such potential

What is Value Chain Analysis?


A value chain is a chain of value added activities; products pass through the
activities in a chain, gaining value at each stage.

As a small business owner, you need to use value chain models for doing
strategic cost analysis (which investigates how your costs compare to your
competition's costs).

What is Strategy? And How Does Porter's Value Chain Model Fit With Strategy?
Strategy is your business' direction; and the how, why, what, who and when of
following that direction. Your strategic small business plan needs to include the
chain analysis results as strategic action items.

Most businesses analyze their own internal cost structures but most do not
analyze their competitor's structures.

Analyze your value chains for your business and then compare to the
competitors in your industry that have (in total) up to 80% of the market share -
do not spend a lot of time analyzing the smaller competitors unless you believe
they are up and coming.
This type of industry analysis will be invaluable for developing and implementing
new competitive strategies.

If you are operating in an industry where most competitors are publicly traded, you will be
able to access most of their financial statements through their mandated public annual
reports.

If your business and/or industry is populated with privately held companies, your cost
analysis does not need to include specific costs - it's unlikely your competitor will give you
those - but by analyzing where in your competitor's process they must incur cost, you can
get a very good idea of your competitor's efficiencies and inefficiencies, and you should be
able to estimate some of their costs.

Value Chain Example:


For example, you might have recruited employees who've worked for your
competitor and they've told you that they are earning a dollar an hour more with
you for the same job they did with the competitor. Or you scan the online job
boards for your competitors' job postings.

Labor costs are often a large overall cost in most businesses - at the least, you will
be able to estimate whether theirs are higher or lower than yours.

The value chain identifies, and shows the links, or chain, of the distinct activities
and processes that you perform to create, manufacture, market, sell, and
distribute your product or service. The focus is on recognizing the activities and
processes that create value for your customers.

The importance of value chain analysis is that it can help you assess costs in
your chain that might be reduced or impacted by a change in one of the chain's
processes. By comparing your value chain to your competitors, you can often
find the areas or links of the chain where they might be more efficient than you;
that points the direction for you to improve.

However, you need to understand that the value chain will be influenced by the type of
small business strategy you and your competitors follow: if you are the high-value,
high-quality market leader, your chain will be quite different from that of the low-cost,
high-volume competitor. Understand how those differences influence your analysis and
make sure that your business strategy is in tune with your market and with your strategic
objectives.

Expect your competitors to have a value chain quite different from yours, because their
business grew from a different set of circumstances and a different set of operating
parameters than your business.
How to do a Value Chain Analysis:
(I like to do this type of analysis in a spreadsheet, so that I can add, delete,
amend, and sort easily and see the comparative data clearly; a small scripted sketch of the
same idea follows the list below.)

 include all the primary or direct activities, such as
1. purchases of supplies, materials, incoming shipping
2. manufacturing and operations (or if a consumer based business -
inventory control and operations)
3. outgoing shipping and logistics
4. customer service - includes estimating, coordinating, scheduling
5. marketing
6. sales
 then include all the support or indirect activities, such as
1. accounting and finance
2. systems support
3. legal
4. environmental
5. safety
6. human resources (hiring, firing, training, and more)
7. new product or service research and development
 the last element of your value chain needs to include the profit
margin your business (or the product you are analyzing) earns.
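
If you prefer a script to a spreadsheet, the same worksheet can be roughed out in a few lines of Python. Every figure below is invented purely to show the mechanics: a cost per activity, a selling price, and the resulting profit margin as the last element of the chain.

# Illustrative value chain worksheet: activity -> cost per unit (all figures invented).
activities = {
    # primary (direct) activities
    "purchases / inbound shipping": 12.0,
    "operations":                   30.0,
    "outbound shipping / logistics": 6.0,
    "customer service":              4.0,
    "marketing":                     5.0,
    "sales":                         7.0,
    # support (indirect) activities
    "accounting and finance":        3.0,
    "systems support":               2.0,
    "human resources":               2.5,
    "R&D":                           4.5,
}

selling_price = 90.0
total_cost = sum(activities.values())
margin = selling_price - total_cost          # the last element of the chain

for name, cost in sorted(activities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:32s} {cost:6.2f}  ({cost / selling_price:6.1%} of price)")
print(f"{'profit margin':32s} {margin:6.2f}  ({margin / selling_price:6.1%} of price)")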

Each value chain analysis will be specific to the business and the industry it is in.

In your business, you may need to consider your upstream suppliers (are you
getting the best deal, the best quality, etc.) and the downstream end use of the
product or service. End-users often have a strong role to play in specifying who
gets the business - you need to understand what they value, as well as
understand what your direct customers value.

The reason to do this in-depth type of analysis is to provide you with enough information
to be able to see how cost competitive you are and where your cost weaknesses are: this
chain analysis leads to a detailed strategic cost analysis.

This information will then lead you to develop strategic actions to reduce or
eliminate your weaknesses or disadvantages. Focusing on taking action from the
results of your value chain analysis will help your business become more
profitable, and may even earn you more market share.
Value Chain Analysis Example Using Primary
Activities
Kiesha Frue Dec 7, 2016

We can’t shove businesses into customers’ faces and expect bucket loads of sales
anymore.

Now, business is about the feedback. Communication. And adding genuine value to a
customer’s life.
By using the value chain analysis, you leverage customer desires and give the value
they need. Doing this builds trust and by proxy, sales.

We’ll look at a value chain analysis example to see how value works and why it works.
But first, we’ve got to go into the basics.

What is value chain analysis?


And better yet — why should you care?

Value chain analysis looks at what benefits (value) a company’s products and services
offer. Then you analyze how to use this value to reduce costs. Or leverage the value to
stand out from their competition.
Say you’ve got great margins — it costs you $5 to build your product but you retail it
for $25.99. You can offer lower shipping costs than your competitor whose margins are
less than boastful.

This is a competitive advantage and value your customers appreciate.


A valuable chain of importance
Mr. Michael Porter created a graph for value chain analysis. The primary activities are
broken into these parts…

1. Inbound logistics: Suppliers are vital. Because they’re necessary for receiving,
storing, and distributing products. Without the supplier, you’re limited in the
product development stage.
2. Operations: When you take products and offer them to the public, this is where
operations come into play. The systems you use can be invaluable.
3. Outbound logistics: When you provide the product to your customer. The
systems at play focus on distribution, storage, and collection of your services.
4. Sales/marketing: Goes without saying but how you get customers to say, “Yes!”
to your product is important. Here, you’ll highlight product benefits and persuade
these customers to keep their wallets away from the competition. Benefits are the
value.
5. Service: Do you have good customer service or rancid customer service? The
value you offer pre and post-sale to the customer decides whether they become a
repeat customer. Or not.
The more value you can create and use, the higher your success with said product will
be. But again, it’s all about leveraging. So, let’s get to the value chain analysis example
— finally!
Starbucks: The world renowned coffee mogul
We know Starbucks. The whole world knows Starbucks. And for good reason. Using
the above chain of operations, let’s see how Starbucks uses the five primary principles
to expand worldwide.
1. Inbound logistics — premium coffee beans…
Starbucks uses high-quality coffee beans for its drinks. The retrieval of these beans from
Latin America, Asia, and Africa isn't outsourced. Starbucks handles the buying,
transportation, and storage of beans. This is how they keep full control over the quality of
their beans.

Why this matters: Customers want the best. And to know where the beans are coming
from. Starbucks maintaining control of their coffee beans — compared to a randomly
named corporation — leaves a lasting impression in coffee drinkers’ minds.
2. Operations — outbound stores across the globe…
Starbucks is everywhere. They own several other beverage shops (Teavana and
Seattle’s Best Coffee, for example) which generate sales. Starbucks coffee shops are in
over 21,000 stores worldwide. The main stores still make the most revenue while the
licensed shops bring in less than 10% of their revenue.

Why this matters: Starbucks offers services around the world, ensuring they build a
strong consumer base and keep their name well-known. Customers order this premium
service while Starbucks sees massive sales each year. A value win-win for everyone.
3. Outbound logistics — Barely selling in retail…
Truthfully, Starbucks doesn't sell its products outside its licensed shops. This is
where they're lacking. If you want their coffee beans or packaged goods, you must
go to their shops and pick them up. While they are planning to sell single-origin coffees,
it's a small move by Starbucks.

Why this matters: If Starbucks products were in retail, customers could pick up their
favorite drink or mug while finishing up errands. Making it easier for Starbucks to
expand into the home. But they’ve dropped the ball here. You can use value chain
analysis to see disadvantages too and, with the right research, flip it into an advantage.
4. Sales/marketing — the quality is loud…
The high-quality products speak for themselves.
Starbucks relies heavily on quality rather than aggressive marketing tactics. Social
proof, like customers recommending drinks — even the joke articles about Starbucks
“secret menu” — are marketing methods Starbucks doesn’t pay for. The customers take
care of it.

Why this matters: Starbucks uses its reputation to their advantage. This reputation is
built on premium quality beans and service. Because of this, customers recommend
drinks, post selfies online, and write blog posts about the coffee stores. Customers see
the value and promote Starbucks themselves.
5. Service — building relationships…
Much of Starbucks' sales effort goes toward exceptional customer service in each of
their stores.

Writing names on cups isn't just a way for customers to know when to pick up their
drink. It's a method to learn about a customer — temporarily, sure — and build up a
relationship between barista and customer.

Why this matters: Customers want to be important. They want to be recognized by the
companies they buy into. And Starbucks creates a comfy, familiar, and acknowledged
environment for their customers. This builds loyalty, credibility, and trust. This is
the Starbucks Experience. Combined with their unique drinks, Starbucks makes
customer retention look easy.
Now…
You’ve seen how a hugely successful company like Starbucks uses value chain analysis
correctly. You can use it to your advantage as well. Outline how your firm uses each
step and create a plan to implement positive changes.

3. The operating system

An operating system (OS) is a collection of software that manages computer hardware
resources and provides common services for computer programs. The operating system
is a vital component of the system software in a computer system.

4. Categories of computers:

What Are the Five Main Categories of Computers?


by Alan Hughes

The computer age has brought about many advances in technology, including the increasing
miniaturization of computers and components. However, the earliest computers were large
machines, taking up lots of floor space and consuming large amounts of electricity. As computer
technology has advanced there are more categories of computers, each with specific qualities
and purposes. What once required a large room now fits in your hand and connects to other
computers around the world.

Supercomputers
Supercomputers are large computing machines that have enormous computing power, with
many processors and large amounts of memory. These machines are typically used for scientific
purposes, as they are great number crunchers. Their strength is in their speed, achieving
enormous numbers of floating-point operations per second (FLOPS), a typical measure of
computing performance. Titan, one of the world’s fastest supercomputers, has performed 17.59
quadrillion calculations per second.

Mainframes
At a slightly lower level than the supercomputer is the mainframe. Mainframes, contrary to much
popular thought, are not dead, but are thriving in American businesses. Large insurance and
financial firms rely on these large computers to process the massive volumes of transactions in
each business day. Mainframes were the computer of choice in the 1960s and 1970s.
Mainframes are distinguished by their speed and use of complex, powerful operating systems,
easily serving thousands of users simultaneously.

Servers
The 1980s saw the advent and growth of smaller computers, often called microcomputers. As
networks began to grow in companies, technology workers realized the need for centralizing data
storage, email and print services, giving rise to the departmental server. Servers are much
smaller than mainframes and more powerful than desktop workstations. They can serve
hundreds of users at the same time and typically provide services to websites and company
departments.

Personal Computers
Another development of the microcomputer wave of computing was the personal computer.
Workstations provide individuals with computing power at the desk, using applications such as
spreadsheets, word processors and presentation software. Laptops are essentially portable
personal computers that have batteries for power and a screen that folds down into a compact
book-like form. Workstations have become more powerful with advances in processor technology
to the point that most desktop computing power is vastly underutilized.

Hand-held Computing Devices


Smartphones are forms of hand-held computing devices. Most computers in this category can
easily slip into a pocket or purse, while another form, tablets, are more like small notebooks that
use touchscreen technology for input, as do smartphones. Smartphones are multi-purpose
devices providing phone service and Internet connectivity over wireless channels, bringing
connectivity to an entirely new level.

5. Database information

6. The benefits of the database approach


1. Reduced redundancy of information
The same data does not have to be entered and stored over and over again, which would
otherwise waste space on the computer's hard disk and in memory.
2. Consistent data
Because data is captured and maintained in one place, the flow of information can easily be
identified and the results produced from it are reliable.
3. Integration of data
Incorporating data into related tables results in a high degree of integration of
information.
4. Security and user privileges
Data is protected from unwanted users, and each user is granted rights according to the level
at which they are allowed to use the application.
5. Ease of application development
Applications can be developed more easily because the DBMS handles the underlying data
management.

Advantages in the database approach

The advantages in the database approach are as follows:

 All three managers are using the same database; hence, any report using the information
will not be inconsistent.

 All three managers can view the database as per their needs.

 The application systems can be developed independent of the database.

 Data validation and updating are done once and are the same for all users.

 The data is shared by all users.

 The data security and privacy can be managed and ensured because the data entry in the
database occurs once only and is protected by the security measures.

 Since the database stores structured information, queries can be answered quickly
by using the logic of the data structures.

Chapter 1. Introduction to the Module
Table of contents

 Module objectives
 Chapter objectives
 Introduction
 Motivation for data storage
 Traditional file-based approach
 The shared file approach
 The database approach
o ANSI/SPARC three-level architecture
 The external schema
 The conceptual schema
 The internal schema
 Physical data independence
 Logical data independence
o Components of a DBMS
 DBMS engine
 User interface subsystem
 Data dictionary subsystem
 Performance management subsystem
 Data integrity management subsystem
 Backup and recovery subsystem
 Application development subsystem
 Security management subsystem
o Benefits of the database approach
o Risks of the database approach
 Data and database administration
o The role of the data administrator
o The role of the database administrator
 Introduction to the Relational model
o Entities, attributes and relationships
o Relation: Stationery
 Discussion topic
 Additional content and activities

The purpose of this chapter is to introduce the fundamental concepts of database systems. Like
most areas of computing, database systems have a significant number of terms and concepts
that are likely to be new to you. We encourage you to discuss these terms in tutorials and online
with one another, and to share any previous experience of database systems that you may have.
The module covers a wide range of issues associated with database systems, from the stages
and techniques used in the development of database applications, through to the administration
of complex database environments. The overall aim of the module is to equip you with the
knowledge required to be a valuable member of a team, or to work individually, in the areas of
database application development or administration. In addition, some coverage of current
research areas is provided, partly as a stimulus for possible future dissertation topics, and also to
provide an awareness of possible future developments within the database arena.

Module objectives
At the end of this module you will have acquired practical and theoretical knowledge and skills
relating to modern database systems. The module is designed so that this knowledge will be
applicable across a wide variety of database environments. At the end of the module you will be
able to:
 Understand and explain the key ideas underlying database systems and the database
approach to information storage and manipulation.
 Design and implement database applications.
 Carry out actions to improve the performance of existing database applications.
 Understand the issues involved in providing multiple users concurrent access to
database systems.
 Be able to design adequate backup, recovery and security measures for a database
installation, and understand the facilities provided by typical database systems to support
these tasks.
 Understand the types of tasks involved in database administration and the facilities
provided in a typical database system to support these tasks.
 Be able to describe the issues and objectives in a range of areas of contemporary
database research.

Chapter objectives
At the end of this chapter you should be able to:
 Explain the advantages of a database approach for information storage and retrieval.
 Explain the concepts of physical and logical data independence, and describe both
technically and in business terms the advantages that these concepts provide in
Information Systems development.
 Understand the basic terminology and constructs of the Relational approach to database
systems.

Introduction
In parallel with this chapter, you should read Chapter 1 and Chapter 2 of Thomas Connolly and
Carolyn Begg, "Database Systems: A Practical Approach to Design, Implementation, and
Management" (5th edn).
This chapter sets the scene for all of the forthcoming chapters of the module. We begin by
examining the approach to storing and processing data that was used before the arrival of
database systems, and that is still appropriate today in certain situations (which will be
explained). We then go on to examine the difference between this traditional, file-based
approach to data storage, and that of the database approach. We do this first by examining
inherent limitations of the file-based approach, and then discuss ways in which the database
approach can be used to overcome these limitations.
A particular model of database systems, known as the Relational model, has been the dominant
approach in the database industry since the early '80s. There are now important rivals and
extensions to the Relational model, which will be examined in later chapters, but the Relational
model remains the core technology on which the database industry worldwide is based, and for
this reason this model will be central to the entire module.

Motivation for data storage


Day-to-day business processes executed by individuals and organisations require both present
and historical data. Therefore, data storage is essential for organisations and individuals. Data
supports business functions and aids in business decision-making. Below are some of the
examples where data storage supports business functions.
Social media
Social media has become very popular in the 21st century. We access social media using our
computers and mobile phones. Every time we access social media, we interact, collaborate and
share content with other people. The owners of social media platforms store the data we
produce.
Supermarket
A supermarket stores different types of information about its products, such as quantity, prices
and type of product. Every time we buy anything from the supermarket, quantities must be
reduced and the sales information must be stored.
Company
A company will need to hold details of its staff, customers, products, suppliers and financial
transactions.
If there are a small number of records to be kept, and these do not need to be changed very
often, a card index might be all that is required. However, where there is a high volume of data,
and a need to manipulate this data on a regular basis, a computer-based solution will often be
chosen. This might sound like a simple solution, but there are a number of different approaches
that could be taken.

Traditional file-based approach


The term 'file-based approach' refers to the situation where data is stored in one or more
separate computer files defined and managed by different application programs. Typically, for
example, the details of customers may be stored in one file, orders in another, etc. Computer
programs access the stored files to perform the various tasks required by the business. Each
program, or sometimes a related set of programs, is called a computer application. For example,
all of the programs associated with processing customers' orders are referred to as the order
processing application. The file-based approach might have application programs that deal with
purchase orders, invoices, sales and marketing, suppliers, customers, employees, and so on.
Limitations
 Data duplication: Each program stores its own separate files. If the same data is to be
accessed by different programs, then each program must store its own copy of the same
data.
 Data inconsistency: If the data is kept in different files, there could be problems when an
item of data needs updating, as it will need to be updated in all the relevant files; if this is
not done, the data will be inconsistent, and this could lead to errors.
 Difficult to implement data security: Data is stored in different files by different application
programs. This makes it difficult and expensive to implement organisation-wide security
procedures on the data.
The following diagram shows how different applications will each have their own copy of the files
they need in order to carry out the activities for which they are responsible:
Figure 1.1
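
To make the duplication and inconsistency problems concrete, here is a small sketch in which two applications each keep their own copy of a customer record in separate files; the file names and record layout are invented. Updating only one copy leaves the two applications disagreeing about the customer's address.

import csv, os, tempfile

tmp = tempfile.mkdtemp()
orders_file = os.path.join(tmp, "orders_customers.csv")     # copy kept by order processing
invoices_file = os.path.join(tmp, "invoice_customers.csv")  # copy kept by invoicing

for path in (orders_file, invoices_file):
    with open(path, "w", newline="") as f:
        csv.writer(f).writerow(["C001", "A. Jones", "12 High St"])

# The order-processing application updates the address in its own file only...
with open(orders_file, "w", newline="") as f:
    csv.writer(f).writerow(["C001", "A. Jones", "99 New Road"])

# ...so the two applications now disagree about the customer's address.
for path in (orders_file, invoices_file):
    with open(path, newline="") as f:
        print(os.path.basename(path), next(csv.reader(f)))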

The shared file approach


One approach to solving the problem of each application having its own set of files is to share
files between different applications. This will alleviate the problem of duplication and inconsistent
data between different applications, and is illustrated in the diagram below:

Figure 1.2
The introduction of shared files solves the problem of duplication and inconsistent data across
different versions of the same file held by different departments, but other problems may emerge,
including:
 File incompatibility: When each department had its own version of a file for processing,
each department could ensure that the structure of the file suited their specific
application. If departments have to share files, the file structure that suits one department
might not suit another. For example, data might need to be sorted in a different sequence
for different applications (for instance, customer details could be stored in alphabetical
order, or numerical order, or ascending or descending order of customer number).
 Difficult to control access: Some applications may require access to more data than
others; for instance, a credit control application will need access to customer credit limit
information, whereas a delivery note printing application will only need access to
customer name and address details. The file will still need to contain the additional
information to support the application that requires it.
 Physical data dependence: If the structure of the data file needs to be changed in some
way (for example, to reflect a change in currency), this alteration will need to be reflected
in all application programs that use that data file. This problem is known as physical data
dependence, and will be examined in more detail later in the chapter.
 Difficult to implement concurrency: While a data file is being processed by one
application, the file will not be available for other applications or for ad hoc queries. This
is because, if more than one application is allowed to alter data in a file at one time,
serious problems can arise in ensuring that the updates made by each application do not
clash with one another. This issue of ensuring consistent, concurrent updating of
information is an extremely important one, and is dealt with in detail for database
systems in the chapter on concurrency control. File-based systems avoid these problems
by not allowing more than one application to access a file at one time.
Review question 1
What is meant by the file-based approach to storing data? Describe some of the disadvantages
of this approach.
Review question 2
How can some of the problems of the file-based approach to data storage be avoided?
Review question 3
What are the problems that remain with the shared file approach?

The database approach


The database approach is an improvement on the shared file solution as the use of a database
management system (DBMS) provides facilities for querying, data security and integrity, and
allows simultaneous access to data by a number of different users. At this point we should
explain some important terminology:
 Database: A database is a collection of related data.
 Database management system: The term 'database management system', often
abbreviated to DBMS, refers to a software system used to create and manage
databases. The software of such systems is complex, consisting of a number of different
components, which are described later in this chapter. The term database system is
usually an alternative term for database management system.
 System catalogue/Data dictionary: The description of the data in the database
management system.
 Database application: Database application refers to a program, or related set of
programs, which use the database management system to perform the computer-related
tasks of a particular business function, such as order processing.
One of the benefits of the database approach is that the problem of physical data dependence is
resolved; this means that the underlying structure of a data file can be changed without the
application programs needing amendment. This is achieved by a hierarchy of levels of data
specification. Each such specification of data in a database system is called a schema. The
different levels of schema provided in database systems are described below. Further details of
what is included within each specific schema are discussed later in the chapter.
The Standards Planning and Requirements Committee (SPARC) of the American National Standards
Institute encapsulated the concept of schema in its three-level database architecture model,
known as the ANSI/SPARC architecture, which is shown in the diagram below:

Figure 1.3

ANSI/SPARC three-level architecture


ANSI = American National Standards Institute
ANSI/X3 = Committee on Computers and Information Processing
SPARC = Standards Planning and Requirements Committee
The ANSI/SPARC model is a three-level database architecture with a hierarchy of levels, from
the users and their applications at the top, down to the physical storage of data at the bottom.
The characteristics of each level, represented by a schema, are now described.

The external schema


The external schemas describe the database as it is seen by the user, and the user applications.
The external schema maps onto the conceptual schema, which is described below.
There may be many external schemas, each reflecting a simplified model of the world, as seen
by particular applications. External schemas may be modified, or new ones created, without the
need to make alterations to the physical storage of data. The interface between the external
schema and the conceptual schema can be amended to accommodate any such changes.
The external schema allows the application programs to see as much of the data as they require,
while excluding other items that are not relevant to that application. In this way, the external
schema provides a view of the data that corresponds to the nature of each task.
The external schema is more than a subset of the conceptual schema. While items in the
external schema must be derivable from the conceptual schema, this could be a complicated
process, involving computation and other activities.
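
One common way an external schema is realised in a relational DBMS is as a view defined over the conceptual schema. In the sketch below (table, columns and data are invented), a delivery-note application sees only the customer name and address, while the underlying table also holds credit information.

import sqlite3

conn = sqlite3.connect(":memory:")
# Conceptual level: the full customer description held by the organisation.
conn.execute("""CREATE TABLE customer (
                   id           INTEGER PRIMARY KEY,
                   name         TEXT,
                   address      TEXT,
                   credit_limit REAL)""")
conn.execute("INSERT INTO customer VALUES (1, 'A. Jones', '12 High St', 5000.0)")

# External level: what the delivery-note application is allowed to see.
conn.execute("CREATE VIEW delivery_view AS SELECT name, address FROM customer")

print(conn.execute("SELECT * FROM delivery_view").fetchall())
# [('A. Jones', '12 High St')]

Changing how the customer table is stored, or adding columns to it, would not require the delivery application or its view to change, which is the data independence discussed in the following sections.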

The conceptual schema


The conceptual schema describes the universe of interest to the users of the database system.
For a company, for example, it would provide a description of all of the data required to be stored
in a database system. From this organisation-wide description of the data, external schemas can
be derived to provide the data for specific users or to support particular tasks.
At the level of the conceptual schema we are concerned with the data itself, rather than storage
or the way data is physically accessed on disk. The definition of storage and access details is the
preserve of the internal schema.

The internal schema


A database will have only one internal schema, which contains definitions of the way in which
data is physically stored. The interface between the internal schema and the conceptual schema
identifies how an element in the conceptual schema is stored, and how it may be accessed.
If the internal schema is changed, this will need to be addressed in the interface between the
internal and the conceptual schemas, but the conceptual and external schemas will not need to
change. This means that changes in physical storage devices such as disks, and changes in the
way files are organised on storage devices, are transparent to users and application programs.
In distinguishing between 'logical' and 'physical' views of a system, it should be noted that the
difference could depend on the nature of the user. While 'logical' describes the user angle, and
'physical' relates to the computer view, database designers may regard relations (for example, staff
records) as logical and the database itself as physical. This may contrast with the perspective of
a systems programmer, who may consider data files as logical in concept, but their
implementation on magnetic disks in cylinders, tracks and sectors as physical.

Physical data independence


In a database environment, if there is a requirement to change the structure of a particular file of
data held on disk, this will be recorded in the internal schema. The interface between the internal
schema and the conceptual schema will be amended to reflect this, but there will be no need to
change the external schema. This means that any such change of physical data storage is
transparent to users and application programs. This approach removes the problem of physical
data dependence.
Logical data independence
Any changes to the conceptual schema can be isolated from the external schema and the
internal schema; such changes will be reflected in the interface between the conceptual schema
and the other levels. This achieves logical data independence. What this means, effectively, is
that changes can be made at the conceptual level, where the overall model of an organisation's
data is specified, and these changes can be made independently of both the physical storage
level, and the external level seen by individual users. The changes are handled by the interfaces
between the conceptual (middle) layer and the physical and external layers.
Review question 4
What are some of the advantages of the database approach compared to the shared file
approach of storing data?
Review question 5
Distinguish between the terms 'external schema', 'conceptual schema' and 'internal schema'.

Components of a DBMS
The major components of a DBMS are as follows:

DBMS engine
The engine is the central component of a DBMS. This component provides access to the
database and coordinates all of the functional elements of the DBMS. An important source of
data for the DBMS engine, and the database system as a whole, is known as metadata.
Metadata means data about data. Metadata is contained in a part of the DBMS called the data
dictionary (described below), and is a key source of information to guide the processes of the
DBMS engine. The DBMS engine receives logical requests for data (and metadata) from human
users and from applications, determines the secondary storage location (i.e. the disk address of
the requested data), and issues physical input/output requests to the computer operating system.
The data requested is fetched from physical storage into computer main memory; it is contained
in special data structures provided by the DBMS. While the data remains in memory, it is
managed by the DBMS engine. Additional data structures are created by the database system
itself, or by users of the system, in order to provide rapid access to data being processed by the
system. These data structures include indexes to speed up access to the data, buffer areas into
which particular types of data are retrieved, lists of free space, etc. The management of these
additional data structures is also carried out by the DBMS engine.

User interface subsystem


The interface subsystem provides facilities for users and applications to access the various
components of the DBMS. Most DBMS products provide a range of languages and other
interfaces, since the system will be used both by programmers (or other technical persons) and
by users with little or no programming experience. Some of the typical interfaces to a DBMS are
the following:
 A data definition language (or data sublanguage), which is used to define, modify or
remove database structures such as records, tables, files and views.
 A data manipulation language, which is used to retrieve data from the database and to
insert, update and delete records (a brief SQL sketch follows this list).
 A data control language, which allows a database administrator to have overall control of
the system, often including the administration of security, so that access to both the data
and processes of the database system can be controlled.
 A graphical user interface, which may provide a visual means of browsing or querying the
data, including a range of different display options such as bar charts, pie charts, etc.
One particular example of such a system is Query-by-Example, in which the system
displays a skeleton table (or tables), and users pose requests by suitable entry in the
table.
 A forms-based user interface in which a screen-oriented form is presented to the user,
who responds by filling in blanks on the form. Such forms-based systems are a popular
means of providing a visual front-end to both developers and users of a database
system. Typically, developers use the forms-based system in 'developer mode', where
they design the forms or screens that will make up an application, and attach fragments
of code which will be triggered by the actions of users as they use the forms-based user
interface.
 A DBMS procedural programming language, often based on standard third-generation
programming languages such as C and COBOL, which allows programmers to develop
sophisticated applications.
 Fourth-generation languages, such as Smalltalk, JavaScript, etc. These permit
applications to be developed relatively quickly compared to the procedural languages
mentioned above.
 A natural language user interface that allows users to present requests in free-form
English statements.
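As an illustration of the first two interface languages above, the sketch below uses SQLite through Python's standard sqlite3 module (the table and data are invented for the example). Note that SQLite does not support data control statements such as GRANT; in a server DBMS like PostgreSQL those would form the data control language.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Data definition language (DDL): define a database structure.
cur.execute("""
    CREATE TABLE borrower (
        membership_no INTEGER PRIMARY KEY,
        name          TEXT NOT NULL,
        address       TEXT
    )
""")

# Data manipulation language (DML): insert, update, query and delete data.
cur.execute("INSERT INTO borrower VALUES (101, 'R. Patel', '12 High Street')")
cur.execute("UPDATE borrower SET address = '14 High Street' WHERE membership_no = 101")
print(cur.execute("SELECT name, address FROM borrower").fetchall())
cur.execute("DELETE FROM borrower WHERE membership_no = 101")

# Data control language (DCL) statements, e.g. GRANT SELECT ON borrower TO clerk,
# are issued in multi-user DBMSs; SQLite has no user accounts, so none is shown here.
conn.commit()
```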

Data dictionary subsystem


The data dictionary subsystem is used to store data about many aspects of how the DBMS
works. The data contained in the dictionary subsystem varies from DBMS to DBMS, but in all
systems it is a key component of the database. Typical data to be contained in the dictionary
includes: definitions of the users of the system and the access rights they have, details of the
data structures used to contain data in the DBMS, descriptions of business rules that are stored
and enforced within the DBMS, and definitions of the additional data structures used to improve
systems performance. It is important to understand that because of the important and sensitive
nature of the data contained in the dictionary subsystem, most users will have little or no direct
access to this information. However, the database administrator will need to have regular access
to much of the dictionary system, and should have a detailed knowledge of the way in which the
dictionary is organised.
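Most DBMSs expose the dictionary (catalogue) as queryable metadata. In SQLite, for instance, the schema is recorded in the built-in sqlite_master table, as the short sketch below shows (the example table is invented for illustration).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE stationery (
        item_code INTEGER PRIMARY KEY, item_name TEXT, colour TEXT, price REAL
    )
""")

# Metadata (data about data): the catalogue records the objects defined in the database.
for name, sql in conn.execute("SELECT name, sql FROM sqlite_master WHERE type = 'table'"):
    print(name)   # stationery
    print(sql)    # the CREATE TABLE statement as stored in the dictionary

# Column-level metadata for one table.
for column in conn.execute("PRAGMA table_info(stationery)"):
    print(column)   # (cid, name, type, notnull, default_value, pk)
```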

Performance management subsystem


The performance management subsystem provides facilities to optimise (or at least improve)
DBMS performance. This is necessary because the large and complex software in a DBMS
requires attention to ensure it performs efficiently, i.e. it needs to allow retrieval and changes to
data to be made without requiring users to wait for significant periods of time for the DBMS to
carry out the requested action.
Two important functions of the performance management subsystem are:
 Query optimisation: Structuring SQL queries (or other forms of user queries) to minimise
response times.
 DBMS reorganisation: Maintaining statistics on database usage, and taking (or
recommending) actions such as database reorganisation, creating indexes and so on, to
improve DBMS performance (see the index sketch after this list).
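A minimal illustration of the second point, using SQLite from Python (the table, index and data are invented for the example): creating an index and then asking the optimiser how it plans a query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_no INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, f"customer{i % 100}", i * 1.5) for i in range(1000)])

# Without an index, a search on customer requires a full scan of the table.
print(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'customer7'").fetchall())

# An additional data structure, maintained by the DBMS, to speed up access.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# The optimiser now chooses the index instead of scanning every row.
print(conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'customer7'").fetchall())
```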

Data integrity management subsystem


The data integrity management subsystem provides facilities for managing the integrity of data in
the database and the integrity of metadata in the dictionary. This subsystem is concerned with
ensuring that data is, as far as software can ensure, correct and consistent. There are three
important functions:
 Intra-record integrity: Enforcing constraints on data item values and types within each
record in the database.
 Referential integrity: Enforcing the validity of references between records in the
database (a constraint sketch follows this list).
 Concurrency control: Ensuring the validity of database updates when multiple users
access the database (discussed in a later chapter).
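The sketch below illustrates the first two functions in SQLite via Python (the tables and constraints are invented for the example); note that SQLite only enforces foreign keys when the foreign_keys pragma is switched on.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces referential integrity only when asked

# Intra-record integrity: constraints on data item values and types within each record.
conn.execute("""
    CREATE TABLE book (
        isbn   TEXT PRIMARY KEY,
        title  TEXT NOT NULL,
        price  REAL CHECK (price >= 0)
    )
""")

# Referential integrity: a loan must refer to a book that actually exists.
conn.execute("""
    CREATE TABLE loan (
        loan_id INTEGER PRIMARY KEY,
        isbn    TEXT NOT NULL REFERENCES book(isbn)
    )
""")

conn.execute("INSERT INTO book VALUES ('978-0-13-468599-1', 'Database Systems', 55.0)")
conn.execute("INSERT INTO loan VALUES (1, '978-0-13-468599-1')")       # valid reference

try:
    conn.execute("INSERT INTO loan VALUES (2, 'no-such-isbn')")        # violates referential integrity
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```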

Backup and recovery subsystem


The backup and recovery subsystem provides facilities for logging transactions and database
changes, periodically making backup copies of the database, and recovering the database in the
event of some type of failure. (We discuss backup and recovery in greater detail in a later
chapter.) A good DBMS will provide comprehensive and flexible mechanisms for backing up and
restoring copies of data, and it will be up to the database administrator, in consultation with users
of the system, to decide precisely how these features should be used.
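As one concrete and deliberately simple illustration, Python's sqlite3 module exposes an online backup facility; a real backup strategy, agreed with the database administrator, would of course involve scheduling, transaction logs and off-site copies. The file names here are invented for the example.

```python
import sqlite3

# A 'live' database and a destination for the backup copy.
live = sqlite3.connect("library.db")             # hypothetical production database file
backup_copy = sqlite3.connect("library_backup.db")

# Copy the whole database while it remains available to other connections.
live.backup(backup_copy)

backup_copy.close()
live.close()
```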

Application development subsystem


The application development subsystem is for programmers to develop complete database
applications. It includes CASE tools (software to enable the modelling of applications), as well as
facilities such as screen generators (for automatically creating the screens of an application
when given details about the data to be input and/or output) and report generators.
In most commercial situations, there will in fact be a number of different database systems,
operating within a number of different computer environments. By computer environment we
mean a set of programs and data made available usually on a particular computer. One such set
of database systems, used in a number of medium to large companies, involves the
establishment of three different computer environments. The first of these is the development
environment, where new applications, whether written within the company or bought in from
outside, are developed and tested. The development environment usually
contains relatively little data, just enough in fact to adequately test the logic of the applications
being developed and tested. Security within the development environment is usually not an
important issue, unless the actual logic of the applications being developed is, in its own right, of
a sensitive nature.
The second of the three environments is often called pre-production. Applications that have been
tested in the development environment will be moved into pre-production for volume testing; that
is, testing with quantities of data that are typical of the application when it is in live operation.
The final environment is known as the production or live environment. Applications should only
be moved into this environment when they have been fully tested in pre-production. Security is
nearly always a very important issue in the production environment, as the data being used
reflects important information in current use by the organisation.
Each of these separate environments will have at least one database system, and because of
the widely varying activities and security measures required in each environment, the volume of
data and degree of administration required will itself vary considerably between environments,
with the production database(s) requiring by far the most support.
Given the need for the database administrator to migrate both programs and data between these
environments, an important tool in performing this process will be a set of utilities or programs for
migrating applications and their associated data both forwards and backwards between the
environments in use.
Security management subsystem
The security management subsystem provides facilities to protect and control access to the
database and data dictionary.

Benefits of the database approach


The benefits of the database approach are as follows:
 Ease of application development: The programmer is no longer burdened with designing,
building and maintaining master files.
 Minimal data redundancy: All data files are integrated into a composite data structure. In
practice, not all redundancy is eliminated, but at least the redundancy is controlled. Thus
inconsistency is reduced.
 Enforcement of standards: The database administrator can define standards for names,
etc.
 Data can be shared. New applications can use existing data definitions.
 Physical data independence: Data descriptions are independent of the application
programs. This makes program development and maintenance an easier task. Data is
stored independently of the program that uses it.
 Logical data independence: Data can be viewed in different ways by different users.
 Better modelling of real-world data: Databases are based on semantically rich data
models that allow the accurate representation of real-world information.
 Uniform security and integrity controls: Security control ensures that applications can only
access the data they are required to access. Integrity control ensures that the database
represents what it purports to represent.
 Economy of scale: Concentration of processing, control personnel and technical expertise.

Risks of the database approach


 New specialised personnel: Need to hire or train new personnel e.g. database
administrators and application programmers.
 Need for explicit backup.
 Organisational conflict: Different departments have different information needs and data
representation.
 Large size: Often needs alarmingly large amounts of processing power.
 Expensive: Software and hardware expenses.
 High impact of failure: Concentration of processing and resources makes an organisation
vulnerable if the system fails for any length of time.
Review question 6
Distinguish between the terms 'database security' and 'data integrity'.

Data and database administration


Organisations need data to provide details of the current state of affairs; for example, the number
of product items in stock, customer orders, staff details, office and warehouse space, etc. Raw
data can then be processed to enable decisions to be taken and actions to be made. Data is
therefore an important resource that needs to be safeguarded. Organisations will therefore have
rules, standards, policies and procedures for data handling to ensure that accuracy is maintained
and that proper and appropriate use is made of the data. It is for this reason that organisations
may employ data administrators and database administrators.

The role of the data administrator


It is important that the data administrator is aware of any issues that may affect the handling and
use of data within the organisation. Data administration includes the responsibility for determining
and publicising policy and standards for data naming and data definition conventions, access
permissions and restrictions for data and processing of data, and security issues.
The data administrator needs to be a skilled manager, able to implement policy and make
strategic decisions concerning the organisation's data resource. It is not sufficient for the data
administrator to propose a set of rules and regulations for the use of data within an organisation;
the role also requires the investigation of ways in which the organisation can extract the
maximum benefit from the available data.
One of the problems facing the data administrator is that data may exist in a range of different
formats, such as plain text, formatted documents, tables, charts, photographs, spreadsheets,
graphics, diagrams, multimedia (including video, animated graphics and audio), plans, etc. In
cases where the data is available on computer-readable media, consideration needs to be given
to whether the data is in the correct format.
The different formats in which data may appear is further complicated by the range of terms used
to describe it within the organisation. One problem is the use of synonyms, where a single item of
data may be known by a number of different names. An example of the use of synonyms would
be the terms 'telephone number', 'telephone extension', 'direct line', 'contact number' or just
'number' to mean the organisation's internal telephone number for a particular member of staff. In
an example such as this, it is easy to see that the terms refer to the same item of data, but it
might not be so clear in other contexts.
A further complication is the existence of homonyms. A homonym is a term which may be used
for several different items in different contexts; this can often happen when acronyms are used.
One example is the use of the terms 'communication' and 'networking'; these terms are
sometimes used to refer to interpersonal skills, but may also be employed in the context of data
communication and computer networks.
When the items of data that are important to an organisation have been identified, it is important
to ensure that there is a standard representation format. It might be acceptable to tell a colleague
within the organisation that your telephone extension is 5264, but this would be insufficient
information for someone outside the organisation. It may be necessary to include full details,
such as international access code, national code, area code and local code as well as the
telephone extension to ensure that the telephone contact details are usable worldwide.
Dates are a typical example of an item of data with a wide variety of formats. The ranges of date
formats include: day-month-year, month-day-year, year-month-day, etc. The month may appear
as a value in the range 1 to 12, as the name of the month in full, or a three-letter abbreviation.
These formats can be varied by changing the separating character between fields from a hyphen
(-) to a slash (/), full stop (.) or space ( ).
The use of standardised names and formats will assist an organisation in making good use of its
data. The role of the data administrator involves the creation of these standards and their
publication (including the reasons for them and guidelines for their use) across the organisation.
Data administration provides a service to the organisation, and it is important that it is perceived
as such, rather than as the introduction of unnecessary rules and regulations.
The role of the database administrator
The role of the database administrator within an organisation focuses on a particular database or
set of databases, and the associated computer applications, rather than the use of data
throughout the organisation. A database administrator requires a blend of management skills
together with technical expertise. In smaller organisations, the data administrator and database
administrator roles may be merged into a single post, whereas larger companies may have
groups of staff involved with each activity.
The activities of the database administrator take place in the context of the guidelines set out by
the data administrator. This requires striking a balance between the security and protection of the
database, which may be in conflict with the requirements of users to have access to the data.
The database administrator has responsibility for the development, implementation, operation,
maintenance and security of the database and the applications that use it. Another important
function is the introduction of controls to ensure the quality and integrity of the data that is
entered into the database. The database administrator is a manager of the data in the database,
rather than a user. This role requires the development of the database structure and data
dictionary (a catalogue of the data in the database), the provision of security measures to permit
authorised access and prevent unauthorised access to data, and safeguards against failures in
hardware or software in order to offer reliability.
Exercise 1
Find out who is responsible for the tasks of data administration and database administration in
the organisation where you are currently working or studying. Find out whether the two roles are
combined into one in your organisation, or if not, how many people are allocated to each
function, and what are their specific roles?

Introduction to the Relational model


A number of different approaches or models have been developed for the logical organisation of
data within a database system. This 'logical' organisation must be distinguished from the
'physical' organisation of data, which describes how the data is stored on some suitable storage
medium such as a disk. The physical organisation of data will be dealt with in the chapter on
physical storage. By far the most commonly used approach to the logical organisation of data is
the Relational model. In this section we shall introduce the basic concepts of the Relational
model, and give examples of its use. Later in the module, we shall make practical use of this
knowledge in both using and developing examples of Relational database applications.

Entities, attributes and relationships


The first step in the development of a database application usually involves determining what the
major elements of data to be stored are. These are referred to as entities. For example, a library
database will typically contain entities such as Books, Borrowers, Librarians, Loans, Book
Purchases, etc. Each of the entities identified will contain a number of properties, or attributes.
For example, the entity Book will contain attributes such as Title, Author and ISBN; the entity
Borrower will possess attributes such as Name, Address and Membership Number. When we
have decided which entities are to be stored in a database, we also need to consider the way in
which those entities are related to one another. Examples of such relationships might be, for the
library system, that a Borrower can borrow a number of Books, and that a Librarian can make a
number of Book Purchases. The correct identification of the entities and attributes to be stored,
and the relationships between them, is an extremely important topic in database design, and will
be covered in detail in the chapter on entity-relationship modelling. In introducing the Relational
approach to database systems, we must consider how entities and their attributes, and the
relationships between them, will be represented within a database system.
A relation is structured like a table. The rows of the structure (which are also sometimes referred
to as tuples) correspond to individual instances of records stored in the relation. Each column of
the relation corresponds to a particular attribute of those record instances. For example, in the
relation containing details of stationery below, each row of the relation corresponds to a different
item of stationery, and each column or attribute corresponds to a particular aspect of stationery,
such as the colour or price.
Each tuple contains values for a fixed number of attributes. There is only one tuple for each
different item represented in the database.
The set of permissible values for each attribute is called the domain for that attribute. It can be
seen that the domain for the attribute Colour in the stationery relation below includes the values
Red, Blue, Green, White, Yellow, and Black (other colours may be permitted but are not shown in
the relation).
The sequence in which tuples appear within a relation is not important, and the order of attributes
within a relation is of no significance. However, once the attributes of a particular relation have
been identified, it is convenient to refer to them in the same order.
Very often it is required to be able to identify uniquely each of the different instances of entities in
a database. In order to do this we use something called a primary key. We will discuss the nature
of primary keys in detail in the next learning chapter, but for now we shall use examples where
the primary key is the first of the attributes in each tuple of a relation.

Relation: Stationery

Figure 1.4: the Stationery relation
Here, the attributes are item-code, item-name, colour and price. The values for each attribute for
each item are shown as a single value in each column for a particular row. Thus for item-code
20217, the values are A4 paper 250 sheets for the item-name, Blue for the attribute colour, and
2.75 is stored as the price.
Question: Which of the attributes in the stationery relation do you think would make a suitable
key, and why?
The schema defines the 'shape' or structure of a relation. It defines the number of attributes, their
names and domains. Column headings in a table represent the schema. The extension is the set
of tuples that comprise the relation at any time. The extension (contents) of a relation may vary,
but the schema (structure) generally does not.
From the example above, the schema is represented as:
Figure 1.5: the schema of the Stationery relation, Stationery(item-code, item-name, colour, price)
The extension from the above example is given as:

Figure 1.6: the extension (current set of tuples) of the Stationery relation
The extension will vary as rows are inserted or deleted from the table, or values of attributes (e.g.
price) change. The number of attributes will not change, as this is determined by the schema.
The number of rows in a relation is sometimes referred to as its cardinality. The number of
attributes is sometimes referred to as the degree or grade of a relation.
Each relation needs to be declared, its attributes defined, a domain specified for each attribute,
and a primary key identified.
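A minimal sketch of such a declaration, using SQLite from Python and following the Stationery example above (the exact column types and the CHECK constraints standing in for domains are assumptions made for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Declare the relation, define its attributes, approximate the domains with
# types and CHECK constraints, and identify the primary key.
conn.execute("""
    CREATE TABLE stationery (
        item_code INTEGER PRIMARY KEY,                        -- primary key
        item_name TEXT    NOT NULL,
        colour    TEXT    CHECK (colour IN ('Red', 'Blue', 'Green',
                                            'White', 'Yellow', 'Black')),
        price     REAL    CHECK (price >= 0)
    )
""")

# One tuple from the extension shown in the text.
conn.execute("INSERT INTO stationery VALUES (20217, 'A4 paper 250 sheets', 'Blue', 2.75)")

# The cardinality (number of rows) of the relation.
print(conn.execute("SELECT COUNT(*) FROM stationery").fetchone()[0])   # 1
```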
Review question 7
Distinguish between the terms 'entity' and 'attribute'. Give some examples of entities and
attributes that might be stored in a hospital database.
Review question 8
The range of values that a column in a relational table may be assigned is called the domain of
that column. Many database systems provide the possibility of specifying limits or constraints
upon these values, and this is a very effective way of screening out incorrect values from being
stored in the system. It is useful, therefore, when identifying which attributes or columns we wish
to store for an entity, to consider carefully what is the domain for each column, and which values
are permissible for that domain.
Consider then for the following attributes, what the corresponding domains are, and whether
there are any restrictions we can identify which we might use to validate the correctness of data
values entered into attributes with each domain:
 Attribute: EMPLOYEE_NAME
 Attribute: JOB (i.e. the job held by an individual in an organisation)
 Attribute: DATE_OF_BIRTH
External schemas can be used to give individual users, or groups of users, access to a part of
the data in a database. Many systems also allow the format of the data to be changed for
presentation in the external schema, or for calculations to be carried out on it to make it more
usable to the users of the external schema. Discuss the possible uses of external schemas, and
the sorts of calculations and/or reformatting that might be used to make the data more usable to
specific users or user groups.
External schemas might be used to provide a degree of security in the database, by making
available to users only that part of the database that they require in order to perform their jobs.
So for example, an Order Clerk may be given access to order information, while employees
working in Human Resources may be given access to the details of employees.
In order to improve the usability of an external schema, the data in it may be summarised or
organised into categories. For example, an external schema for a Sales Manager, rather than
containing details of individual sales, might contain summarised details of sales over the last six
months, perhaps organised into categories such as geographical region. Furthermore, some
systems provide the ability to display data graphically, in which case it might be formatted as a
bar, line or pie chart for easier viewing.
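A brief sketch of such a summarising external view in SQLite via Python (the sales table, regions and figures are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sale (sale_id INTEGER PRIMARY KEY, region TEXT, amount REAL, sale_date TEXT)")
conn.executemany("INSERT INTO sale (region, amount, sale_date) VALUES (?, ?, ?)", [
    ("North", 1200.0, "2024-01-15"),
    ("North",  800.0, "2024-02-03"),
    ("South", 2500.0, "2024-01-22"),
])

# External schema for a Sales Manager: totals by region rather than individual sales.
conn.execute("""
    CREATE VIEW sales_summary AS
    SELECT region, COUNT(*) AS number_of_sales, SUM(amount) AS total_sales
    FROM sale
    GROUP BY region
""")

for row in conn.execute("SELECT * FROM sales_summary ORDER BY region"):
    print(row)   # ('North', 2, 2000.0) then ('South', 1, 2500.0)
```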

Additional content and activities


Database systems have become ubiquitous throughout computing. A great deal of information is
written and published describing advances in database technology, from research papers
through to tutorial information and evaluations of commercial products. Conduct a brief search on
the Internet and related textbooks. You will likely find that there are many alternative definitions
and explanations to the basic concepts introduced in this chapter, and these will be helpful in
consolidating the material covered here.

7. What is a network and what are its advantages?


Advantages. Site (software) licences are likely to be cheaper than buying several
standalone licences. Network users can communicate by email and instant
messenger. Security is good - users cannot see other users' files unlike on stand-
alone machines.

Pros and Cons of Networking



The ability to exchange data and communicate efficiently is the main purpose of networking computers. But we
have to look beyond these points to evaluate the feasibility of networking for our own purposes.

 A computer network can be identified as a group of computers that are interconnected for sharing data between
them or their users. There is a wide variety of networks and their advantages and disadvantages mainly depend
on the type of network.

 Advantages of Computer Networking


1. Easy Communication and Speed

It is very easy to communicate through a network. People can communicate efficiently using a network with a
group of people. They can enjoy the benefit of emails, instant messaging, telephony, video conferencing, chat
rooms, etc.

2. Ability to Share Files, Data and Information


This is one of the major advantages of networking computers. People can find and share information and data
because of networking. This is beneficial for large organizations to maintain their data in an organized manner
and facilitate access for desired people.

3. Sharing Hardware

Another important advantage of networking is the ability to share hardware. For example, a printer can be
shared among the users in a network so that there’s no need to have individual printers for each and every
computer in the company. This will significantly reduce the cost of purchasing hardware.

4. Sharing Software

Users can share software within the network easily. Networkable versions of software are available at
considerable savings compared to individually licensed versions of the same software. Therefore large companies
can reduce the cost of buying software by networking their computers.

5. Security

Sensitive files and programs on a network can be password protected. Then those files can only be accessed by
the authorized users. This is another important advantage of networking when there are concerns about security
issues. Also, each and every user has their own set of privileges to prevent them from accessing restricted files and
programs.

6. Speed

Sharing and transferring files within networks is very rapid, depending on the type of network. This will save
time while maintaining the integrity of files.

 Disadvantages of Networking
1. Breakdowns and Possible Loss of Resources

One major disadvantage of networking is the breakdown of the whole network due to an issue with the server.
Such breakdowns are frequent in networks, causing losses of thousands of dollars each year. Therefore, once a
network is established, it is vital to maintain it properly to prevent such disastrous breakdowns. In the worst case,
such breakdowns may lead to the loss of important data on the server.

2. Expensive to Build

Building a network is a serious business on many occasions, especially for large-scale organizations. Cables and
other hardware are very pricey to buy and replace.

3. Security Threats

Security threats are always problems with large networks. There are hackers who are trying to steal valuable
data of large companies for their own benefit. So it is necessary to take utmost care to facilitate the required
security measures.

4. Bandwidth Issues

In a network there are users who consume a lot more bandwidth than others. Because of this some other people
may experience difficulties.

Although there are disadvantages to networking, it is a vital need in today’s environment. People need to access
the Internet, communicate and share information and they can’t live without that. Therefore engineers need to
find alternatives and improved technologies to overcome issues associated with networking. Therefore we can
say that computer networking is always beneficial to have even if there are some drawbacks.
What is a computer network? Advantages of a network.

A computer network consists of two or more computers that are linked in order to
share resources such as printers and CD-ROMs, exchange files, or allow electronic
communications. The computers on a computer network may be linked through
cables, telephone lines, radio waves, satellites, or infrared light beams.
Computer networks can be classified on the basis of the following features:
By Scale: Computer networks may be classified according to their scale:
• Local Area Network (LAN)
• Metropolitan Area Network (MAN)
• Wide Area Network (WAN)
By Connection Method: Computer networks can also be classified according to the
hardware technology that is used to connect the individual devices in the network, such
as optical fibre, Ethernet or wireless LAN.
By Functional Relationship (Network Architectures): Computer networks may be
classified according to the functional relationships which exist between the elements
of the network. This classification is also called network architecture. There are two
types of network architecture:
• Client-Server
• Peer-to-Peer Architecture
By Network Topology: Network Topology signifies the way in which intelligent devices
in the network see their logical or physical relations to one another. Computer
networks may be classified according to the network topology upon which the network
is based, such as :
• Bus Network
• Star Network
• Ring Network
• Mesh Network
• Star-Bus Network
• Tree or Hierarchical Topology Network

Advantages of Network
The following are distinct points in favor of computer networks.
a. The computers, staff and information can be well managed
b. A network provides the means to exchange data among the computers and to make
programs and data available to people
c. It permits the sharing of the resources of the machine
d. Networking also provides the function of back-up.
e. Networking provides a flexible working environment. Employees can work at
home by using networks that tie into the computers at the office.

Explain Network Services


Network services are the things that a network can do. The major networking services
are:
1. File Services: This includes file transfer, storage, data migration, file update,
synchronization and archiving.
2. Printing Services: This service provides shared access to valuable printing
devices.
3. Message Services: This service facilitates email, voice mail and the coordination of
object-oriented applications.
4. Application Services: This service allows high-profile applications to be centralized to
increase performance and scalability.
5. Database Services: This involves the coordination of distributed data and replication.
A minimal client-server sketch follows.
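To make the client-server idea concrete, here is a deliberately small sketch using Python's standard socket module (the host, port and message are invented for a local demonstration): a server that echoes back one message from a client.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # hypothetical address for a local demonstration

# Server side: bind and listen first, so the client below can connect straight away.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)


def handle_one_client() -> None:
    # A minimal message service: accept one client and echo its data back.
    conn, _addr = srv.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"server received: " + data)
    srv.close()


threading.Thread(target=handle_one_client, daemon=True).start()

# Client side: send a message across the network and read the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello from the client")
    print(cli.recv(1024).decode())   # server received: hello from the client
```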

Development life cycle


The systems development life cycle (SDLC), also referred to as the
application development life-cycle, is a term used in systems engineering,
information systems and software engineering to describe a process for planning,
creating, testing, and deploying an information system.

What is System Development Life Cycle?



System Development Life Cycle (SDLC) is a series of six main phases to create a
hardware system only, a software system only or a combination of both to meet or
exceed customer’s expectations.

System is a broad and general term; according to Wikipedia, “A system is a set of
interacting or interdependent components forming an integrated whole”. It is a term
that can be used in different industries, so Software Development Life Cycle is
a narrower term that explains the phases of creating a software component that
integrates with other software components to create the whole system.

Some more specific takes on SDLC include: Rapid Application Development, Test-Driven
Development, the Waterfall Model, the Iterative Model, Extreme Programming, the Scaled
Agile Framework, the Agile Model, Scrum, the Rational Unified Process, the Big Bang
Model, the V-Model, the Conceptual Model, the Kaizen Model, the Kanban Model and the
Spiral Model.

Below we’ll take a general look at the System Development Life Cycle phases, bearing in
mind that each system is different from the other in terms of complexity, required
components and expected solutions and functionalities:

System Development Life Cycle Phases:

1- System Planning
The Planning phase is the most crucial step in creating a successful system. During
this phase you decide exactly what you want to do and the problems you’re trying to
solve, by:

 Defining the problems, the objectives and the resources such as personnel and costs.
 Studying possible alternative solutions after meeting with clients, suppliers,
consultants and employees.

 Studying how to make your product better than your competitors’.

After analyzing this data you will have three choices: develop a new system, improve
the current system or leave the system as it is.

2- System Analysis
The end users’ requirements, their expectations for the system, and how it will
perform should be determined and documented. A feasibility study will be
made for the project as well, involving determining whether it is organizationally,
economically, socially and technologically feasible. It is very important to maintain a
strong level of communication with the clients to make sure you have a clear vision of
the finished product and its function.

3- System Design
The design phase comes after a good understanding of the customer’s requirements.
This phase defines the elements of the system: the components, the security level,
modules, architecture, and the different interfaces and types of data that go through
the system.

A general system design can be done with a pen and a piece of paper to determine
how the system will look and how it will function. A detailed and expanded system
design is then produced, and it will meet all functional and technical requirements,
logically and physically.

4- Implementation and Deployment


This phase comes after a complete understanding of system requirements and
specifications; it is the actual construction process, following a complete and
illustrated design for the requested system.

In the Software Development Life Cycle, the actual code is written here, and if the
system contains hardware, then the implementation phase will contain configuration
and fine-tuning for the hardware to meet certain requirements and functions.

In this phase, the system is ready to be deployed and installed on the customer’s
premises, ready to go live and become productive. Training may be required for
end users to make sure they know how to use the system and become familiar with it.
The implementation phase may take a long time, depending on the complexity
of the system and the solution it presents.

5- System Testing and Integration


Bringing different components and subsystems together creates the whole
integrated system; the system is then introduced to different inputs to obtain and
analyze its outputs, behavior and the way it functions. Testing is becoming more
and more important to ensure the customer’s satisfaction, and it requires no knowledge
of coding, hardware configuration or design.

Testing can be performed by real users or by a team of specialized personnel; it can
also be systematic and automated to ensure that the actual outcomes are compared
with, and equal to, the predicted and desired outcomes.

6- System Maintenance
In this phase, periodic maintenance of the system will be carried out to make sure
that the system does not become obsolete. This includes replacing old hardware
and continuously evaluating the system’s performance. It also includes providing the
latest updates for certain components to make sure the system meets the right
standards and uses the latest technologies to face current security threats.

These are the main six phases of the System Development Life Cycle, and it’s an
iterative process for each project. It’s important to mention that an excellent level of
communication should be maintained with the customer, and prototypes are
very important and helpful when it comes to meeting the requirements. By building
the system in short iterations, we can confirm that we are meeting the customer’s
requirements before we build the whole system.

Many models of the system development life cycle came from the idea of saving
effort, money and time, in addition to minimizing the risk of not meeting the
customer’s requirements at the end of the project; some of these models are the
SDLC Iterative Model and the SDLC Agile Model.

Continuous improvement and fixing of the system is essential.

How does cloud computing work


Sharing and Storing Data. Cloud computing, in turn, refers to sharing resources,
software, and information via a network, in this case the Internet. The information is
stored on physical servers maintained and controlled by a cloud
computing provider, such as Apple in the case of iCloud.


Are you wondering about how cloud computing actually works? I’ll
explain the basic principles behind this technology. Cloud computing
presents an ever-expanding universe that intimidates even the smartest
among us.

Take heart. The journey of a thousand miles begins with a single step.

Even if you’ve only just begun to get acquainted with technology and
computing, you’ve almost certainly heard cloud computing being brought
up as a hot topic in conversations. The information below will inform you
about this popular technology and help you understand why it is such a
dominant topic of conversation in our tech-driven society.

Plus, we’ll look at what you can expect from the future of cloud
computing.
What Is Cloud Computing?

Sometimes referred to as “the cloud,” cloud computing is a way for


individuals and companies to access digital resources over the internet,
from just about anywhere in the world that has connectivity. Cloud
computing is typically provided by a third party as a software service, or is
sometimes built in-house using DIY techniques and ad hoc hardware.

Cloud computing usually eliminates or reduces the need for on-site


hardware and/or software. For example, if a person buys a hard drive
backup service that relies on cloud computing, he or she could transfer his
or her files through an internet connection so they’re stored on servers that
may be located in another state, or even in another country. Typically the
files would be stored in multiple places offering added security and
redundancy that is impossible with standard hardware solutions.
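As one hedged illustration of that kind of transfer, the sketch below uses the boto3 library for Amazon S3 (the bucket name, file name and credentials setup are assumptions for the example; any other provider’s SDK would follow the same pattern): a local file is copied to remote object storage and then listed back.

```python
import boto3

# Assumes AWS credentials are already configured (e.g. via environment variables).
s3 = boto3.client("s3")

BUCKET = "example-backup-bucket"          # hypothetical bucket, must already exist
LOCAL_FILE = "quarterly_report.xlsx"      # hypothetical local file

# Transfer the file over the internet to servers run by the cloud provider.
s3.upload_file(LOCAL_FILE, BUCKET, "backups/quarterly_report.xlsx")

# Confirm the copy is now stored remotely.
response = s3.list_objects_v2(Bucket=BUCKET, Prefix="backups/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```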

Cloud computing offers the potential to vastly increase available resources;
indeed, some people refer to cloud computing as “IT outsourcing.” The
concept of outsourcing is particularly common in the customer service
industry because companies outsource their call center duties to
representatives in other places when they aren’t able to find suitable
customer service agents locally.

Similarly, if you don’t have a desired type of software, or an on-site server


large enough to handle the needs of your company, there’s a good chance
cloud computing could fill that role.
Common Examples of Cloud Computing Technology

Cloud computing may seem like a foreign concept, but you probably use it
every day without even realizing it. Here are some familiar tasks that are
made possible through cloud computing:

 Checking your email from anywhere in the world by logging onto a


cloud-based webmail client.
 Saving a document in an online cloud storage account and later
accessing it at work, even though the original file resides on your
home computer.
 Collaborating in real-time on a shared online spreadsheet with
colleagues that are working from different office locations.
 Being able to rent software applications and save the documents you
create online, rather than purchasing the physical software disks and
having to download the contents to your hard drive. This example of
cloud technology is especially useful considering how quickly some
software becomes obsolete. Rather than making a one-time purchase
of a physical disc, a user could pay a monthly access fee for the
service, and then receive alerts whenever it’s time to download the
latest version of the software.
Characteristics of Cloud Technology

There are several factors that set cloud computing technology apart from
other options, and which make it especially attractive for business use. For
starters, cloud computing technology provides a managed service so you
can just focus on whatever task you’re doing that’s supported by the
service.

When using your “local” version of Microsoft Word, you have to go into
the program’s preferences and specify you want versions of your files to
be periodically saved. Once you’ve done that, you can breathe easy
knowing that, if you have a sudden power outage or other crisis that results
in lost work, you’ll at least have a version of your file that was saved
within the last few minutes. Even then, there’s always the chance your
hard drive might crash, causing you to lose your work, despite taking the
time to tweak settings so your versions are automatically saved.

However, when using Google Drive, which has a cloud-based word


processor, everything you type is automatically saved in the cloud every
few seconds. There’s no need to fiddle with settings to make sure work
gets saved, or to designate a folder on your computer to store the saved
content. The managed nature of cloud-based services like Google Drive
allows users to simply enjoy the benefits of the technology they’re using
and feel confident the service provider will take care of things like file
saving and storage.

Many cloud computing services are available on-demand and are quite
scalable. If your needs vary from one month to the next, it’s likely you can
simply pay more or less depending on how your usage changes.
Traditionally, there was always the risk of buying a pricey computer
network and realizing it was larger than you needed, or perhaps
discovering that the setup you have is much too small for what you’re
trying to do. Cloud computing makes these scenarios less likely because
you may subscribe to most cloud computing services without getting
locked into lengthy contracts.

Cloud computing makes its respective services available publicly or


privately, too. A cloud-based email account is one example of a public
cloud computing service. However, many companies use virtual private
networks (VPNs) to access secure private clouds, such as those that are
only accessible to people who work at a particular company or department.
Pros and Cons of Cloud Computing Technology

Like any other type of technology, cloud computing has both good and bad
attributes. Although we touched on a few advantages in the previous
section, let’s go into more depth about the benefits of cloud technology,
and then examine the potential downsides.

A reduced need for on-site IT staff: When choosing a service provider


for your cloud computing needs, you’ll probably notice how most of them
guarantee a very high level of consistent uptime. For example, a company
may guarantee trouble-free service 365 days of the year and 99.9 percent
of the time, and if it fails to meet those goals, you won’t pay for service. If
you pick a provider that promises to be very reliable, you won’t be dependent
on on-site IT professionals for troubleshooting.

Cost-effectiveness: As mentioned above, it’s usually possible to buy only


the cloud services you need, and have the option of scaling up later when
necessary. That means you don’t have to make huge investments in
physical equipment that may break down, get stolen, or age out over time.

Fewer maintenance concerns: When dealing with physical computer


networks, software, and hardware, there are a lot of maintenance
needs. You must dedicate resources for regularly optimizing processes
that are working. Downloading new versions of software, installing them
on computers and even running virus scans are all things that absorb
valuable time and draw your attention away from other critical
responsibilities.

Cloud computing usually allows you to log into a well-maintained online


interface and access the latest versions of applications and content —
without having to download anything that needs to be checked for viruses.

Service is unavailable when the internet goes down: As mentioned


above, most of today’s top providers of cloud-based technology are very
reliable and can promise an exceptionally high percentage of uptime
(almost unbelievably high). However, problems can occur if you’re solely
reliant on the internet to access your files, and the internet connection in
your workplace or home suddenly malfunctions.

If you’re using content in the cloud exclusively to run your business,


operations will grind to a halt until your internet connection is restored.
Potential migration issues: If you start using one cloud computing
service and then want to transfer your files over to a different provider,
that process may prove much more complicated than expected. Although
progress is occurring to make the task easier, there are still substantial
incompatibility issues that may make moving your files between providers
painful — at best.

Reduced customer control: Because cloud computing offers a managed


service, that means customers give up some control to use what’s offered.
That’s especially true in terms of what’s happening in the background.
Many cloud computing service providers don’t provide details about their
infrastructures, which may be frustrating to customers that prefer to handle
administration needs on their own.
Predictions About the Future of Cloud Computing

Cloud computing simply follows everything else in a shift into a paperless
and wireless world. For example, fundraising has gone from mailed letters
to fully online programs like Kickstarter or FirstGiving where funds are
transferred instantly. When it comes to music, listeners are no longer
keeping their music on iPods or hard drives, they’re streaming
from Spotify or Pandora. Users are just now moving other data from their
computers to the cloud as it moves from a fad to a trend to eventually a
standard.

Things move very quickly in the cloud computing world. In early 2006,
Amazon became the first provider of a public cloud computing service.
Now, a decade later, the online retailer is still a massive force in the cloud
computing industry, but is no longer the lone entity.

Let’s look at some of the things analysts think are likely to happen as the
cloud computing industry continues to grow and evolve, according to the
Wikibon 2015 Future of Cloud Computing survey:

IT companies will continue to favor public or hybrid cloud


services: Over the past five years, IT companies have contributed to a 43.3
percent increase in public cloud usage. Furthermore, hybrid cloud services
(which use both public and private clouds throughout single organizations)
saw a 19.2 percent growth. However, the use of private services fell by
almost 50 percent, and there’s no sign of them gaining significant
momentum anytime soon.
Cloud computing providers must deliver products that are innovative,
yet secure: Because the cloud computing industry has become
increasingly popular, many well-known companies are launching
improved or entirely new cloud services. Recently, Google has asserted
its readiness to become a respected entity that provides enterprise-level
cloud services.

In order to be competitive, providers of cloud-based services must prove


that their technology is the most current and robust available, but also that
everything’s being delivered and stored in an incredibly secure way. When
answering a question in the Wikibon survey to identify their top concerns,
63 percent of respondents said security was first on the list.

Cloud technology will move beyond computers: Although we’ve


focused on computers, analysts think cloud technology will soon be widely
thought of outside the computing world. Use of the technology is already
occurring in other industries, but still on a relatively small scale.

For example, some automotive companies have utilized cloud technology


to deliver and share vehicle data and apps, while cloud-related
advancements in the travel industry allow Lufthansa Airlines passengers to
choose media at home before they leave, and then access that stored
media/data during their flight for a personalized entertainment experience.
Passengers have more options for controlling how they use time on an
otherwise restricted journey.

In healthcare, practitioners are relying on cloud technology to share CAT


scans and MRI images with colleagues, saving patients from unnecessary
tests and radiation exposure. This relatively simple advance allows doctor
and patient to benefit from multiple medical opinions about specific
conditions. This data may become useful in tracking treatment and success
ratios leading to new standards and saving time, money, and lives.

Customers will prefer to buy cloud-based services directly from


providers: Until recently, individuals or companies that wished to take
advantage of cloud computing options usually bought them from outside
vendors, rather than going directly to the providers. However, trends show
an increasing preference for straightforward, transparent transactions
between service providers and their customers.
The reduction of influence from third-party entities gives customers easy
access to information about pricing while simultaneously allowing
providers access to feedback about how their customers use their
respective products.

Technology With Staying Power

Now you have a general overview of how cloud computing works, why it
has become a desirable type of service and what the future may hold for
this technology. The next step is acquiring a greater technical
understanding of specific providers like AWS, Microsoft Azure, or Google
Cloud Platform.
