FCET MCA Unit 1 notes

MASTER OF COMPUTER APPLICATIONS

KCA 101 Fundamentals of Computers and Emerging Technologies

Unit 1

What is a Computer :-
A computer is an electronic machine that takes an input and processes it to produce the desired output. Every computer is a combination of hardware and software. The physical components of a computer that can be seen and touched form the hardware; for example, the CPU, monitor, keyboard, printer, etc., are hardware or peripheral devices. The input to a computer is given in the form of instructions. The set of instructions that we give to the computer to perform a particular task constitutes a program. Many such programs together form the software for the computer. Operating systems, antivirus programs, MS Office and computer games are all software.

Working of Computer:-
To function properly, the computer needs both hardware and software. Hardware
consists of the mechanical and electronic devices, which we can see and touch. The
software consists of programs, the operating system and the data that reside in
the memory and storage devices.
The working of a computer can be understood from the block diagram shown in the figure. It can be broadly categorized into the following four functions or steps:
(i) Receive input – accept data/information from the user through various input devices like the keyboard, mouse, scanner, etc.
(ii) Process information – perform arithmetic or logical operations on the data/information.
(iii) Store information – store the information in storage devices like the hard disk, CD, pen drive, etc.
(iv) Produce output – communicate information to the user through any of the available output devices like the monitor, printer, etc.
The hardware components of the computer each specialize in one of these functions. Computer hardware falls into two categories: processing hardware and peripheral devices. The processing hardware consists of the Central Processing Unit (CPU), and as its name implies, it is where the data processing is done. Peripheral devices allow people to interact with the CPU. Together, they make it possible to use the computer for a variety of tasks.
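The four steps above can be sketched in a few lines of Python. This is only a toy illustration (the function name `computer_cycle` and the dict standing in for a storage device are invented for the example), not how the hardware actually works:

```python
def computer_cycle(raw_input, storage):
    """Simulate the four steps on one piece of data."""
    # (i) Receive input: a string, as if typed on a keyboard
    data = int(raw_input)
    # (ii) Process information: an arithmetic operation on the data
    result = data * data
    # (iii) Store information: a dict stands in for a storage device
    storage["result"] = result
    # (iv) Produce output: communicate the result to the user
    return f"The square of {data} is {result}"

disk = {}
print(computer_cycle("12", disk))   # The square of 12 is 144
```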

GENERATIONS OF COMPUTERS

First Generation - 1940-1956:


Vacuum Tubes:
• Used vacuum tubes for circuitry, magnetic drums for memory, and were often
enormous, taking up entire rooms.
• Very expensive, consumed a great deal of electricity and generated a lot of heat, which was often the cause of malfunctions.
• Relied on machine language to perform operations, could solve one problem at a
time.
• Input was based on punched cards and paper tape, and output was displayed on
printouts.
• UNIVAC and ENIAC computers are examples of first-generation computing
devices.

Second Generation - 1956-1963:


Transistors:
• Transistors replaced vacuum tubes allowing computers to become smaller, faster,
cheaper, more energy-efficient and more reliable than their first-generation
predecessors.
• Still relied on punched cards for input and printouts for output.
• Second-generation computers moved from cryptic binary machine language to
symbolic, or assembly, languages, which allowed programmers to specify
instructions in words.
• High-level programming languages like COBOL and FORTRAN were used.

Third Generation - 1964-1971:


Integrated Circuits:
• Integrated circuits were used: transistors were miniaturized and placed on silicon chips (semiconductors), which increased the speed and efficiency of computers.
• Instead of punched cards and printouts, users interacted through keyboards and
monitors and interfaced with an operating system, which allowed the device to run
many different applications at one time with a central program that monitored the
memory.
• Computers for the first time became accessible to a mass audience because they
were smaller and cheaper than their predecessors.

Fourth Generation - 1971-1995:


Microprocessors
• Microprocessors were used; what in the first generation filled an entire room could now fit in the palm of the hand.
• In 1981 IBM introduced its first computer for the home user, and in 1984 Apple
introduced the Macintosh.
• As these small computers became more powerful, they could be linked together
to form networks, which eventually led to the development of the Internet.
• Fourth generation computers also saw the development of GUIs, the mouse and
Hand held devices.

Fifth Generation - 1995 and Beyond:


Artificial Intelligence
• Fifth generation computing devices, based on artificial intelligence, are still in
development, though there are some applications, such as voice recognition, that
are being used today.
• The use of parallel processing and superconductors is helping to make artificial intelligence a reality.
• Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come.
• The goal of fifth-generation computing is to develop devices that respond to
natural language input and are capable of learning and self-organization.

Hardware and Software of a Computer
The term hardware refers to the physical components of your computer, such as the system unit, mouse, keyboard, monitor, etc.
Software is the set of instructions that makes the computer work. Software is held either on your computer's hard disk, CD-ROM, DVD or on a diskette (floppy disk) and is loaded (i.e. copied) from the disk into the computer's RAM (Random Access Memory) as and when required.

TYPES OF COMPUTERS
Computers are categorized on the basis of size, cost and performance. Generally,
the larger the system, the greater is its processing speed, storage capacity, cost
and ability to handle large number of devices. The various types of computers are:
Microcomputers :-
Systems on the lower end of the size scale are microcomputers. They may be tiny special-purpose devices dedicated to carrying out a single task, such as the one inside a camera.
Personal Computers:-
The most popular form of computer in use today is the Personal Computer, generally known as the PC. The PC can be used for various applications. It can be defined as a single-user-oriented, general-purpose microcomputer. It can perform a diverse range of functions, from keeping track of household accounts to keeping records of the stores of a large manufacturing company.
Mini Computers:-
Mini computers are small, general-purpose computers. They can vary in size from a small desktop model to the size of a small filing cabinet. A typical mini system is more expensive than a PC and surpasses the PC in storage capacity and speed. While most PCs are oriented towards single users, mini systems are designed to handle the needs of multiple users, i.e. more than one person can work on a mini at the same time.
Mainframe Computers :-
A mainframe is another form of computer system that is generally more powerful than a typical mini. Mainframes themselves may vary widely in cost and capability. They are used in large organizations for large-scale jobs. However, there is an overlap between the expensive minis and small mainframe models in terms of cost and capability.
Super Computers:-
At the upper end of the size and capability scale are the supercomputers. These systems are the largest, fastest and most expensive computers in the world. They are owned by large organizations and are used for complex scientific applications.
Components of a Computer :-
A computer has four main components:
• Input Devices: These are the devices that are used to accept data and instructions from the user. Keyboard, mouse, scanner, etc., are examples of input devices.
• Central Processing Unit (CPU): This is known as the ‘Brain of the Computer’ as it controls the complete working of the computer.
• Memory: The data and instructions are stored in this component of the computer. Hard disk, DVD, pen drive, etc., are examples of memory/storage devices.
• Output Devices: These are the devices that are used to display the desired result or information. Monitor, printer, etc., are examples of output devices.

Input Device

An input device is used to get data or instructions from the user. This data is then passed on to the CPU for processing so as to produce the desired result. Although the keyboard and mouse are the two most common input devices, other devices such as Optical Character Recognition (OCR), Magnetic Ink Character Recognition (MICR) and mark sense readers are also used as per our requirements.
Keyboard
The keyboard is very much like a standard typewriter with a few additional keys.
Generally, we find a QWERTY keyboard with 104 keys on it. The additional keys
may be included in modern multimedia keyboards.
Mouse
The mouse is another very commonly used input device. It is basically a pointing
device that controls the movement of the cursor or pointer on a display screen. It
is a small object that you can roll along a hard and flat surface. As you move the
mouse, the pointer on the display screen moves in the same direction. A mouse may
contain one, two or three buttons which have different functions depending on
what program is running.
Scanner
A scanner is an input device that can read text or an illustration printed on paper and translate the information into a form that the computer can use. A scanner
works by digitizing an image - dividing it into a grid of boxes and representing each
box with either a zero or a one, depending on whether the box is filled in. The
resulting matrix of bits, called a bit map, can then be stored in a file, displayed on
a screen and manipulated by programs. Optical scanners do not distinguish text
from illustrations; they represent all images as bit maps. Therefore, you cannot
directly edit text that has been scanned. To edit text read by an optical scanner,
you need an Optical Character Recognition (OCR) system to translate the image
into ASCII characters.
Optical Character Recognition (OCR)
An Optical Character Recognition (OCR) is a device that is used for reading text
from paper and translating the images into a form that the computer can
understand. An OCR is used to convert books, magazines and other such printed
information into digital form.
Magnetic Ink Character Recognition (MICR)
An MICR can identify characters printed with a special magnetic ink. This device
particularly finds applications in banking industry. The cheques used for
transactions have a unique MICR code that can be scanned by an MICR device
Optical Mark Recognition (OMR)
Optical Mark Recognition, also called mark sense reader, is a technology where an
OMR device senses the presence or absence of a mark, such as pencil mark. OMR
is widely used for assessing the objective examinations involving multiple choice
questions.
Bar Code Reader
A bar code reader is an input device that is generally seen in super markets,
bookshops, libraries etc., A bar-code reader is a photoelectric scanner that reads
the bar codes (vertical striped black and white marks), printed on product
containers. The bar code reader scans the bar code of the product and checks the
description and the latest price of the product.
Digitizing Tablet
This is an input device that enables you to enter drawings and sketches into a
computer. A digitizing tablet consists of an electronic tablet and a cursor or pen. A
cursor (also called a puck) is similar to a mouse, except that it has a window with
cross hairs for pinpoint placement, and it can have as many as 16 buttons. A pen
(also called a stylus) looks like a simple ballpoint pen but uses an electronic head
instead of ink. The tablet contains electronic field that enables it to detect
movement of the cursor or pen and translate the movements into digital signals
that it sends to the computer. Digitizing tablets are also called digitizers, graphic
tablets, touch tablets or simply tablets. Now-a-days most tablets allow you to
simply use your finger to choose items or open or select apps by simply tapping
them.
Light Pen
A light pen is an input device that utilizes a light-sensitive detector to select objects on a display screen.
Speech Input Devices
Speech or voice input devices convert a person’s speech into digital form. These input devices, when combined with appropriate software, form voice recognition systems. These systems enable users to operate microcomputers using voice commands.

CENTRAL PROCESSING UNIT (CPU)

As mentioned earlier, the CPU is the ‘Brain of your computer’. This is because it processes or executes the instructions given to the computer. Any type of instruction given to the computer using any of the input devices has to be sent to the CPU for execution. In a microcomputer, the CPU is based on a single chip called the microprocessor. A typical CPU has the following components:
Control Unit (CU)
The control unit manages the instructions given to the computer. It coordinates the activities of all the other units in the system by instructing the rest of the components of the computer on how to carry out a program’s instructions. It reads and interprets instructions from memory and transforms them into a series of signals to be executed or stored. It also directs the movement of these electronic signals between memory and the ALU, or between the CPU and input/output devices. Hence it controls the transfer of data and information between various units. The user’s program provides the basic control instructions. Conceptually, the control unit fetches instructions from the memory, decodes them and directs the various units to perform the specified functions.
Arithmetic Logic Unit (ALU)
The Arithmetic Logic Unit or ALU performs two types of operations - arithmetic and logical. Arithmetic operations are the fundamental mathematical operations, consisting of addition, subtraction, multiplication and shifting operations. Logical operations consist of Boolean comparisons such as AND, OR and NOT.
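As a rough sketch, Python's built-in operators can mimic both kinds of ALU operations. This is only illustrative - a real ALU operates on fixed-width binary registers:

```python
a, b = 0b1100, 0b1010   # 12 and 10 written in binary

# Arithmetic operations
print(a + b)     # 22  (addition)
print(a - b)     # 2   (subtraction)
print(a * b)     # 120 (multiplication)
print(a << 1)    # 24  (shifting left one place multiplies by 2)

# Logical (Boolean) operations, applied bit by bit
print(bin(a & b))        # 0b1000 (AND)
print(bin(a | b))        # 0b1110 (OR)
print(bin(~a & 0b1111))  # 0b11   (NOT, masked to 4 bits)
```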
Memory Registers
The CPU processes data and instruction with high speed. There is also movement
of data between various units of the computer. It is necessary to transfer the
processed data with high speed. So the computer uses a number of special memory
units called registers. A memory register is a sort of special storage area that
holds the data and instructions temporarily during processing. Since this is
internally located in the CPU, the processing time is very short. They often hold data for less than a millisecond. This high-speed storage area makes processing more efficient. To locate the characters of data or instructions in the main
memory, the computer stores them in locations known as addresses. A unique
number designates each address. Addresses can be compared to post office
mailboxes. Their numbers remain the same, but contents continuously change. The
contents of the memory are held only temporarily, that is, it is stored only as long
as the microcomputer is turned on. When you turn the machine off, the contents
are lost. The capacity of the memory to hold data and program instructions varies
in different computers. The original IBM PC could hold approximately several
thousand characters of data or instructions only. But modern microcomputers can
hold millions or even billions of characters in their memory.

How do the CPU and Memory work together?
The working of the CPU and memory, and the various steps involved in multiplying two numbers, are shown in the figure. Let us take an example and understand how the CPU and memory work together to execute a given instruction. The control unit recognizes that the program has been loaded into the memory. It begins to execute the first step in the program.
1. The program tells the user, “Enter 1st Number”.
2. The user types the number 100 on the keyboard. An electronic signal is sent to
the CPU.
3. The control unit recognizes this signal and routes the signal to an address in
memory - say address 7.
4. After completing the above instruction, the next instruction tells the user,
“Enter 2nd Number.”
5. The user types the number 4 on the keyboard. An electronic signal is sent to
the CPU.
6. The control unit recognizes this signal and routes it to memory address 8.
7. The next program instruction is executed - “Multiply 1st and 2nd Numbers”.
8. To execute this instruction, the control unit informs the ALU that two numbers
are coming and the ALU has to multiply them. The control unit next sends to the
ALU a copy of the contents of address 7 (100) and address 8(4).
9. ALU performs the multiplication: 100 × 4 = 400
10. The control unit sends a copy of the multiplied result (400) back to memory to store it in address 9.
11. The next program instruction is executed: “Print the Result.”
12. To execute this instruction, the control unit sends the contents of the address
9 (400) to the monitor.
13. The monitor displays the value 400.
14. The final instruction is executed: “End”. The program ends.
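The steps above can be sketched as a toy simulation, with a Python list standing in for the memory addresses and a function standing in for the ALU (the names and the 16-cell memory size are invented for this illustration):

```python
memory = [None] * 16   # a tiny "main memory", indexed by address

def alu_multiply(x, y):
    # The ALU carries out the arithmetic operation
    return x * y

memory[7] = 100   # steps 1-3: first input routed to address 7
memory[8] = 4     # steps 4-6: second input routed to address 8
# Steps 7-10: the control unit copies the contents of addresses 7 and 8
# to the ALU and stores the product at address 9
memory[9] = alu_multiply(memory[7], memory[8])
# Steps 11-14: the result is sent to the output device
print("Result:", memory[9])   # Result: 400
```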

OUTPUT DEVICES
Output devices receive information from the CPU and present it to the user in the
desired form. Some of the output devices are monitor, printers, plotters, etc. Let
us now learn each of them in detail.
Monitor
The monitor is just like a television screen and it is used to display data and
information. When some data or instruction is being keyed in, the monitor displays
the characters being typed. The monitors are available in various sizes. They may
also differ for different types of computers. The standard size is 24 lines by 80
characters. The output displayed on the monitor is called soft copy. There are two
types of monitors – CRT and TFT-LCD monitors.
CRT Monitor :
The CRT (Cathode Ray Tube) monitor is a relatively older type of monitor that is rarely used today. These monitors were bigger and bulkier and hence took up a lot of desk space. They also consumed a lot of electricity.
TFT-LCD monitors:
TFT stands for Thin Film Transistor and LCD stands for Liquid Crystal Display.
These monitors are lighter and occupy less space. They are also commonly referred
to as flat screen displays and consume much less electricity than CRT monitors.
Now-a-days even LED (Light Emitting Diode) monitors are being used

Printer
You must have used printer for taking printouts. Printer is a device that produces
the output on paper. Such an output is also known as hard copy and it may be in the
form of text or graphics. There are many different types of printers. These
printers vary in terms of size, speed and quality of output. Some of them are
discussed below:
(i) Dot Matrix Printer : It is a type of printer that uses a print head to print
characters on paper. The print head moves in back and forth or up and down
motion on the page. The print head strikes on an ink soaked cloth ribbon
that is laid against a paper. The characters formed from dots are thus
printed on the paper.
(ii) Ink-jet Printer: Ink-jet printers work by spraying ionized ink on a sheet of
paper. Magnetized plates in the ink’s path direct the ink onto the paper in
the desired shapes. Ink-jet printers are capable of producing a better print
than the dot matrix printers. A typical ink-jet printer provides a resolution
of 300 dots per inch, although some newer models offer higher resolutions.
These are also known as Line printers as the output is produced line by line.
In general, the price of ink-jet printers is lower than that of laser printers.
However, they are also considerably slower. Another drawback of ink-jet
printers is that they require a special type of ink that is apt to smudge on
inexpensive copier paper. Since ink-jet printers require smaller mechanical
parts than laser printers, they are especially popular as portable printers.
In addition, colour ink-jet printers provide an inexpensive way to print full-
colour documents.
(iii) Laser Printer: It works on the principle of a photocopier. It utilizes a laser
beam to produce an image on a drum. The light of the laser alters the
electrical charge on the drum wherever it hits. The drum is then rolled
through a reservoir of toner, which is picked up by the charged portions of
the drum. Finally, the toner is transferred to the paper through a
combination of heat and pressure. Since the entire page is transmitted to a
drum before the toner is applied, laser printers are sometimes called page
printers. In addition to text, laser printers are very adept at printing
graphics. However, you need significant amounts of memory in the printer to
print high-resolution graphics. The speed of laser printers ranges from
about 4 to 20 pages of text per minute (ppm). A typical rate of 6 ppm is
equivalent to about 40 characters per second (cps).
(iv) Thermal printer: Thermal printers are printers that produce images by
pushing electrically heated pins against special heat-sensitive paper.
Thermal printers are inexpensive and are used in most calculators and many
fax machines. They produce low-quality print, and the paper tends to curl
and fade after a few weeks or months.
Plotter
A Plotter is a device that is used to draw charts, graphs, maps etc., with two or
more automated pens. Multi-colour plotters use different-coloured pens to
produce a multi-coloured output
Different types of plotters are available in the market. A drum plotter has a
paper wrapped around a moving drum and the pens move on the paper to print the
output. A flatbed plotter has a flat surface on which the paper is placed and the
pens move to draw the output. An electrostatic plotter has a negatively charged
paper on which the drawing is made using a positively charged toner. Plotters are
considerably more expensive than printers. These were the first of the devices
that could print full sized engineering drawings with colour. They are frequently
used for Computer Aided Engineering (CAE) applications such as Computer Aided
Design (CAD) and Computer Aided Manufacturing (CAM).

Speakers
The speakers are used to produce audio output. The computers have sound cards
that enable the computer to produce audio output through the speakers.
Now-a-days, 3D audio is a technique for giving more depth to traditional stereo
sound. Typically 3D sound or 3D audio is produced by placing a device in a room
with stereo speakers. The device dynamically analyses the sound coming from the
speakers and sends feedback to the sound system so that it can readjust the
sound to give the impression that the speakers are further apart. 3D audio devices
are particularly popular for improving computer audio where the speakers tend to
be small and close together. There are a number of 3D audio devices that can be
attached to a computer’s sound card.
Computer Languages

The user of a computer must be able to communicate with it. That means the user must be able to give the computer commands and understand the output that the computer generates. This is possible due to the invention of computer languages.
Basically, there are two main categories of computer languages, namely Low Level
Language and High Level Language. Let us take a brief look at both these types of
computer languages.
Low Level Languages
Low level languages are the basic computer instructions, better known as machine code. A computer cannot understand any instruction given to it by the user in English or any other high level language. Low level languages, however, are directly understandable by the machine.
The main function of low level languages is to interact with the hardware of the
computer. They help in operating, syncing and managing all the hardware and
system components of the computer. They handle all the instructions which form
the architecture of the hardware systems.
Machine Language
This is the most basic low level language. It was first developed to interact with the first-generation computers. It is written in binary code or machine code, which means it comprises only two digits - 1 and 0.
Assembly Language
This is the second generation programming language. It is a development on the
machine language, where instead of using only numbers, we use English words,
names, and symbols. It is the most basic computer language necessary for any
processor.
High Level Language
When we talk about high level languages, these are programming languages. Some
prominent examples are PASCAL, FORTRAN, C++ etc.
The important feature about such high level languages is that they allow the
programmer to write programs for all types of computers and systems. Every
instruction in high level language is converted to machine language for the
computer to comprehend.
Scripting Languages
Scripting languages or scripts are essentially programming languages. These
languages employ a high level construct which allows it to interpret and execute
one command at a time.
Scripting languages are easier to learn and execute than compiled languages. Some
examples are AppleScript, JavaScript, Perl etc.
Object-Oriented Languages
These are high level languages that focus on the ‘objects’ rather than the ‘actions’.
To accomplish this, the focus will be on data than logic.
The reasoning behind this is that the programmer really cares about the objects they wish to manipulate rather than the logic needed to manipulate them. Some examples include Java, C#, C++, Python, Swift etc.
Procedural Programming Language
This is a type of programming language that uses well structured steps and procedures to compose a complete program. It has a systematic order of functions and commands to complete a task or program. FORTRAN, ALGOL, BASIC and COBOL are some examples.
Language Processors
A language processor, or language translator, is a computer program that converts source code from one programming language into another language, typically machine code. Language processors also report errors found during translation.
The language processors can be any of the following three types:
Compiler
The language processor that reads the complete source program written in high-
level language as a whole in one go and translates it into an equivalent program in
machine language is called a Compiler. Example: C, C++, C#.
In a compiler, the source code is translated to object code successfully if it is
free of errors. The compiler specifies the errors at the end of the compilation
with line numbers when there are any errors in the source code. The errors must be removed before the compiler can successfully recompile the source code. Once compiled, the object program can be executed any number of times without translating it again.
Assembler
The Assembler is used to translate the program written in Assembly language
into machine code. The source program is an input of an assembler that contains
assembly language instructions. The output generated by the assembler is the
object code or machine code understandable by the computer. The assembler was essentially the first interface that enabled humans to communicate with the machine; it fills the gap between human and machine so that they can communicate with each other. Code written in assembly language consists of mnemonics (instructions) like ADD, MUL, SUB, DIV, MOV and so on, and the assembler converts these mnemonics into binary code. These mnemonics also depend upon the architecture of the machine.
Interpreter
A language processor that translates a single statement of the source program into machine code and executes it immediately, before moving on to the next line, is called an interpreter. If there is an error in the statement, the interpreter terminates its translating process at that statement and displays an error message. The interpreter moves on to the next line for execution only after the removal of the error. An interpreter directly executes instructions written in a programming or scripting language without previously converting them to object code or machine code. An interpreter translates one line at a time and then executes it.
Examples: Perl, Python and MATLAB.
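This line-by-line behaviour can be demonstrated with a toy "program" of Python statements: the statement before the error still runs and takes effect, and execution stops at the faulty line. The list `program` and the driver loop are invented for this sketch:

```python
program = [
    "print('line 1 ran')",
    "print(undefined_name)",      # this statement contains an error
    "print('line 3 never runs')",
]

completed = []
for lineno, statement in enumerate(program, start=1):
    try:
        exec(statement)           # translate and execute one statement
        completed.append(lineno)
    except NameError as err:
        print(f"error on line {lineno}: {err}")
        break                     # the interpreter stops at the error

print("lines completed:", completed)   # lines completed: [1]
```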
Difference Between Compiler and Interpreter

Compiler:
• A compiler converts the entire source code of a programming language into executable machine code for a CPU.
• The compiler takes a large amount of time to analyze the entire source code, but the overall execution time of the program is comparatively faster.
• The compiler generates error messages only after scanning the whole program, so debugging is comparatively hard, as the error can be present anywhere in the program.
• The compiler requires a lot of memory for generating object code.
• Generates intermediate object code.
• For security purposes, the compiler is more useful.
• Examples: C, C++, C#.

Interpreter:
• An interpreter takes a source program and runs it line by line, translating each line as it comes to it.
• The interpreter takes less time to analyze the source code, but the overall execution time of the program is slower.
• Debugging is easier, as the interpreter continues translating the program until the error is met.
• It requires less memory than a compiler because no object code is generated.
• No intermediate object code is generated.
• The interpreter is a little more vulnerable in case of security.
• Examples: Python, Perl, JavaScript, Ruby.

Definition of Algorithm
The word Algorithm means “a set of finite rules or instructions to be followed in calculations or other problem-solving operations”, or “a procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations”.

Use of Algorithms:
Algorithms play a crucial role in various fields and have many applications. Some of the key areas where algorithms are used include:
1. Computer Science: Algorithms form the basis of computer programming and
are used to solve problems ranging from simple sorting and searching to
complex tasks such as artificial intelligence and machine learning.
2. Mathematics: Algorithms are used to solve mathematical problems, such as
finding the optimal solution to a system of linear equations or finding the
shortest path in a graph.
3. Operations Research: Algorithms are used to optimize and make decisions in
fields such as transportation, logistics, and resource allocation.
4. Artificial Intelligence: Algorithms are the foundation of artificial intelligence
and machine learning, and are used to develop intelligent systems that can
perform tasks such as image recognition, natural language processing, and
decision-making.
5. Data Science: Algorithms are used to analyze, process, and extract insights
from large amounts of data in fields such as marketing, finance, and
healthcare.
These are just a few examples of the many applications of algorithms. The use of
algorithms is continually expanding as new technologies and fields emerge, making
it a vital component of modern society.
Algorithms can be simple and complex depending on what you want to achieve.
It can be understood by taking the example of cooking a new recipe. To cook a
new recipe, one reads the instructions and steps and executes them one by one, in
the given sequence. The result thus obtained is the new dish is cooked perfectly.
Every time you use your phone, computer, laptop, or calculator you are using
Algorithms. Similarly, algorithms help to do a task in programming to get the
expected output.
The Algorithm designed are language-independent, i.e. they are just plain
instructions that can be implemented in any language, and yet the output will be
the same, as expected.
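To illustrate this language independence, the same plain-English steps can be implemented in any language. Below is a minimal sketch in Python for a simple algorithm, finding the largest number in a list (the function name find_largest is our own choice for illustration):

```python
# Algorithm (language-independent steps):
#   Step 1: Assume the first number is the largest.
#   Step 2: Compare each remaining number with the current largest.
#   Step 3: If a number is bigger, it becomes the new largest.
#   Step 4: After all numbers are checked, output the largest.

def find_largest(numbers):
    largest = numbers[0]           # Step 1
    for n in numbers[1:]:          # Step 2
        if n > largest:            # Step 3
            largest = n
    return largest                 # Step 4

print(find_largest([12, 45, 7, 45, 3]))  # prints 45
```

The same four steps could equally be written in C, Java, or any other language, and the output would be the same.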
What is the need for algorithms?
1. Algorithms are necessary for solving complex problems efficiently and
effectively.
2. They help to automate processes and make them more reliable, faster, and
easier to perform.
3. Algorithms also enable computers to perform tasks that would be difficult or
impossible for humans to do manually.
4. They are used in various fields such as mathematics, computer science,
engineering, finance, and many others to optimize processes, analyze data,
make predictions, and provide solutions to problems.
What are the Characteristics of an Algorithm?
Just as one would not follow arbitrary written instructions to cook a recipe,
but only a standard one, not every set of written instructions for programming
is an algorithm. For a set of instructions to be an algorithm, it must have the
following characteristics:
 Clear and Unambiguous: The algorithm should be unambiguous. Each of its
steps should be clear in all aspects and must lead to only one meaning.
 Well-Defined Inputs: If an algorithm takes inputs, they should be well
defined. An algorithm may take zero or more inputs.
 Well-Defined Outputs: The algorithm must clearly define what output will be
yielded and it should be well-defined as well. It should produce at least 1
output.
 Finiteness: The algorithm must be finite, i.e. it should terminate after a
finite amount of time.
 Feasible: The algorithm must be simple, generic, and practical, such that it
can be executed with the available resources. It must not depend on any
future technology.
 Language Independent: The Algorithm designed must be language-
independent, i.e. it must be just plain instructions that can be implemented in
any language, and yet the output will be the same, as expected.
 Input: An algorithm has zero or more inputs, each taken from a specified set
of values.
 Output: An algorithm produces at least one output that has a specified
relation to the inputs.
 Definiteness: All instructions in an algorithm must be unambiguous, precise,
and easy to interpret. By referring to any of the instructions in an algorithm
one can clearly understand what is to be done. Every fundamental operator in
instruction must be defined without any ambiguity.
 Finiteness: An algorithm must terminate after a finite number of steps in all
test cases. Every instruction which contains a fundamental operator must be
terminated within a finite amount of time. Infinite loops or recursive
functions without base conditions do not possess finiteness.
 Effectiveness: An algorithm must be developed by using very basic, simple,
and feasible operations so that one can trace it out by using just paper and
pencil.
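A classic example that exhibits the characteristics above is Euclid's algorithm for the greatest common divisor (GCD). As a hedged sketch in Python: it has well-defined inputs (two non-negative integers), a well-defined output (their GCD), definiteness (every step is precise), finiteness (the second value strictly decreases, so the loop must terminate), and effectiveness (only basic operations such as comparison and remainder are used):

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    # Finiteness: b strictly decreases each iteration, so the loop ends.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # prints 12
```

One could trace this out with just paper and pencil: (48, 36) becomes (36, 12), then (12, 0), giving 12.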
Advantages of Algorithms:
 It is easy to understand.
 An algorithm is a step-wise representation of a solution to a given problem.
 In an algorithm, the problem is broken down into smaller pieces or steps;
hence, it is easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
 Writing an algorithm takes a long time so it is time-consuming.
 Understanding complex logic through algorithms can be very difficult.
 Branching and looping statements are difficult to show in algorithms.

PseudoCode
A Pseudocode is defined as a step-by-step description of an algorithm.
Pseudocode does not use any programming language in its representation instead
it uses the simple English language text as it is intended for human
understanding rather than machine reading.
Pseudocode is the intermediate state between an idea and its
implementation(code) in a high-level language.
What is the need for Pseudocode?
Pseudocode is an important part of designing an algorithm; it helps the
programmer plan the solution to the problem and helps the reader understand
the approach taken. Pseudocode is an intermediate state between the algorithm
and the program that supports the transition of the algorithm into the
program.
How to write Pseudocode?
Before writing the pseudocode of any algorithm the following points must be kept
in mind.

 At first, establish the main goal or the aim.
 Organize the sequence of tasks and write the pseudocode accordingly.
 Use standard programming structures such as if-else, for, while, and case
the way we use them in programming. Indent the if-else, for, and while
blocks as they are indented in a program; this helps to comprehend the
decision control and execution mechanism, and also improves readability to
a great extent.
 Use appropriate naming conventions. People tend to follow what they see: if
a programmer reads a pseudocode, they will write in the same style, so the
naming must be simple and distinct.
 Reserved commands or keywords must be represented in capital letters.
Example: if you are writing IF…ELSE statements then make sure IF and ELSE
be in capital letters.
 Check whether all the sections of the pseudocode are complete, finite, and
clear to understand and comprehend. The pseudocode should also explain
everything that is going to happen in the actual code.
 Don’t write the pseudocode in a programming language. It is necessary that
the pseudocode is simple and easy to understand even for a layman or client,
minimizing the use of technical terms.
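Following the conventions above (capitalized keywords, indentation, and simple names), a short pseudocode example for finding the largest of three numbers might look like this (the keywords BEGIN, READ, SET, and PRINT are one common convention; exact keywords vary between textbooks):

```
BEGIN
    READ a, b, c
    SET largest = a
    IF b > largest THEN
        SET largest = b
    ENDIF
    IF c > largest THEN
        SET largest = c
    ENDIF
    PRINT largest
END
```

Note how the IF…ENDIF blocks are indented and the reserved words are in capital letters, exactly as the guidelines recommend.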
Difference between Algorithm and Pseudocode

1. Algorithm: An algorithm is used to provide a solution to a particular
   problem in the form of well-defined steps.
   Pseudocode: A pseudocode is a step-by-step description of an algorithm in
   a code-like structure using plain English text.

2. Algorithm: An algorithm uses only simple English words.
   Pseudocode: Pseudocode also uses reserved keywords like if-else, for,
   while, etc.

3. Algorithm: It is a sequence of steps of a solution to a problem.
   Pseudocode: These are "fake codes" (the word pseudo means fake), using a
   code-like structure and plain English text.

4. Algorithm: There are no rules for writing algorithms.
   Pseudocode: There are certain rules for writing pseudocode.

5. Algorithm: Algorithms can be considered pseudocode.
   Pseudocode: Pseudocode cannot be considered an algorithm.

6. Algorithm: It is difficult to understand and interpret.
   Pseudocode: It is easy to understand and interpret.
