Fundamentals of Computing Systems
Fundamentals of Computing Systems introduces the basic building
blocks and operation of computers, covering how they accept input,
process data, store information, and produce output, all through the
interaction of physical hardware and programmed software. A
computer system consists of physical hardware (like the CPU, memory,
and input/output devices) and software (like the operating system and
applications) that work together to turn raw data into meaningful
information for various tasks, from simple calculations to complex
applications.
The Evolution of Computers
The history of computers spans thousands of years, from early counting devices to
the powerful systems we use today. Here's an overview of the key milestones in the
evolution of computers:
1. Early Counting Devices (Pre-Computer Era)

The Abacus (c. 4000 BCE)


The abacus, created by the Chinese, is often regarded as the first
computing device. It consisted of beads strung on rods and was used
to perform simple arithmetic operations like addition and subtraction.
Over time, different versions of the abacus spread across Asia,
becoming an essential tool for calculations.

Napier's Bones (1617)


Invented by John Napier, Napier's Bones were a set of ivory rods
engraved with numbers, designed to assist with multiplication and
division. This invention also introduced the concept of the decimal
point, a crucial development in simplifying calculations.
2. Mechanical Calculators (17th-19th Century)

Pascaline (1642-1644)
French mathematician Blaise Pascal developed the Pascaline, the first mechanical calculator
capable of performing addition and subtraction. It used gears and wheels to calculate, and its
purpose was to help Pascal’s father, a tax collector, with his work.
Stepped Reckoner (1673)
German philosopher and mathematician Gottfried Wilhelm Leibniz improved Pascal's design,
developing the Stepped Reckoner. It was capable of performing addition, subtraction,
multiplication, and division, and it used fluted drums instead of gears.
Difference Engine (1820s)
Charles Babbage, often called the "Father of Modern Computing," designed the Difference
Engine, a mechanical device meant to calculate polynomial functions. Though it was never fully
built during his lifetime, it demonstrated the potential for automatic computation.
Analytical Engine (1830s)
Babbage also developed the Analytical Engine, a more advanced version of the Difference Engine.
It was the first design for a general-purpose mechanical computer. It included a control unit,
memory, and an input/output system using punch cards. Although it was never constructed, its
principles anticipated modern computers.
3. The Rise of Electronic Computing (1930s-1940s)

Tabulating Machine (1890)


Herman Hollerith, an American statistician, invented this machine in 1890. The Tabulating
Machine was a mechanical tabulator based on punch cards, capable of tabulating statistics
and recording or sorting data. It was used for the 1890 U.S. Census. Hollerith founded the
Tabulating Machine Company, which later became International Business Machines (IBM)
in 1924.
Differential Analyzer (1930s)
The Differential Analyzer, introduced in the United States around 1930, was an analog
computing machine invented by Vannevar Bush. It was used to solve differential equations
and could complete about 25 calculations in a few minutes.
Mark I
In 1937, a major change began in the history of computers when Howard Aiken planned to
develop a machine that could perform calculations involving very large numbers. In 1944,
the Mark I computer was built as a partnership between IBM and Harvard. It was one of the
first programmable digital computers, marking a new era in the computer world.
4. The Era of Transistors (1950s-1960s)

Transistor Computers (1950s)


In the 1950s, the invention of the transistor revolutionized
computing. Transistors were smaller, more reliable, and
energy-efficient compared to vacuum tubes. They played a
key role in making computers more compact and affordable.
UNIVAC I (1951)
The Universal Automatic Computer I (UNIVAC I),
developed by Eckert and Mauchly, was the first
commercially successful computer. It was used for scientific
and business applications and demonstrated the potential of
electronic computing.
5. The Rise of Integrated Circuits (1960s-1970s)

Integrated Circuits (1960s)


The introduction of Integrated Circuits (ICs) allowed multiple
transistors to be placed on a single chip, which dramatically reduced
the size and cost of computers while improving their performance.
IBM System/360 (1964)
The IBM System/360 was a family of mainframe computers that
utilized integrated circuits, setting a new standard for computing in
business, government, and academia. It became one of the first
systems to offer compatibility across different machines.
Minicomputers and Microcomputers
Integrated circuits made possible affordable minicomputers like the PDP-8 and PDP-11,
and the later development of the microprocessor shrank computers even further. These
smaller systems paved the way for the personal computer revolution.
6. The Personal Computer Revolution (1970s-1980s)

Apple II (1977)
The Apple II, developed by Steve Jobs and Steve Wozniak, was one of
the first successful personal computers. It used a microprocessor and
could run basic software applications like word processors and games.
IBM PC (1981)
The introduction of the IBM PC in 1981 standardized the personal
computer market, offering a system that could be easily upgraded and
compatible with a wide variety of software. It played a major role in the
spread of personal computing.
The Macintosh (1984)
Apple’s Macintosh introduced the concept of the graphical user
interface (GUI), making computers more user-friendly and accessible to
a broader audience.
7. The Internet and Networking (1990s-Present)

The World Wide Web (1990s)


The invention of the World Wide Web by Tim Berners-
Lee revolutionized the way people used computers. It made information
accessible globally and led to the creation of web browsers
like Netscape Navigator and Internet Explorer.
Cloud Computing (2000s-Present)
Cloud computing allows users to store and access data
remotely via the internet, making it easier to scale computing
resources. Services like Google Drive, Dropbox, and Amazon Web
Services (AWS) transformed how businesses and individuals manage
data.
8. The Modern Day and the Future of Computing

Artificial Intelligence (AI):


AI is rapidly becoming a cornerstone of modern computing. Machine
learning and deep learning algorithms enable computers to make decisions,
recognize patterns, and even understand human language, leading to
advancements in everything from virtual assistants to autonomous vehicles.
Quantum Computing (Emerging):
Quantum computing promises to revolutionize fields like cryptography and
materials science by solving problems that are beyond the reach of classical
computers. Though still in its early stages, quantum computers could one day solve
complex problems exponentially faster than traditional systems.
The Internet of Things (IoT):
The Internet of Things (IoT) connects everyday devices to the internet, allowing them to
collect and share data. From smart homes to wearable tech, IoT devices are
transforming the way we interact with the world around us.
Generations of Computer
The Five Generations of Computers
Generations of Computer
• The computer has evolved from a large-sized simple calculating machine to a smaller
but much more powerful machine.

• The evolution of the computer to its current state is described in terms of generations
of computers.

• Each generation of computer is designed based on a new technological development,
resulting in better, cheaper and smaller computers that are more powerful, faster and
more efficient than their predecessors.
Generations of Computer
• Currently, there are five generations of computer. In
the following subsections, we will discuss the
generations of computer in terms of the technology
used by them (hardware and software), computing
characteristics (speed, i.e., number of instructions
executed per second), physical appearance, and their
applications.

First Generation Computers
(1940-1956)
• The first computers used vacuum tubes (sealed glass tubes containing a near-vacuum that allow
the free passage of electric current) for circuitry and magnetic drums for memory.
• They were often enormous, taking up entire rooms.
• First generation computers relied on machine language.
• They were very expensive to operate and, in addition to using a great deal of electricity, generated
a lot of heat, which was often the cause of malfunctions (defects or breakdowns).
• The UNIVAC and ENIAC computers are examples of first-generation computing devices.

First Generation Computers
Advantages :
• They were the first electronic computing devices
• First devices to hold memory

Disadvantages :
• Too bulky, i.e. large in size
• Vacuum tubes burned out frequently
• They produced a great deal of heat
• Maintenance problems

Second Generation Computers
(1956-1963)
• Transistors replaced vacuum tubes and ushered in the second
generation of computers.
• Second-generation computers moved from cryptic binary machine
language to symbolic (assembly) languages.
• High-level programming languages were also being developed at this
time, such as early versions of COBOL and FORTRAN.
• These were also the first computers that stored their instructions in
their memory.

Second Generation Computers
Advantages :
• Size reduced considerably
• Much faster
• More reliable

Disadvantages :
• They overheated quickly
• Maintenance problems

Third Generation Computers
(1964-1971)
• The development of the integrated circuit was the hallmark of the
third generation of computers.
• Transistors were miniaturized and placed on silicon chips, called
semiconductors.
• Instead of punched cards and printouts, users interacted with third
generation computers through keyboards
and monitors and interfaced with an operating system.
• Allowed the device to run many different applications at one time.

Third Generation Computers
Advantages :
• ICs are very small in size
• Improved performance
• Lower production cost

Disadvantages :
• ICs require sophisticated manufacturing technology

Fourth Generation Computers
(1971-present)
• The microprocessor brought the fourth generation of computers, as
thousands of integrated circuits were built onto a single silicon chip.
• The Intel 4004 chip, developed in 1971, placed all the components of the
computer, from the central processing unit and memory to input/output
controls, on a single chip.
• Fourth generation computers also saw the development of GUIs,
the mouse and handheld devices.

Fifth Generation Computers
(present and beyond)
• Fifth generation computing devices, based on artificial intelligence, are still in
development, though some applications, such as voice recognition, are already in use.
• The use of parallel processing and superconductors is helping to make
artificial intelligence a reality.
• The goal of fifth-generation computing is to develop devices that
respond to natural language input and are capable of learning and
self-organization.

Computer hardware refers to the physical components of a computer that you
can see and touch. These components work together to process input and deliver
output based on user instructions. In this article, we’ll explore the different types of
computer hardware, their functions, and how they interact to make your computer
work.

Computer Hardware
Computer hardware is a physical device of computers that we can see and
touch. E.g. Monitor, Central Processing Unit, Mouse, Joystick, etc. Using these
devices, we can control computer operations like input and output.
Computer Hardware Parts
These hardware components are further divided into the following categories, which are:
•Input Devices

•Output Devices

•Storage Devices

•Hardware Components

Input Devices
Input devices allow users to interact with a computer by entering data or commands. These
devices convert the input into a format that the computer can process.
Now we discuss some input devices:
•Keyboard: The most widely used input device, typically featuring 104 keys, including alphabetic,
numeric, and function keys. Many modern keyboards connect wirelessly, for example via Bluetooth,
in place of traditional wired connections.

•Mouse: A pointing device that controls the cursor on the screen. It features left, right, and middle
buttons for selection and interaction. The sensor inside the mouse detects its movement speed,
adjusting the cursor accordingly.
•Scanner: Scans documents, images, and other media, converting them
into digital formats for editing or processing, similar to a Xerox machine.

•Trackball: A stationary pointing device with a ball that the user rotates to
control the cursor, requiring less space than a traditional mouse.

•Light Pen: A light-sensitive pen used to draw or select objects on a CRT screen by
detecting raster patterns, offering a direct interaction with the display.

•Microphone: Converts sound into electrical signals. It captures voice input for speech
recognition and voice commands on the computer.

•Optical Character Reader (OCR): Scans printed or handwritten text, converting it into
digital data by detecting reflected light from the characters, similar to a scanner.

•Bar Code Reader: Reads bar codes and converts them into digital data
for processing. The bar code consists of light and dark lines that encode
information.
Output Devices
Output devices display the results of tasks given to the computer in a human-readable
form. Let’s discuss some common output devices:
•Monitor: The main output device. It is also called VDU(visual display unit) and it looks
like a TV screen. The Monitor displays the information from the computer. It is used to
display text, video, images, etc.

•Printer: A printer is an output device that transfers data from the computer in a printed
format by using text or images on paper. There are both coloured and black & white
printers. Further, there are also different types of printers, like Laser Printer, Dot-matrix
printers, and Inkjet printers.

•Plotter: It is similar to a printer, but plotters are larger. A plotter is used to generate
large, high-quality drawings and images, such as architectural blueprints, on paper.

•Speakers: It is a very common output device and it gives sound as an output. The
speaker is generally used to play music or anything having sound.
Storage Devices
Some devices are used for storage purposes and are known as secondary
storage devices. Some of them are discussed below:
1. CD (Compact Disc): A CD is circular and made of polycarbonate plastic coated with a
thin reflective layer. It has a storage capacity of 600 MB to 700 MB of data. It has a
standard size of 12 cm, with a hole of about 1.5 cm in the centre, and is 1.2 mm thick.
There are 3 types of CDs, which are:
•CD-ROM (CD - Read Only Memory): The contents of this type of CD cannot be erased
or rewritten by the user; only the publisher can imprint data onto it. CD-ROM is used for
commercial purposes, such as a music album or an application package from a
software company.

•CD-R (CD-Recordable): Content or data can be stored once. After that, it can be read
many times, but the data or content cannot be rewritten or erased (a kind of one-time use).

•CD-RW (CD-Rewritable): As the name suggests, this type of CD can be used to rewrite
content, or to erase previous content and write new content, many times.
2. DVD (Digital Video/Versatile Disc): A DVD is the same as a CD but with some
more features. A DVD comes in single and dual-layer formats. It has much greater
storage capacity in comparison to a CD. The storage capacities are: one-sided single layer,
4.7 GB; one-sided double layer, 8.5 GB; double-sided single layer, 9.4 GB; and double-sided
double layer, 17 GB. There are also several types of DVDs, which are:
•DVD-ROM: In this type, the contents of the DVD cannot be written on or erased by
the user. DVD ROM is used for applications and databases for distributing them in
large amounts.

•DVD-R / DVD+R: DVD-R (DVD minus R) and DVD+R (DVD plus R) are two different
kinds of discs, but both are write-once recordable formats, and in practice there is
virtually no difference between them.

•DVD-RW / DVD+RW: These are rewritable discs and allow up to 1,000 rewrites.

•DVD-RAM: DVD-RAM is accessed like a hard disk. It provides high data security and
storage capacity. It is also a rewritable disc and allows up to 100,000 rewrites.
3. Hard Disk: A hard disk is a non-volatile storage device that uses read/write heads to
store digital data on the magnetic surfaces of rigid platters. It is generally 3.5 inches in
size for desktops and 2.5 inches for laptops. A hard disk can be classified further into
3 types, which are:
•Internal Hard Disk: Its storage capacity is commonly stated in GB or TB. It is located
inside the system case or cabinet. It performs faster operations and its storage is fixed.
It is mainly used to store large data files and programs.

•Internal Cartridges: An internal hard disk can't be removed from the system cabinet
easily. Hard disk cartridges were introduced to resolve this problem, making the disk as
easy to remove and swap as a CD. They have a storage capacity of 2 GB to 160 GB and
are used as an alternative to an internal hard disk.

•Hard Disk Packs: These are used by organizations such as banks and government
sector organizations to store large amounts of data. They have storage capacities
measured in petabytes (PB).
History of Computer Software
Here is a timeline of software development and advancements:

Timeline | Development | Description
1940s | Early Software | Development of early computers such as ENIAC and EDSAC.
1950s | High-level Languages | FORTRAN for scientific and engineering calculations; COBOL for business applications; LISP for AI research.
1960s | Operating Systems | IBM's OS/360 was developed; the Multics project laid the groundwork for Unix.
1970s | Expansion of High-level Languages and Unix | Unix was developed; the C programming language was created; DBMSs such as those using SQL emerged.
1980s | Personal Computing and Graphical User Interfaces | MS-DOS became a standard operating system for IBM PCs; Apple introduced the Macintosh with a GUI; Microsoft Windows was introduced; the spreadsheet software Lotus 1-2-3 appeared.
1990s | Internet and Open Source | The WWW was developed; Linux emerged as a powerful OS; the Java programming language was developed; Microsoft released Windows 95.
2000s | Web 2.0 and Mobile Computing | Cloud computing services began with Amazon AWS; the iPhone was introduced in 2007.
2010s | Mobile Apps and Artificial Intelligence | Microsoft launched Windows 10; mobile apps became ubiquitous; AI and machine learning saw significant advancements.
2020s | Cloud Native and Quantum Computing | Quantum computing software started to develop; cloud-native applications became more popular.
Programming Languages
1. Machine Language (also called Machine Code)

•Definition: Machine language is the lowest-level programming language, directly understood by
the computer’s central processing unit (CPU). It consists entirely of binary code (1s and 0s) and
represents the raw instructions that the CPU can execute.

•Characteristics:

• Binary Format: Machine language instructions are written in binary (0s and 1s) because the
computer’s hardware understands electrical signals that represent on/off states.

• Specific to Architecture: The machine language for one type of CPU architecture (e.g.,
Intel x86, ARM) will be different from that of another because the underlying hardware
designs vary.

• No Abstraction: Machine language is the closest to the actual hardware and provides no
abstractions, meaning the programmer must manage all aspects of memory, data
manipulation, and control directly.

•Example: A machine code instruction might look like 10110000 01100001, which could correspond
to an operation like loading a value into a register on a specific CPU architecture.
2. Assembly Language

Definition: Assembly language is a low-level programming language that serves as a symbolic
representation of machine code. Each assembly language instruction corresponds directly to a machine
language instruction, but the code is written using readable text and mnemonic codes rather than binary.
•Characteristics:
• Human-Readable: Assembly language uses symbolic names (mnemonics) for operations, such
as MOV for moving data, ADD for addition, etc., making it more readable than raw binary machine
code.
• One-to-One Mapping: Every assembly language instruction corresponds to a specific machine
code instruction for a particular CPU architecture.
• Architecture-Specific: Assembly language is closely tied to the CPU’s architecture, so programs
written in assembly for one machine will not work on another without modification.

• Control: It allows fine-grained control over the hardware (e.g., manipulating registers and
memory addresses directly).

•Example:
MOV AX, 1    ; Move the value 1 into register AX
ADD AX, BX   ; Add the value in register BX to AX
These are mnemonics and are typically converted to machine code by an assembler.
3. High-Level Language

•Definition: High-level languages are programming languages that provide a greater level of abstraction from the hardware,
making them easier for humans to read and write. High-level languages are designed to be portable across different
hardware platforms.

•Characteristics:

• Abstraction: High-level languages abstract away the details of the computer’s hardware, memory management,
and CPU instructions. This allows programmers to focus on solving problems rather than dealing with hardware
specifics.

• Portability: Programs written in high-level languages can often run on different types of computers (with little or no
modification) because the language and its compiler/interpreter handle hardware differences.

• Built-in Libraries: High-level languages come with extensive libraries and frameworks that simplify the
development process.

• Readability: High-level languages are designed to be easy to read and write, often resembling natural language to
some extent.

• Automatic Memory Management: Many high-level languages manage memory automatically (e.g., through
garbage collection), so programmers don’t need to handle memory directly, unlike in assembly or machine code.
•Examples:

• Python: A very high-level, interpreted language.

• Java: A high-level, object-oriented language that runs on the Java Virtual Machine (JVM).

• C/C++: High-level languages, though closer to the hardware than some others; they still
abstract many hardware details compared to assembly or machine code.

•Example (Python):
a = 5
b = 10
c = a + b
print(c)


Summary of Differences

Aspect | Machine Language | Assembly Language | High-Level Language
Abstraction Level | Lowest, directly interpreted by the CPU | Low, close to machine code | High, abstracted from hardware
Readability | Not readable by humans | More readable, uses mnemonics | Very readable, close to natural language
Portability | Not portable (hardware-specific) | Not portable (architecture-specific) | Highly portable (across platforms)
Control over Hardware | Full control over hardware | Direct control over hardware | Less control, more convenience
Example Languages | Machine code (binary) | Assembly (e.g., x86, ARM assembly) | Python, C, Java, Ruby, etc.
Language Translators, Compilers, Interpreters, Linkers, Loaders and Program Execution
The compilation and execution of a program involves several key components working
together, including language translators (compilers and interpreters), linkers, and loaders.
The specific process varies depending on the programming language used.
Language translators:
Translators convert source code written in a high-level
programming language into a lower-level language that a
computer can execute.
The two main types are compilers and interpreters.
Linker
The linker is a utility program that combines multiple object files (compiled code modules) and libraries into a single
executable program.
•Combines object files: A program might be split across several source code files. Each file is compiled separately
into an object file. The linker brings all these object files together.
•Resolves symbols: During compilation, the compiler may leave "holes" for functions or variables that are defined in
other files or libraries. The linker finds the correct memory addresses for these external references and fills in the
placeholders.
•Static vs. Dynamic Linking:
•Static linking: The linker copies all necessary code from libraries into the final executable. This makes the
executable larger but self-contained and portable.
•Dynamic linking: The linker includes only the names of the shared libraries in the executable. The actual
linking to the shared libraries (like .dll files on Windows or .so files on Linux) is performed at runtime by the
loader. This saves memory and disk space.
Loader
The loader is a component of the operating system that prepares an executable program for execution
and brings it into memory.
•Allocates memory: The loader determines the required memory size and allocates space in the main
memory for the program's code and data.
•Performs relocation: It modifies the executable program's addresses to match the actual memory
addresses where it is loaded. This is necessary because the program may not be loaded at the fixed
address it was compiled for.
•Loads the program: It reads the executable file from storage and copies the instructions and data into
the allocated memory space.
•Sets up execution: It sets up the initial register values, such as the program counter, to point to the
program's entry point before transferring control of the CPU to the program.
Program execution workflow
The full process of converting source code into a running program
combines all these steps.
1.Source code: A programmer writes the code in a high-level language.
2.Preprocessing (for C/C++): The preprocessor handles tasks like
including header files and expanding macros.
3.Compiling: A compiler translates the source code into assembly code
and then into machine-readable object code. For interpreted languages,
this step is replaced by the interpreter evaluating the code at runtime.
4.Assembling: An assembler converts the assembly code into relocatable
object code.
5.Linking: The linker combines the object code with any necessary library
files to produce a single executable file.
6.Loading: The loader copies the executable file from disk into the main
memory and performs final preparations for execution.
7.Execution: The CPU fetches instructions from memory, decodes them,
and executes them one by one. This involves the program counter
tracking the next instruction, the arithmetic logic unit (ALU) performing
operations, and registers holding intermediate data.
In computing fundamentals, design refers to the structured process of planning a
solution before implementation, while a flowchart is a graphical tool that visually
represents an algorithm, process, or workflow using standardized symbols and
arrows to show the sequence of steps.

Flowcharts serve as a blueprint for program development, aiding in communication,
analysis, documentation, and debugging by providing a clear, logical overview of complex
processes or algorithms.
Design in Computing Fundamentals
Program design is a crucial preliminary phase before writing code. It
involves:
1.Understanding the Problem: Clearly defining the problem to be
solved.
2.Planning the Solution: Breaking down the problem into smaller
steps.
3.Creating a Blueprint: Developing a detailed plan for the solution,
often using a flowchart or pseudocode.
4.Testing the Design: Reviewing the plan to ensure it's logical and
efficient.
A flowchart is simply a graphical representation of steps. It shows steps in sequential order
and is widely used in presenting the flow of algorithms, workflow or processes. Typically, a
flowchart shows the steps as boxes of various kinds, and their order by connecting them with
arrows.
Flowchart Symbols

Different flowchart shapes have different conventional meanings. The meanings of some of the more
common shapes are as follows:

Terminator
The terminator symbol represents the starting or ending point of the system.

Process
A box indicates some particular operation.
Algorithm
An algorithm is a set of steps for accomplishing a task or solving a problem.
Typically, algorithms are executed by computers, but we also rely on
algorithms in our daily lives. Each time we follow a particular step-by-step
process, like making coffee in the morning or tying our shoelaces, we are in
fact following an algorithm.

In the context of computer science, an algorithm is a mathematical process for solving a
problem using a finite number of steps. Algorithms are a key component of any computer
program and are the driving force behind various systems and applications, such as
navigation systems, search engines, and music streaming services.
How do algorithms work?
Algorithms use a set of initial data or input, process it through a series of
logical steps or rules, and produce the output (i.e., the outcome, decision,
or result).
Example of algorithm Let’s consider for example an algorithm that calculates the square of a
given number.

•Input: the input data is a single number (e.g., 5).

•Transformation/processing: the algorithm takes the input (number 5) and performs the
specific operation (i.e., multiplies the number by itself).

•Output: the result of the calculation is the square of the input number, which, in this case,
would be 25 (since 5 * 5 = 25).
We could express this as an algorithm in the following way:

Algorithm: Calculate the square of a number
1. Start.
2. Input the number (N) whose square you want to find.
3. Multiply the number (N) by itself.
4. Store the result of the multiplication in a variable (result).
5. Output the value of the variable (result), which represents the square of the input number.
6. End.
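
The following is a minimal Python sketch of this algorithm; the function name square is a
hypothetical helper introduced here only for illustration:

def square(n):
    result = n * n        # Steps 3-4: multiply N by itself and store it in result
    return result         # Step 5: output the value of result

number = 5                # Step 2: input the number whose square you want to find
print(square(number))     # prints 25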
By design techniques in computing, algorithms, and problem-solving, we mean systematic
approaches or strategies used to develop efficient solutions. Each design technique has its
own strengths, and the choice depends on the problem type, constraints, and desired efficiency.

1. Divide and Conquer
2. Brute Force
3. Greedy Algorithm
4. Dynamic Programming
Divide and Conquer
With Examples
Introduction
• Divide and Conquer is a problem-solving technique where a problem
is broken down into smaller sub-problems, solved individually, and
then combined to get the final solution.
Steps in Divide and Conquer
• 1. Divide: Break the problem into smaller sub-problems.
• 2. Conquer: Solve the sub-problems recursively.
• 3. Combine: Merge the solutions of the sub-problems to get the final
answer.
Example: Merge Sort
• • Divide: Split the array into two halves.
• • Conquer: Recursively sort both halves.
• • Combine: Merge the two sorted halves.
Example: Mergesort
• DIVIDE the input sequence in half
• RECURSIVELY sort the two halves
• basis of the recursion is sequence with 1 key
• COMBINE the two sorted subsequences by merging them
Mergesort Example
5 2 4 6 1 3 2 6                  (original sequence)

5 2 4 6 | 1 3 2 6                (divide in half)

5 2 | 4 6 | 1 3 | 2 6            (divide again)

5 | 2 | 4 | 6 | 1 | 3 | 2 | 6    (single keys: basis of the recursion)

2 5 | 4 6 | 1 3 | 2 6            (merge pairs)

2 4 5 6 | 1 2 3 6                (merge sorted halves)

1 2 2 3 4 5 6 6                  (final merge: sorted sequence)
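
The following is a minimal Python sketch of the merge sort traced above; merge_sort is a
hypothetical helper written for illustration and not part of the original slides:

def merge_sort(seq):
    # Basis of the recursion: a sequence with 0 or 1 keys is already sorted
    if len(seq) <= 1:
        return seq
    # DIVIDE the input sequence in half
    mid = len(seq) // 2
    left = merge_sort(seq[:mid])      # RECURSIVELY sort the two halves
    right = merge_sort(seq[mid:])
    # COMBINE the two sorted subsequences by merging them
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 6, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 6]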
Example: Binary Search
• • Divide: Find the middle element of the array.
• • Conquer: Check if the middle element is the target.
• • If not, recursively search in the left or right half.
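
A minimal recursive binary search sketch following these steps; it assumes the input list is
already sorted, and binary_search is a hypothetical helper added for illustration:

def binary_search(arr, target, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low > high:
        return -1                        # target is not present
    mid = (low + high) // 2              # Divide: find the middle element
    if arr[mid] == target:               # Conquer: check the middle element
        return mid
    if target < arr[mid]:                # otherwise search the left half...
        return binary_search(arr, target, low, mid - 1)
    return binary_search(arr, target, mid + 1, high)   # ...or the right half

print(binary_search([1, 2, 2, 3, 4, 5, 6, 6], 4))   # 4 (index of the target)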
Advantages
• • Efficient for large problems.
• • Reduces time complexity in many cases.
• • Well-suited for recursive implementation.
Merge Sort Process (diagram): the unsorted array is split into a left half and a right half,
each half is sorted, and the sorted halves are merged.

Binary Search Process (diagram): the array is split into a left half and a right half around
the middle element, which is checked against the target.
Example: Quick Sort
• • Divide: Choose a pivot element and partition the array around it.
• • Conquer: Recursively sort the left and right partitions.
• • Combine: The array is sorted after recursive calls complete.
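
A minimal Python quick sort sketch matching these divide/conquer/combine steps; it returns a
new sorted list rather than sorting in place, and quick_sort is a hypothetical helper added for
illustration:

def quick_sort(seq):
    if len(seq) <= 1:
        return seq
    pivot = seq[len(seq) // 2]                    # Divide: choose a pivot element
    left = [x for x in seq if x < pivot]          # partition the array around it
    middle = [x for x in seq if x == pivot]
    right = [x for x in seq if x > pivot]
    # Conquer: recursively sort the partitions; Combine: concatenate the results
    return quick_sort(left) + middle + quick_sort(right)

print(quick_sort([5, 2, 4, 6, 1, 3, 2, 6]))   # [1, 2, 2, 3, 4, 5, 6, 6]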
Example: Maximum and Minimum
Problem
• • Divide: Split the array into two halves.
• • Conquer: Find the maximum and minimum in each half recursively.
• • Combine: Compare results to get overall maximum and minimum.
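
A minimal Python sketch of the divide-and-conquer maximum and minimum algorithm described
above; max_min is a hypothetical helper added for illustration:

def max_min(arr, low, high):
    if low == high:                        # one element: it is both max and min
        return arr[low], arr[low]
    if high == low + 1:                    # two elements: a single comparison
        if arr[low] < arr[high]:
            return arr[high], arr[low]
        return arr[low], arr[high]
    mid = (low + high) // 2                # Divide: split the array into two halves
    max1, min1 = max_min(arr, low, mid)    # Conquer: solve each half recursively
    max2, min2 = max_min(arr, mid + 1, high)
    # Combine: compare the results of the two halves
    return max(max1, max2), min(min1, min2)

data = [5, 2, 4, 6, 1, 3, 2, 6]
print(max_min(data, 0, len(data) - 1))     # (6, 1)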
Example: Strassen’s Matrix
Multiplication
• • Divide: Split matrices into smaller sub-matrices.
• • Conquer: Perform recursive multiplications on sub-matrices.
• • Combine: Use Strassen’s formula to merge results efficiently.
• • Advantage: Reduces complexity compared to normal matrix
multiplication.
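
A minimal sketch of Strassen's method, assuming square matrices whose size is a power of two
and using NumPy arrays to keep the example short; strassen is a hypothetical helper added for
illustration:

import numpy as np

def strassen(A, B):
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    # Divide: split each matrix into four h x h sub-matrices
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Conquer: seven recursive multiplications instead of the usual eight
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Combine: Strassen's formulas assemble the quadrants of the result
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.vstack((np.hstack((C11, C12)), np.hstack((C21, C22))))

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(strassen(A, B))   # [[19 22] [43 50]], the same as A @ B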
Example: Closest Pair of Points
• • Divide: Split the set of points into two halves.
• • Conquer: Find the closest pair in each half recursively.
• • Combine: Check the strip around the middle line for cross-pairs.
• • Used in computational geometry.
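
A minimal Python sketch of the closest pair of points algorithm described above, assuming
points are given as (x, y) tuples; closest_pair is a hypothetical helper added for illustration:

import math

def closest_pair(points):
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def solve(pts):
        n = len(pts)
        if n <= 3:                         # small case: brute force
            return min(dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])
        mid = n // 2
        mid_x = pts[mid][0]
        # Divide: split the points into two halves; Conquer: solve recursively
        d = min(solve(pts[:mid]), solve(pts[mid:]))
        # Combine: check the strip around the middle line for cross-pairs
        strip = sorted((p for p in pts if abs(p[0] - mid_x) < d), key=lambda p: p[1])
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 8]:   # only a few neighbours need checking
                d = min(d, dist(p, q))
        return d

    return solve(sorted(points))           # points sorted by x-coordinate

pts = [(2, 3), (12, 30), (40, 50), (5, 1), (12, 10), (3, 4)]
print(closest_pair(pts))   # about 1.414, the distance between (2, 3) and (3, 4)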
