CF Unit 1

Computer Fundamentals

What is a computer?
A computer is an electronic device that manipulates information, or data. It takes in data from input devices, stores and processes it, and produces output.
"A computer is a programmable electronic device that takes data, performs instructed arithmetic and logical operations, and gives the output."

Characteristics of Computers:
Speed:
Computers are high-speed electronic machines that can carry out millions of instructions per second; advanced computers can handle trillions of instructions per second, cutting down the time needed to perform digital tasks.

Diligence:
A human cannot work for several hours without resting, yet a computer never tires. A computer can consistently and accurately perform millions of jobs or calculations per second with complete precision, without stopping, and with no weariness or lack of concentration. Its memory capacity also places it ahead of humans.

Reliability:
A computer is reliable: the output never differs unless the input varies. The output depends entirely on the input, so the same input always produces the same result. A computer produces consistent results for the same set of data, no matter when it is provided.

Automation:
The world is quickly moving toward AI (Artificial Intelligence)-based technology. A
computer may conduct tasks automatically after instructions are programmed. By
executing jobs automatically, this computer feature replaces thousands of workers.
Automation in computing is often achieved by the use of a program, a script, or batch
processing.

Versatility:
Versatility refers to the capacity of a computer to perform different types of tasks with the same accuracy and efficiency. A computer can also perform multiple tasks at the same time. For example, while listening to music, we may develop a project using PowerPoint and WordPad, or design a website.

Memory:
A computer can store millions of records, and these records may be accessed with complete precision. Computer memory storage capacity is measured in Bytes, Kilobytes (KB),
Megabytes(MB), Gigabytes(GB), and Terabytes(TB). A computer has built-in memory
known as primary memory.
Evolution and history of Computers :
The first counting devices were used by primitive people, who used sticks, stones and bones as counting tools. As the human mind and technology improved with time, more computing devices were developed. Some of the popular computing devices, from the first to more recent ones, are described below:
• Abacus
The history of computers begins with the birth of the abacus, which is believed to be the first computer. It is said that the Chinese invented the abacus around 4,000 years ago.
It was a wooden rack holding metal rods with beads mounted on them. The beads were moved by the abacus operator according to certain rules to perform arithmetic calculations. The abacus is still used in some countries like China, Russia and Japan.

Napier's Bones
It was a manually operated calculating device invented by John Napier (1550-1617) of Merchiston. In this calculating tool, he used 9 different ivory strips, or bones, marked with numbers to multiply and divide, so the tool became known as "Napier's Bones". It was also the first machine to use the decimal point.
Pascaline :
Pascaline is also known as the Arithmetic Machine or Adding Machine. It was invented between 1642 and 1644 by the French mathematician-philosopher Blaise Pascal and is believed to be the first mechanical and automatic calculator.
Pascal invented this machine to help his father, a tax accountant. It could only perform addition and subtraction. It was a wooden box with a series of gears and wheels: when a wheel is rotated one revolution, it rotates the neighboring wheel, and a series of windows on top of the wheels shows the totals.

Stepped Reckoner or Leibniz Wheel:

It was developed by the German mathematician-philosopher Gottfried Wilhelm Leibniz in 1673. He improved Pascal's invention to develop this machine. It was a digital mechanical calculator, called the stepped reckoner because instead of gears it used fluted drums.
Difference Engine:
In the early 1820s, it was designed by Charles Babbage, who is known as the "Father of the Modern Computer". It was a mechanical computer which could perform simple calculations: a steam-driven calculating machine designed to compute tables of numbers, such as logarithm tables.

Analytical Engine:
This calculating machine was also developed by Charles Babbage, in the 1830s. It was a mechanical computer that used punch cards as input. It was designed to be capable of solving any mathematical problem and storing information in a permanent memory.
Tabulating Machine:
It was invented in 1890 by Herman Hollerith, an American statistician. It was a mechanical tabulator based on punch cards that could tabulate statistics and record or sort data. This machine was used in the 1890 U.S. Census. Hollerith also started Hollerith's Tabulating Machine Company, which later became International Business Machines (IBM) in 1924.

Differential Analyzer:
It was an early analog computer, introduced in the United States in 1930 and invented by Vannevar Bush. This machine used vacuum tubes to switch electrical signals to perform calculations, and it could do 25 calculations in a few minutes.
Mark I:
The next major change in the history of computers began in 1937, when Howard Aiken planned to develop a machine that could perform calculations involving large numbers. In 1944, the Mark I computer was built as a partnership between IBM and Harvard. It was the first programmable digital computer.

Basic Organization of Digital Computer

Input Unit: Input units or devices are used to enter data or instructions into the computer. The most common input devices are the mouse and keyboard. Some of the popular input devices are:
Keyboard, Mouse, Scanner, Joystick, Light Pen.
Output Unit: Output Units or devices are used to provide output to the user in the desired
format. The most popular examples of output devices are the monitor and the printer. Some
of the popular visual output devices are:
Monitor, Printer, Projector.
Control Unit: As its name states, this unit is primarily used to control all the computer
functions and functionalities. All the components or devices attached to a computer interact
with each other through the control unit. In short, the control unit is referred to as 'CU'.
Arithmetic Logic Unit: The arithmetic logic unit helps perform all the computer system's
arithmetic and logical operations. In short, the arithmetic logic unit is referred to as 'ALU'.
Memory: Memory is used to store all the input data, instructions, and output data. Memory is usually of two types: Primary Memory and Secondary Memory. The memory found inside the CPU is called primary memory, whereas memory that is not an integral part of the CPU is called secondary memory.

Number System
A number system is a method to represent numbers mathematically. It can use arithmetic
operations to represent every number uniquely. To represent a number, it requires a base or
radix.

Decimal Number System :


If the base value of a number system is 10, it is called the decimal number system, which has played the most important role in the development of science and technology. It is also known as the base-10 number system and has 10 symbols: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. The position of every digit has a weight which is a power of 10.

For example, 10264 has place values as:

(1 × 10^4) + (0 × 10^3) + (2 × 10^2) + (6 × 10^1) + (4 × 10^0)
= 1 × 10000 + 0 × 1000 + 2 × 100 + 6 × 10 + 4 × 1
= 10000 + 0 + 200 + 60 + 4
= (10264)10
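The place-value expansion above can be checked with a short Python sketch (the digit-by-digit loop is illustrative, not part of the original text):

```python
# Expand 10264 into digit x place-value terms and sum them.
number = "10264"
total = 0
for position, digit in enumerate(number):
    # The leftmost digit has the highest power of 10.
    power = len(number) - 1 - position
    term = int(digit) * 10 ** power
    total += term
    print(f"{digit} x 10^{power} = {term}")
print(total)  # 10264
```

Each printed line corresponds to one term of the expansion shown above.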

Binary Number System :


A number system with base value 2 is termed the binary number system. It uses 2 digits, 0 and 1, for the creation of numbers, and the numbers formed using these two digits are termed binary numbers. The binary number system is very useful in electronic devices and computer systems because it can be easily implemented using just two states, ON and OFF, i.e. 1 and 0.
The decimal numbers 0-9 are represented in binary as: 0, 1, 10, 11, 100, 101, 110, 111, 1000, and 1001.

For example, 14 can be written as (1110)2, 19 can be written as (10011)2, and 50 can be written as (110010)2.
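These binary examples can be verified with Python's built-in conversion functions (a minimal sketch):

```python
# Decimal to binary: bin() returns a string with a "0b" prefix.
print(bin(14))          # 0b1110
print(bin(19))          # 0b10011
print(bin(50))          # 0b110010

# Binary back to decimal: int() with base 2.
print(int("10011", 2))  # 19
```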

Octal Number System:


The octal number system is one in which the base value is 8. It uses 8 digits, 0-7, for the creation of octal numbers. Octal numbers can be converted to decimal values by multiplying each digit by its place value and then adding the results.
Example: Convert (215)8 into decimal.
Solution:
(215)8 = 2 × 8^2 + 1 × 8^1 + 5 × 8^0
= 2 × 64 + 1 × 8 + 5 × 1
= 128 + 8 + 5
= (141)10
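The same multiply-and-add procedure works for any base. A small helper (the name `to_decimal` is my own, chosen for illustration) reproduces the worked example:

```python
def to_decimal(digits, base):
    """Convert a digit string in the given base to a decimal integer
    by multiplying each digit by its place value and adding."""
    value = 0
    for ch in digits:
        # value * base shifts the accumulated digits one place left.
        value = value * base + int(ch, base)
    return value

print(to_decimal("215", 8))  # 141, matching 2*64 + 1*8 + 5*1
```

The loop is just Horner's form of the place-value sum shown in the solution above.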

Hexadecimal Number System:


A number system with base value 16 is termed the hexadecimal number system. It uses 16 digits for the creation of its numbers: digits 0-9 are taken as in the decimal number system, while the digits 10-15 are represented as A-F, i.e. 10 is represented as A, 11 as B, 12 as C, 13 as D, 14 as E, and 15 as F.
Hexadecimal: 0 1 2 3 4 5 6 7 8 9  A  B  C  D  E  F
Decimal:     0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Example: Convert 7CF (hex) to decimal.
Solution:
Given hexadecimal number is 7CF.
In hexadecimal system,
7=7
C = 12
F = 15
To convert this into the decimal number system, multiply each digit by the powers of 16, starting from the units place of the number.
7CF = (7 × 16^2) + (12 × 16^1) + (15 × 16^0)
= (7 × 256) + (12 × 16) + (15 × 1)
= 1792 + 192 + 15
= (1999)10
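This hexadecimal example, too, can be checked directly in Python (a minimal sketch):

```python
# Verify the worked example: 7CF in hexadecimal is 1999 in decimal.
print(int("7CF", 16))  # 1999

# The letter digits A-F map to the decimal values 10-15.
for ch in "ABCDEF":
    print(ch, "=", int(ch, 16))
```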
Types of Computer:-
On the basis of data handling capabilities, computers are of three types:
1. Analogue Computer
2. Digital Computer
3. Hybrid Computer
1) Analogue Computer
Analogue computers are designed to process analogue data: continuous data that changes constantly and cannot have discrete values. Analogue computers are used where exact values are not always needed, such as for speed, temperature, pressure and current.
2) Digital Computer
A digital computer is designed to perform calculations and logical operations at high speed. It accepts raw data as input in the form of digits or binary numbers (0 and 1) and processes it with programs stored in its memory to produce the output. All modern computers like laptops and desktops, including the smartphones that we use at home or at the office, are digital computers.
3) Hybrid Computer
A hybrid computer has features of both analogue and digital computers. It is fast like an analogue computer and has memory and accuracy like a digital computer. It can process both continuous and discrete data: it accepts analogue signals and converts them into digital form before processing. So, it is widely used in specialized applications where both analogue and digital data are processed. For example, a processor is used in petrol pumps that converts the measurements of fuel flow into quantity and price. Similarly, hybrid computers are used in airplanes, hospitals, and scientific applications.

On the basis of size, computers can be of five types:


1) Supercomputer
Supercomputers are the biggest and fastest computers. They are designed to process huge amounts of data: a supercomputer can process trillions of instructions in a second and has thousands of interconnected processors.
2) Mainframe computer
Mainframe computers are designed to support hundreds or thousands of users
simultaneously. They can support multiple programs at the same time. It means they can
execute different processes simultaneously. These features of mainframe computers make them ideal for big organizations, like the banking and telecom sectors, which need to manage and process high volumes of data.
3) Miniframe or Minicomputer
It is a midsize multiprocessing computer. It consists of two or more processors and can support 4 to 200 users at one time. Minicomputers are used in institutes and departments for tasks such as billing, accounting and inventory management. A minicomputer lies between the mainframe and the microcomputer, as it is smaller than a mainframe but larger than a microcomputer.
4) Workstation
A workstation is a single-user computer designed for technical or scientific applications. It has a faster microprocessor, a large amount of RAM and high-speed graphics adapters. It generally performs a specific job with great expertise; accordingly, workstations are of different types, such as graphics workstations, music workstations and engineering design workstations.
5) Microcomputer
A microcomputer is also known as a personal computer. It is a general-purpose computer designed for individual use. It has a microprocessor as its central processing unit, plus memory, a storage area, an input unit and an output unit. Laptops and desktop computers are examples of microcomputers. They are suitable for personal work such as making an assignment or watching a movie, or for office work.

Generations of Computers:-
A generation of computers refers to the specific improvements in computer technology
with time. In 1946, electronic pathways called circuits were developed to perform the
counting. It replaced the gears and other mechanical parts used for counting in previous
computing machines.
In each new generation, the circuits became smaller and more advanced than the previous
generation circuits. The miniaturization helped increase the speed, memory and power of
computers. There are five generations of computers which are described below;
First Generation Computers:
The first generation (1946-1959) computers were slow, huge and expensive. In these computers, vacuum tubes were used as the basic components of the CPU and memory. These computers mainly relied on batch operating systems and punch cards. Magnetic tape and paper tape were used as input and output devices in this generation.
Some of the popular first generation computers are;
ENIAC (Electronic Numerical Integrator and Computer)
EDVAC (Electronic Discrete Variable Automatic Computer)
UNIVAC-1 (Universal Automatic Computer)
IBM-701
IBM-650
Second Generation Computers
The second generation (1959-1965) was the era of transistor computers. These computers used transistors, which were cheap, compact and consumed less power; this made transistor computers faster than the first-generation computers.
Assembly language, programming languages like COBOL and FORTRAN, and batch processing and multiprogramming operating systems were used in these computers.
Some of the popular second generation computers are;
IBM 1620
IBM 7094
CDC 1604
CDC 3600
UNIVAC 1108
Third Generation Computers:
The third generation computers used integrated circuits (ICs) instead of transistors. A
single IC can pack huge number of transistors which increased the power of a computer
and reduced the cost. The computers also became more reliable, efficient and smaller in
size. These generation computers used remote processing, time-sharing, multi
programming as operating system. Also, the high-level programming languages like
FORTRON-II TO IV, COBOL, PASCAL PL/1, ALGOL-68 were used in this generation.
Some of the popular third generation computers are;
IBM-360 series
Honeywell-6000 series
PDP (Programmed Data Processor)
IBM-370/168
TDC-316

Fourth Generation Computers


The fourth generation (1971-1980) computers used very large scale integration (VLSI) circuits: chips containing millions of transistors and other circuit elements. These chips made fourth-generation computers more compact, powerful, fast and affordable. These computers used real-time, time-sharing and distributed operating systems. Programming languages like C, C++ and DBASE were also used in this generation.
Some of the popular fourth generation computers are;
DEC 10
STAR 1000
PDP 11
CRAY-1(Super Computer)
CRAY-X-MP(Super Computer)

Fifth Generation Computers


In fifth generation (1980-present) computers, VLSI technology was replaced with ULSI (Ultra Large Scale Integration), making possible the production of microprocessor chips with ten million electronic components. This generation of computers uses parallel processing hardware and AI (Artificial Intelligence) software. The programming languages used in this generation include C, C++, Java, .Net, etc.
Some of the popular fifth generation computers are;
Desktop
Laptop
NoteBook
UltraBook
ChromeBook
Boolean Algebra with Truth Table:

What is Boolean Algebra?


Boolean algebra is a branch of algebra that deals with boolean values: true and false. It is fundamental to digital logic design and computer science, providing a mathematical framework for describing logical operations and expressions.
There are seven logic gates in total: three basic gates (AND, OR, NOT), two universal gates (NAND, NOR) and two exclusive gates (XOR, XNOR).

Basic Logic Gates:-

AND Gate:-
An AND gate has two or more inputs and one output. The output is high when all the inputs are high; if any one of the inputs is low, the output is low. The operation performed is logical multiplication, indicated by '.' or '*'. The equation is represented as X = A.B. The Boolean expression and truth table of the two-input AND gate are given below.

Boolean Expression: X = A.B

Truth Table:
A B | X
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
OR Gate:-
An OR gate has two or more inputs and only one output. The output is high when any one of the inputs is high; if all the inputs are low, the output is low. The operation performed is logical addition, indicated by '+'. The equation is represented as X = A+B. The Boolean expression and truth table of the two-input OR gate are given below.
Boolean Expression: X = A+B

Truth Table:
A B | X
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1

NOT Gate:-
A NOT gate has only one input and one output. The output is high when the input is low, and low when the input is high. The NOT gate is also called a complement gate or inverter. The operation performed is inversion (complement), indicated by an apostrophe or a bar over the variable. The equation is represented as X = A'.

Boolean Expression: X = A'

Truth Table:
A | X = A'
0 | 1
1 | 0
Universal Gates
1) NAND Gate
2) NOR Gate

NAND Gate:-
It is the combination of an AND gate and a NOT gate; this gate is the inverse of the AND gate. If all the inputs are high, the output is low. The operation performed is inverted multiplication. The equation is represented as X = (A.B)'.

Boolean Expression: X = (A.B)'

Truth Table:
A B | X
0 0 | 1
0 1 | 1
1 0 | 1
1 1 | 0
NOR Gate:-
It is the combination of an OR gate and a NOT gate; this gate is the inverse of the OR gate. If all the inputs are low, the output is high. The operation performed is inverted addition. The equation is represented as X = (A+B)'.

Boolean Expression: X = (A+B)'

Truth Table:
A B | X
0 0 | 1
0 1 | 0
1 0 | 0
1 1 | 0

Exclusive Gates:-
1) XOR Gate
2) XNOR Gate

XOR Gate:-

The 'Exclusive-OR' gate is a circuit which gives a high output if exactly one of its inputs is high, but not both. The XOR operation is represented by an encircled plus sign (⊕).

Boolean Expression: X = A ⊕ B = A'B + AB'

Truth Table:
A B | X
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0

XNOR Gate:-
The 'Exclusive-NOR' gate is a circuit that performs the inverse operation of the XOR gate. It gives a low output if exactly one of its inputs is high, but not both; the small circle on its symbol represents inversion.

Boolean Expression: X = (A ⊕ B)'

Truth Table:
A B | X
0 0 | 1
0 1 | 0
1 0 | 0
1 1 | 1
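All seven gates can be modelled with one-line Python functions, where 1 stands for a high input and 0 for a low one (a minimal sketch for experimenting with the truth tables):

```python
# Each gate as a function; NOT takes one input, the rest take two.
def AND(a, b):  return a & b          # logical multiplication
def OR(a, b):   return a | b          # logical addition
def NOT(a):     return 1 - a          # inversion / complement
def NAND(a, b): return NOT(AND(a, b)) # inverted AND
def NOR(a, b):  return NOT(OR(a, b))  # inverted OR
def XOR(a, b):  return a ^ b          # high when inputs differ
def XNOR(a, b): return NOT(XOR(a, b)) # high when inputs match

# Print a combined truth table for the two-input gates.
print("A B | AND OR NAND NOR XOR XNOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), " ", OR(a, b), " ", NAND(a, b),
              "  ", NOR(a, b), " ", XOR(a, b), " ", XNOR(a, b))
```

Running the loop reproduces the truth tables listed in the sections above.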
Software and its Types.

What is a Software?
In a computer system, software is basically a set of instructions or commands that tell the computer what to do. In other words, software is a computer program that provides a set of instructions to execute a user's commands; for example, MS Word, MS Excel, PowerPoint, etc.
Software is of three main types:
1) System Software
2) Application Software
3) Utility Software
System Software:-
System software is software that provides a platform for other software and is designed to run a computer's hardware and application programs. Examples include operating systems (OS) like macOS, Linux, Android and Microsoft Windows, as well as antivirus software, disk formatting software, computer language translators, etc.

System Software
 Operating System
 Language Processor
 Device Driver
Application Software:-
The term “application software” refers to software that performs specific functions for a
user. When a user interacts directly with a piece of software, it is called application
software.
Application software programs are created to help with a wide range of tasks. Here are
a few examples:
 Information and data management
 Management of documents (document exchange systems)
 Development of visuals and video
 Management of accounting, finance, and payroll
 Management of resources (ERP and CRM systems)
 Management of a project
 Management of business processes
 Software for education (LMS and e-learning systems)
 Software for healthcare applications
Utility Software:-
Utility software is a type of software which is used to analyse and maintain a computer. It focuses on how the OS works and, on that basis, performs tasks to enable the smooth functioning of the computer. This software may come with the OS, like Windows Defender and disk cleanup tools. Antivirus software, backup software, file managers and disk compression tools are all utility software.
Examples of Utility Software
Antivirus software, Disk cleaners, Backup and recovery software, Disk encryption
software, File compression software.

Computer Language
A computer language is a group of instructions that are used to create
computer programs. The main goal of a computer language is to enable
human-computer interaction.

Types of Computer Languages

1. Low Level Language: A low-level computer language includes only 1's and 0's. This language was used in first and second generation computers. A low-level language is very easily understood by a computer but is hard for humans to understand.
2. Machine Language: As discussed above, machine language is a type of low-level language and is considered the oldest computer language. Machine language is written using only binary numbers, i.e., 0 and 1, so the instructions or statements in this language use sequences of 0's and 1's.

3. High Level Language: High-level languages are the next development in the evolution of computer languages. Examples of some high-level languages are given below:

 PROLOG (for "PROgramming LOGic")
 FORTRAN (for "FORmula TRANslation")
 LISP (for "LISt Processing")
 Pascal (named after the French mathematician Blaise Pascal)

Translator programs

Compiler:-
The language processor that reads the complete source program written in a high-level language as a whole, in one go, and translates it into an equivalent program in machine language is called a compiler. Examples of compiled languages: C, C++, C#.
In a compiler, the source code is translated to object code successfully only if it is free of errors. When there are errors in the source code, the compiler reports them, with line numbers, at the end of the compilation.
Assembler
An assembler is used to translate a program written in assembly language into machine code. The source program, containing assembly language instructions, is the input to the assembler. The output generated by the assembler is the object code, or machine code, understandable by the computer. The assembler was essentially the first interface enabling humans to communicate with the machine.

Interpreter

All high-level languages need to be converted to machine code so that the computer can understand the program after taking the required inputs. The software that performs this conversion of high-level instructions to machine-level language line by line (unlike a compiler or assembler) is known as an INTERPRETER.

Planning a Computer Program:-


Problem Solving is the sequential process of analyzing information related to a given
situation and generating appropriate response options.
There are 6 steps that you should follow in order to solve a problem:
1. Understand the problem
2. Formulate a model
3. Develop an Algorithm
4. Write the Program
5. Test the Program
6. Evaluate the Solution
1. Understand the problem
The first step to solving any problem is to make sure that you understand the
problem that you are trying to solve. You need to know:
 What input data/information is available?
 What does it represent?
 What format is it in?
 Is anything missing?
 Do I have everything that I need?
2. Formulate a model
Now we need to understand the processing part of the problem. Many problems break
down into smaller problems that require some kind of simple mathematical
computation in order to process the data.
3. Develop an Algorithm
Now that we understand the problem and have formulated a model, it is time to
come up with a precise description of what we want the computer to do.
Two commonly used representations of an algorithm are:
1) Pseudocode / Algorithm
2) Flowchart
Algorithm:
An algorithm is a precise sequence of instructions for solving a problem: a set of
rules or steps to solve a problem or achieve a goal.
Flowchart:
A flowchart is a graphical representation of the given problem using standard
symbols.
4. Write the Program:
A program can be defined as a sequence of instructions, written in a particular
programming language, to be carried out to perform a specific task.
Now that we have a precise set of steps for solving the problem, most of the hard
work has been done.
5. Test the program
Once you have a program written that compiles, you need to make sure that it solves
the problem that it was intended to solve and that the solutions are correct. Running
a program is the process of telling the computer to evaluate the compiled
instructions.

6. Evaluate the solution:


Once your program produces a result that seems correct, you need to reconsider the
original problem and make sure that the answer is formatted into a proper solution to
the problem.
Algorithm:
In computer programming terms, an algorithm is a set of well-defined instructions to
solve a particular problem. It takes a set of input(s) and produces the desired output.
For example,
An algorithm to add two numbers:
1. Take two number inputs
2. Add numbers using the + operator
3. Display the result

Step 1: Start

Step 2: Read A and B

Step 3: Sum = A + B

Step 4: Display Sum

Step 5: Stop

Pseudo code

Pseudocode is defined as a step-by-step description of an algorithm. Pseudocode
does not use any programming language in its representation; instead it uses
simple English-language text, as it is intended for human understanding rather than
machine reading.
Pseudocode is the intermediate state between an idea and its
implementation (code) in a high-level language.
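For example, the add-two-numbers algorithm from the previous section, written as pseudocode:

```
BEGIN
    READ A, B
    Sum <- A + B
    PRINT Sum
END
```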
