
Babcock University School of Engineering,

Department of Computer Engineering

CPE 112: INTRODUCTION TO


COMPUTER ENGINEERING
COURSE CODE: BU-CPE 101

COURSE TITLE: INTRODUCTION TO COMPUTER ENGINEERING

COORDINATOR: Dr. (ENGR.) S. O. OGUNLERE, PhD

Calendar
 This is a 14-week course. You will spend 45 hours of study over the 14 weeks.
Preamble

This course aims at developing engineering skills in the design and analysis of
digital logic components and circuits, making students thoroughly familiar
with the basics of gate-level circuit design starting from single gates and
building up to complex systems, and providing hands-on experience and
exposure to circuit design using state-of-the-art computer aided design tools
and programmable logic devices.
Course Description
This course will include:
 Introduction to basic concepts of computers with a brief discussion of complete history.
 Discussion of the various structural and functional components of computer system
including the internal and external peripherals.
 Overview of computer engineering design
 Basic Computer Architecture and Organization including Machine Representation of
instructions and data; Instruction format: OPCODE OPERAND.
 Understanding fundamental principles of electronic technology and practice in applying
abstract concepts to real problems in Digital computer systems.
 The concept of using basic logic gates, symbols and truth tables; memory and other
peripheral devices organized to perform complex calculations at high speed by
automatically processing information in the form of electrical pulses.
 Use of Boolean algebra, theorems and minimization methods
 Number systems and data representation/ information processing by computers.
 Character representation; Numeric and non-numeric, Alphanumeric, EBCDIC, BCD and
ASCII
Course Objectives
On successful completion of this course, you should be able to:

 Discuss the computer as an electronic machine, its components, history and uses
 Differentiate between data, information and various data conversion in the computer
system.
 Discuss Computer hardware and software in a digital computer system
 Interconnect the various computer structures to form a useful and powerful machine to
process information at high speed
 Model logic signals such as Basic Logics Gates, using truth tables to produce an output from
various inputs (Combinational Systems).
 Present Truth Tables and Boolean algebra functions as simplification and minimization
processes
 Discuss simple Number Systems, Two’s Complements and Data Representations
 Explain how computers carry out data encoding/conversion processes
 Discuss how to design a simple CPU using digital logic analysis
 Discuss how different instructions are issued and carried out by the computer
Course Goal:
 To provide an introduction to computer engineering concepts, both
hardware and software, with emphasis placed on digital logic concepts.
Topics include binary number representations, Boolean algebra,
simplification methods for combinational circuits, introduction to sequential
circuits, and introduction to assembly language programming. This course
prepares students to take more advanced courses in any of the branches of
computer engineering.

PRE-REQUISITES: None

LABORATORY PROJECTS:
 Students get hands-on experience with designing digital circuits and
programming microprocessors/microcontrollers.
Learning Outcomes
At the end of the course the student should be able to:
• represent and manipulate information in binary form in a written and readable form;
• design, physically implement, and debug basic combinational and sequential logic circuits;
• write structural and data flow models of logic circuits in a hardware description language;
• implement designs represented in a register transfer language;
• discuss and explain the organisation and operation of a basic digital computer;
• describe the execution of machine language computer programmes in writing or with a flow chart;
• write elementary assembly language programmes and discuss their translation to machine language
programmes;
• use Boolean algebra or K-maps to simplify complex Boolean expressions;
• convert numbers between any two number systems, especially decimal, binary, octal and hex, and represent
signed numbers;
• design arithmetic circuits to perform addition and subtraction of signed numbers and detect overflow
conditions;
• implement functions using AND/OR gates, OR/AND gates, NORs only, NANDs only, multiplexers or decoders.
• design the basic flip flops using sequential logic;
• design, implement and test a simple circuit based on a specified word problem;
• programme simple microcontrollers in assembly language; and
• identify a local environment-related need, write and present group reports on hardware and software design
projects using cooperative learning approach (learning team work in problem solving and improvisation).
REQUIRED TEXTS:
1. Patt & Patel, Introduction to Computing Systems: From Bits & Gates to C & Beyond, McGraw-Hill,
2003, Second Edition
2. Digital Logic and Design, 3rd Edition, by Sam Ogunlere
3. Digital Computer Fundamentals, 6th Edition, by Thomas C. Bartee
4. Fundamentals of Digital Logic, by Stephen Brown & Zvonko Vranesic

Grading Criteria and Scale


Score      Grade
80 – 100     A
60 – 79      B
50 – 59      C
45 – 49      D
40 – 44      E
0 – 39       F
Course Structure
There are twelve modules in this course. They are as follows:
MODULE – 1: Introduction to Computer System I Week 1
1.1 Basic Computer Concepts
1.2 Primary Units of a Computer – Input, Processing and Output (IPO)

MODULE – 2: Introduction to Computer System II Week 2


2.1 Historical Developments of Computers
2.2 Classifications of Computers by Generations
2.3 Types of Computers

MODULE – 3: Introduction to Computer System III Week 2


3.1 Computer Systems Classification according to Size and Processing Power
3.2 Computer Configurations
Course Structure Cont’d
MODULE – 4: Introduction to Computer System IV Week 3
4.1 Computer Hardware Concepts
4.2 Computer Software Concepts (System and Application Software)
4.3 Different Types of Computer Programming Languages

MODULE – 5: Basic Computer Architecture and Organization Week 4


5.1 Introduction to the concept of computer as a hierarchical system/ Structure and Function
5.2 Concepts of Computer Architecture /Organization and the major levels of hierarchy
5.3 Description in terms of the collective function of computer cooperating components: BUS
system, Cache, Main Memory and processor.

MODULE – 6: Introduction to Basic Logic Gates, Circuits minimizations using Boolean algebra
and Truth Table Minimization theorem methods Weeks 5 & 6
6.1 Introductions to Digital Logic as the foundation to digital computer systems
6.2 Introductions to Logic Gates and Basic Logic Operations including the use of Truth Table as
simplification/ minimization process
6.3 Introductions to Combinational Logic Circuits including the use of Boolean algebra as s
implification/ minimization process
Course Structure Cont’d

MODULE – 7 Data Representation and Number system I Week 7


7.1 Data Representation and Number system
7.2 Memory Structures in a Computer
7.3 Number Systems (Decimal, Binary, Octal and Hexadecimal)

MODULE – 8 Data Representation and Number system II Weeks 7 & 8


8.1 Computer Arithmetic and the Number Systems
8.2 Negative Numbers Representation in Computer

MODULE – 9: Character Representation/ Information Processing, Numeric and Non-


Numeric Encoding Week 9
9.1 Introductions to numeric and non-numeric encoding
9.2 Types of Encoding: ASCII, BCD and EBCDIC
9.3 Parity, Error Checking and Correcting (ECC)
Course Structure Cont’d
MODULE – 10: Computer Systems Software I Week 9
10.1 Classification of Computer Software (System and Application software)
10.2 Different types of Computer Software

MODULE – 11: Computer Systems Software II Week 10


11.1 Ownership Rights and Delivery Methods of Computer Software
11.2 Installed Software versus Web-Based Software
11.3 Object Linking and Embedding (OLE)

MODULE – 12 Basic Microprocessor Structure and Function Weeks 11 & 12


12.1 Introductions to Processor Organizations and Family
12.2 Introductions to Register Organizations
12.3 Introductions to Instruction Execution Cycles, OPCODE and OPERAND

Revision and Final Exam Weeks 13 & 14


Assessment Criteria

This course marking scheme is shown in the table below:

• Class Attendance 5
• Quizzes & Tests 10
• Assignments 10
• Mid–Semester Examination 15
Continuous Assessment (Total) 40
• Final Semester Examination 60
Total 100
MODULE – 1
INTRODUCTION TO COMPUTER SYSTEM I

Learning Outcomes
At the end of this module, you should be able to:
• Identify the basic components of the computer system;
• Explain the primary units;
• Discuss the input units;
• Explain the processing units; and
• Describe the output units.
Introduction to Computer Engineering

WHAT IS COMPUTER ENGINEERING?

 Computer Engineering involves the design and development of


systems based on computers and complex digital logic devices.

 These systems find use in such diverse tasks as computation,


communication, entertainment, information processing, artificial
intelligence, and control.
1.1 Basic Computer concepts

What is a computer?
A computer is an electronic device that can accept data, store data and
manipulate the data to produce information or a result.

 What Is Computer Engineering?


Computer Engineering involves the design and development of systems based on
computers and complex digital logic devices. These systems find use in such
diverse tasks as computation, communication, entertainment, information
processing, artificial intelligence, and control.

 Difference between data & Information


It is essential to know that data is referred to as raw facts (yet to be manipulated)
while information is the processed data used in making decision.
1.2 Primary Units of a Computer
The three classical units of a computer tagged IPO as depicted in Fig.1.1(a)
are:
a) Input
b) Processing, and
c) Output

Figure 1.1(a): A digital computer block diagram showing the three basic units
Primary Units of a Computer Cont’d
The Major computer units can also be viewed as depicted in Figure 1.1b
CA = Central Arithmetical, CC = Central Controller

Figure 1.1(b): Block diagram structure of a digital computer


1.2.1 The Input Unit

• This is the unit that accepts data, instructions and programs into the
computer.
It is the avenue whereby users “talk” to or communicate with the computer
system.
 Examples of input devices are:
- Keyboard (basic input device for entering characters, numbers etc. into
the system)
- Scanner (turning hardcopy material like pictures into electronic or
softcopy formats)
- Mouse (a pointing device that helps in selecting objects on the screen)
- Touch screen, Light pen, Joystick
- Disk, and other computers through network connection
1.2.2 The Output Unit

• This is the unit that makes the information (processed data or the result)
available to users.

• Devices in this category may display information on the screen, send output
to other computers, display or print error messages, send requests or even
save information for use.
 Examples are:
- Monitor commonly called the screen (Cathode Ray Tube (CRT))
- Liquid Crystal Display (LCD),
- Printer (to get hardcopy)
- Plotter, etc ……
1.2.3 The Processing Unit

This is the unit where the processing of data is done, where data is
manipulated.
It is divided into three main parts namely:
a) The control unit/Processor Unit: Directs and coordinates the flow of
instructions and activities within the computer system.
b) The Arithmetic and Logic Unit (ALU):
AU - performs arithmetic operations like addition, subtraction,
multiplication,
LU - performs comparison operations resulting in true or false outcome.

c) The main storage /memory: Here, we have the internal memory referred to
as Read Only Memory (ROM) and the Random Access Memory (RAM)
The Processing Unit Cont’d
ROM (Read Only Memory):
• Responsible for the booting or starting up of the computer system.
• ROM chips are installed by the computer manufacturer
• Instructions cannot be altered by the user.

RAM (Random Access Memory – Main Memory):


• For execution or processing.
• The RAM is volatile, loses its contents on switching the system off.
• RAM data can be accessed in an equal amount of time regardless of where it is located.
• The size and access time have a great impact on the overall processing speed of a
computer system.

Auxiliary storage is a secondary memory that is non-volatile,


• Usually cheaper and larger in capacity than RAM
• Slower; often referred to as Disk Drive; e.g.: IDE, SATA, SSD, Flash Drive and
Optical Drive (DVD ROM)
The Processing Unit Cont’d

Processor – Memory Relationship


TUTOR MARKED ASSESSMENT (TMA)

1. The part of the processing unit of the computer system that coordinates the
flow of instructions and directs all activities is called the?

(a) A.L.U (b) C.P.U


(c) Control Unit (d) Disk Drive
MODULE – 2

INTRODUCTION TO COMPUTER SYSTEM II


Learning Outcomes
At the end of this module, you should be able to:
• Discuss the historical development of computer; and
• Classify computers according to their generations.
Historical Development of Computers

Computer pioneers were the earliest people whose ideas, contributions, and inventions helped
in the development of the computer system.
These great people are:

• Blaise Pascal, born in 1623, a French scientist and philosopher, invented the first adding
machine (calculating machine) in 1642.

• Charles Babbage, born in the year 1791, an English mathematician who was referred to as
the father of modern day computer, conceived the idea of making a machine to help in
astronomical calculations. He developed a machine called Analytical Engine in 1833.

• Ada Lovelace wrote a program for the Analytical Engine of Babbage. As a result, she was
known as the first lady programmer. A programming language called ADA a standard
language for most US Government Agencies was named after her
Historical Development of Computers Cont’d

• Hermann Hollerith, born in the year 1860, designed a system of recording data as holes on
punch cards. It became one of the basic input mechanisms for digital computers. The card
was called Hollerith card and was used by the US census bureau to quicken work on census
data.

• Howard Aiken of Harvard University, in the 1930s, started work on an electro-mechanical
machine called the Mark I.

• Eckert and Mauchly of the Moore School of Electrical Engineering of the University of
Pennsylvania developed the first electronic computer, called the Electronic Numerical
Integrator and Calculator (ENIAC), in 1946.

• John von Neumann, a mathematician at the Institute for Advanced Study, Princeton,
brought the idea of storing both instructions and data in the computer’s primary memory
(the Stored Program Concept). His idea gave birth to the Electronic Discrete Variable
Automatic Computer (EDVAC), the first stored-program computer design in the US, proposed in 1945.
Classification of Computers by Generations
The Computer System was invented in the 1940s. It has undergone various
developments over the years.

1st Generation computers


• These were the computers manufactured in the middle 1940s and early 1950s
• Using the vacuum tube based on Thomas Edison’s invention of the light bulb in
1878.
 Examples of these are EDVAC, LEO, ENIAC, UNIVAC 1, and MARK 1.
2nd Generation Computers
• Computers manufactured in the late 1950s and early 1960s made of Transistors
• Transistors are based on the transistor developed in 1947 at Bell Laboratories by William
Shockley, John Bardeen and Walter Brattain.
• The transistor is a solid-state device that consumes less electricity and allows more storage
capacity.
• This aided the production of smaller systems called minicomputers.
Examples are ATLAS, MARK II, IBM, and the IBM 7000 series.
Classification of Computers by Generations Cont’d

3rd Generation Computers


• Computers manufactured in the late 1960s and early 1970s.
• Made of integrated circuits (IC).
• Jack Kilby and Robert Noyce developed the Integrated Circuit (IC)
• Computers became more powerful, faster, more reliable and cheaper than the 1st and 2nd generations
• In 1968, Noyce and Gordon Moore co-founded “Intel”
Examples include ICL 1900 series, IBM series, and Honeywell 6000.

4th Generation Computers


• Computers in use today; were manufactured in the late 1970s and early 1980s.
• Made of complex integrated circuits known as “Large Scale Integration” (LSI) and Very Large
Scale Integration (VLSI).
• Have a greater number of microscopic circuits on chips.
• Era of software programs like graphics packages, database systems, word processors
• First microprocessor, the Intel 4004, which could run conventional computer programs with
multitasking capability (handling many tasks)
 Examples: Microcomputers, which are the smallest and most inexpensive computer systems.
Classification of Computers by Generations Cont’d

5th Generation Computers


• These are the computers that will use Artificial Intelligence (AI)

• Have the ability to perform actions that are characteristic of human intelligence, such
as reasoning, mimicking and learning.

Example: Speech and pattern recognition


TUTOR MARKED ASSESSMENT (TMA)
1. What is the meaning of the acronym ENIAC?
(a) Electronic Numerical Integrator and Calculator
(b) Electrical Numerical Integration and Calculation
(c) Electronic Number Integrator and Calculator

2. Who invented the first adding machine?


(a) Charles Babbage (b) John von Neumann
(c) Blaise Pascal (d) Ada Lovelace

3. The technology behind the first generation computer is


(a) Silicon Chips (b) Transistor
(c) Vacuum Tube (d) Integrated Circuit
MODULE – 3

INTRODUCTION TO COMPUTER SYSTEM III


3.1 Types of Computers
There are three basic types of computers with respect to how data are
represented:

1. Digital Computers
The word ‘digital’ as used means whole numbers (discrete).
• These computers process data in the form of discrete or separate values, that is 0, 1, 2, 3, etc., by
operating on them in steps.
• They cannot work with values at intermediate points such as 1½, ⅓, etc.
• Digital computers are more accurate and more flexible than analog computers because they
use the digital form of electrical signals (see the figure below):

Digital signal
Types of Computers Cont’d

2. Analog Computers
In contrast to digital devices, analog devices have continuous values.
• These computers process data in the form of variables, that is, quantities that change continuously
(continuous signals), as can be seen in Figure 1.3.
• They are like measuring instruments such as thermometers and voltmeters
• They are mainly used in scientific and industrial control applications.
• They are devices that can measure the numerically defined variables of an abstract system in
terms of some physical quantity.

 Examples are the speedometer and odometer of a car, and a pressure gauge.

Analog signal
Types of Computers Cont’d

3. Hybrid Computers
These types of computers process data both in digital and analog forms.

For example:
• Setting up (programming) a modern-day television involves both digital and analog signals.

• They are special purpose computers that have found many applications in control and
feedback processes e.g. robots (used in an industrial environment).
3.2 Computer systems classification according to size and processing power

Computer systems can be classified according to their size and processing power
as follows:
1. SMALL or MICRO COMPUTERS
• Central processing unit (CPU) is based on a microprocessor.
• Smallest and inexpensive computers
 Example: Micro or PCs (Personal Computers) e. g. Desktop, portables (laptops,
notebooks) and hand-held units.

2. MEDIUM-SIZE or MINI COMPUTERS


• Computer systems that fall between micro computers and mainframe computers.
• More expensive than the micro computers and are used by medium sized companies
• Have higher processing power than the micro computers.
 Examples are MIR 9300, DEL, HEWLETT PACKARD 3000, IBM System/38 and MU 400
Computer systems classification according to size and processing power Cont’d

3. MAINFRAME COMPUTERS
• These are large computer systems that are used by big organisations
• They are used for processing business transactions like payroll, salary, inventory and routine paperwork.
• They can operate 24 hours a day serving hundreds of users
 Examples: IBM 360/370 systems, NCR V-8800 systems.

4. SUPER COMPUTERS
• These are the fastest, largest and most expensive computer systems.
• They are used usually in scientific and research laboratories.
• Also used in sending astronauts into outer space and for weather forecasting
 Examples: CRAY X-MP, CRAY 2, VAX, ICL 2690, etc.

Note that mini, mainframe and super computers can be used as what is known as a SERVER
3.3 Computer Configurations

There are two basic components of a computer system: Hardware and Software.

• The Hardware are the various physical components of a computer system. Any
part that can be held or felt or touched is hardware (Tangible parts)

• The Software consists of the non-tangible elements. It includes instructions and


other commands that control the operation of the computer system.

See the next page for details


Computer Configurations Cont’d

COMPUTER CONFIGURATION

Hardware
• System Unit: CPU, Main Memory, Internal Secondary Memory, Other Components
• Peripherals: Input, Output, External Storage

Software
• System Software: Operating System, Translators, Utilities
• Application Software:
– Off-the-shelf software: Word processing (MS Word), Electronic Spreadsheet (Excel), Comm. SW (Internet Explorer), Acct. Pack
– User-defined software: Payroll, Student Reg., Budget
Computer Configurations Cont’d

The Software Concepts


• Software is the intangible part of a computer system.
• It is a set of instructions written by a computer expert or a programmer that represents the
logical steps the computer follows to solve a particular problem or do a specific task.

Generally, there are 2 main categories of software, namely:


– system software
– application software (made up of ‘off the shelf’ and In-house/user defined software)
Computer Configurations Cont’d

SYSTEM SOFTWARE
• System software is a program or collection of programs which links other software like the
application software with the system hardware.
• It acts as an intermediary

The major system software


Operating System (OS): a software designed by system developers to control and manage the
resources of a computer system.
E.g. Ms-DOS, PC-DOS, Windows, Unix, Linux, Ubuntu etc

Types of operating systems


Single User, Multi-User and Network.
TUTOR MARKED ASSESSMENT (TMA)
1. Digital computers are more ……………………… and ..…………………………. than analog computers
because they use the digital form of electricity signals.
(a) Costly and effective (b) Accurate and flexible
(c) Faster and affordable (d) None of these

2. What is the best example of software that is shared by multiple computers?


(a) A workbook, e-mailed to the members of a subcommittee
(b) A report edited simultaneously by several users on a network
(c) A presentation viewed by overhead projector at a symposium
(d) A computer game played by a student on a graphing calculator.

3. Select all input devices from the list below.


(a) Plotters, Printers, Mouse, Joystick
(b) Monitor, Scanner, Speakers, Keyboard
(c) Keyboard, Mouse, Scanner, Joystick
(d) Scanner, Speakers, Plotters, Printers
MODULE – 4

INTRODUCTION TO COMPUTER SYSTEM IV


Learning Outcomes
At the end of this module, you should be able to:
• Discuss computer hardware concepts;
• Explain computer software concepts; and
• Describe the types of computer programming languages.
4.1 Computer Hardware Concepts

The hardware consists of the system unit and peripherals


 System unit:
• The Microprocessor or the Central Processing Unit (CPU)
• Main memory (RAM)
• Internal secondary memory
• Other internal components held and made to function together by the motherboard.
Computer Hardware Concepts Cont’d

 Peripherals:

Peripheral devices are the hardware components of a computer system that are either attached
or connected externally to the system unit through a wired or wireless medium. The common
ones are:

• Input – Keyboard, Mouse, Scanner, Flash, etc.

• Output – Printer, Monitor, Flash, Plotter, etc.

 External storage devices:

• Flash disk, Hard disk, Magnetic tape, Optical Disk, Cassette tape
4.2 Computer Software Concepts
 Software is the intangible part of a computer system

2 main categories of software, namely:


• system software or operating software (OS)
– software designed by system developers to control and manage the resources of a
computer system.
– Examples: MS-DOS (Microsoft Disk Operating System), PC-DOS (Personal Computer Disk
Operating System), Windows, Linux, Novell NetWare, and Unix.

• application software (made up of ‘off the shelf’ and In-house/user defined software)
Types of operating systems

Single User Operating System

Multi-User Operating System

Network Operating System


Translator
A translator is the intermediary program responsible for the conversion or translation of a program
written in a language other than machine language (the source code) into its machine
language version.

Source Code → Translator → Machine/Object Code

Translation process from high level to low level (machine level)

Translator Cont’d

There are three categories of translators namely:


– Assembler (a program that translates a program written in assembly language
into machine code)

– Interpreter (a program that translates a high-level language program into machine code
on a line-by-line basis; if one line is not settled, it does not go on to the next)

– Compiler (a program that translates a high-level program into its machine
equivalent in batches).
Utilities Program

These are programs responsible for routine operations within the computer system.

• Backup utilities: These are programs designed to backup or keep a duplicate copy of the
contents of a hard disk drive on CD, DVD, flash drive, tape etc.
• Data Compression programs: These are programs used for compressing large files, thereby
freeing up disk space.
• Diagnostic software: These are programs designed to help in finding and correcting
problems on the computer.
• Disk defragmenter: A program that re-organises files that were initially stored non-contiguously
on the disk into contiguous blocks, to free more space for incoming programs and to improve
access time.

• Vaccine programs: These are programs designed to prevent viruses from entering the
computer system. They are popularly called antivirus software.
Programming Languages

These are languages used to write instructions that the computer will follow, work
on or execute. Basically, we have the:

(i) Machine Language (0 & 1)


(ii) Assembly Language or Low Level Language (LLL) – has mnemonic codes or
symbols
(iii) High Level Language (HLL) – English-like languages, e.g. Java, C++, C, Python
TUTOR MARKED ASSESSMENT (TMA)

1. Digital computers are more ……………………… and ..…………………………. than analog


computers because they use the digital form of electricity signals.
(a) Costly and effective (b) Accurate and flexible
(c) Faster and affordable (d) None of these

2. What is the best example of software that is shared by multiple computers?


(a) A workbook, e-mailed to the members of a subcommittee
(b) A report edited simultaneously by several users on a network
(c) A presentation viewed by overhead projector at a symposium
(d) A computer game played by a student on a graphing calculator.

3. Select all input devices from the list below.


(a) Plotters, Printers, Mouse, Joystick (b) Monitor, Scanner, Speakers, Keyboard
(c) Keyboard, Mouse, Scanner, Joystick (d) Scanner, Speakers, Plotters, Printers

4. Why is the binary system useful in electronic data processing?


MODULE – 5

BASIC COMPUTER ARCHITECTURE AND ORGANIZATION


Learning Outcomes
By the end of this module, you should be able to:
• Define the concept of computer hierarchical system
• Differentiate between Computer Structure and Function
• Explain Concepts of Computer Architecture /Organization and the major
levels of hierarchy.
• Describe the collective function of computer cooperating components
5.1 Introduction to the concept of computer as a hierarchical system

A computer system consists of an interrelated set of components characterized


in terms of:

• Structure - the way in which components are interconnected, and

• Function - the operation of the individual components.

Four basic functions a computer can perform are:


- Data processing
- Data storage
- Data movement, and
- Control

See Diagram of functional view of computer in the next slide


The four main structural components of a computer are:
- Central processing unit (CPU): Controls the operation of the computer and performs its
data processing functions; often simply referred to as processor.

-Main memory: Stores data.

-I/O: Moves data between the computer and its external environment.

- System interconnection: Some mechanism that provides for communication among CPU,
main memory, and I/O. A common example is the system bus, consisting of a number of
conducting wires to which all the other components are attached.

As depicted in the next slide diagram


Computer Architecture / Organization
Computer architecture - is the science of integrating computer organization components such
as gates, memory cells, and interconnections to achieve a level of functionality and
performance.

It may be viewed as:


The style of physical construction and the layout of the different parts of the computer.

A computer’s organization is hierarchical. Each major component can be further described by


decomposing it into its major subcomponents and describing their structure and function.

Computer system major components are:


- Processor
- Memory and
- Input /Output (I/O).
Processor or CPU- Major components are control unit, registers, ALU, and instruction execution
unit.

• Control Unit: Provides control signals for the operation and coordination of all processor
components

• ALU: Arithmetic logic unit -This is the part of the CPU that executes individual instructions
involving data (operands).

• Register: A high speed memory location in the CPU which holds a fixed amount of data.
Registers of most current systems hold 64 bits or 8 bytes of data.

• PC: Program counter - Also called the instruction pointer, is a register which holds the
memory address of the next instruction to be executed.

• IR: Instruction register - A register which holds the current instruction being executed.

• Acc: Accumulator - A register designated to hold the result of an operation performed by


the ALU
The Block Diagram of a Basic Computer Structure
Computer Architecture / Organization Cont’d

Architecture extends upward into computer software because a processor’s architecture must
cooperate with the operating system and system software as depicted in the Table below

High-level programming language Level 5


Assembly language Level 4
Operating system Level 3
Machine instructions Level 2
Micro architecture Level 1
Digital logic Level 0
Computer viewed as virtual machine
Layer 7 – Application Layer (SOFTWARE): Computer output; the interface between human and computer.
Layer 6 – Higher Order Software Layer (SOFTWARE): Human direct interaction; communication with the
computer is made in a form that is closer to human language.
Layer 5 – Operating System Layer (SOFTWARE): Software controller; it controls the way in which other
software relate to the hardware.
Layer 4 – Machine Layer (HARDWARE): Machine language; this is the layer that accepts only low-level
programming languages such as machine language.
Layer 3 – Microprogrammed Layer (HARDWARE): Machine language interpreter; it takes the machine
language from the machine layer and interprets it or puts it in a form that logic circuits can handle.
Layer 2 – Digital Logic Layer (HARDWARE): Logic elements – gates; most computer operations (store,
manipulate, data transfer, etc.) are performed on this layer, using logic circuits built with gates and
associated storage devices like flip-flops (counters, registers, ALU).
Layer 1 – Physical Device Layer (HARDWARE): Electrical & electronic layer; all computers are built from
simple electrical and electronic components such as transistors, diodes, resistors, capacitors, etc.
Collective Function of Computer Cooperating Components

BUS System
A collection of parallel electrical conductors called ‘lines’ onto which a number of
components or devices may be connected; it forms a common pathway or channel between
multiple devices.

E. g.
A 16-bit bus transfers 2 bytes at a time over 16 wires, and a 32-bit bus uses 32 wires, and so on.

Three types of Bus Systems:


- Address bus
- Data bus
- Control bus
Collective Function of Computer Cooperating Components Cont’d
Address bus:
- Is used to transport the source and destination addresses of information transmitted
on the data bus.
- Is also used to indicate memory locations generated by the microprocessor, bus
master, or DMA.

Data bus:
- Is the internal pathway across which data is transferred to and from the processor or
to and from memory. (see diagram next slide)
- The width and speed of the data bus directly affects performance and significantly
influence system throughput.
- Data bus width indicates how many bytes the bus can carry during each transfer.

Control bus:
- Is technically a collection of control signals.
- Is used to identify the type of bus cycle and indicate when the cycle is complete.
Collective Function of Computer Cooperating Components Cont’d

Processor – Memory Relationships


Method of Improving Data Transfer Through Bus System

See explanation of operation in the next slide


Method of Improving Data Transfer Through Bus System Cont’d

The time taken to move data from the Main Memory to the CPU = tm .

The time taken to move data from the Main Memory to the Cache = tmc .

The time taken to move data from the Cache Memory to the CPU = tcc .

If the data is first being moved from Main Memory to the CPU through the Cache, then

tm = tmc + tcc + tdc , where tdc is the delay time introduced by the Cache Memory which may be
negligible.

However, if the data is not returned to the Main Memory immediately after use, but is instead kept in the
Cache Memory for further use, then the data transfer time for each subsequent access is only tcc. Therefore, the
time saved on each such access is equivalent to tmc, the time that would otherwise be taken to move the data
from the Main Memory to the Cache again.
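
As a worked illustration of the timing relation above, the sketch below plugs hypothetical figures into tm = tmc + tcc + tdc; the numeric values are assumptions for illustration only, not values from the course text.

```python
# Hypothetical timings in nanoseconds -- illustrative values only.
t_mc = 60   # Main Memory -> Cache transfer time (tmc)
t_cc = 5    # Cache -> CPU transfer time (tcc)
t_dc = 0    # delay introduced by the Cache (tdc), assumed negligible here

# First access: data must come from Main Memory through the Cache.
t_first = t_mc + t_cc + t_dc      # tm = tmc + tcc + tdc = 65 ns

# Subsequent accesses: the data is already in the Cache, so only tcc is paid.
t_repeat = t_cc                   # 5 ns

# Time saved on each repeated access, roughly tmc as stated above.
saved = t_first - t_repeat        # 60 ns
print(f"first access: {t_first} ns, repeated access: {t_repeat} ns, saved: {saved} ns")
```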
Computer Performance Strategies

Three main strategies used to increase computer performance are:


• Increasing cache memory capacity between the processor and main memory- As chip
density increases, more of the cache memory has been incorporated on the chip, enabling
faster cache access

• By placing multiple processors on the same chip (also known as multiple cores, or Dual
core), with a large shared cache. This provides the potential to increase performance
without increasing the clock rate.

• Bus Systems – increase the bus width (from 16-bit to 32-bit to 64-bit to 128-bit, etc.)

 The performance of computers is measured by their speed and capacity to handle large
volumes of data, in millions or billions of instructions per second (MIPS or BIPS).
Three ways of placing components of computer in relation to BUS System

(a) Single-Chip Computers: (Embedded Computers – BUS width = 0) –e.g. Mobile phones

(b) Single-Board Computers: (Micro & Mini Computers – BUS width = 8-bit, 16 or 32-bit, 64bit)

(c) Multiple-Board, Bus-Based microcomputers: (Mini & Mainframe Computers – BUS width =
32-bit or 64-bit)
END OF MODULE ASSESSMENT (EMA)

1) What, in general terms, is the distinction between computer organization and computer architecture?
2) What, in general terms, is the distinction between computer structure and computer function?
3) What are the four main functions of a computer?
4) List and briefly define the main structural components of a computer.
5) List and briefly define the main structural components of a processor.
6) The higher the memory size the slower the computer, because the speed of data and instruction
retrieval is reduced. Explain two ways by which the speed of data and instruction retrieval can be
made faster.
7) Using a well-labelled diagram showing all possible directions, draw the functional
components of a general purpose digital computer.
8) Name the three classical units of a computer and describe their main functions.
9) Differentiate between a single-chip computer and multiple-board computer?
10) Describe the terms: Computer Architecture and Computer Organization and list their differences
MODULE – 6

INTRODUCTION TO BASIC LOGIC GATES,


CIRCUITS MINIMIZATIONS USING BOOLEAN ALGEBRA AND
TRUTH TABLE MINIMIZATION THEOREM METHODS
Learning Outcomes
By the end of this module, you should be able to:
• Describe digital Logic as the foundation to digital computers
• Distinguish between Logic Gates and Basic Logic Operations
• Distinguish between Combinational and Sequential Logic
Circuits.
6.1 Introduction to digital logic as the foundation to digital computers

• Digital logic is the foundation for digital computers.

• The word digital as used means whole numbers (discrete)

For example:
The information processed by a hand calculator is in the form of discrete or digital signals;
each electronic component is either ON / OFF or 1 / 0

• A major virtue of digital electronic circuits is the ease and speed with which digital
signals can be processed
Why Use Digital Circuits?
• Digital electronics and circuits have become very popular because of some distinct advantages
they have over analog systems
• Digital systems can be fabricated into integrated circuit (IC) chips. These ICs can be used to form
digital circuits with few external components.
• Information can be stored for short periods or indefinitely.
• Data can be used for precise calculations.
• Digital systems can be designed more easily using suitable logical families.
• Digital systems can be programmed and they show a certain manner of intelligence.

Though most real-world events are analog in nature and analog processing is usually simple,
digital circuits are appearing in more and more products, primarily because of the
availability of low-cost, reliable digital ICs, accuracy, added stability, computer compatibility,
ease of use, and simplicity of design.
Numbers Used In Digital Devices

• Digital electronic devices do not use the familiar or conventional decimal system
of numbers, but instead use binary number system which consists of 0s and 1s

• A digital circuit therefore, is one in which only two logical values are present.
Typically, a signal between 0 and 1 volt represents one value (e.g., binary 0) and a
signal between 2 and 5 volts represents the other value (e.g., binary 1).

• Voltages outside these two ranges are not permitted.


Logic Gates

• The fundamental building block of all digital logic circuits is the logic gate. Logical
functions are implemented by the interconnection of gates.

• Logic circuits are used to build computer hardware as well as other digital hardware
products.

• These are tiny electronic circuits that consume very little power and perform operations
dependably, rapidly (speedily), and efficiently. They are often referred to as ICs (Integrated
Circuits).
Logic Gates

What is a Logic Gate?

• This is a device that controls the flow of information usually in the form of pulses.

• It has one or more inputs and produces an output.


• Its output is a function of the current input values only; it has no memory.
• It is also called a combinational circuit.
Logic Gates Cont’d

Three basic elementary logic operations are AND, OR and NOT; their gate symbols and
truth tables are shown on the accompanying slides.
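
As a minimal sketch (an illustration added here, not part of the original slides), the three basic operations can be modelled in Python and their truth tables printed:

```python
# Minimal models of the three basic logic operations (1 = True, 0 = False).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

print(" A B | AND OR | NOT A")
for a in (0, 1):
    for b in (0, 1):
        print(f" {a} {b} |  {AND(a, b)}   {OR(a, b)} |   {NOT(a)}")
```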

Other gates can be derived from combinations of the three basic gates.
Below are the NOR and NAND gate equivalents of the basic operational gates.

1. NOR GATE
Logic Gates Cont’d

2. NAND GATE
(1) XOR Gate Configuration
• Another important gate that is a product of combinations of OR, AND and NOT gates
is known as an Exclusive OR (XOR) gate.

F = A.B' + A'.B (the output is 1 only when the two inputs differ)
(2) XNOR Gate Configuration
• Known as Equality Comparator
Logic Gates Cont’d
Summary of all the Logic Gates
The NAND gate as a Universal Gate
• The NAND gate is known as the universal gate because it can be used to make other types
of gates; hence NAND gates are more widely available than many other gates.
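
To illustrate this universality claim, here is a small sketch (an assumed Python illustration, not from the slides) that builds NOT, AND and OR purely from a NAND function:

```python
def NAND(a, b):
    return 0 if (a and b) else 1

# NOT, AND and OR realised using NAND gates only.
def NOT(a):     return NAND(a, a)            # NAND with both inputs tied together
def AND(a, b):  return NOT(NAND(a, b))       # NAND followed by NOT
def OR(a, b):   return NAND(NOT(a), NOT(b))  # De Morgan: A + B = NOT(NOT A . NOT B)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> NOT(a):", NOT(a), " AND:", AND(a, b), " OR:", OR(a, b))
```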
6.3 Introduction to Digital Logic Simplification/Minimization Techniques

Three methods can be used to optimize logic networks:


Introduction to Digital Logic Simplification/Minimization Techniques Cont’d

1) Truth Table: Can be used to minimize Boolean expressions. Its disadvantage is that the
ultimate minimized form is not always available from the table directly; what is available is
the Sum of Products (SOP) or Product of Sums (POS). It also hides circuit information.

2) Boolean algebra: is used to minimize algebraically using Boolean Laws derived from
Boole’s Theorems. Used to simplify complex logic operations.

3) Karnaugh Map (K-Map): is a two dimensional graphical array used for minimizing
VERY complex logic operations in a simple format. It has the same number of cells as the
Truth Table.
Introduction to Digital Logic Simplification/Minimization Techniques Cont’d

Why Boolean algebra? And what are the benefits

• Both schematics and truth tables take too much space to describe the operation of complex circuits
with numerous inputs.

• The truth table "hides" circuit information.

• The schematic diagram is difficult to use when trying to determine output values for each input
combination.

• Boolean expressions can dramatically simplify a logic circuit. A simpler expression that produces the
same output can be realized with fewer logic gates.

• A lower gate count results in cheaper circuitry, smaller circuit boards, and lower power
consumption.

• If any software uses binary logic, the logic can be represented with Boolean expressions. Applying
the rules of simplification will make the software run faster or allow it to use less memory.
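
As an illustration of the point that a simpler expression drives the same output, the sketch below (an assumption added here, not from the slides) exhaustively checks that F = A.B + A.B' reduces to F = A:

```python
# Exhaustively verify that A.B + A.B' == A for every input combination,
# i.e. the simplified expression gives the same output with fewer gates.
for A in (0, 1):
    for B in (0, 1):
        original   = (A and B) or (A and (1 - B))   # A.B + A.B'
        simplified = A                              # after applying X.Y + X.Y' = X
        assert original == simplified
print("A.B + A.B' is equivalent to A for all inputs")
```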
Boolean Algebra Rules
Introduction to Digital Logic Simplification/Minimization Techniques Cont’d

Karnaugh Map (K-Map)

• Grid-like representation of a truth table

• An Alternative to a truth table for representing an expression

• It provides a systematic and graphical way of performing logic operations
THANK YOU
MODULE – 7

DATA REPRESENTATION / INFORMATION PROCESSING


AND NUMBER SYSTEMS I
Learning Outcomes

By the end of this module, you should be able to:


• Describe how data is being represented in a computer system
• Explain conversion of a number system to another number system in a computer
• Explain Number Systems (Decimal, Binary, Octal and Hexadecimal)
Data Representation and Information Processing

• Data representation refers to the methods used internally to represent data stored in a
computer.

• Computers store lots of different types of information such as:


- Numbers (Decimal, Binary, Octal & Hexadecimal)
- Text
- Graphics of many varieties (static, video, animation)
- Sound

• Computers use numeric codes in a simple format of 0s and 1s to represent all the
information they store.
Memory Structures in a Computer

In most modern computer systems, individual storage elements are organized into different
groups which are:

Bits - Memory consists of bits (0 or 1). A single bit can represent two pieces of information.
Nibble - Collection of 4 related bits
Bytes (8 bits) - A byte can represent 256 (= 2⁸) pieces of information.
Word (16 bits or 2 bytes) - A word is the largest number of bits the computer can handle in a
single operation.

Characters – symbols (digits, alphabets, spaces and other special symbols found on a
standard keyboard) which combine to make up words. A character is usually represented by a
byte or a multiple of bytes (256 possible values per byte)
Memory Structures in a Computer

Text - Text can be represented easily by assigning a unique numeric value for each
symbol used in the text.

e.g. the widely used American Standard Code for Information Interchange
(ASCII) defines 128 different symbols (all the characters found on a standard keyboard), plus
another 128 hidden symbols in extended ASCII.

Graphics - A graphic image can be represented by a list of pixels. The pixels are organized
into many rows on the computer screen.
Summary of bit, nibble, byte, word and double word

Examples of each type of binary number.


Bit: 1₂
Nibble: 1010₂
Byte: 10100101₂
Word: 1010010111110000₂
Double Word: 10100101111100001100111011101101₂

TUTOR MARKED ASSESSMENT (TMA)


1. A 16-bit sequence is a what?
(a) byte (b) word (c) bit (d) nibble

2. A byte can represent what pieces of information?


(a) 256 (b) 258 (c) 128 (d) 512
NUMBER SYSTEMS

1. Decimal Number System (B10) …………………….. 0 – 9

Example: the decimal number 754₁₀
NUMBER SYSTEMS Cont’d
2 Binary Number System (B2)…………………..1s or 0s
• Binary number system representation is a more efficient internal way of representing
numbers. This is the number system the computer understands (1s and 0s)

Example: 11101 in the binary number system has the decimal value 1×2⁴ + 1×2³ + 1×2² + 0×2¹ + 1×2⁰ = 29₁₀.
NUMBER SYSTEMS Cont’d

• Conversion of Decimal Integer to Binary


To convert a decimal integer to binary, we repeatedly divide by 2; the remainders
give the binary digits, starting with the least significant first.
Example: 75 converted to binary will give:
75 ÷ 2 = 37 + 1 remainder
37 ÷ 2 = 18 + 1 remainder
18 ÷ 2 = 9 + 0 remainder
9 ÷ 2 = 4 + 1 remainder
4 ÷ 2 = 2 + 0 remainder
2 ÷ 2 = 1 + 0 remainder
1 ÷ 2 = 0 + 1 remainder

Therefore, (75)10 = (1001011)2
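
The repeated-division procedure above can be sketched in Python (an illustrative sketch added here, not part of the slides):

```python
def decimal_to_binary(n):
    """Convert a non-negative decimal integer to a binary string by repeated division by 2."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, remainder = divmod(n, 2)   # quotient and remainder, as in the worked example
        bits.append(str(remainder))   # remainders give the digits, least significant first
    return "".join(reversed(bits))

print(decimal_to_binary(75))   # -> 1001011, matching (75)10 = (1001011)2
```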


NUMBER SYSTEMS Cont’d

• Decimal Fraction to Binary

A decimal fraction can be converted into binary system by repeatedly doubling it


(multiplying by 2), subtracting and noting any whole number part that occurs.

Example: (0.6875)10 to binary gives:


0.6875 x 2 = (1).3750
0.3750 x 2 = (0).750
.750 x 2 = (1).50
.50 x 2 = (1).0
Therefore, (0.6875)10 = (0.1011)2
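
The repeated-doubling procedure can likewise be sketched (illustrative only, not part of the slides); the bit limit is an assumption to keep the output short:

```python
def fraction_to_binary(frac, max_bits=8):
    """Convert a decimal fraction (0 <= frac < 1) to binary by repeated multiplication by 2."""
    bits = []
    while frac > 0 and len(bits) < max_bits:
        frac *= 2
        whole = int(frac)        # the whole-number part becomes the next binary digit
        bits.append(str(whole))
        frac -= whole            # keep only the fractional part for the next step
    return "0." + "".join(bits)

print(fraction_to_binary(0.6875))   # -> 0.1011, matching (0.6875)10 = (0.1011)2
```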
NUMBER SYSTEMS Cont’d

3. Octal Number System (B8)


• The octal number system uses 8 values to represent numbers. The values are 0, 1, 2, 3, 4,
5, 6, 7 with 0 having the least value and 7 having the greatest value.

Conversion between Octal and Binary


The digits that belong to the octal number system can be easily converted to the binary
system to give the table below:
Octal Number System (B8) Cont’d

It follows that any octal number can be converted to binary by using the above table.

Example: (65)8 = 110 101


Octal: 6 5
Binary: 110 101
Therefore, (65)8 = (110 101)2

Similarly, to convert a binary number to octal, we group the binary digits into units of 3
(starting from the right).

Example: (110 001 111 101)2 = (6175)8


Therefore, (110001111101)2 = (6175)8
Octal Number System (B8) Cont’d
• Conversion from Decimal Integer to Octal
Repeatedly divide by 8, the remainders give the octal digit

• Example:
(75)10 converted to octal will give:
75 ÷ 8 = 9 + 3 Remainder
9÷8=1+1 Remainder
1÷8=0+1 Remainder
(75)10 = (113)8

• Octal Integer Number to Decimal


Using the positional value of each octal digit, we can easily convert octal integer numbers to
decimal.
Example: (171)8 = 1 x 82 + 7 x 81 + 1 x 80
= 64 + 56 + 1
(171)8 = (121)10
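
The positional-value method shown above works for any base; the helper below is an assumed illustrative sketch (not from the slides) that handles binary, octal and hexadecimal digit strings:

```python
DIGITS = "0123456789ABCDEF"

def to_decimal(number, base):
    """Convert a digit string in the given base to its decimal value using positional weights."""
    value = 0
    for digit in number.upper():
        value = value * base + DIGITS.index(digit)   # same as summing digit * base**position
    return value

print(to_decimal("171", 8))      # -> 121, matching (171)8 = (121)10
print(to_decimal("1001011", 2))  # -> 75
```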
4. Hexadecimal Number System (B16)

• A big problem with the binary system is verbosity. When dealing with large values, binary
numbers quickly become too unwieldy. The hexadecimal (base 16) numbering system
solves this problem.

• The hexadecimal number system is based on the binary system using a nibble or 4-bit
boundary.

• It includes 0 to 9 and the letters A, B, C, D, E and F, giving rise to the following table:
Summary of Number systems
B10 (Decimal) B2 (Binary) B8 (octal) B16 (Hexadecimal)
0 0000 00 0
1 0001 01 1
2 0010 02 2
3 0011 03 3
4 0100 04 4
5 0101 05 5
6 0110 06 6
7 0111 07 7
8 1000 10 8
9 1001 11 9
10 1010 12 A
11 1011 13 B
12 1100 14 C
13 1101 15 D
14 1110 16 E
15 1111 17 F
REMARKS: 3 B2 digits = 1 B8 digit and 4 B2 digits = 1 B16 digit, i.e. one digit of an octal number is
equivalent to three digits of a binary number, and one digit of a hex number is equivalent to
four digits of a binary number.
Hexadecimal Number System (B16) Cont’d

• To convert a hexadecimal number into a binary number, simply replace each hex digit with its
4-bit binary group, beginning with the least significant digit
Example: (ABCD)16 = (1010 1011 1100 1101)2
• To Convert Binary to Hex
Break the binary number into four bit sections from least significant bit to the most significant bit.
Convert the four bit binary number to Hex equivalent.

Example: 1011 0110 1110 1100


B 6 E C

• To Convert Hex to Binary


Convert the hex into its 4-bit binary equivalent. Then combine the 4-bit section by removing the spaces.

Example: AFB4 = 1010111110110100


Hexadecimal Number System (B16) Cont’d

• To Convert Hex to Decimal


To convert from Hex to Decimal, we multiply the value in each position by its hex weight
and add each value.

Example
AFB2 = A x 163 + F x 162 + B x 161 + 2 x 160
= (10 x 4096) + 15 x 256 + (11 x 16) + (2 x 1) = (44978)10
Hexadecimal Number System (B16) Cont’d

• Decimal to Hex Conversion


To convert from decimal to Hex, we use repeated division by 16.

Example:
(44978)10 to Hex:

Therefore, (44978)10 = AFB2
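
Python's built-in conversion functions can be used to cross-check results such as the one above (an illustrative check added here, not part of the slides):

```python
# Cross-check the worked hexadecimal example using Python's built-ins.
print(int("AFB2", 16))                     # -> 44978  (hex to decimal)
print(format(44978, "X"))                  # -> AFB2   (decimal to hex)
print(format(75, "b"), format(75, "o"))    # -> 1001011 113  (decimal to binary and octal)
```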


THANK YOU
MODULE – 8

DATA REPRESENTATION / INFORMATION PROCESSING /NUMBER SYSTEMS II


Learning Outcomes

By the end of this module, you should be able to:


• Explain Computer Arithmetic and the Number Systems
• Describe Negative Numbers Representation in Computer.
Binary Arithmetic
1. Addition
The rules for binary addition are:
0+0=0
0+1=1
1+0=1
1 + 1 = 0 and carry 1 to the next more significant bit

Example:
00011010
+ 01001010
= 01100100
2. Subtraction
The rules for binary subtraction
0–0=0
0 – 1 = 1 and borrow 1 from the next more significant bit
1–0=1
1–1=0
Example:
  00100101
− 00010001
= 00010100
3. Rules of Binary Multiplication
0x0=0
0x1=0
1x0=0
1x1=1

Binary multiplication is the same as repeated binary addition; that is, by adding the
multiplicand to itself the number of times indicated by the multiplier.

Example:

00001000 x 00000011
  00001000 (8₁₀)
+ 00001000 (8₁₀)
+ 00001000 (8₁₀)
= 00011000 (24₁₀)
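
The three worked examples above can be checked with a short sketch (an illustrative addition, using Python's integer conversion rather than bit-by-bit rules):

```python
# Verify the worked binary arithmetic examples by converting to integers and back.
a, b = int("00011010", 2), int("01001010", 2)      # 26 + 74
print(format(a + b, "08b"))                        # -> 01100100 (100)

c, d = int("00100101", 2), int("00010001", 2)      # 37 - 17
print(format(c - d, "08b"))                        # -> 00010100 (20)

e, f = int("00001000", 2), int("00000011", 2)      # 8 x 3
print(format(e * f, "08b"))                        # -> 00011000 (24)
```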
8.2 Negative Numbers Representation

Three types:

1. Sign and Magnitude
• The sign bit is 0 for positive and 1 for negative.
• Examples: +5 = 0101, −5 = 1101; +3 = 0011, −3 = 1011; +7 = 0111, −7 = 1111
• Not well suited for use in computers
Negative Numbers Representation Cont’d

2. One’s Complement (Radix-1 Complement)

Change all bits to their complement (invert every bit).

For example, using a byte, the one’s complement form of 00101011 (43) is 11010100 (-43).

Not very suitable for computers


Negative Numbers Representation Cont’d

3. Two’s Complement (Radix Complement)

• A simple way of finding the 2’s complement of a number is to add 1 to its 1’s complement.

• Another way is by examining all the bits from right (LSB) to left (MSB) and
complementing all the bits after the first ‘1’ is encountered.

• The advantage of 2’s complement is that it allows us to perform the operation of


subtraction by actually performing addition in signed values
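
A minimal sketch of forming the 2's complement by adding 1 to the 1's complement, using an assumed 8-bit word size (illustrative only, not from the slides):

```python
def twos_complement(value, bits=8):
    """Return the two's-complement bit pattern of -value for the given word size."""
    ones = (~value) & ((1 << bits) - 1)                          # 1's complement within the word
    return format((ones + 1) & ((1 << bits) - 1), f"0{bits}b")   # add 1 to get the 2's complement

print(twos_complement(43))                                       # -> 11010101, the 8-bit pattern for -43
# Subtraction by addition: 43 + (-43) wraps around to 00000000 in 8 bits.
print(format((43 + int(twos_complement(43), 2)) & 0xFF, "08b"))
```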
THANK YOU
MODULE – 9

CHARACTER REPRESENTATION/ INFORMATION PROCESSING,


NUMERIC AND NON-NUMERIC ENCODING
Learning Outcomes
By the end of this module, you should be able to:
• Distinguish between numeric and Non-numeric Encoding
• Identify the Types of Encoding: ASCII, BCD and EBCDIC
• Explain Parity, Error Checking and Correcting (ECC)
9.1 Introduction to Numeric and Non-numeric Encoding

Computers as two-state machines.

• Communication between Man & Computer/Computer & Man


Codes & Coding Format
Types of Encoding

The three methods of encoding are:

1. American Standard Code for Information Interchange (ASCII)

2. Binary Coded Decimal (BCD)

3. Extended Binary Coded Decimal Interchange Code (EBCDIC)


Types of Encoding
1. American Standard Code for Information Interchange (ASCII) coding system

• 2⁷ = 128 characters (128 visible characters + 128 hidden characters = 256 pieces of information)
• 2⁷ + 2⁷ = 2⁸ = 256 pieces of information

2. Binary Coded Decimal (BCD)


• Numbers in nibbles. IBM idea. No longer in use. Very Expensive – needs more
hardware

3. EBCDIC is an acronym for Extended Binary Coded Decimal Interchange Code


• 2⁸ = 256 characters can be represented.
• Combination of both visible and hidden characters or symbols in one keyboard
• IBM’s idea for its minicomputers and mainframe computers
Parity, Error Checking and Correcting (ECC)

1. Parity Checking

2. Standard ECC Memory

3. Soft Errors and Software error

4. Hard errors or Hardware error
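
As a sketch of item 1 above (parity checking), assuming even parity over a 7-bit data word with one parity bit appended; this is an added illustration, not part of the slides:

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s in the word is even."""
    parity = sum(bits) % 2
    return bits + [parity]

def parity_ok(word):
    """Return True if the received word still has an even number of 1s."""
    return sum(word) % 2 == 0

data = [1, 0, 0, 0, 0, 0, 1]          # 7 data bits (ASCII 'A' = 1000001)
word = add_even_parity(data)          # -> [1, 0, 0, 0, 0, 0, 1, 0]
print(parity_ok(word))                # True: transmitted correctly

word[3] ^= 1                          # flip one bit to simulate a transmission error
print(parity_ok(word))                # False: the single-bit error is detected
```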


THANK YOU
MODULE – 10

COMPUTER SYSTEMS SOFTWARE I


Learning Outcomes

• At the end of this module, you should be able to:


• Explain the Classification of Computer Software (System
and Application software);
• Describe the Different types of Computer Software
Classification of Computer Software
System and Application Software
.

 Physical parts of the computer that we can easily see and touch are called
computer hardware-the tangible part

 Computer parts that cannot be seen and touch are called computer software –
the intangible parts
Types of Computer Software

Systems software
• This includes operating systems (OS), Translator and utilities.

• The OS goes into the process of booting the system.


(Diagram: relationship between the Computer User, the Application software and the Operating System)

Types of Computer Software

Categories of Operation System


 Single tasking – E.g. MS-DOS

 Multitasking – E.g. UNIX, Windows and Linux

Operating system grouping according to the number of users

 Single User – E.g. MS-DOS, Windows and Linux

 Multiuser – E.g. UNIX, Windows Network, Linux Network, and End computing
Architectural Layout of the operations of an OS
Types of Computer Software
 Utility Software.
Utility software performs the basic operations necessary for the fundamental
performance of the computer system such as creating, copying, saving, deleting, merging
and sorting files.

 Language Translators. Language translators convert programmer-written instructions into


machine-language instructions (object code). Types of language translators are:
i. Assemblers
ii. Interpreters
iii. Compilers

 Assemblers - convert a program written in assembly language into machine language


 Interpreters - translate and execute one line at a time
 Compilers - convert a program written in any of the high-level languages into its equivalent
machine language.
Applications Software

Two types of software products

i. Generic products.
ii. Customized software
• Word Processing Software
• Desktop Publishing Software
• Spreadsheet Software
• Database software
• Presentation software
• Internet Browsers
• Graphics Programs (pixel-based)
• Graphics Programs (vector-based)
• Communications software
THANK YOU
MODULE – 11

COMPUTER SYSTEMS SOFTWARE II


Learning Outcomes

At the end of this module, you should be able to:


• Describe Ownership Rights and Delivery Methods of Computer Software
• Explain Installed Software versus Web-Based Software
• Discuss Object Linking and Embedding (OLE)
Ownership Rights and Delivery Methods of Computer Software

 Commercial Software: Installation on a number of computers is specified by the software
vendor/producer. The user only buys the license to use it.

 Shareware: May be free of charge or the software company may charge a nominal fee. Users
can download these kinds of software from the Internet. Example

 Freeware: Software that are given away for free by the vendor/producer

 Open Source: The source codes of this software are available to be used, modified and
customized.
Installed Software Vs. Web-Based Software

 Installed Software: Software you buy from market or download from the
Internet to your computer.

 Web Based Software: Software that are run from the Internet.

 Software Suites: Related software programs are sometimes sold bundled


together as a software suite.
Object Linking and Embedding (OLE)

 Embedding: Allows you to copy and paste part of a document from one format (MS Word)
to another (Excel)

 Linking: Allows you to create a link between a source format (Excel) and a destination
format (Power Point).
 In linking, if the source file data change, the destination data will change.

 In embedding, if the source data change, destination data does not change.
THANK YOU
MODULE – 12

BASIC MICROPROCESSOR STRUCTURE AND FUNCTION


Learning Outcomes

At the end of this module, you should be able to:


• Describe the Micro-processor Family Historical background
• Describe the Processor Organizations and the internal components
• Expatiate on how the sequential algorithm of the CPU executes each
basic instruction.
Introduction to Micro-processor Family Historical background

Intel Processor Family History


Intel Processor Family History Cont’d
Processor Organization

 Fetch instruction: The processor reads an instruction from memory (register, cache, main
memory).
 Interpret instruction: The instruction is decoded to determine what action is required.
 Fetch data: The execution of an instruction may require reading data from memory or an
I/O module.
 Process data: The execution of an instruction may require performing some arithmetic or
logical operation on data.
 Write data: The results of an execution may require writing data to memory or an I/O
module.
How ALU works. How CU works.
Data Flow, Fetch Cycle
i. Fetch Instruction (FI): Read the next expected instruction into
a buffer.
ii. Decode Instruction (DI): Determine the OPCODE and the
operand specifiers.
iii. Calculate Operands (CO): Calculate the effective address of
each source operand. This may involve displacement, register
indirect, indirect, or other forms of address calculation.
iv. Fetch Operands (FO): Fetch each operand from memory.
Operands in registers need not be fetched.
v. Execute Instruction (EI): Perform the indicated operation and
store the result, if any, in the specified destination operand
location.
vi. Write Operand (WO): Store the result in memory.
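
The six stages listed above can be illustrated with a toy instruction-cycle sketch; the tiny three-instruction machine (LOAD, ADD, STORE opcodes with an accumulator) is an assumption made purely for illustration, not the specific architecture covered in the course:

```python
# Toy fetch-decode-execute loop: each instruction is (OPCODE, OPERAND address).
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12),
          10: 7, 11: 5, 12: 0}        # addresses 10-12 hold data
pc, acc = 0, 0                         # program counter and accumulator registers

for _ in range(3):
    opcode, operand = memory[pc]       # Fetch Instruction (FI) and Decode Instruction (DI)
    pc += 1                            # PC now points to the next instruction
    if opcode == "LOAD":               # Fetch Operand (FO) + Execute Instruction (EI)
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":            # Write Operand (WO)
        memory[operand] = acc

print(acc, memory[12])                 # -> 12 12  (7 + 5 stored at address 12)
```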
THANK YOU
