Admas University
Computer Science Department
Computer Organization and Architecture
Chapter-2
Number Systems and Codes
Number Systems and Codes
A number system is a systematic way of representing and expressing numbers.
Number systems are used to count, measure, and perform mathematical
operations on quantities.
There are several different types of number systems, including:
1. Decimal number system: The decimal number system, also known as the
base-10 system, is the most commonly used number system in everyday life.
It uses 10 digits (0-9) to represent numbers.
Cont’d…
2. Binary number system: The binary number system uses only two digits
(0 and 1) to represent numbers. It is widely used in digital electronics
and computer programming.
3. Octal number system: The octal number system uses eight digits (0-7)
to represent numbers. It is commonly used in computer
programming and in some types of telecommunications.
4. Hexadecimal number system: The hexadecimal number system uses 16
digits (0-9 and A-F) to represent numbers. It is widely used in
computer programming and digital electronics.
Cont’d…
In general:
Decimal (base-10 system): 0,1,2,3,4,5,6,7,8,9 (10 symbols), which is the human language.
Binary (base-2 system): 0, 1 (2 symbols), which is the computer (or digital) language.
Hexadecimal (base-16 system): 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F (16 symbols).
Octal (base-8 system): 0,1,2,3,4,5,6,7 (8 symbols).
Cont’d…
Also note that the more symbols a number system has,
the fewer digits it needs for an equivalent number
in another base.
For example:
(15)₁₀ = (15)₁₀ → 2 digits
= (1111)₂ → 4 digits
= (F)₁₆ → 1 digit
Cont’d…
Decimal number system, symbols = { 0, 1, 2, 3, …, 9 }
Position is important.
o Example: (7594)₁₀ = (7 × 10³) + (5 × 10²) + (9 × 10¹) + (4 × 10⁰)
The value of each symbol depends on its type and its
position in the number.
Fractions are written in decimal numbers after the
decimal point.
(2.75)₁₀ = (2 × 10⁰) + (7 × 10⁻¹) + (5 × 10⁻²)
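The two expansions above can be checked directly in Python (a quick illustrative sketch, not part of the slides):

```python
# Positional expansion of (7594)10: each digit times a power of 10.
expansion = 7 * 10**3 + 5 * 10**2 + 9 * 10**1 + 4 * 10**0
print(expansion)  # 7594

# The same idea with a fraction: negative powers of 10 after the point.
fraction = 2 * 10**0 + 7 * 10**-1 + 5 * 10**-2
print(round(fraction, 2))  # 2.75
```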
1. Number and Base Conversion
Conversion from decimal to another base (Base 10 → Base M)
•To convert a decimal number X to a number in base M, divide X by M and record the remainder;
divide the quotient by M again and record the remainder; continue until the quotient is 0. Finally,
concatenate (collect) the remainders starting from the last up to the first.
Ex. 1: Convert 56₁₀ to base 2 (binary)
X = 56, M = 2
56₁₀ = 111000₂
Ex. 2: Convert 78₁₀ to base 8 (octal)
78₁₀ = 116₈
Ex. 3: Convert 30₁₀ to base 16 (hexadecimal)
30₁₀ = 1E₁₆
Ex. 4: Convert (160)₁₀ to base 16 (hexadecimal)
160₁₀ = A0₁₆
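The repeated-division procedure can be sketched in Python; the helper name `to_base` is illustrative, not part of the course material:

```python
# Convert a non-negative decimal integer to base m (2 <= m <= 16)
# by repeated division, collecting the remainders from last to first.
DIGITS = "0123456789ABCDEF"

def to_base(x, m):
    if x == 0:
        return "0"
    digits = []
    while x > 0:
        x, r = divmod(x, m)       # quotient and remainder in one step
        digits.append(DIGITS[r])  # remainders come out in reverse order
    return "".join(reversed(digits))

print(to_base(56, 2))    # 111000
print(to_base(78, 8))    # 116
print(to_base(30, 16))   # 1E
print(to_base(160, 16))  # A0
```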
Conversion from base M to base 10 (decimal)
To convert a number X consisting of digits X₁X₂X₃…Xₙ in base M to decimal, simply
expand the number in powers of M. That is:
(X₁X₂X₃…Xₙ)ₘ = X₁·M^(n−1) + X₂·M^(n−2) + X₃·M^(n−3) + … + Xᵢ·M^(n−i) + … + Xₙ₋₁·M¹ + Xₙ·M⁰ = Y₁₀
Examples:
Convert (1001001)₂ to decimal = 73
Convert (234)₈ to decimal = 156
Convert (101)₈ to decimal = 65
Convert (A1B)₁₆ to decimal = 2587
Convert (101)₁₆ to decimal = 257
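The positional expansion can be written compactly in Python using Horner's method, which gives the same result as summing each digit times its power of M (the function name is illustrative):

```python
# Horner's method: scanning digits left to right, value = value * m + digit.
# This is equivalent to X1*m^(n-1) + X2*m^(n-2) + ... + Xn*m^0.
def to_decimal(s, m):
    value = 0
    for ch in s:
        value = value * m + "0123456789ABCDEF".index(ch)
    return value

print(to_decimal("1001001", 2))  # 73
print(to_decimal("234", 8))      # 156
print(to_decimal("A1B", 16))     # 2587
```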
Conversion from binary (base2) to Octal (base 8) or Hexadecimal (base16) and vice versa
To convert a number in binary to octal, group the binary digits in threes
starting from the last (rightmost) digit, adding zeros at the front (left)
if there are not enough digits, and find the corresponding octal digit of each group.
Example: Convert 1001001₂ to octal
1001001 = 001, 001, 001
= 111₈
Convert 101101001₂ to octal
101101001 = 101, 101, 001
= 551₈
Cont’d…
To convert binary to hexadecimal, group four binary digits together starting from the right,
adding zeros at the left if there are not enough digits.
Example: Convert 111100100₂ to hexadecimal
111100100 = 0001, 1110, 0100 = 1 14 4 = 1E4₁₆
To convert from octal to binary, convert each octal digit to its equivalent 3-bit binary,
starting from the right.
Example: Convert (675)₈ to binary
675₈ = 110, 111, 101 = 110111101₂
Convert 231₈ to binary
231₈ = 010, 011, 001
= 10011001₂
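The grouping trick in both directions can be sketched as Python helpers (illustrative names; 3-bit groups for octal, 4-bit groups for hexadecimal):

```python
# Binary -> octal/hex: pad with zeros on the left, then map each group.
def bin_to_base(bits, group):
    pad = (-len(bits)) % group  # zeros needed so length is a multiple of group
    bits = "0" * pad + bits
    chunks = [bits[i:i + group] for i in range(0, len(bits), group)]
    return "".join("0123456789ABCDEF"[int(c, 2)] for c in chunks)

print(bin_to_base("1001001", 3))    # 111   (octal)
print(bin_to_base("111100100", 4))  # 1E4   (hexadecimal)

# The reverse: expand each digit to a fixed-width binary group.
def base_to_bin(s, group, base):
    out = "".join(format(int(ch, base), f"0{group}b") for ch in s)
    return out.lstrip("0") or "0"   # drop leading zeros from the padding

print(base_to_bin("675", 3, 8))   # 110111101
print(base_to_bin("2AC", 4, 16))  # 1010101100
```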
Cont’d…
To convert from hexadecimal to binary, convert each hex digit to its
equivalent 4-bit binary, starting from the right.
Example: Convert 234₁₆ to binary
234₁₆ = 0010, 0011, 0100
= 1000110100₂
Convert 2AC₁₆ to binary
2AC₁₆ = 0010, 1010, 1100
= 1010101100₂
Conversion from octal to hexadecimal and vice versa
To convert from octal to hexadecimal, first convert to binary and then the binary
to hexadecimal.
To convert from hexadecimal to octal, first convert to binary and then the binary
to octal.
Ex. 1: Convert 235₈ to hexadecimal
235₈ = 010, 011, 101
= 010011101
= 0000, 1001, 1101
= 0 9 13
= 9D₁₆
Ex. 2: Convert 1A₁₆ to octal
1A₁₆ = 0001, 1010
= 00011010
= 000, 011, 010
= 0 3 2
= 32₈
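Python's built-in conversions can confirm both examples. Note that `int(..., 8)` and `format(..., "X")` take a shortcut through an integer value rather than an explicit binary string, but the result matches the two-step procedure above (helper names are illustrative):

```python
# Octal -> hexadecimal: parse in base 8, re-emit in base 16.
def oct_to_hex(s):
    return format(int(s, 8), "X")

# Hexadecimal -> octal: parse in base 16, re-emit in base 8.
def hex_to_oct(s):
    return format(int(s, 16), "o")

print(oct_to_hex("235"))  # 9D
print(hex_to_oct("1A"))   # 32
```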
Cont’d…
Representation of Negative numbers
There are three ways of representing negative numbers
in a computer
Using Sign Bit
One’s Complement
Two’s Complement
Sign- magnitude representation
In signed binary representation, the left-most bit is used to indicate the sign of the number.
Traditionally, 0 is used to denote a positive number and 1 is used to denote a negative number, while the magnitude part is
the same for negative and positive values.
For example, 11111111 represents −127 while 01111111 represents +127.
In a 5-bit representation, we use the first bit for the sign and the remaining 4 bits for the magnitude.
So, using this 5-bit representation, the range of numbers that can be represented is from −15 (11111) to +15 (01111).
Ex. 1: Represent −12 using 5-bit sign-magnitude representation.
Solution: First, we convert 12 to binary, i.e. 1100.
Now −12 = 11100.
Ex. 2: Represent −24 using 8 bits.
24 = 00011000, so −24 = 10011000.
Problems of sign-magnitude representation
3 − 4 = 3 + (−4)
= 00000011 + 10000100
= 10000111
= −7, which is the wrong answer.
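The sign-magnitude encoding above can be sketched in Python (an illustrative helper, not from the slides):

```python
# Sign-magnitude: the first bit is the sign (1 = negative),
# the remaining bits hold the magnitude in plain binary.
def sign_magnitude(n, bits):
    sign = "1" if n < 0 else "0"
    return sign + format(abs(n), f"0{bits - 1}b")

print(sign_magnitude(-12, 5))  # 11100
print(sign_magnitude(-24, 8))  # 10011000
print(sign_magnitude(24, 8))   # 00011000
```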
Cont’d…
This representation has two problems: first, it reduces the maximum magnitude that
can be represented; second, it is inefficient for arithmetic.
To see the second problem, let us perform addition in the signed binary
representation. We want to add +7 and −5:
+7 is represented by 00000111
−5 is represented by 10000101
The binary sum is 10001100, or −12.
This is not the correct result. The correct result is +2.
In other words, the binary addition of signed numbers does not "work correctly".
The solution to this problem is called the two's complement representation.
One’s complement
In one's complement representation, all positive integers are represented in their normal binary format.
For example, +3 is represented as usual by 00000011. However, its complement, −3, is obtained by
complementing every bit in the original representation:
each 0 is transformed into a 1 and each 1 into a 0. In our example, the one's complement representation of −3 is
11111100.
Ex: +2 is 00000010, and −2 is 11111101.
Note that in this representation, positive numbers start with a 0 on the left, and negative numbers start with a 1 on
the left.
Ex. 1: Add −4 and +6.
−4 is 11111011
+6 is 00000110
The sum is (1) 00000001, where the 1 indicates a carry.
The correct result should be 2, or 00000010. (In one's complement arithmetic, this end-around carry must be added back into the result, which yields 00000010.)
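Bit-flipping can be sketched in Python with an XOR against an all-ones mask (an illustrative helper, not from the slides):

```python
# One's complement of an n-bit number: flip every bit of the magnitude.
def ones_complement(n, bits):
    if n >= 0:
        return format(n, f"0{bits}b")
    mask = (1 << bits) - 1            # n ones, e.g. 11111111 for 8 bits
    return format(abs(n) ^ mask, f"0{bits}b")  # XOR with all ones flips bits

print(ones_complement(-3, 8))  # 11111100
print(ones_complement(-4, 8))  # 11111011
print(ones_complement(6, 8))   # 00000110
```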
Cont’d…
Ex. 2: Add −3 and −2.
−3 is 11111100
−2 is 11111101
The sum is 11111001, or −6, plus a carry. (Adding the end-around carry back gives 11111010.)
The correct result is −5, and the representation of −5 is 11111010.
This representation does handle positive and negative numbers; however, the
result of an ordinary addition does not always come out correctly.
A better scheme evolved from the one's complement: the two's complement
representation.
Two’s Complement Representation
A negative number represented in two's complement is obtained by first computing the one's complement and
then adding one.
Ex: +3 is represented in signed binary as 00000011.
Its one's complement is 11111100.
The two's complement representation of −3 is obtained by adding one:
it is 11111101.
Ex: let's try addition.
(3)   00000011
+ (5) + 00000101
(8)   00001000
The result is correct.
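In Python, the same bit pattern falls out of masking the signed value to the word width; the helper below is an illustrative sketch of the "invert and add one" rule, not part of the slides:

```python
# Two's complement: for negative n, the n-bit pattern equals
# (one's complement) + 1, which is what masking n to n bits produces.
def twos_complement(n, bits):
    return format(n & ((1 << bits) - 1), f"0{bits}b")

print(twos_complement(-3, 8))  # 11111101
print(twos_complement(3, 8))   # 00000011

# Addition now works with plain binary arithmetic (discard any overflow):
total = (int(twos_complement(7, 8), 2) + int(twos_complement(-5, 8), 2)) & 0xFF
print(format(total, "08b"))    # 00000010, i.e. +2, the correct result
```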
Cont’d…
Reading assignment: decimal number conversion and binary arithmetic.
Coding
Coding Methods
It is possible to represent any character in our language as a series of
electrical switches in an arranged manner.
These switch arrangements can therefore be coded as a series of equivalent
arrangements of bits.
There are different coding systems that convert one or more character sets into
computer codes. The most popular are:
BCD (Binary Coded Decimal),
ASCII (American Standard Code for Information Interchange),
EBCDIC (Extended Binary Coded Decimal Interchange Code), and
Unicode.
BCD – Binary Coded Decimal
BCD uses a group of four bits to represent information in the computer system.
It has a maximum of 16 different alternative characters or numbers,
and can represent only numbers and some special symbols.
With BCD, each digit of a number is converted into its binary equivalent rather than converting
the entire decimal number to its binary form.
Example: The BCD value of the decimal number 5319 is 0101 0011 0001 1001.
Converting a decimal number to its BCD equivalent
To convert a decimal number to its equivalent BCD, simply convert each decimal digit to its 4-bit binary form and combine the groups
together.
Example: Convert 432 to BCD
432 = 0100 0011 0010 (BCD)
To convert BCD to decimal, group into fours and find the corresponding decimal digit of each group.
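Both directions can be sketched in Python by working one decimal digit at a time (illustrative helpers, not from the slides):

```python
# Decimal -> BCD: encode each decimal digit as its own 4-bit group.
def to_bcd(n):
    return " ".join(format(int(d), "04b") for d in str(n))

# BCD -> decimal: split into 4-bit groups and decode each one.
def from_bcd(s):
    groups = s.replace(" ", "")
    digits = [str(int(groups[i:i + 4], 2)) for i in range(0, len(groups), 4)]
    return int("".join(digits))

print(to_bcd(5319))                # 0101 0011 0001 1001
print(to_bcd(432))                 # 0100 0011 0010
print(from_bcd("0100 0011 0010"))  # 432
```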
ASCII Code
ASCII (American Standard Code for Information Interchange) is a character
encoding standard that assigns unique numeric codes to represent characters in
the English alphabet, as well as punctuation marks, digits, and other symbols
commonly used in computing.
Originally developed for teletype machines in the 1960s, ASCII uses 7-bit binary
numbers (i.e., sequences of 0s and 1s) to represent a total of 128 different characters,
including uppercase and lowercase letters, digits, punctuation marks, and control
codes for basic formatting and communication.
For example, the ASCII code for the letter "A" is 65, the code for the
digit "0" is 48, and the code for the space character is 32.
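Python exposes these code points directly through the built-in `ord()` and `chr()` functions, which makes the examples easy to verify:

```python
# ord() maps a character to its code point; chr() is the inverse.
print(ord("A"))  # 65
print(ord("0"))  # 48
print(ord(" "))  # 32
print(chr(65))   # A
```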
EBCDIC (Extended Binary Coded Decimal Interchange Code)
EBCDIC (Extended Binary Coded Decimal Interchange Code) is another character
encoding standard that was developed by IBM in the 1960s. Like ASCII, EBCDIC
assigns unique numeric codes to represent characters in the English alphabet, as well
as punctuation marks, digits, and other symbols commonly used in computing.
However, EBCDIC uses an 8-bit binary code to represent a total of 256 different
characters, which is more than twice the number of characters that can be
represented in ASCII.
EBCDIC was used extensively on IBM mainframe computers and some other
systems, particularly in the banking and finance industries.
Unicode
Unicode is a character encoding standard that was introduced in the 1990s as a way
to represent characters from all the world's writing systems using a single character
set.
Unlike ASCII and EBCDIC, which are limited to representing characters in
the English alphabet and a few other symbols, Unicode can represent characters
from virtually every writing system in use today, including scripts used in languages
such as Chinese, Japanese, Arabic, and many others.
Unicode has become the dominant character encoding standard in modern
computing environments, and is used in a wide variety of applications, including
operating systems, web browsers, and programming languages.
Thank You!
Q & A