Uploaded by Chandresh Parekh

Image Compression

What is Image Compression?


• Image compression is a kind of data
compression that encodes a digital image
in the minimum number of bytes by removing/reducing
redundancies in the data.
• A compressed image requires less memory to
store and less time to transmit/receive.
Why Image compression?
• A compressed image requires significantly less memory
to store and much less time to transmit/receive.
• Sample calculation (no compression): Calculate the
memory required to store an HD movie of 90 minutes
and the time required to download it over a channel
with 10 Mbps speed. Ignore framing and
synchronizing bits.
• M = 1920*1080*24*30*60*90/10^6 Mbits
• T = M/10 seconds.
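The sample calculation above can be checked with a short Python sketch; the figures (1920x1080 pixels, 24 bits/pixel, 30 frames/s, 90 minutes, 10 Mbps) are taken from the slide:

```python
# Frame size 1920x1080, 24 bits/pixel, 30 frames/s, 90 minutes (figures
# from the slide); channel speed 10 Mbps; framing/sync bits ignored.
bits = 1920 * 1080 * 24 * 30 * 60 * 90
M_mbits = bits / 10**6        # M = total size in Mbits
T_seconds = M_mbits / 10      # T = M/10, download time at 10 Mbps
print(M_mbits)                # 8062156.8 Mbits (~8 Tbits, ~1 TB)
print(T_seconds / 3600)       # ~224 hours, i.e. about 9.3 days
```

An uncompressed 90-minute HD movie needs roughly a terabyte and over a week to download at 10 Mbps, which is why compression is essential.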
Applications
• Movies
• Graphics
• Video conferencing
• Remote sensing
• Document imaging
• Medical imaging
• FAX
Data Compression
• A process of reducing the amount of data required
to represent the same quantity of information.
• Some key Questions:
• Is data different from information?
• Can information be measured?
• How to compress/decompress?
• Is there any loss/ degradation in information after
decompression compared to uncompressed data?
Data and Information
• Data is the means to convey the information.
• Let n1 bits (of uncompressed data) represent information, I,
in a representation.
• Let n2 bits represent the same information, I, in
compressed representation.
• Then, compression ratio, C= n1/n2.
• The relative data redundancy, R = 1 - (1/C), gives the amount of
redundant data present in the uncompressed representation
relative to the compressed representation.
• Example: Suppose C is 10:1 for some image; then R = 0.9 or
90%, meaning 90% of the data in the image is redundant.
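The two definitions above can be sketched in a few lines of Python; the bit counts 1000 and 100 are made-up numbers chosen only to reproduce the 10:1 example:

```python
def compression_stats(n1_bits, n2_bits):
    # C = n1/n2 (compression ratio), R = 1 - 1/C (relative data redundancy)
    C = n1_bits / n2_bits
    R = 1 - 1 / C
    return C, R

# Illustrative bit counts reproducing the 10:1 example from the slide.
C, R = compression_stats(1000, 100)
print(C, R)   # 10.0 0.9 -> 90% of the uncompressed data is redundant
```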
Types of Data redundancies
• Images are 2D arrays and they suffer from three
different types of data redundancies which can
be identified and exploited for image
compression:
• Coding redundancy
• Spatial(Intra Pixel) and temporal(Inter pixel)
redundancy
• Irrelevant information(Psychovisual
Redundancy)
Huffman Coding
• A kind of entropy encoding
• Lossless
• Optimum for per symbol coding
• Source coding
• Smallest possible number of bits per symbol.
• Variable length block code
EXAMPLE-1
• A MEMORYLESS SOURCE EMITS SIX
MESSAGES WITH MESSAGE PROBABILITIES
0.3, 0.15, 0.12, 0.10, 0.25 AND 0.08. FIND
THE BINARY HUFFMAN CODE.
STEP-1
• ARRANGE THE MESSAGES IN THE ORDER
OF DESCENDING PROBABILITIES.

MESSAGE PROB.     MESSAGE PROB.
M1   0.30         M1   0.30
M2   0.15         M5   0.25
M3   0.12         M2   0.15
M4   0.10         M3   0.12
M5   0.25         M4   0.10
M6   0.08         M6   0.08

(LEFT: GIVEN ORDER; RIGHT: SORTED BY DESCENDING PROBABILITY)
STEP-2
• COMBINE LAST TWO MESSAGES INTO A SINGLE
MESSAGE AND RE-ARRANGE THE MESSAGES IN
THE ORDER OF THEIR DESCENDING
PROBABILITIES.
MESSAGE PROB.     MESSAGE PROB.
M1   0.30         M1   0.30
M5   0.25         M5   0.25
M2   0.15         M11  0.18
M3   0.12         M2   0.15
M4   0.10         M3   0.12
M6   0.08

(M11 = M4 + M6 COMBINED: 0.10 + 0.08 = 0.18)
STEP-3
• REPEAT STEP-2 TILL THE NO. OF MESSAGES GETS
REDUCED TO 2 (BECAUSE THE CODE IS BINARY).
MESSAGE PROB.     MESSAGE PROB.
M1   0.30         M1   0.30
M5   0.25         M22  0.27
M11  0.18         M5   0.25
M2   0.15         M11  0.18
M3   0.12

(M22 = M2 + M3 = 0.15 + 0.12 = 0.27)

MESSAGE PROB.     MESSAGE PROB.
M33  0.43         M44  0.57
M1   0.30         M33  0.43
M22  0.27

(M33 = M5 + M11 = 0.25 + 0.18 = 0.43; M44 = M1 + M22 = 0.30 + 0.27 = 0.57)
STEP-4
• ASSIGN 0 AND 1 (BINARY SYMBOLS) TO THE LAST
TWO MESSAGES AS PER THEIR PROBABILITIES. USUALLY 0 IS
USED FOR THE MESSAGE WITH GREATER PROBABILITY.

MESSAGE PROB. CODE
M44   0.57    0
M33   0.43    1
STEP-5
• UN-GROUP THE MESSAGE WHICH IS ASSIGNED 0
INTO THE MESSAGES FROM WHICH IT EMERGED, i.e.
UN-GROUP M44 INTO M1 AND M22, AS M44 IS THE
GROUPING OF M1 AND M22. ASSIGN 0 AS MSB FOR
M1 AND M22, AS M44 IS ASSIGNED 0. NOW, TO
DISTINGUISH M1 FROM M22, ADD ONE MORE '0' TO
M1 AND A '1' TO M22. THIS IS BECAUSE M1 HAS
GREATER PROBABILITY THAN M22.

MESSAGE PROB. CODE
M33   0.43    1
M1    0.30    00
M22   0.27    01
STEP-6
• AGAIN, UN-GROUP M33 INTO M5 AND M11.
• ASSIGN 10 TO M5 AND 11 TO M11.
• BOTH M5 AND M11 HAVE A '1' AS THEIR MSB, AS
M33 IS ASSIGNED A '1' AND THE TWO, GROUPED,
FORM M33.

MESSAGE PROB. CODE
M1    0.30    00
M22   0.27    01
M5    0.25    10
M11   0.18    11
STEP-7
• REPEAT STEP-6 TILL WE GET ALL THE MESSAGES IN
THEIR ORIGINAL FORM FROM THEIR GROUPED
FORM.
MESSAGE PROB. CODE
M1 0.30 00
M5 0.25 10
M11 0.18 11
M2 0.15 010
M3 0.12 011
RESULT
MESSAGE PROB. CODE
M1 0.30 00
M5 0.25 10
M2 0.15 010
M3 0.12 011
M4 0.10 110
M6 0.08 111
AVG. LENGTH OF CODE
• IT IS GIVEN AS
• L = Σ pi*Li
• FOR THE ABOVE EXAMPLE IT IS
• L = 0.3(2)+0.25(2)+0.15(3)+0.12(3)+0.1(3)+0.08(3) = 2.45 BITS.
• (A uniform binary code would require 3 bits/message.)
• THE ENTROPY FOR THE ABOVE CASE IS
H(m) = -Σ (pi*log2 pi) ≈ 2.42 BITS.
• THIS SHOWS THAT THE HUFFMAN CODE REQUIRES
SLIGHTLY MORE THAN THE ENTROPY (THE MINIMUM
REQUIRED).
THE CODE EFFICIENCY AND REDUNDANCY

• EFFICIENCY IS THE RATIO OF ENTROPY TO
AVG. LENGTH OF THE CODE FOR A GIVEN
CASE.
• FOR EXAMPLE-1 IT IS η = H(m)/L = 2.4224/2.45 ≈ 0.989.
• REDUNDANCY IS γ = 1-η ≈ 0.011 FOR
EXAMPLE-1.
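The whole construction of Example-1 can be verified programmatically. The sketch below (the helper name `huffman_code_lengths` is just illustrative) rebuilds the code lengths with a min-heap, which performs the same "combine the two least-probable messages" step as the slides, and then checks the average length, entropy and efficiency:

```python
import heapq
from math import log2

def huffman_code_lengths(probs):
    # Each heap entry is (probability, list of original message indices).
    # Every time two entries are merged, one bit is added to the code
    # length of every message inside them.
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, ids1 = heapq.heappop(heap)
        p2, ids2 = heapq.heappop(heap)
        for i in ids1 + ids2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, ids1 + ids2))
    return lengths

# Message probabilities from Example-1 (M1..M6).
probs = [0.30, 0.15, 0.12, 0.10, 0.25, 0.08]
lengths = huffman_code_lengths(probs)        # [2, 3, 3, 3, 2, 3]
avg_len = sum(p * l for p, l in zip(probs, lengths))
entropy = -sum(p * log2(p) for p in probs)
print(round(avg_len, 2))             # 2.45 bits/message
print(round(entropy, 4))             # 2.4224 bits
print(round(entropy / avg_len, 3))   # efficiency ~0.989
```

The heap-based sketch recovers only the code lengths, not the exact codewords, but the average length is what the efficiency calculation needs.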
Homework
• Assign Huffman code for encoding gray levels
in the following image and find compression
ratio for the same.
2 3 2 1
3 3 3 2
2 3 3 2
0 3 2 1
Arithmetic coding
• Non-block code
• Lossless
• Entropy-like coding
• There is no one-to-one correspondence
between a source symbol (message) and a
code word.
• An entire sequence of symbols is assigned a
single number, q (0 < q < 1).
Example
• Determine the arithmetic coding for the
sequence AMAR emitted by a source
generating three symbols A, M and R.
Step1
• Prepare a table with the source symbols,
their probabilities and the CDF of the
source.

Symbol  Prob.  CDF
A       0.5    0.5
M       0.25   0.75
R       0.25   1

(Interval diagram: A occupies [0, 0.5), M occupies [0.5, 0.75), R occupies [0.75, 1).)
Step2
• Narrow to the first symbol's subinterval,
expand that subinterval to full height, and
mark its lower and upper ends with the values of the
narrowed range.

(Diagram: the first symbol A narrows the interval from [0, 1) to [0, 0.5), which is then expanded to full height.)
Step3
• Divide the narrowed range in accordance with
the original source symbol probabilities, using
the following formula:
• UL = LL + (width of narrowed range * CDF value of
the symbol)

(Diagram: within [0, 0.5), A's upper limit = 0 + (0.5*0.5) = 0.25 and M's upper limit = 0 + (0.5*0.75) = 0.375, so A occupies [0, 0.25), M occupies [0.25, 0.375) and R occupies [0.375, 0.5).)
Step4
• Continue steps 2 & 3 till the entire sequence is
encoded.

(Diagram: the second symbol M narrows the range to [0.25, 0.375); within it, A's upper limit = 0.25 + (0.125*0.5) = 0.3125 and M's upper limit = 0.25 + (0.125*0.75) = 0.34375.)
Last step
• Continue steps 2 & 3 till the entire sequence is
encoded.

(Diagram: the third symbol A narrows the range to [0.25, 0.3125); within it, A's upper limit = 0.25 + (0.0625*0.5) = 0.28125 and M's upper limit = 0.25 + (0.0625*0.75) = 0.296875. The final symbol R therefore narrows the range to [0.296875, 0.3125).)

• Any number between 0.296875 and 0.312500 can be used as
the final code.
• Typically, the average of these two numbers (= 0.3046875, here) is
used as the arithmetic code.
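The interval-narrowing steps above can be sketched as a small encoder (the helper name `arithmetic_encode` is illustrative; the `cdf` dictionary stores each symbol's cumulative lower and upper bounds, matching the Step1 table):

```python
def arithmetic_encode(sequence, cdf):
    # cdf maps each symbol to its (lower, upper) cumulative-probability
    # bounds on [0, 1).
    low, high = 0.0, 1.0
    for s in sequence:
        width = high - low                  # Step 2: current narrowed range
        s_low, s_high = cdf[s]              # Step 3: UL = LL + width*CDF
        low, high = low + width * s_low, low + width * s_high
    return low, high

# CDF table from Step1: A -> [0, 0.5), M -> [0.5, 0.75), R -> [0.75, 1)
cdf = {'A': (0.0, 0.5), 'M': (0.5, 0.75), 'R': (0.75, 1.0)}
low, high = arithmetic_encode("AMAR", cdf)
print(low, high)         # 0.296875 0.3125
print((low + high) / 2)  # 0.3046875, the code from the slides
```

Floating-point arithmetic is exact here because all the probabilities are powers of two; a practical coder would use rescaled integer intervals instead.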
Homework
• What type of redundancy is removed by
arithmetic coding? Decode the message
0.23355 using arithmetic decoding. The symbols
and their probabilities are given below:
Symbol Probability
a 0.2
e 0.3
i 0.1
o 0.2
u 0.1
! 0.1
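For reference, decoding reverses the same interval narrowing used in the AMAR example. The sketch below (helper name illustrative) recovers AMAR from its code 0.3046875; the identical procedure, with the homework table's CDF, applies to 0.23355:

```python
def arithmetic_decode(code, cdf, n_symbols):
    # cdf maps each symbol to its (lower, upper) cumulative-probability
    # bounds on [0, 1).
    decoded = []
    low, high = 0.0, 1.0
    for _ in range(n_symbols):
        width = high - low
        value = (code - low) / width        # position inside current range
        for s, (s_low, s_high) in cdf.items():
            if s_low <= value < s_high:     # which subinterval holds it?
                decoded.append(s)
                low, high = low + width * s_low, low + width * s_high
                break
    return "".join(decoded)

cdf = {'A': (0.0, 0.5), 'M': (0.5, 0.75), 'R': (0.75, 1.0)}
print(arithmetic_decode(0.3046875, cdf, 4))   # AMAR
```

Note that the decoder needs to know when to stop (here, the sequence length); in practice a terminator symbol such as '!' in the homework alphabet serves that purpose.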
