Vector Quantization

  Aniruddh Tyagi
     02-06-12
Voronoi Region
• Blocks:
   o   A sequence of audio samples.
   o   A block of image pixels.
       Formally, each block is a vector; example: (0.2, 0.3, 0.5, 0.1).
• A vector quantizer maps k-dimensional vectors in the vector
  space R^k into a finite set of vectors Y = {yi : i = 1, 2, ..., N}. Each
  vector yi is called a code vector or a codeword, and the set of all
  the codewords is called a codebook. Associated with each
  codeword yi is a nearest-neighbor region called its Voronoi region,
  defined by:

     Vi = { x ∈ R^k : ||x − yi|| ≤ ||x − yj|| for all j ≠ i }

• The set of Voronoi regions partitions the entire space R^k.
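
To make the mapping concrete, here is a minimal sketch (Python with NumPy; the codebook values are illustrative, not from the slides) of how a quantizer assigns an input vector to the codeword whose Voronoi region contains it:

  import numpy as np

  def quantize(x, codebook):
      # Return the index of the nearest codeword (Euclidean distance);
      # x lies in the Voronoi region of that codeword.
      distances = np.linalg.norm(codebook - x, axis=1)
      return int(np.argmin(distances))

  # Hypothetical 2-D codebook with N = 4 codewords.
  codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
  print(quantize(np.array([0.2, 0.3]), codebook))  # -> 0, the region of (0, 0)
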
Two-Dimensional Voronoi Diagram

Codewords in 2-dimensional space. Input vectors are marked with an
x, codewords are marked with red circles, and the Voronoi regions are
separated with boundary lines.
The Schematic of a Vector Quantizer
Compression Formula
• Amount of compression:
   o Codebook size is K; input vectors have dimension L.
   o To inform the decoder of which code vector is selected, we
     need to use log2(K) bits.
        E.g., we need 8 bits to represent 256 code vectors.
   o Rate: each code vector carries the reconstruction values of L
     source output samples, so the number of bits per sample is
     R = log2(K) / L.
   o Sample: a scalar value in the vector.
   o K: the number of levels of the vector quantizer (the codebook size).
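
As a quick sanity check, a short sketch (Python; K and L are illustrative values, not from the slides) computing both quantities:

  import math

  K = 256  # hypothetical codebook size
  L = 4    # vector dimension
  bits_per_vector = math.ceil(math.log2(K))  # index bits sent per vector
  rate = bits_per_vector / L                 # bits per source sample
  print(bits_per_vector, rate)               # -> 8 and 2.0
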
VQ vs SQ
Advantage of VQ over SQ:
• For a given rate, VQ results in lower distortion than
  SQ.
• If the source output is correlated, vectors of source
  output values will tend to fall in clusters.
   o   E.g., Sayood's book, Example 9.3.1.
• Even if there is no dependency, VQ offers greater flexibility.
   o   E.g., Sayood's book, Example 9.3.2.
Algorithms
• Lloyd algorithm: pdf-optimized
  quantizer; assumes the distribution is
  known.
• LBG (the VQ generalization):
  o   Continuous version (requires integral operations).
  o   Modified version: works with a training set.
LBG Algorithm
1. Determine the number of codewords, N, or the size of the
   codebook.
2. Select N codewords at random, and let that be the initial
   codebook. The initial codewords can be randomly chosen from the set
   of input vectors.
3. Using the Euclidean distance measure, cluster the input vectors
   around each codeword: take each input vector, find the Euclidean
   distance between it and each codeword, and assign the input vector
   to the cluster of the codeword that yields the minimum distance.
LBG Algorithm (contd.)
4. Compute the new set of codewords by taking the average of each
   cluster: add up the components of each vector in the cluster and divide
   by the number of vectors in the cluster,

      yi = (1/m) * (x1i + x2i + ... + xmi)

   where i indexes the components of each vector (the x, y, z, ...
   directions) and m is the number of vectors in the cluster.

5. Repeat steps 3 and 4 until either the codewords don't change or
   the change in the codewords is small.
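
A compact sketch of the full loop (Python with NumPy; the function name and tolerance are my choices, not from the slides), with steps 2-5 marked in comments:

  import numpy as np

  def lbg(training, N, iters=100, seed=0):
      # training: (M, L) array of input vectors; N: codebook size.
      rng = np.random.default_rng(seed)
      # Step 2: initial codewords drawn at random from the training set.
      codebook = training[rng.choice(len(training), N, replace=False)]
      for _ in range(iters):
          # Step 3: assign each input vector to its nearest codeword.
          d = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
          labels = d.argmin(axis=1)
          new_codebook = codebook.copy()
          for i in range(N):
              members = training[labels == i]
              if len(members) > 0:
                  # Step 4: replace each codeword by its cluster centroid.
                  new_codebook[i] = members.mean(axis=0)
              # Empty cells are left unchanged here; see the next slide.
          # Step 5: stop once the codewords barely change.
          if np.allclose(new_codebook, codebook, atol=1e-8):
              break
          codebook = new_codebook
      return codebook
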
Other Algorithms
• Problem: LBG is a greedy algorithm and may fall into a
  local minimum.
• Four methods for selecting the initial vectors:
  o   Random
  o   Splitting (with a perturbation vector)
  o   Training with different subsets
  o   PNN (pairwise nearest neighbor)
• Empty cell problem:
  o   No input vectors correspond to some output vector (codeword).
  o   Solution: reassign that codeword to another cluster, e.g., the most
      populous cluster.
LBG for image compression
• Take N×M blocks of image pixels as vectors of dimension L = NM.
• If there are K vectors in the code book:
   o We need to use log2(K) bits per block.
   o Rate: R = log2(K) / (NM) bits per pixel.
• The higher the value of K, the better the quality, but the lower the
  compression ratio.
• Overhead to transmit the code book (assuming each codeword
  component is stored with 8 bits):

     overhead = (K * L * 8) / (number of pixels in the image)  bits per pixel

• Alternatively, train with a set of images, so a fixed code book can be
  shared and need not be transmitted with each image.
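
Plugging in illustrative numbers (my example, not from the slides) for a 512×512 image coded with 4×4 blocks and K = 256:

  import math

  N, M = 4, 4                              # block size; L = N*M = 16
  K = 256                                  # hypothetical codebook size
  width, height = 512, 512
  L = N * M
  rate = math.log2(K) / L                  # index bits per pixel
  overhead = K * L * 8 / (width * height)  # codebook bits per pixel
  print(rate, overhead)                    # -> 0.5 and 0.125 bits per pixel
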
Rate-Dimension Product
• Rate-dimension product:
  o The size of the codebook increases exponentially with the rate.
  o Suppose we want to encode a source using R bits/sample. If
    we use an L-dimensional quantizer, we would group L samples
    together into vectors. This means that we would have RL bits
    available to represent each vector.
  o With RL bits, we can represent 2^(RL) output vectors.
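
For instance (numbers chosen for illustration), even a modest rate and dimension already force a large codebook:

  R, L = 1, 16      # 1 bit/sample with a 16-dimensional quantizer
  K = 2 ** (R * L)  # required number of output vectors
  print(K)          # -> 65536
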
Tree structured VQ
• Place the output vectors symmetrically across the quadrants
  (orthants), so only the signs of the input vector's components need to
  be compared. This reduces the number of comparisons by a factor of
  2^L for an L-dimensional problem.
• It works well for symmetric distributions, but not when we lose
  more and more symmetry.
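
A sketch of the sign test (my illustration, not from the slides): the signs of the components select one of the 2^L orthants, and only the codewords in that orthant need to be searched.

  def orthant_index(x):
      # Map the component signs of x to an orthant number in [0, 2^L).
      idx = 0
      for i, xi in enumerate(x):
          if xi >= 0:
              idx |= 1 << i
      return idx

  print(orthant_index([0.2, -0.3]))  # -> 1: only orthant 1 is searched
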
Tree Structured Vector Quantizer
•   Extend to the non-symmetric case (a search sketch follows this list):
    o   Divide the set of output points into two groups, g0 and g1, and assign to
        each group a test vector such that the output points in each group are
        closer to the test vector assigned to that group than to the test vector
        assigned to the other group.
    o   Label the two test vectors 0 and 1.
    o   When we get an input vector, we compare it against the test vectors.
        Depending on the outcome, the input is compared to the output points
        associated with the test vector closest to the input.
    o   After these two comparisons, we can discard half of the output points.
    o   Comparison with the test vectors takes the place of looking at the signs of
        the components to decide which set of output points to discard from
        contention.
    o   If the total number of output points is K, we make (K/2) + 2 comparisons
        instead of K comparisons.
    o   We can continue to subdivide the groups. Finally: 2 log2(K) comparisons
        instead of K (2 comparisons with the test vectors at each of log2(K) stages).
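
A minimal sketch of the resulting search (Python with NumPy; the implicit-tree layout, names, and example values are my assumptions, not from the slides):

  import numpy as np

  def tsvq_encode(x, test_vectors, codewords):
      # test_vectors[n] holds the pair (test0, test1) for internal node n of a
      # complete binary tree in which node n has children 2n+1 and 2n+2;
      # the K codewords are the leaves, numbered left to right.
      node = 0
      for _ in range(int(np.log2(len(codewords)))):
          t0, t1 = test_vectors[node]
          # 2 comparisons with the test vectors at each stage.
          go_right = np.linalg.norm(x - t1) < np.linalg.norm(x - t0)
          node = 2 * node + (2 if go_right else 1)
      return node - (len(codewords) - 1)  # leaf index into the codeword list

  # Hypothetical K = 4 codebook laid out as tree leaves.
  codewords = np.array([[-2., -2.], [-2., 2.], [2., -2.], [2., 2.]])
  test_vectors = {0: (np.array([-2., 0.]), np.array([2., 0.])),
                  1: (np.array([-2., -2.]), np.array([-2., 2.])),
                  2: (np.array([2., -2.]), np.array([2., 2.]))}
  print(tsvq_encode(np.array([1.5, -1.0]), test_vectors, codewords))  # -> 2
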
Tree Structured VQ (continued)
• Since the test vectors are assigned to groups 0, 1,
  00, 01, 10, 11, 000, 001, 010, 011, 100, 101, 110, 111, etc.,
  which are the nodes of a binary tree, the VQ has the
  name "Tree-Structured VQ".
• Penalty:
  o Possible increase in distortion: it is possible that at some
    stage the input is closer to one test vector while at the same
    time being closest to an output point belonging to the rejected
    group.
  o Increased storage: the output points of the VQ codebook plus the
    test vectors.
Additional Links
• Slides are adapted from:
http://www.data-compression.com
and
http://www.geocities.com/mohamedqasem/vectorquantization/vq.html
