Image Compression using
Singular Value Decomposition
Why Do We Need Compression?
To save
• Memory
• Bandwidth
• Cost
How Can We Compress?
By removing redundancy:
• Coding redundancy
• Interpixel redundancy
  – Neighboring pixels are not independent but correlated
• Psychovisual redundancy
Information vs Data

(Figure: data depicted as a block of information surrounded by redundant data.)

DATA = INFORMATION + REDUNDANT DATA
Image Compression

• Lossless Compression

• Lossy Compression
Overview of SVD

• The purpose of the singular value decomposition (SVD) is to factor a matrix A into

  A = USVᵀ

• U and V are orthonormal matrices.
• S is a diagonal matrix.
• The singular values σ1 ≥ · · · ≥ σn ≥ 0 appear in descending order along the main diagonal of S. The numbers σ1², · · ·, σn² are the eigenvalues of AAᵀ and AᵀA.
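This factorization can be checked numerically. A minimal sketch using NumPy's `numpy.linalg.svd`; the matrix values are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical 3x2 matrix standing in for image data.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# Factor A = U S V^T; full_matrices=False gives the economy-size SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values come back in descending order.
assert np.all(np.diff(s) <= 0)

# The squared singular values are the eigenvalues of A^T A.
assert np.allclose(np.sort(s**2), np.sort(np.linalg.eigvalsh(A.T @ A)))

# U diag(s) V^T reassembles A.
assert np.allclose(U @ np.diag(s) @ Vt, A)
```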
Procedure to find the SVD
• Step 1: Calculate AAᵀ and AᵀA.

• Step 2: Eigenvalues and S.

• Step 3: Finding U.

• Step 4: Finding V.

• Step 5: The complete SVD.
Step 1: Calculate AAᵀ and AᵀA.

• Let A be the matrix to be decomposed; then form the products AAᵀ and AᵀA.
Step 2: Eigenvalues and S.
• The singular values are the square roots of the eigenvalues of AᵀA.

• Therefore S is the diagonal matrix with σ1, · · ·, σn along its main diagonal.
Step 3: Finding U.
• The columns of U are the orthonormal eigenvectors of AAᵀ.
Step 4: Finding V.
• Similarly, the columns of V are the orthonormal eigenvectors of AᵀA.
Step 5: The complete SVD.
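The five steps above can be carried out in code. This is a minimal NumPy sketch on a hypothetical 2×2 matrix (the worked example from the original slides is not preserved), assuming all singular values are nonzero so each column of U can be built as uᵢ = Avᵢ/σᵢ:

```python
import numpy as np

A = np.array([[4.0, 0.0],
              [3.0, -5.0]])

# Step 1: calculate AA^T and A^T A.
AAt, AtA = A @ A.T, A.T @ A

# Step 2: the eigenvalues of A^T A are the squared singular values;
# sort them in descending order and take square roots to build S.
eigvals, eigvecs = np.linalg.eigh(AtA)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
sigma = np.sqrt(eigvals[order])
S = np.diag(sigma)

# Step 4 (computed first here): the columns of V are the
# corresponding eigenvectors of A^T A.
V = eigvecs[:, order]

# Step 3: each column of U follows from u_i = A v_i / sigma_i.
U = (A @ V) / sigma

# Step 5: the complete SVD reassembles A.
assert np.allclose(U @ S @ V.T, A)
```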
SVD Compression
How SVD can compress any form of data.
• SVD takes a matrix, square or non-square, and factors it into two orthogonal matrices and a diagonal matrix.
• This allows us to rewrite our original
  matrix as a sum of much simpler rank
  one matrices.
• Since σ1 ≥ · · · ≥ σn ≥ 0, the first term of this series has the largest impact on the total sum, followed by the second term, then the third term, and so on.
• This means we can approximate the matrix A by adding only the first few terms of the series!
• As k increases, the image quality improves, but so does the amount of memory needed to store the image. This means lower-rank SVD approximations are preferable when their quality is acceptable.
Increasing the rank improves the quality of the image, but it also increases the memory used.
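The rank-k series described above can be sketched directly; a small random matrix stands in for real pixel data (a hypothetical example, with the helper name `rank_k_approx` my own):

```python
import numpy as np

def rank_k_approx(A, k):
    """Approximate A by the sum of its first k rank-one SVD terms."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Each term sigma_i * u_i v_i^T is a rank-one matrix.
    return sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))

rng = np.random.default_rng(0)
img = rng.random((8, 6))  # stand-in for an 8x6 grayscale "image"

# The approximation error shrinks as more terms are added,
# and full rank recovers the matrix exactly.
errors = [np.linalg.norm(img - rank_k_approx(img, k)) for k in (1, 3, 6)]
assert errors[0] >= errors[1] >= errors[2]
assert np.allclose(rank_k_approx(img, 6), img)
```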
SVD vs Memory
• A non-compressed m×n image, I, requires IM = mn values of storage.

With a rank-k approximation of I:
• Originally U is an m×m matrix, but we only need the first k columns, so UM = mk.
• Similarly, VM = nk, and the k singular values need SM = k.
  AM = UM + VM + SM
  AM = mk + nk + k
  AM = k(m + n + 1)
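The memory count above can be checked with simple arithmetic; the 512×512 image size and rank 20 are hypothetical example values:

```python
def image_memory(m, n):
    # A raw m-by-n grayscale image stores one value per pixel: IM = mn.
    return m * n

def approx_memory(m, n, k):
    # Rank-k SVD approximation: k columns of U (mk), k columns of V (nk),
    # and k singular values, giving AM = k(m + n + 1).
    return k * (m + n + 1)

m = n = 512
print(image_memory(m, n))       # 262144 values
print(approx_memory(m, n, 20))  # 20500 values for a rank-20 approximation
```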
Limitations
• There are important limits on k for which SVD actually saves memory:
  AM < IM
  k(m + n + 1) < mn
  k < mn/(m + n + 1)
• The same rule for k applies to color images.
• In the case of color, IM = 3mn, while AM = 3k(m + n + 1):
  AM < IM
  → 3k(m + n + 1) < 3mn
  Thus, k < mn/(m + n + 1).
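The bound on k can be computed directly; a minimal sketch (the helper name `max_useful_rank` and the 512×512 size are my own illustrative choices):

```python
def max_useful_rank(m, n):
    # Largest integer k with k(m + n + 1) < mn, i.e. k < mn/(m + n + 1).
    return (m * n - 1) // (m + n + 1)

m = n = 512
k_max = max_useful_rank(m, n)
assert k_max * (m + n + 1) < m * n           # saves memory at k_max...
assert (k_max + 1) * (m + n + 1) >= m * n    # ...but not at k_max + 1
print(k_max)  # 255
```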