
Lossy Compression of Multimedia Data






Presentation Transcript


  1. Losslessy Compression of Multimedia Data Hao Jiang Computer Science Department Sept. 25, 2007

  2. Lossy Compression • Beyond lossless compression, we can further reduce the number of bits needed to represent media data by discarding “unnecessary” information. • Media such as images, audio, and video can be “modified” without seriously affecting the perceived quality. • Lossy multimedia data compression standards include JPEG, MPEG, etc.

  3. Methods of Discarding Information • Reducing resolution. (Figure: original image vs. the 1/2-resolution image zoomed back in.)

  4. Reduce pixel color levels. (Figure: original image vs. the same image with half the color levels.)

  5. For audio and video we can similarly reduce the sampling rate, the number of sample levels, etc. • These methods usually introduce large distortion; smarter schemes are necessary. (Figure: JPEG result at 2.3 bits/pixel.)

  6. Distortion • Distortion: the amount of difference between the encoded media data and the original. • Distortion measurements: • Mean Square Error (MSE): MSE = mean(||x_org − x_decoded||²) • Signal-to-Noise Ratio (SNR): SNR = 10 log10(Signal_Power / MSE) (dB) • Peak Signal-to-Noise Ratio (PSNR): PSNR = 10 log10(255² / MSE) (dB)
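The three measures on this slide can be sketched in a few lines of Python (an illustrative sketch; the function names and sample values are not from the slides):

```python
import math

def mse(orig, decoded):
    """Mean squared error between two equal-length sample sequences."""
    return sum((a - b) ** 2 for a, b in zip(orig, decoded)) / len(orig)

def psnr(orig, decoded, peak=255):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit samples)."""
    return 10 * math.log10(peak ** 2 / mse(orig, decoded))

orig = [52, 55, 61, 66]
decoded = [52, 54, 61, 65]
print(mse(orig, decoded))              # 0.5
print(round(psnr(orig, decoded), 2))   # 51.14
```

Note that PSNR uses the fixed peak value (255 for 8-bit data) rather than the actual signal power, which is why it is the usual figure of merit for images.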

  7. The Relation of Rate and Distortion • The lowest possible rate (average codeword length per symbol) is correlated with the distortion. (Figure: rate–distortion curve; the bit rate falls from the entropy H at zero distortion to 0 at D_max.)

  8. Quantization • Maps a continuous or discrete set of values into a smaller set of values. • The basic method of “throwing away” information. • Quantization can be applied to both scalars (single numbers) and vectors (several numbers together). • After quantization, we can generate a fixed-length code directly.

  9. Uniform Scalar Quantization Assume x is in [xmin, xmax]. We partition the interval uniformly into N non-overlapping regions, giving quantization step Δ = (xmax − xmin)/N. A quantizer Q(x) maps x to the quantization value of the region in which x falls. (Figure: [xmin, xmax] divided by decision boundaries, with one quantization value per region.)
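A uniform scalar quantizer as described above can be sketched as follows (illustrative Python; the midpoint placement anticipates slide 12's point about minimizing the maximum error):

```python
def uniform_quantize(x, xmin, xmax, N):
    """Map x in [xmin, xmax] to the midpoint of the uniform interval it falls in."""
    delta = (xmax - xmin) / N          # quantization step
    k = int((x - xmin) / delta)        # interval index, 0 .. N-1
    k = min(k, N - 1)                  # x == xmax belongs to the last interval
    return xmin + (k + 0.5) * delta    # quantization value (interval midpoint)

# 4 intervals on [0, 8]: step 2, quantization values 1, 3, 5, 7
print(uniform_quantize(0.3, 0, 8, 4))  # 1.0
print(uniform_quantize(4.9, 0, 8, 4))  # 5.0
print(uniform_quantize(8.0, 0, 8, 4))  # 7.0
```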

  10. Quantization Example Q(x) = [floor(x/Δ) + 0.5]·Δ — midrise quantization. (Figure: staircase plot of Q(x)/Δ vs. x/Δ with output levels ±0.5, ±1.5, ±2.5.)

  11. Quantization Example Q(x) = round(x/Δ)·Δ — midtread quantization (zero is an output level). (Figure: staircase plot of Q(x)/Δ vs. x/Δ with output levels 0, ±1, ±2, ±3.)
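The two example quantizers on slides 10 and 11 can be sketched directly (an illustrative Python sketch; note that Python's built-in round uses banker's rounding at exact halves):

```python
import math

def midrise(x, delta):
    """Midrise: Q(x) = (floor(x/delta) + 0.5) * delta — no zero output level."""
    return (math.floor(x / delta) + 0.5) * delta

def midtread(x, delta):
    """Midtread: Q(x) = round(x/delta) * delta — zero is an output level."""
    return round(x / delta) * delta

print(midrise(0.2, 1.0))    # 0.5
print(midtread(0.2, 1.0))   # 0.0
print(midrise(-0.2, 1.0))   # -0.5
print(midtread(1.6, 1.0))   # 2.0
```

The practical difference: a midtread quantizer maps small signal values (e.g. background noise) exactly to zero, while a midrise quantizer never outputs zero.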

  12. Quantization Error • To minimize the maximum possible error, the quantization value should be placed at the center of each decision interval. • If x is equally likely to fall anywhere in the interval, the quantization error x − Q(x) is uniformly distributed in [−Δ/2, Δ/2]. (Figure: error between x and its quantization value over [x_n, x_{n+1}].)

  13. Quantization and Codewords Each quantization value can be associated with a binary codeword: here, the natural binary index of the quantization value, 000 001 010 011 100 101 from xmin to xmax.

  14. Another Coding Scheme • Gray code: 000 001 011 010 110 111 (from xmin to xmax). • Adjacent codewords differ in only one bit. • Gray code is more resistant to bit errors than the natural binary code: a single flipped bit moves the decoded value to a neighboring quantization level rather than a distant one.
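Gray codes can be generated with the standard k XOR (k >> 1) construction; the first six 3-bit codewords reproduce the slide's sequence (illustrative Python):

```python
def gray_code(n_bits):
    """n-bit Gray code sequence: consecutive codewords differ in exactly one bit."""
    return [k ^ (k >> 1) for k in range(2 ** n_bits)]

codes = gray_code(3)
print([format(c, "03b") for c in codes])
# ['000', '001', '011', '010', '110', '111', '101', '100']

# every adjacent pair differs in exactly one bit
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
```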

  15. Bit Assignment • If the number of quantization intervals is N, we can use log2(N) bits to represent each quantized value. • For uniformly distributed x, the SNR of Q(x) is proportional to 20 log10(N) = 6.02·n dB, where N = 2^n. • Each additional bit therefore gains about 6 dB of SNR.
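The 6.02 dB/bit rule follows from signal power A²/3 for x uniform on [−A, A] and quantization noise power Δ²/12 with Δ = 2A/2ⁿ, so SNR = 10 log10(N²) = 20·n·log10(2) ≈ 6.02·n dB. A small check (illustrative Python; the function name is not from the slides):

```python
import math

def snr_db(n_bits, A=1.0):
    """SNR of an n-bit uniform quantizer for x uniform on [-A, A]:
    signal power A^2/3, noise power delta^2/12 with delta = 2A/2^n."""
    N = 2 ** n_bits
    delta = 2 * A / N
    signal_power = A ** 2 / 3
    noise_power = delta ** 2 / 12
    return 10 * math.log10(signal_power / noise_power)

for n in (1, 4, 8, 16):
    print(n, round(snr_db(n), 2))   # 6.02, 24.08, 48.16, 96.33 — about 6.02 dB per bit
```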

  16. Non-uniform Quantizer • For audio and visual signals, the tolerable distortion is roughly proportional to the signal magnitude (perceived distortion ∝ Δ/s). • So we can make the quantization step Δ proportional to the signal level. • If the signal is not uniformly distributed, we also prefer non-uniform quantization.

  17. Vector Quantization (Figure: a 2-D space partitioned into decision regions, each containing one quantization value.)
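The figure's idea — each input vector is mapped to the quantization value whose decision region it falls in — amounts to a nearest-neighbor search over a codebook. A minimal sketch, with a hypothetical 2-D codebook (not from the slides):

```python
def vq_encode(vec, codebook):
    """Return the index of the nearest codebook vector (squared Euclidean
    distance), i.e. the decision region that vec falls in."""
    dists = [sum((a - b) ** 2 for a, b in zip(vec, c)) for c in codebook]
    return dists.index(min(dists))

# hypothetical codebook with four 2-D quantization values
codebook = [(0, 0), (0, 4), (4, 0), (4, 4)]
idx = vq_encode((3.2, 3.9), codebook)
print(idx, codebook[idx])   # 3 (4, 4)
```

Only the index needs to be transmitted; the decoder looks up the quantization value in the same codebook.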

  18. Predictive Coding • Lossless difference coding revisited.
  Encoder input: 1 3 4 5 3 2 1 0 3 4 5 6 7
  Transmitted differences (each sample minus the previous one, starting from 0): 1 2 1 1 -2 -1 -1 -1 3 1 1 1 1
  Decoder: accumulates the differences to recover 1 3 4 5 3 2 1 0 3 4 5 6 7 exactly.
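The lossless difference coder above can be written out directly; the round trip reproduces the slide's numbers (illustrative Python):

```python
def diff_encode(samples, pred=0):
    """Encoder: transmit each sample's difference from the previous sample."""
    out = []
    for x in samples:
        out.append(x - pred)
        pred = x
    return out

def diff_decode(diffs, pred=0):
    """Decoder: accumulate the differences to recover the samples exactly."""
    out = []
    for d in diffs:
        pred += d
        out.append(pred)
    return out

samples = [1, 3, 4, 5, 3, 2, 1, 0, 3, 4, 5, 6, 7]
diffs = diff_encode(samples)
print(diffs)   # [1, 2, 1, 1, -2, -1, -1, -1, 3, 1, 1, 1, 1]
assert diff_decode(diffs) == samples   # lossless round trip
```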

  19. Predictive Coding in Lossy Compression
  Quantizer: Q(x) = 1 if x > 0, 0 if x == 0, and −1 if x < 0.
  Encoder input: 1 3 4 5 3 2 1 0 3 4 5 6 7
  Quantized differences: 1 1 1 1 -1 -1 -1 -1 1 1 1 1 1
  The encoder contains a local decoder, so it predicts from the same reconstruction the real decoder will produce.
  Reconstruction (starting from 0): 1 2 3 4 3 2 1 0 1 2 3 4 5
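This lossy predictive coder, with the slide's sign quantizer and the local-decoder feedback loop, reproduces the slide's numbers (illustrative Python):

```python
def sign_q(x):
    """The slide's quantizer: 1 if x > 0, 0 if x == 0, -1 if x < 0."""
    return (x > 0) - (x < 0)

def dpcm_encode(samples):
    """Quantize the prediction error. The encoder runs a local decoder so
    that encoder and decoder predictions stay in sync despite the loss."""
    recon = 0                      # local decoder state (initial prediction 0)
    codes, recons = [], []
    for x in samples:
        c = sign_q(x - recon)      # quantized difference, the transmitted code
        recon += c                 # what the decoder will reconstruct
        codes.append(c)
        recons.append(recon)
    return codes, recons

samples = [1, 3, 4, 5, 3, 2, 1, 0, 3, 4, 5, 6, 7]
codes, recons = dpcm_encode(samples)
print(codes)   # [1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, 1]
print(recons)  # [1, 2, 3, 4, 3, 2, 1, 0, 1, 2, 3, 4, 5]
```

Because the quantized difference is at most ±1 per step, the reconstruction lags behind fast-rising input (4, 5, 6, 7 become 2, 3, 4, 5) — the distortion this scheme trades for its 3-level code.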

  20. A Different Notation — Lossless Predictive Encoder Diagram: audio samples or image pixels → subtract the prediction (held in a buffer) → entropy coding → code stream (0101…).

  21. A Different Notation — Lossless Predictive Decoder Diagram: code stream → entropy decoding → add the prediction (held in a buffer) → reconstructed audio samples or image pixels.

  22. A Different Notation — Lossy Predictive Coding: audio samples or image pixels → subtract the prediction → quantizer Q → coding → code stream (0101…); a local decompression loop (add the prediction back, buffer the result) supplies the prediction.

  23. General Prediction Method • For images: predict pixel X from its causal neighbors — left (A), above (B), above-left (C), and above-right (D). • For audio: predict each sample from the preceding samples. • Issues with predictive coding: • Not resistant to bit errors (an error propagates through all later predictions). • Random access is difficult (decoding must start from the beginning).
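The slide names the neighboring pixels but not a specific formula; one common choice (an assumption here, used by lossless JPEG among others) is the planar predictor X̂ = A + B − C. A minimal sketch:

```python
def predict_image(img):
    """Prediction errors for each pixel using X_hat = A + B - C, where
    A = left, B = above, C = above-left (a standard lossless-JPEG predictor;
    the slide names the neighbors but not the formula, so this is one
    common choice). Out-of-bounds neighbors are taken as 0."""
    h, w = len(img), len(img[0])
    errors = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            A = img[r][c - 1] if c > 0 else 0
            B = img[r - 1][c] if r > 0 else 0
            C = img[r - 1][c - 1] if r > 0 and c > 0 else 0
            errors[r][c] = img[r][c] - (A + B - C)
    return errors

# a smooth gradient predicts almost perfectly: errors are mostly 0
img = [[10, 11, 12],
       [11, 12, 13],
       [12, 13, 14]]
print(predict_image(img))   # [[10, 1, 1], [1, 0, 0], [1, 0, 0]]
```

The small, strongly peaked error values are what make the subsequent entropy coding effective.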

  24. Transform Coding
  Input: 1 3 4 5 3 2 1 0 3 4 5 6
  Pairwise averages (x1 + x2)/2: 2 4.5 2.5 0.5 3.5 5.5
  Pairwise half differences (x1 − x2)/2: −1 −0.5 0.5 0.5 −0.5 −0.5

  25. Transform and Inverse Transform We transformed each block of two input samples using y1 = (x1 + x2)/2, y2 = (x1 − x2)/2, i.e. the matrix [½ ½; ½ −½]. The inverse transform is x1 = y1 + y2, x2 = y1 − y2, i.e. the matrix [1 1; 1 −1].
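The two-sample average/difference transform and its inverse can be checked directly against the numbers on slide 24 (illustrative Python):

```python
def forward(x1, x2):
    """y1 = (x1 + x2)/2 (average), y2 = (x1 - x2)/2 (half difference)."""
    return (x1 + x2) / 2, (x1 - x2) / 2

def inverse(y1, y2):
    """x1 = y1 + y2, x2 = y1 - y2."""
    return y1 + y2, y1 - y2

samples = [1, 3, 4, 5, 3, 2, 1, 0, 3, 4, 5, 6]
pairs = [forward(a, b) for a, b in zip(samples[0::2], samples[1::2])]
print([p[0] for p in pairs])  # averages: [2.0, 4.5, 2.5, 0.5, 3.5, 5.5]
print([p[1] for p in pairs])  # half differences: [-1.0, -0.5, 0.5, 0.5, -0.5, -0.5]

# the inverse transform recovers the input exactly
restored = [v for y1, y2 in pairs for v in inverse(y1, y2)]
assert restored == [float(s) for s in samples]
```

Note how the averages carry most of the signal energy while the half differences stay small — exactly the energy concentration slide 26 describes.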

  26. Transform Coding • A proper transform concentrates the signal energy into a small number of coefficients. • We can then quantize these coefficients differently and achieve a high compression ratio. • Useful transforms for compressing multimedia data: • Fourier Transform • Discrete Cosine Transform
