Chapter 8 – Compression Aims: • Outline the objectives of compression. • Define the key methods used to compress data. • Outline methods used to compress images, video and audio.
Key factors in reducing the amount of data storage • Getting rid of redundant data. This involves determining the parts of the data that are not required. • Identifying irrelevant data. This involves identifying the parts of the data which are perceived to be irrelevant. • Converting the data into a different format. This will typically involve changing the way that the data is processed and stored. • Reducing the quality of the data. Often the user does not require the full specified quality of the data.
RGB colour values (hexadecimal and decimal):
#FF0000 (255, 0, 0)
#00FF00 (0, 255, 0)
#0000FF (0, 0, 255)
#FFFFFF (255, 255, 255)
#000000 (0, 0, 0)
#6496C8 (100, 150, 200)
#C89664 (200, 150, 100)
#64C896 (100, 200, 150)
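As a quick check of these pairings, the short Python sketch below (purely illustrative; the function name is not from the slides) converts a hexadecimal colour code into its decimal (R, G, B) triple:

```python
def hex_to_rgb(hex_code: str) -> tuple[int, int, int]:
    """Convert a colour such as '#6496C8' into its (R, G, B) decimal values."""
    digits = hex_code.lstrip('#')
    # Each pair of hex digits is one 8-bit colour channel.
    return tuple(int(digits[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb('#FF0000'))   # (255, 0, 0)
print(hex_to_rgb('#6496C8'))   # (100, 150, 200)
```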
Audio, image and video compression • JPEG/GIF. The JPEG (Joint Photographic Experts Group) compression technique is well matched to what the human eye and the brain perceive. • MPEG. MPEG (Moving Picture Experts Group) uses many techniques to reduce the size of the motion video data. • MPEG audio (MP-3). The digital storage of audio allows the data to be compressed.
Lossy and lossless compression • Lossless compression. Where the data, once uncompressed, will be identical to the original uncompressed data. This will obviously be the case with computer-type data, such as data files, computer programs, and so on, as any loss of data may cause the file to be corrupted. • Lossy compression. Where the data, once uncompressed, cannot be fully recovered. It normally involves analysing the data and determining which data has little effect on the perceived information.
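To illustrate the lossless case, the sketch below (using Python's standard zlib module; it is an illustration, not part of the chapter) compresses a block of data and confirms that decompression recovers it exactly:

```python
import zlib

# Lossless compression: the decompressed output must be byte-identical to the
# original, which is essential for computer programs and data files.
original = b"AAAAABBBBBCCCCCAAAAABBBBB" * 100
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), "bytes ->", len(compressed), "bytes")
assert restored == original   # nothing has been lost
```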
Entropy and source coding • Entropy coding. This does not take into account any of the characteristics of the data and treats all the bits in the same way. As it does not know which parts of the data can be lost, it produces lossless coding. Typical coding techniques are: • Statistical encoding. Analysing the occurrence and patterns of data. • Suppressing repetitive sequences. • Source encoding. This normally takes into account characteristics of the information.
Entropy coding Normally, general data compression does not take into account the type of data which is being compressed and is lossless. As it is lossless it can be applied to computer data files, documents, images, and so on. The two main techniques are statistical coding and repetitive sequence suppression. • Huffman. Huffman coding uses a variable length code for each of the elements within the data. This normally involves analysing the data to determine the probability of its elements. The most probable elements are coded with a few bits and the least probable coded with a greater number of bits. This could be done on a character-by-character basis, in a text file, or could be achieved on a byte-by-byte basis for other files. • Lempel-Ziv. Around 1977, Abraham Lempel and Jacob Ziv developed the Lempel–Ziv class of adaptive dictionary data compression techniques (also known as LZ-77 coding), which are now some of the most popular compression techniques.
Huffman
Letter:               ‘b’    ‘c’    ‘e’    ‘i’    ‘o’    ‘p’
No. of occurrences:   12     3      57     51     33     20

Sorted by decreasing occurrence:
‘e’ 57   ‘i’ 51   ‘o’ 33   ‘p’ 20   ‘b’ 12   ‘c’ 3

Resulting Huffman codes:
‘e’ 11   ‘i’ 10   ‘o’ 00   ‘p’ 011   ‘b’ 0101   ‘c’ 0100
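The code lengths in this table can be reproduced with the standard Huffman construction. The Python sketch below (written for this example, not taken from the text) repeatedly merges the two least frequent subtrees; the exact 0/1 patterns depend on how the branches are labelled, but the code lengths match those above:

```python
import heapq
from itertools import count

def huffman_codes(freqs: dict[str, int]) -> dict[str, str]:
    """Build prefix codes: the most frequent symbols get the shortest codes."""
    tie = count()   # unique tie-breaker so equal frequencies never compare dicts
    heap = [(f, next(tie), {sym: ''}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # least frequent subtree
        f2, _, c2 = heapq.heappop(heap)   # next least frequent subtree
        # Prefix '0' onto one subtree's codes and '1' onto the other's, then merge.
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

print(huffman_codes({'b': 12, 'c': 3, 'e': 57, 'i': 51, 'o': 33, 'p': 20}))
```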
Huffman (cont.)
Using the codes ‘e’ 11, ‘i’ 10, ‘o’ 00, ‘p’ 011, ‘b’ 0101, ‘c’ 0100, the bit stream 11000110100100110100 will be decoded as: ‘e’ ‘o’ ‘p’ ‘c’ ‘i’ ‘p’ ‘c’
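Because no Huffman code is the prefix of another, the bit stream can be decoded one bit at a time. A minimal Python sketch of this decoding, using the code table from the slide, is:

```python
# Code table from the example above (prefix-free: no code is a prefix of another).
codes = {'e': '11', 'i': '10', 'o': '00', 'p': '011', 'b': '0101', 'c': '0100'}
decode_table = {bits: sym for sym, bits in codes.items()}

def huffman_decode(bitstream: str) -> str:
    symbols, current = [], ''
    for bit in bitstream:
        current += bit
        if current in decode_table:        # a complete code word has been read
            symbols.append(decode_table[current])
            current = ''
    return ''.join(symbols)

print(huffman_decode('11000110100100110100'))   # -> 'eopcipc'
```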
Lempel-Ziv Previously seen text is replaced by back-pointers of the form #offset#length, meaning ‘go back offset characters in the decoded text and copy length characters from there’. For example: ‘The receiver#9#3quires a#20#5pt for it. This is automatically sent wh#6#2 it #30#2#47#5ved.’ decompresses to: ‘The receiver requires a receipt for it. This is automatically sent when it is received.’
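A rough Python sketch of a decoder for this pointer notation follows (illustrative only; it assumes each #offset#length token means 'copy length characters starting offset characters back in the text decoded so far'):

```python
import re

def lz_decode(compressed: str) -> str:
    """Expand #offset#length back-pointers into plain text."""
    output, pos = [], 0
    for match in re.finditer(r'#(\d+)#(\d+)', compressed):
        output.append(compressed[pos:match.start()])       # literal text
        offset, length = int(match.group(1)), int(match.group(2))
        decoded = ''.join(output)
        start = len(decoded) - offset
        output.append(decoded[start:start + length])       # copy of earlier text
        pos = match.end()
    output.append(compressed[pos:])                        # trailing literal text
    return ''.join(output)

print(lz_decode('The receiver#9#3quires a#20#5pt for it.'))
# -> The receiver requires a receipt for it.
```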
LZW The Lempel–Ziv–Welch (LZW) algorithm (a development of LZ-78) builds a dictionary of frequently used groups of characters (or 8-bit binary values). A simple example is to use a six-character alphabet and a 16-entry dictionary, thus the resulting code word will have 4 bits. If the transmitted message is: ababacdcdaaaaaaef then the transmitter and receiver would each initially load the following into their dictionaries:
0000 ‘a’   0001 ‘b’   0010 ‘c’   0011 ‘d’   0100 ‘e’   0101 ‘f’   0110–1111 empty
LZW (cont.) Initial dictionary:
0000 ‘a’   0001 ‘b’   0010 ‘c’   0011 ‘d’   0100 ‘e’   0101 ‘f’   0110–1111 empty
After the character group ‘ab’ has been seen, it is added at the next free location (0110):
0000 ‘a’   0001 ‘b’   0010 ‘c’   0011 ‘d’   0100 ‘e’   0101 ‘f’   0110 ‘ab’   0111–1111 empty
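The dictionary growth shown above can be produced with a compact LZW encoder. The Python sketch below is illustrative only; for simplicity it does not stop at the 16 entries that a 4-bit code word allows:

```python
def lzw_encode(message: str, alphabet: str = 'abcdef') -> list[int]:
    """LZW: emit the code of the longest dictionary match, then add match + next char."""
    dictionary = {ch: i for i, ch in enumerate(alphabet)}   # 'a' -> 0 ... 'f' -> 5
    codes, current = [], ''
    for ch in message:
        if current + ch in dictionary:                 # keep extending the match
            current += ch
        else:
            codes.append(dictionary[current])           # emit longest match found
            dictionary[current + ch] = len(dictionary)  # new entry, e.g. 'ab' -> 6 (0110)
            current = ch
    codes.append(dictionary[current])
    return codes

print(lzw_encode('ababacdcdaaaaaaef'))   # 12 code words for 17 characters
```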
Statistical coding • Statistical encoding is an entropy technique which identifies certain sequences within the data. These ‘patterns’ are then coded so that they have fewer bits. Frequently used patterns are coded with fewer bits than less common patterns. For example, text files normally contain many more ‘e’ characters than ‘z’ characters. Thus the ‘e’ character could be encoded with a few bits and the ‘z’ with many bits. Examples: Morse coding; pure coding.
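To make the ‘e’ versus ‘z’ point concrete, the small Python sketch below (illustrative only) counts letter frequencies in a piece of text, which is the first step of any statistical coding scheme:

```python
from collections import Counter

text = "statistical encoding assigns the shortest codes to the most common letters"
frequencies = Counter(ch for ch in text if ch.isalpha())

# The most common letters are the best candidates for the shortest codes.
for letter, occurrences in frequencies.most_common(5):
    print(letter, occurrences)
```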
Repetitive character sequence suppression • Repetitive sequence suppression involves representing a long run of the same character with a special flag character followed by the number of times the character appears in the run. For example, 8.3200000000000 could be coded as: 8.32F11 where F is a special flag indicating a run of zeros.
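A minimal Python sketch of this zero-run suppression (the ‘F’ flag format is taken from the example above; the implementation itself is only a sketch) is:

```python
def suppress_zero_runs(text: str, flag: str = 'F', min_run: int = 3) -> str:
    """Replace runs of '0' with flag + run length, e.g. '8.3200000000000' -> '8.32F11'."""
    out, i = [], 0
    while i < len(text):
        if text[i] == '0':
            run = 1
            while i + run < len(text) and text[i + run] == '0':
                run += 1
            # Encode long runs as flag + count; short runs are cheaper left alone.
            out.append(f'{flag}{run}' if run >= min_run else '0' * run)
            i += run
        else:
            out.append(text[i])
            i += 1
    return ''.join(out)

print(suppress_zero_runs('8.3200000000000'))   # -> 8.32F11
```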
Source compression • Source compression takes into account the type of information that is being compressed, and is typically used with image, video and audio information. For example, the following might be the integer values of the samples: 321, 322, 324, 324, 320, 317, 310, 311 This could be coded as difference values as: 321, +1, +2, 0, –4, –3, –7, +1
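The difference coding shown above takes only a few lines of Python (a sketch, not tied to any particular codec):

```python
def difference_encode(samples: list[int]) -> list[int]:
    """Keep the first sample, then store each sample as a change from the previous one."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def difference_decode(deltas: list[int]) -> list[int]:
    """Rebuild the original samples by accumulating the differences."""
    samples = [deltas[0]]
    for d in deltas[1:]:
        samples.append(samples[-1] + d)
    return samples

values = [321, 322, 324, 324, 320, 317, 310, 311]
print(difference_encode(values))                               # [321, 1, 2, 0, -4, -3, -7, 1]
print(difference_decode(difference_encode(values)) == values)  # True
```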
Image compression
[Figure: an image with a good deal of repetition]
[Figure: an image with a good deal of changes]
JPEG • It is a compression technique for grey-scale or colour images and uses a combination of discrete cosine transform, quantization, run-length and Huffman coding. The Y (luminance), Cb and Cr (chrominance) components are computed from the RGB components:
Y = 0.299R + 0.587G + 0.114B
Cb = –0.1687R – 0.3313G + 0.5B
Cr = 0.5R – 0.4187G – 0.0813B
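The conversion can be checked with a direct transcription of these formulas into Python (shown without the +128 offset that JFIF files normally add to Cb and Cr):

```python
def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Convert RGB components into luminance (Y) and chrominance (Cb, Cr)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b
    return y, cb, cr

print(rgb_to_ycbcr(255, 0, 0))       # pure red: Y ~ 76.2, Cb ~ -43.0, Cr = 127.5
print(rgb_to_ycbcr(255, 255, 255))   # white:    Y = 255, Cb ~ 0, Cr ~ 0
```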
I, P and B-frames • Intra frame (I-frame). An intra frame, or I-frame, is a complete image and does not require any extra information to be added to it to make it complete. As it is a complete frame, it does not use any motion estimation processing. It is typically used as a starting point for other referenced frames, and is usually the first frame to be sent. • Predictive frame (P-frame). The predictive frame, or P-frame, uses the preceding I-frame as its reference and involves motion estimation processing. Each macroblock in this frame is coded with reference to the I-frame either as a motion vector and difference or, if no suitable match is found, as a completely encoded macroblock (called an intracoded macroblock); a toy sketch of this decision is given below. The decoder must thus retain the I-frame information to allow the P-frame to be decoded. • Bidirectional frame (B-frame). The bidirectional frame, or B-frame, is similar to the P-frame except that it references the nearest preceding and/or future I- or P-frame. When compressing the data, the motion estimation works on the future frame first, followed by the past frame. If this does not give a good match, an average of the two frames is used. If all else fails, the macroblock can be intracoded.
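As a rough illustration of the P-frame macroblock decision described above, the following toy Python sketch (1-D ‘frames’ and a simple sum-of-absolute-differences search; none of this is the actual MPEG algorithm) either emits a motion vector plus differences or falls back to intracoding:

```python
def encode_p_block(block, reference, position, search_range=4, threshold=10):
    """Search the reference frame near `position` for the closest matching block.

    Returns ('inter', vector, differences) when a close match exists,
    otherwise ('intra', block) as the intracoded fallback.
    """
    best_vector, best_cost = 0, float('inf')
    for v in range(-search_range, search_range + 1):
        start = position + v
        if start < 0 or start + len(block) > len(reference):
            continue
        candidate = reference[start:start + len(block)]
        cost = sum(abs(a - b) for a, b in zip(block, candidate))   # SAD metric
        if cost < best_cost:
            best_vector, best_cost = v, cost
    if best_cost <= threshold:
        start = position + best_vector
        diffs = [a - b for a, b in zip(block, reference[start:start + len(block)])]
        return ('inter', best_vector, diffs)
    return ('intra', block)

reference_frame = [10, 10, 12, 14, 80, 82, 84, 20, 20, 20]   # the decoded I-frame
current_block   = [81, 83, 85]                               # similar content, shifted by one
print(encode_p_block(current_block, reference_frame, position=5))
# -> ('inter', -1, [1, 1, 1])
```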
Audio compression Digitized audio signal rate = 44.1 kHz × 16 bits = 705.6 kbps
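That figure is simple arithmetic, checked below per mono channel (stereo CD audio doubles it to 1411.2 kbps):

```python
sample_rate_khz = 44.1     # CD-quality sampling rate
bits_per_sample = 16       # 16-bit samples

bit_rate_kbps = sample_rate_khz * bits_per_sample
print(round(bit_rate_kbps, 1))   # 705.6 kbps per channel
```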