
ERROR CONTROL CODING


Presentation Transcript


  1. ERROR CONTROL CODING Basic concepts Classes of codes: • Block Codes: Linear Codes, Cyclic Codes • Convolutional Codes

  2. Basic Concepts Example: Binary Repetition Codes (3,1) code: 0 ==> 000, 1 ==> 111. Received: 011. What was transmitted? Scenario A: 111 with one error in the 1st location. Scenario B: 000 with two errors in the 2nd & 3rd locations. Decoding: P(A) = (1-p)^2 p, P(B) = (1-p) p^2, so P(A) > P(B) (for p < 0.5). Decoding decision: 011 ==> 111
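The scenario comparison on this slide can be sketched directly; the crossover probability p below is an assumed illustrative value, and `repetition_decode` is a hypothetical helper name.

```python
# Maximum-likelihood decoding of the (3,1) repetition code, as in the
# slide's scenario analysis for the received word 011.

def repetition_decode(received):
    """Majority vote: decode to 1 if at least two of the three bits are 1."""
    return 1 if sum(received) >= 2 else 0

p = 0.1  # assumed BSC crossover probability (any p < 0.5 works)
# Scenario A: 111 sent, one error  -> P(A) = (1-p)^2 * p
# Scenario B: 000 sent, two errors -> P(B) = (1-p) * p^2
P_A = (1 - p) ** 2 * p
P_B = (1 - p) * p ** 2
assert P_A > P_B                           # so 011 decodes to 111
assert repetition_decode([0, 1, 1]) == 1
```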

  3. Probability of Error After Decoding The (3,1) repetition code can correct single errors. In general, for a tc-error-correcting code, the probability of uncorrected error is Pu = sum_{j=tc+1}^{n} C(n,j) p^j (1-p)^(n-j). Bit error probability: for the (3,1) code, Pb = Pu = 3p^2(1-p) + p^3 ≈ 3p^2. Gain: for a BSC with p = 10^-2, Pb ≈ 3x10^-4. Cost: expansion in bandwidth, or a lower rate.
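The post-decoding error probability above can be computed numerically; `p_undetected` is a hypothetical function name for this sketch.

```python
from math import comb

def p_undetected(n, tc, p):
    """Probability that more than tc of n bits are in error on a BSC(p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(tc + 1, n + 1))

p = 1e-2
Pb = p_undetected(3, 1, p)    # (3,1) repetition code corrects tc = 1 error
assert abs(Pb - 2.98e-4) < 1e-6   # ~3x10^-4, versus p = 10^-2 uncoded
```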

  4. Hamming Distance • Def.: The Hamming distance between two codewords ci and cj, denoted by d(ci, cj), is the number of components at which they differ. dH(011, 000) = 2 and dH(011, 111) = 1, so 011 is closer to 111. For binary codewords, dH(c1, c2) = wH(c1 + c2), the Hamming weight of their modulo-2 sum. • Maximum-likelihood decoding reduces to minimum-distance decoding if the a priori probabilities are equal (P(0) = P(1)).
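The definition translates into a one-line function; the two assertions mirror the slide's worked values.

```python
def hamming_distance(a, b):
    """Number of bit positions at which codewords a and b differ."""
    return sum(x != y for x, y in zip(a, b))

assert hamming_distance("011", "000") == 2
assert hamming_distance("011", "111") == 1   # 011 is closer to 111
# For binary words, dH(c1, c2) equals the Hamming weight of c1 XOR c2:
assert hamming_distance("011", "000") == "011".count("1")
```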

  5. Geometrical Illustration: Hamming Cube [Figure: the 3-bit Hamming cube, with vertices 000, 001, 010, 011, 100, 101, 110, 111]

  6. Error Correction and Detection Consider a code consisting of two codewords with Hamming distance dmin. How many errors can be detected? Corrected? Number of errors that can be detected: td = dmin - 1. Number of errors that can be corrected: tc = floor((dmin - 1)/2). In other words, for tc-error correction we must have dmin >= 2tc + 1.
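The two formulas can be expressed directly; the function names are hypothetical.

```python
def detectable(d_min):
    """Errors guaranteed detectable: td = d_min - 1."""
    return d_min - 1

def correctable(d_min):
    """Errors guaranteed correctable: tc = floor((d_min - 1) / 2)."""
    return (d_min - 1) // 2

assert detectable(3) == 2 and correctable(3) == 1   # (3,1) repetition code
assert detectable(5) == 4 and correctable(5) == 2
```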

  7. Error Correction and Detection (cont’d) Example: dmin = 5. Can correct two errors; or detect four errors; or correct one error and detect two more errors. In general, dmin >= 2tc + 1 for correction alone, and dmin >= tc + td + 1 (with td >= tc) for simultaneous correction of tc errors and detection of td errors.
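The trade-off for dmin = 5 can be enumerated; `max_detection_given_correction` is a hypothetical helper for this sketch.

```python
def max_detection_given_correction(d_min, tc):
    """Largest td satisfying d_min >= tc + td + 1 (requires td >= tc)."""
    td = d_min - 1 - tc
    return td if td >= tc else None

# d_min = 5: correct 2 (td = tc = 2), or detect 4 (tc = 0),
# or correct 1 while detecting up to 3 errors in total.
assert max_detection_given_correction(5, 2) == 2
assert max_detection_given_correction(5, 0) == 4
assert max_detection_given_correction(5, 1) == 3
```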

  8. Minimum Distance of a Code • Def.: The minimum distance of a code C is the minimum Hamming distance between any two distinct codewords. • A code with minimum distance dmin can correct all error patterns up to and including tc-error patterns, where tc = floor((dmin - 1)/2), i.e. dmin >= 2tc + 1. It may be able to correct some higher-weight error patterns, but not all of them.

  9. Example: (7,4) Code
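The table for this slide did not survive extraction, but the sixteen codewords of a (7,4) code appear as the first row of the standard array on slide 17. As a sketch (not the original slide content), the minimum-distance definition from slide 8 can be applied to them:

```python
from itertools import combinations

def minimum_distance(code):
    """Minimum Hamming distance over all distinct codeword pairs."""
    return min(sum(x != y for x, y in zip(a, b))
               for a, b in combinations(code, 2))

# Codewords taken from the first row of the standard array on slide 17.
hamming74 = ["0000000", "1101000", "0110100", "1011100", "1110010", "0011010",
             "1000110", "0101110", "1010001", "0111001", "1100101", "0001101",
             "0100011", "1001011", "0010111", "1111111"]

assert minimum_distance(["000", "111"]) == 3   # (3,1) repetition code
assert minimum_distance(hamming74) == 3        # so tc = 1: single-error correcting
```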

  10. Coding: Gain and Cost (Revisited) • Given an (n,k) code. Gain is proportional to the error correction capability, tc. Cost is proportional to the number of check digits, n-k = r. • Given a sequence of k information digits, it is desired to add as few check digits r as possible while correcting as many errors (tc) as possible. What is the relation between these code parameters? Note: some textbooks use m rather than r for the number of check bits.

  11. Hamming Bound • For an (n,k) code, there are 2^k codewords and 2^n possible received words. • Think of the 2^k codewords as centers of spheres in an n-dimensional space. • All received words that differ from codeword ci in tc or fewer positions lie within the sphere Si of center ci and radius tc. • For the code to be tc-error correcting (i.e. any tc-error pattern for any transmitted codeword can be corrected), all spheres Si, i = 1, ..., 2^k, must be non-overlapping.

  12. Hamming Bound (cont’d) • In other words, when a codeword is selected, none of the n-bit sequences that differ from that codeword in tc or fewer locations can be selected as a codeword. • Consider the all-zero codeword. The number of words that differ from this codeword in j locations is C(n, j). • The total number of words in any sphere (including the codeword at the center) is sum_{j=0}^{tc} C(n, j).

  13. Hamming Bound (cont’d) • The total number of n-bit sequences that must be available (for the code to be a tc-error correcting code) is 2^k * sum_{j=0}^{tc} C(n, j). • But the total number of sequences is 2^n. Therefore: 2^k * sum_{j=0}^{tc} C(n, j) <= 2^n, i.e. 2^(n-k) >= sum_{j=0}^{tc} C(n, j).

  14. Hamming Bound (cont’d) • The above bound is known as the Hamming Bound. It provides a necessary, but not a sufficient, condition for the construction of an (n,k) tc-error correcting code. • Example: Is it theoretically possible to design a (10,7) single-error correcting code? • A code for which the equality is satisfied is called a perfect code. • There are only three types of binary perfect codes (the repetition codes, the Hamming codes, and the Golay code). • Perfect does not mean “best”!
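The bound and the slide's (10,7) example can be checked numerically; `hamming_bound_holds` is a hypothetical helper name.

```python
from math import comb

def hamming_bound_holds(n, k, tc):
    """Necessary condition: 2^(n-k) >= sum_{j=0}^{tc} C(n, j)."""
    volume = sum(comb(n, j) for j in range(tc + 1))
    return 2 ** (n - k) >= volume

assert not hamming_bound_holds(10, 7, 1)  # 8 < 11: no (10,7) single-error code
assert hamming_bound_holds(7, 4, 1)       # 8 = 1 + 7: (7,4) Hamming code is perfect
assert hamming_bound_holds(3, 1, 1)       # 4 = 1 + 3: (3,1) repetition code is perfect
```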

  15. Gilbert Bound • The Hamming bound sets a lower limit on the number of redundant bits (n-k) required to correct tc errors in an (n,k) linear block code. • Another lower limit is the Singleton bound. • The Gilbert bound places an upper bound on the number of redundant bits required to correct tc errors. • It only says that such a code exists; it does not tell you how to find it.
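One common statement of the Gilbert bound (the exact form varies by textbook, so treat this as an assumed formulation) says a code of size 2^k and minimum distance d = 2tc + 1 exists whenever 2^(n-k) >= sum_{j=0}^{d-1} C(n, j):

```python
from math import comb

def gilbert_sufficient(n, k, tc):
    """Sufficient (not necessary) condition for an (n,k) tc-error
    correcting code to exist: 2^(n-k) >= sum_{j=0}^{2*tc} C(n, j)."""
    d = 2 * tc + 1
    return 2 ** (n - k) >= sum(comb(n, j) for j in range(d))

# Gilbert guarantees a (7,2) single-error-correcting code exists...
assert gilbert_sufficient(7, 2, 1)        # 32 >= 1 + 7 + 21 = 29
# ...but gives no guarantee for (7,4), even though the Hamming code exists,
# which is consistent with the bound being sufficient but not necessary:
assert not gilbert_sufficient(7, 4, 1)    # 8 < 29
```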

  16. The Encoding Problem • How to select the 2^k codewords of the code C from the 2^n sequences such that some specified (or possibly the maximum possible) minimum distance of the code is guaranteed? • Example: How were the 16 codewords of the (7,4) code constructed? Exhaustive search is impossible, except for very short codes (small k and n). • Are we going to store the whole table of 2^k entries (about 2^k (n+k) bits)?! • A constructive procedure for encoding is necessary.
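A constructive procedure for linear codes is multiplication by a generator matrix over GF(2). The rows below are inferred from the codeword list on the standard-array slide that follows; they are an assumption about how that (7,4) code was generated, not something stated on the slides.

```python
# Encoding via a generator matrix instead of a 2^k-entry lookup table.
# Assumed generator rows, inferred from the slide-17 codeword list:
G = ["1101000", "0110100", "1110010", "1010001"]

def encode(message):
    """GF(2) product m*G: XOR together the rows of G selected by the
    message bits (message is a k-bit string, here k = 4)."""
    word = [0] * 7
    for bit, row in zip(message, G):
        if bit == "1":
            word = [w ^ int(r) for w, r in zip(word, row)]
    return "".join(map(str, word))

assert encode("0000") == "0000000"
assert encode("1000") == "1101000"
assert encode("1111") == "1111111"
```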

  17. The Decoding Problem Standard array (coset leaders in the first column, codewords in the first row):

0000000 1101000 0110100 1011100 1110010 0011010 1000110 0101110 1010001 0111001 1100101 0001101 0100011 1001011 0010111 1111111
0000001 1101001 0110101 1011101 1110011 0011011 1000111 0101111 1010000 0111000 1100100 0001100 0100010 1001010 0010110 1111110
0000010 1101010 0110110 1011110 1110000 0011000 1000100 0101100 1010011 0111011 1100111 0001111 0100001 1001001 0010101 1111101
0000100 1101100 0110000 1011000 1110110 0011110 1000010 0101010 1010101 0111101 1100001 0001001 0100111 1001111 0010011 1111011
0001000 1100000 0111100 1010100 1111010 0010010 1001110 0100110 1011001 0110001 1101101 0000101 0101011 1000011 0011111 1110111
0010000 1111000 0100100 1001100 1100010 0001010 1010110 0111110 1000001 0101001 1110101 0011101 0110011 1011011 0000111 1101111
0100000 1001000 0010100 1111100 1010010 0111010 1100110 0001110 1110001 0011001 1000101 0101101 0000011 1101011 0110111 1011111
1000000 0101000 1110100 0011100 0110010 1011010 0000110 1101110 0010001 1111001 0100101 1001101 1100011 0001011 1010111 0111111

Exhaustive decoding is impossible!! Well-constructed decoding methods are required. Two possible types of decoders: • Complete: always chooses the codeword at minimum distance. • Bounded-distance: chooses the codeword at minimum distance only up to a certain tc; otherwise, error detection is declared.
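A complete decoder in miniature: map the received word to the nearest codeword, which is equivalent to finding the standard-array column containing the received word. The codewords are the first row of the array on this slide; this brute-force search is only a sketch of the idea, not a practical decoding method.

```python
# Codewords: first row of the standard array above.
CODE = ["0000000", "1101000", "0110100", "1011100", "1110010", "0011010",
        "1000110", "0101110", "1010001", "0111001", "1100101", "0001101",
        "0100011", "1001011", "0010111", "1111111"]

def decode(received):
    """Return the codeword at minimum Hamming distance from `received`."""
    return min(CODE, key=lambda c: sum(a != b for a, b in zip(c, received)))

assert decode("1101000") == "1101000"   # no errors
assert decode("1111000") == "1101000"   # single error (3rd position) corrected
assert decode("0000001") == "0000000"   # single error (7th position) corrected
```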
