Hamming Code - LEKULE


30 Aug 2015

Hamming Code

We have seen, in the case of the error detection and correction codes described above, how increasing the number of redundant bits added to the message bits enhances the capability of the code to detect and correct errors. If we have a sufficient number of redundant bits, and if these bits can be arranged so that different error bits produce different error results, then it should be possible not only to detect an error bit but also to identify its location. In fact, adding redundant bits alters the 'distance' parameter of the code, which we know as the Hamming distance. The Hamming distance is simply the number of bit positions in which two code words differ. For example, the addition of single-bit parity results in a code with a Hamming distance of at least 2, and the smallest Hamming distance in the case of a threefold repetition code is 3. Hamming noticed that an increase in distance enhances the code's ability to detect and correct errors, which is highly desirable. Hamming's code was therefore an attempt to increase the Hamming distance while at the same time keeping the information throughput rate as high as possible. The algorithm for writing the generalized Hamming code is as follows:
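As an illustration of the distance measure just described (the code and function name below are my own, not from the original article), the Hamming distance between two equal-length code words can be computed by counting the positions in which they differ:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length code words differ."""
    if len(a) != len(b):
        raise ValueError("code words must be the same length")
    return sum(x != y for x, y in zip(a, b))

# Two code words of a threefold repetition code differ in at least 3 positions:
print(hamming_distance("000000", "000111"))  # 3
```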

1. The generalized form of the code is P1P2D1P3D2D3D4P4D5D6D7D8D9D10D11P5, where P and D represent parity and data bits, respectively.

2. We can see from the generalized form of the code that all bit positions that are powers of 2 (positions 1, 2, 4, 8, 16) are used as parity bits.

3. All other bit positions (positions 3, 5, 6, 7, 9, 10, 11) are used to encode data.
4. Each parity bit is allotted a group of bits from the data bits in the code word, and the value of the parity bit (0 or 1) is chosen so as to give that group a specified parity (even parity, in the usual convention).
5. Groups are formed by first checking the N − 1 bits immediately following the parity bit and then alternately skipping and checking N bits, where N is the position of the parity bit: 1 for P1, 2 for P2, 4 for P3, 8 for P4 and so on. For the generalized form of the code given above, the groups of bits formed with the different parity bits are P1D1D2D4D5D7D9D11, P2D1D3D4D6D7D10D11, P3D2D3D4D8D9D10D11 and P4D5D6D7D8D9D10D11. To illustrate the formation of groups further, let us examine the group corresponding to parity bit P3, which occupies position 4. To form its group, we check the first three bits following it (N − 1 = 3) and then alternately skip and check four bits at a time (N = 4).

We have now discussed the Hamming codes at length and can conclude that they are capable of correcting single-bit errors in messages of any length. Although the code can detect two-bit errors, it cannot locate them. The number of parity bits that must be transmitted along with the message depends on the message length, as shown above: the number of parity bits n required to encode m message bits is the smallest integer that satisfies the condition (2^n − n) > m.
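The steps above can be sketched in code. The following is my own illustration (function names and layout are not from the article) of the 15-bit code P1P2D1P3D2D3D4P4D5D6D7D8D9D10D11, using the equivalent rule that the parity bit at position p covers every position whose binary representation has bit p set:

```python
def parity_bits_needed(m):
    """Smallest n satisfying (2^n - n) > m, per the condition above."""
    n = 1
    while 2 ** n - n <= m:
        n += 1
    return n

def hamming_encode(data_bits):
    """Encode 11 data bits into the 15-bit code word
    P1 P2 D1 P3 D2 D3 D4 P4 D5 D6 D7 D8 D9 D10 D11 (even parity)."""
    assert len(data_bits) == 11
    code = [0] * 16                    # 1-indexed; positions 1..15 used
    bits = iter(data_bits)
    for pos in range(1, 16):
        if pos & (pos - 1):            # not a power of 2: a data position
            code[pos] = next(bits)
    for p in (1, 2, 4, 8):             # parity positions
        code[p] = sum(code[i] for i in range(1, 16) if i & p) % 2
    return code[1:]

def hamming_syndrome(code_word):
    """Position of a single-bit error in a 15-bit word (0 if none detected)."""
    return sum(p for p in (1, 2, 4, 8)
               if sum(code_word[i - 1] for i in range(1, 16) if i & p) % 2)
```

For 11 message bits, parity_bits_needed(11) gives 4, matching the four parity bits P1 to P4 above, and flipping any single bit of an encoded word makes hamming_syndrome return exactly that bit's position, which is what allows the error to be corrected.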
