Wednesday, May 8, 2013

Hamming Code

For this week, we were assigned to implement a block code of our own choice and simulate the error-correcting properties of the code in a transmission through a noisy channel. I chose the Hamming (7, 4) code.

The Hamming code is a linear error-correcting code, in this case (7, 4) because it encodes 4 bits of data into 7 bits by adding 3 parity bits. This code can correct any single-bit error, or it can detect (but not correct) all single-bit and two-bit errors. This is because the minimum Hamming distance between any two valid codewords is 3, so a received word can be correctly decoded whenever it is at distance at most one from the codeword transmitted by the sender.

Explanation

For encoding, I used the following matrix:
$$G = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 & 1
\end{pmatrix}$$

Now, as mentioned, the Hamming (7, 4) code encodes 4 bits into 7 bits. This results from a matrix multiplication between the 4 bits of data (viewed as a length-4 row vector $d$) and $G$, with all arithmetic done modulo 2. So, to obtain the encoded 7 bits of data, the multiplication is carried out and the result is:

$$d \cdot G = \begin{pmatrix} 1 & 0 & 0 & 1 \end{pmatrix} \cdot G = \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 0 & 0 \end{pmatrix}$$

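A minimal sketch of this encoding step in Python, assuming NumPy (the `encode` helper name is illustrative, not necessarily how the actual script is written):

```python
import numpy as np

# Generator matrix G = [I_4 | A]: the first 4 columns copy the data bits,
# the last 3 columns compute the parity bits.
G = np.array([
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
])

def encode(data):
    """Encode 4 data bits as a 7-bit codeword: c = d * G (mod 2)."""
    return np.array(data).dot(G) % 2

print(encode([1, 0, 0, 1]))  # [1 0 0 1 1 0 0]
```
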
In this example, "1001" encodes to "1001100". Now imagine this message is sent through a noisy channel and reaches its destination without any noise. To check for errors, the received word $y$ is first transposed and then multiplied, but now by the decode (parity-check) matrix $H$:

$$H = \begin{pmatrix}
0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 0 & 0 & 1 & 1 \\
1 & 0 & 1 & 0 & 1 & 0 & 1
\end{pmatrix}, \qquad
H \cdot y^T = H \cdot \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$

The zero vector tells us the received word was valid, so no error fixing is needed.

But let's imagine that the received word suffered a single-bit error and ended up as "1101100". Then the previous process would give:


$$H \cdot y^T = H \cdot \begin{pmatrix} 1 \\ 1 \\ 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$$

The result vector is now 010, which is binary for 2: the position (counting from 1 at the left) of the erroneous bit. This works because column $i$ of $H$ is the binary representation of $i$, so the syndrome of a single-bit error spells out its position directly. Flipping that bit, the message is "1001100" again, which verifies correctly.
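And a matching sketch of the check-and-correct step, under the same assumptions as the encoder above (`correct` is again an illustrative name):

```python
import numpy as np

# Parity-check matrix H: column i is the binary representation of i
# (most significant bit on top), so the syndrome of a single-bit error
# names the error position directly.
H = np.array([
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
])

def correct(word):
    """Flip back a single-bit error, if the syndrome reports one."""
    y = np.array(word)
    s = H.dot(y) % 2                  # syndrome = H * y^T
    pos = 4 * s[0] + 2 * s[1] + s[2]  # read the syndrome as a binary number
    if pos:
        y[pos - 1] ^= 1               # positions are counted from 1
    return y

print(correct([1, 1, 0, 1, 1, 0, 0]))  # [1 0 0 1 1 0 0]
```
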

Experiment

To test this Hamming code algorithm a little, I created a small Python script to run an experiment on random 4-bit sequences, adding a certain amount of noise to them depending on some parameters. The parameters are:
  • Noise of the channel: a percentage denoting the probability that a given bit flips to its opposite.
  • Number of noisy bits: the maximum number of bits that can be modified by the noise.
  • Number of samples: how many times the experiment is run.
By default, noise is applied to at most 1 bit, and from the explanation of the algorithm above we can already tell that every error presented this way is going to be fixed, because the algorithm is designed to do exactly that. A sketch of the experiment loop follows.
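The loop below is only a rough sketch of this experiment, reusing `encode` and `correct` from the sketches above; the parameter names and the way the noisy positions are picked are assumptions about the actual script:

```python
import random

def run_experiment(noise=0.5, max_noisy_bits=1, samples=3000):
    """Count how many received words arrive clean, get fixed, or stay broken."""
    clean = fixed = not_fixed = 0
    for _ in range(samples):
        data = [random.randint(0, 1) for _ in range(4)]
        sent = list(encode(data))
        received = list(sent)
        # Pick up to `max_noisy_bits` distinct positions; each one
        # flips with probability `noise`.
        for pos in random.sample(range(7), max_noisy_bits):
            if random.random() < noise:
                received[pos] ^= 1
        if received == sent:
            clean += 1
        elif list(correct(received)) == sent:
            fixed += 1
        else:
            not_fixed += 1
    return clean, fixed, not_fixed

print(run_experiment(noise=0.5, max_noisy_bits=1, samples=3000))
```
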

First

[Figure: results of the first run, with at most 1 noisy bit.]

As the number of noisy bits was 1, the graph is expected to be a simple straight line, since every word with a single-bit error is fixed right away. We can also see that the noise is generated according to the stated probability, as roughly half of the samples contained an error (and were fixed).

Second

[Figure: results of the second run, with up to 2 noisy bits and a higher noise probability.]

In this case, up to 2 bits can be flipped, and the flip probability is higher. Again, all the single-bit errors are fixed; the two-bit errors are only detected and cannot be fixed, so they count towards the not-fixed messages.
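As an illustration with the sketches above, a word with two flipped bits comes out wrong even after the correction step:

```python
sent = list(encode([1, 0, 0, 1]))       # [1, 0, 0, 1, 1, 0, 0]
received = list(sent)
received[0] ^= 1                        # flip bit 1
received[1] ^= 1                        # flip bit 2
# The syndrome now points at the wrong position, so the "fix"
# produces a different valid codeword, not the one that was sent.
print(list(correct(received)) == sent)  # False
```
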

Code:

[Embedded Python script.]