8

I learned about Hamming codes and how to use them to correct 1-bit errors and detect all 2-bit errors, but how do I extend this to correcting 2 bits, and maybe more?

What is the minimum number of bits needed to correct all 2-bit errors?

user623879

1 Answer

7

I think I figured it out.

N = number of data bits, k = number of error-correcting bits (e.g. parity bits for Hamming)

In any ECC scheme, you have 2^(N+k) possible bit strings.

For single bit error:

You must find k such that the total number of possible bit strings is at least the number of strings within distance 1 of some codeword.

The total number of strings within distance 1 of the 2^N codewords is (2^N)*(N+k+1):

for each codeword, 1 string with no error and N+k strings with a 1-bit error.

2^(N+k)>=(2^N)*(N+k+1)

You simply plug in values of k until you find the smallest one that satisfies the above (or solve it however you wish).
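As a minimal sketch of that "plug in values of k" search (the function name is my own, not from the post), note that dividing both sides by 2^N reduces the condition to 2^k >= N + k + 1:

```python
def min_parity_bits_1bit(N):
    """Smallest k with 2^(N+k) >= 2^N * (N+k+1),
    i.e. 2^k >= N + k + 1 (single-bit correction)."""
    k = 0
    while 2**k < N + k + 1:
        k += 1
    return k

print(min_parity_bits_1bit(4))   # -> 3, the Hamming(7,4) code
print(min_parity_bits_1bit(11))  # -> 4, the Hamming(15,11) code
```

This reproduces the familiar Hamming code parameters, where the bound is met with equality.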

Similarly, for 2-bit errors you count, per codeword:

1 string with no error, N+k strings with a 1-bit error, and (N+k choose 2) strings with a 2-bit error.

2^(N+k)>=(2^N)*(N+k+1 + (N+k choose 2))
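The same search generalizes to any number of correctable errors t by summing the binomial terms (this is the Hamming/sphere-packing bound; the function name is my own invention):

```python
from math import comb

def min_check_bits(N, t):
    """Smallest k with 2^(N+k) >= 2^N * sum_{i=0}^{t} C(N+k, i),
    i.e. 2^k >= sum of the binomial terms (t-bit correction)."""
    k = 0
    while True:
        n = N + k
        if 2**k >= sum(comb(n, i) for i in range(t + 1)):
            return k
        k += 1

print(min_check_bits(4, 1))  # -> 3, matches the single-bit case
print(min_check_bits(7, 2))  # -> 7
```

Keep in mind this is only a lower bound: a code meeting it need not exist. For N=7, t=2 it gives k=7, while the BCH code mentioned in the comments below actually uses 8 parity bits.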

user623879
  • Is there a 2 bit correction code more efficient than [BCH code](https://en.wikipedia.org/wiki/BCH_code#Primitive_narrow-sense_BCH_codes), where the number of required redundancy bits is the number of bits needed for the least common multiple polynomial with distance 5? In the wiki example, the encoded data would consists of 7 data bits and 8 parity bits. – rcgldr Jul 09 '18 at 21:56
  • The expressions can be simplified by dividing both sides by 2^N. – SwedishOwlSerpent Jun 11 '21 at 13:06