Singleton bound

In coding theory, the Singleton bound, named after Richard Collom Singleton, is a relatively crude upper bound on the size of an arbitrary block code $C$ with block length $n$, size $M$ and minimum distance $d$. It is also known as the Joshi bound,[1] proved by Joshi (1958) and even earlier by Komamiya (1953).

Statement of the bound

The minimum distance of a set $C$ of codewords of length $n$ is defined as

$$d = \min_{\{x, y \in C \,:\, x \neq y\}} d(x, y),$$

where $d(x, y)$ is the Hamming distance between $x$ and $y$. The expression $A_q(n, d)$ represents the maximum number of possible codewords in a $q$-ary block code of length $n$ and minimum distance $d$.

Then the Singleton bound states that

$$A_q(n, d) \le q^{n-d+1}.$$
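
As a concrete numerical illustration (an added example, not part of the original statement), take binary codes ($q = 2$) of length $n = 7$ and minimum distance $d = 3$:

$$A_2(7,3) \le 2^{7-3+1} = 32.$$

The binary $[7,4,3]$ Hamming code has $2^4 = 16$ codewords, so it satisfies this bound without meeting it, consistent with the bound being relatively crude.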

Proof

First observe that the number of $q$-ary words of length $n$ is $q^n$, since each letter in such a word may take one of $q$ different values, independently of the remaining letters.

Now let $C$ be an arbitrary $q$-ary block code of minimum distance $d$. Clearly, all codewords are distinct. If we puncture the code by deleting the first $d - 1$ letters of each codeword, then all resulting codewords must still be pairwise different, since all of the original codewords in $C$ have Hamming distance at least $d$ from each other. Thus the size of the altered code is the same as the original code.

The newly obtained codewords each have length

$$n - (d - 1) = n - d + 1,$$

and thus, there can be at most $q^{n-d+1}$ of them. Since $C$ was arbitrary, this bound must hold for the largest possible code with these parameters, thus:[2]

$$A_q(n, d) \le q^{n-d+1}.$$
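
The following short Python sketch (an illustration added here, not from the original article) carries out the puncturing argument on a small hand-picked ternary code: it computes the minimum distance, deletes the first $d - 1$ letters of every codeword, and checks that the punctured codewords stay distinct and that the code size respects $q^{n-d+1}$.

    from itertools import combinations

    def hamming(x, y):
        """Hamming distance between two equal-length tuples."""
        return sum(a != b for a, b in zip(x, y))

    def min_distance(code):
        """Minimum Hamming distance over all pairs of distinct codewords."""
        return min(hamming(x, y) for x, y in combinations(code, 2))

    # A hand-picked ternary (q = 3) code of length n = 4, used only to
    # illustrate the argument; it is the repetition code, which is MDS.
    q, n = 3, 4
    code = [(0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2)]

    d = min_distance(code)                   # here d = 4
    punctured = {cw[d - 1:] for cw in code}  # delete the first d - 1 letters

    assert len(punctured) == len(code)       # puncturing keeps codewords distinct
    assert len(code) <= q ** (n - d + 1)     # Singleton: |C| <= q^(n-d+1)
    print(len(code), q ** (n - d + 1))       # 3 3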

Linear codes

If $C$ is a linear code with block length $n$, dimension $k$ and minimum distance $d$ over the finite field with $q$ elements, then the maximum number of codewords is $q^k$ and the Singleton bound implies:

$$q^k \le q^{n-d+1},$$

so that

$$k \le n - d + 1,$$

which is usually written as[3]

$$d \le n - k + 1.$$

In the linear code case a different proof of the Singleton bound can be obtained by observing that the rank of the parity check matrix is $n - k$.[4] Another simple proof follows from observing that the rows of any generator matrix in standard form have weight at most $n - k + 1$.
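
As an illustration of the linear-code form of the bound (the code and matrix below are a standard textbook example, not taken from this article), the binary $[7,4,3]$ Hamming code can be checked directly: its minimum distance $3$ satisfies $d \le n - k + 1 = 4$, and every row of its standard-form generator matrix has weight at most $n - k + 1$.

    from itertools import product

    # Generator matrix of the binary [7,4,3] Hamming code in standard
    # form [I_4 | A]; a standard textbook example, not from this article.
    G = [[1, 0, 0, 0, 1, 1, 0],
         [0, 1, 0, 0, 1, 0, 1],
         [0, 0, 1, 0, 0, 1, 1],
         [0, 0, 0, 1, 1, 1, 1]]
    k, n = len(G), len(G[0])

    def encode(msg):
        """Multiply the message row vector by G over GF(2)."""
        return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

    # For a linear code the minimum distance equals the minimum nonzero weight.
    d = min(sum(encode(m)) for m in product([0, 1], repeat=k) if any(m))

    print(n, k, d)                                  # 7 4 3
    assert d <= n - k + 1                           # Singleton: d <= n - k + 1
    assert all(sum(row) <= n - k + 1 for row in G)  # rows of [I_k | A] have weight <= n-k+1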

History

The usual citation given for this result is Singleton (1964), but it was proven earlier by Joshi (1958). According to Welsh (1988, p. 72), the result can be found in a 1953 paper of Komamiya (1953).

MDS codes

Linear block codes that achieve equality in the Singleton bound are called MDS (maximum distance separable) codes. Examples of such codes include codes that have only two codewords (the all-zero word and the all-one word, having thus minimum distance $n$), codes that use the whole of $\mathbb{F}_q^n$ (minimum distance 1), codes with a single parity symbol (minimum distance 2) and their dual codes. These are often called trivial MDS codes.

In the case of binary alphabets, only trivial MDS codes exist.[5][6]

Examples of non-trivial MDS codes include Reed-Solomon codes and their extended versions.[7][8]
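
For a concrete non-trivial instance, the sketch below (an added illustration, with parameters chosen purely for this example) builds a small Reed-Solomon code over $\mathbb{F}_5$ by evaluating all polynomials of degree less than $k = 3$ at the five field elements, and checks that its minimum distance equals $n - k + 1$, i.e. that it is MDS.

    from itertools import product

    # A small Reed-Solomon code over GF(5): evaluate every polynomial of
    # degree < k at the n = q = 5 distinct field elements 0, 1, 2, 3, 4.
    # Parameters chosen here purely for illustration.
    q, k = 5, 3
    points = list(range(q))
    n = len(points)

    def evaluate(coeffs, x):
        """Evaluate the polynomial with the given coefficients at x, mod q."""
        return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

    codewords = [tuple(evaluate(c, x) for x in points)
                 for c in product(range(q), repeat=k)]

    # Minimum distance = minimum nonzero weight (the code is linear).
    d = min(sum(1 for s in cw if s != 0) for cw in codewords if any(cw))

    print(n, k, d)            # 5 3 3
    assert d == n - k + 1     # equality in the Singleton bound: the code is MDS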

MDS codes are an important class of block codes since, for a fixed $n$ and $k$, they have the greatest error correcting and detecting capabilities. There are several ways to characterize MDS codes:[9]

Theorem: Let $C$ be a linear $[n, k, d]$ code over $\mathbb{F}_q$. The following are equivalent:
  • $C$ is an MDS code.
  • Any $k$ columns of a generator matrix for $C$ are linearly independent.
  • Any $n - k$ columns of a parity check matrix for $C$ are linearly independent.
  • The dual code $C^\perp$ is an MDS code.
  • If $G = [I_k \mid A]$ is a generator matrix for $C$ in standard form, then every square submatrix of $A$ is nonsingular.
  • Given any $d$ coordinate positions, there is a (minimum weight) codeword whose support is precisely these positions.
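
The second characterization can be checked mechanically for a small code. The sketch below (an added illustration; the matrix is a Vandermonde-type generator matrix of the $[5,3]$ Reed-Solomon code over $\mathbb{F}_5$) verifies that every choice of $k = 3$ columns is linearly independent by testing that the corresponding $3 \times 3$ minor is nonzero mod $q$.

    from itertools import combinations

    q = 5
    # Vandermonde-type generator matrix of the [5,3] Reed-Solomon code over
    # GF(5): rows are the evaluations of 1, x, x^2 at the points 0, 1, 2, 3, 4.
    G = [[1, 1, 1, 1, 1],
         [0, 1, 2, 3, 4],
         [0, 1, 4, 4, 1]]
    k, n = len(G), len(G[0])

    def det3(m):
        """Determinant of a 3 x 3 matrix, reduced mod q."""
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0])) % q

    # MDS characterization: every set of k columns of G is linearly
    # independent, i.e. every k x k minor of G is nonzero mod q.
    for cols in combinations(range(n), k):
        minor = [[G[i][j] for j in cols] for i in range(k)]
        assert det3(minor) != 0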

The last of these characterizations permits, by using the MacWilliams identities, an explicit formula for the complete weight distribution of an MDS code.[10]

Theorem: Let $C$ be a linear $[n, k, d]$ MDS code over the field with $q$ elements. If $A_w$ denotes the number of codewords in $C$ of weight $w$, then

$$A_w = \binom{n}{w} \sum_{j=0}^{w-d} (-1)^j \binom{w}{j} \left( q^{w-d+1-j} - 1 \right)$$

for $d \le w \le n$ (and $A_0 = 1$).
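
As a sanity check of the formula (an added example, with parameters chosen for illustration), the following sketch evaluates it for a $[5,3,3]$ MDS code over $\mathbb{F}_5$; the resulting weight distribution $[1, 0, 0, 40, 40, 44]$ sums to $5^3$, the total number of codewords.

    from math import comb

    # Weight distribution of an [n, k, d] MDS code over GF(q) from the
    # formula above, evaluated for the [5,3,3] Reed-Solomon code over GF(5).
    q, n, k = 5, 5, 3
    d = n - k + 1

    def A(w):
        """Number of codewords of weight w in an [n, k, d] MDS code over GF(q)."""
        if w == 0:
            return 1
        if w < d:
            return 0
        return comb(n, w) * sum((-1) ** j * comb(w, j) * (q ** (w - d + 1 - j) - 1)
                                for j in range(w - d + 1))

    dist = [A(w) for w in range(n + 1)]
    print(dist)                   # [1, 0, 0, 40, 40, 44]
    assert sum(dist) == q ** k    # all q^k codewords are accounted for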

Arcs in projective geometry

The linear independence of the columns of a generator matrix of an MDS code permits a construction of MDS codes from objects in finite projective geometry. Let $PG(N, q)$ be the finite projective space of (geometric) dimension $N$ over the finite field $\mathbb{F}_q$. Let $K = \{P_1, P_2, \ldots, P_m\}$, $m \ge N + 1$, be a set of points in this projective space represented with homogeneous coordinates. Form the $(N + 1) \times m$ matrix $G$ whose columns are the homogeneous coordinates of these points. Then,[11]

Theorem: $K$ is a (spatial) $m$-arc if and only if $G$ is the generator matrix of an $[m, N + 1, m - N]$ MDS code over $\mathbb{F}_q$.
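
To make the correspondence concrete (an added illustration, not from the article), the sketch below takes the conic $\{(1, t, t^2) : t \in \mathbb{F}_5\} \cup \{(0, 0, 1)\}$ in $PG(2, 5)$, which is a $6$-arc, checks that no three of its points are collinear, and verifies that the matrix of their homogeneous coordinates generates a $[6, 3, 4]$ MDS code over $\mathbb{F}_5$.

    from itertools import combinations, product

    q = 5
    # The conic {(1, t, t^2) : t in GF(5)} together with (0, 0, 1) is a 6-arc
    # in PG(2, 5); its homogeneous coordinates, taken as columns, give a
    # 3 x 6 matrix G.
    arc = [(1, t, (t * t) % q) for t in range(q)] + [(0, 0, 1)]
    G = [[p[i] for p in arc] for i in range(3)]
    k, n = len(G), len(G[0])

    def det3(m):
        """Determinant of a 3 x 3 matrix, reduced mod q."""
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0])) % q

    # Arc condition: no three of the points are collinear, i.e. every
    # 3 x 3 minor of G is nonzero mod q.
    assert all(det3([[G[i][j] for j in cols] for i in range(3)])
               for cols in combinations(range(n), 3))

    # The code generated by G is a [6, 3, 4] MDS code over GF(5).
    codewords = [tuple(sum(m * G[i][j] for i, m in enumerate(msg)) % q
                       for j in range(n))
                 for msg in product(range(q), repeat=k)]
    d = min(sum(1 for s in cw if s) for cw in codewords if any(cw))
    print(n, k, d)            # 6 3 4
    assert d == n - k + 1     # equality in the Singleton bound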

Notes

  1. ^ Keedwell, A. Donald; Dénes, József (24 January 1991). Latin Squares: New Developments in the Theory and Applications. Amsterdam: Elsevier. p. 270. ISBN 0-444-88899-3.
  2. ^ Ling & Xing 2004, p. 93
  3. ^ Roman 1992, p. 175
  4. ^ Pless 1998, p. 26
  5. ^ Vermani 1996, Proposition 9.2
  6. ^ Ling & Xing 2004, p. 94 Remark 5.4.7
  7. ^ MacWilliams & Sloane 1977, Ch. 11
  8. ^ Ling & Xing 2004, p. 94
  9. ^ Roman 1992, p. 237, Theorem 5.3.7
  10. ^ Roman 1992, p. 240
  11. ^ Bruen, A.A.; Thas, J.A.; Blokhuis, A. (1988), "On M.D.S. codes, arcs in PG(n,q), with q even, and a solution of three fundamental problems of B. Segre", Invent. Math., 92: 441–459, doi:10.1007/bf01393742

References

  • Joshi, D.D (1958), "A Note on Upper Bounds for Minimum Distance Codes", Information and Control, 1 (3): 289–295, doi:10.1016/S0019-9958(58)80006-6
  • Komamiya, Y. (1953), "Application of logical mathematics to information theory", Proc. 3rd Japan. Nat. Cong. Appl. Math.: 437
  • Ling, San; Xing, Chaoping (2004), Coding Theory / A First Course, Cambridge University Press, ISBN 0-521-52923-9
  • MacWilliams, F.J.; Sloane, N.J.A. (1977), The Theory of Error-Correcting Codes, North-Holland, pp. 33, 37, ISBN 0-444-85193-3
  • Pless, Vera (1998), Introduction to the Theory of Error-Correcting Codes (3rd ed.), Wiley Interscience, ISBN 0-471-19047-0
  • Roman, Steven (1992), Coding and Information Theory, GTM, vol. 134, Springer-Verlag, ISBN 0-387-97812-7
  • Singleton, R.C. (1964), "Maximum distance q-nary codes", IEEE Trans. Inf. Theory, 10 (2): 116–118, doi:10.1109/TIT.1964.1053661
  • Vermani, L. R. (1996), Elements of algebraic coding theory, Chapman & Hall
  • Welsh, Dominic (1988), Codes and Cryptography, Oxford University Press, ISBN 0-19-853287-3
