Decoding of the Triple-Error-Correcting Binary Quadratic Residue Codes

Hung-Peng Lee1, Hsin-Chiu Chang1

1Department of Computer Science and Information Engineering, Fortune Institute of Technology, Kaohsiung, ROC

Abstract

In this paper, a more efficient syndrome-weight decoding algorithm (SWDA), called the enhanced syndrome-weight decoding algorithm (ESWDA), is presented to decode up to three possible errors for the binary systematic (23, 12, 7) and (31, 16, 7) quadratic residue (QR) codes. In decoding of the QR codes, the evaluation of the error-locator polynomial in the finite field is complicated and time-consuming. To solve such a problem, the proposed ESWDA avoids evaluating the complicated error-locator polynomial, and has no need of a look-up table to store the syndromes and their corresponding error patterns in the memory. In comparison with the SWDA developed by Lin-Chang-Lee-Truong (2010), the simulation results show that the ESWDA can serve as an efficient and high-speed decoder.


1. Introduction

The triple-error-correcting binary QR codes include the (23, 12, 7) and (31, 16, 7) QR codes. The binary (23, 12, 7) QR code is also called the binary Golay code, which is a perfect code. The Golay code was first introduced by Golay in 1949 [4]. When an overall parity check is appended, the code rate is exactly 1/2, and most of the known QR codes are among the best-known codes of comparable length and rate. Among them, the extended (24, 12, 8) Golay code was used for error control on the Voyager 1 and 2 spacecraft missions, providing clear remote pictures of Jupiter and Saturn [13].

Several algebraic decoding algorithms (ADAs) have been developed to decode the binary Golay code [3, 12]. The key idea of decoding the Golay code with an ADA is to compute the unknown syndrome needed to determine the coefficients of the error-locator polynomial. One representative method is the inverse-free Berlekamp-Massey algorithm [10]. In [11], Reed et al. developed an ADA for the extended (32, 16, 8) QR code with a reducible generator polynomial. However, such an algorithm is quite complicated. Lin et al. [6] thus proposed a modified ADA to reduce the decoding complexity. For ADAs, once the coefficients of the error-locator polynomial are obtained, the error positions can be determined with the Chien search algorithm [1], which is an exhaustive search over all the elements of the finite field. In the decoding procedure of ADAs, this step is the most time-consuming one and needs a large number of multiplication and division operations over the finite field.

Recently, table-lookup decoding algorithms (TLDAs) [2, 7, 9] have played an important role in forward error correction. Such decoders are efficient and have minimal decoding delay; however, TLDAs require memory space in the decoder chip, and the decoding cost grows rapidly as the code length increases. The SWDA proposed by Lin et al. [7] uses a refined lookup table (RLT) to decode the triple-error-correcting binary Golay code and the (31, 16, 7) QR code. For decoding the Golay code, the RLT consists of 42 syndromes and their corresponding coset leaders and needs only 168 bytes of memory. For decoding the (31, 16, 7) QR code, the RLT consists of 72 syndromes and their corresponding coset leaders and requires only 288 bytes of memory.

The ESWDA proposed in this paper decodes faster than the SWDA and needs no memory to store a lookup table. The key idea of the proposed ESWDA is to examine the weight of the syndrome difference between the syndrome of the received word and the row vectors of the transpose of the parity-check matrix. In this way, the error cases can be determined swiftly, and the combination of the syndrome weight and the syndrome difference yields a high-speed decoding algorithm. Moreover, no complicated computation in the finite field is required in the proposed ESWDA. Two examples demonstrate the decoding procedure of the proposed ESWDA. The proposed ESWDA is also applicable to other cyclic codes such as the binary (15, 5, 7) BCH code; however, its decoding steps then require some slight adjustments. The decoding steps for the binary (15, 5, 7) BCH code are demonstrated in a simple example. Computer simulation results show that the decoding time of the proposed ESWDA is superior to that of the SWDA.

The remainder of this paper is organized as follows: The background of the binary QR codes is briefly reviewed in Section 2. The proposed ESWDA is described in Section 3. In Section 4, we use three examples to demonstrate the proposed ESWDA. Computer simulation results of the proposed ESWDA and the SWDA are given in Section 5. Finally, this paper concludes with a brief summary in Section 6.

2. Background of the Binary QR Codes

The binary QR codes are a nice family of linear cyclic codes. Let (n, (n+1)/2, d) denote a binary QR code with generator polynomial g(x) over GF(2). The length of such a code is a prime number of the form n = 8l ± 1, where l is some integer. Also, let k = (n+1)/2 denote the message length or information length, and let d denote the minimum Hamming distance of the code. The set Qn of quadratic residues modulo n is the set of nonzero squares modulo n; that is, Qn = {j | j ≡ x^2 mod n, 1 ≤ x ≤ n−1}. If n = 23, then its quadratic residue set is Q23 = {1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18}. If n = 31, then its quadratic residue set is Q31 = {1, 2, 4, 5, 7, 8, 9, 10, 14, 16, 18, 19, 20, 25, 28}.
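As a quick illustration of this definition, the following minimal C++ snippet enumerates Qn for n = 23 and n = 31; it is not part of the decoder itself.

#include <cstdio>
#include <set>

// Enumerate the quadratic residue set Qn = { x*x mod n : 1 <= x <= n-1 }.
std::set<int> quadratic_residues(int n) {
    std::set<int> q;
    for (int x = 1; x <= n - 1; ++x)
        q.insert((x * x) % n);
    return q;
}

int main() {
    for (int n : {23, 31}) {
        std::printf("Q%d = {", n);
        for (int j : quadratic_residues(n)) std::printf(" %d", j);
        std::printf(" }\n");
    }
    return 0;
}

Running it reproduces the two sets listed above; note that |Qn| = (n − 1)/2, i.e., 11 residues for n = 23 and 15 for n = 31.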

Let the symbols C23 and C31 denote the binary Golay code and the binary (31, 16, 7) QR code, respectively. Let α be a root of the primitive polynomial pr23(x) = x^11 + x^2 + 1. If α = 2, then the 11 roots of pr23(x) are α = 2, α^2 = 4, α^(2^2) = 16, α^(2^3) = 256, α^(2^4) = 160, α^(2^5) = 1064, α^(2^6) = 1605, α^(2^7) = 158, α^(2^8) = 380, α^(2^9) = 1530, and α^(2^10) = 1987 (in decimal representation). Let the element β = α^u be a primitive 23rd root of unity in GF(2^11), where u = (2^11 − 1)/23 = 89. Therefore, β = α^89 = 322, where 322 is the decimal representation of β. The total 23 roots are listed in Table 1.

Table 1. The total 23 roots of x^23 − 1 (in decimal number)
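The field computations above can be reproduced with a few lines of code. The sketch below (the helper names gf_mul and gf_pow are ours, not the paper's) performs carry-less multiplication modulo pr23(x) = x^11 + x^2 + 1 and confirms that β = α^89 = 322 and that β^23 = 1, so β is indeed a 23rd root of unity.

#include <cstdio>
#include <cstdint>

const uint32_t PRIM = 0x805;        // pr23(x) = x^11 + x^2 + 1 (binary 100000000101)
const int M = 11;

// Multiply two elements of GF(2^11), reducing modulo pr23(x).
uint32_t gf_mul(uint32_t a, uint32_t b) {
    uint32_t r = 0;
    while (b) {
        if (b & 1u) r ^= a;
        b >>= 1;
        a <<= 1;
        if (a & (1u << M)) a ^= PRIM;   // reduce as soon as degree 11 appears
    }
    return r;
}

uint32_t gf_pow(uint32_t a, unsigned e) {    // square-and-multiply exponentiation
    uint32_t r = 1;
    while (e) {
        if (e & 1u) r = gf_mul(r, a);
        a = gf_mul(a, a);
        e >>= 1;
    }
    return r;
}

int main() {
    const uint32_t alpha = 2;                                         // alpha = x, a root of pr23(x)
    std::printf("alpha^(2^4) = %u\n", (unsigned)gf_pow(alpha, 16));   // expected 160, as above
    uint32_t beta = gf_pow(alpha, 89);                                // u = (2^11 - 1)/23 = 89
    std::printf("beta = alpha^89 = %u\n", (unsigned)beta);            // expected 322
    std::printf("beta^23 = %u\n", (unsigned)gf_pow(beta, 23));        // expected 1
    return 0;
}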

The generator polynomial of C23 is defined by

  g(x) = (x − β)(x − β^2)(x − β^3)(x − β^4)(x − β^6)(x − β^8)(x − β^9)(x − β^12)(x − β^13)(x − β^16)(x − β^18),    (1)

that is, the product of (x − β^i) over all i in Q23.

Now let α be a generator of the multiplicative group of all nonzero elements in GF(2^5). Then the element β = α^u, where u = (2^5 − 1)/31 = 1. The polynomial x^31 − 1 can be factored into seven irreducible polynomials; that is, x^31 − 1 = (x^5 + x^2 + 1)(x^5 + x^4 + x^2 + x + 1)(x^5 + x^3 + x^2 + x + 1)(x^5 + x^4 + x^3 + x^2 + 1)(x^5 + x^3 + 1)(x^5 + x^4 + x^3 + x + 1)(x + 1). Then the generator polynomial of C31 is reducible and is defined by

  g(x) = g1(x)g5(x)g7(x),    (2)

where g1(x), g5(x), and g7(x) are minimal polynomials of x^31 − 1. That is, gr(x) = (x − β^r)(x − β^2r)(x − β^4r)(x − β^8r)(x − β^16r), and the elements β^(2^i · r) ∈ GF(2^5) for 0 ≤ i ≤ 4 are the roots of gr(x), where r = 1, 5, and 7. If α = 2, then the total 31 roots of x^31 − 1 are listed in Table 2.

Table 2. The total 31 roots of x^31 − 1 (in decimal number)
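The factorization of x^31 − 1 quoted above is easy to verify by carry-less polynomial multiplication over GF(2). The sketch below multiplies the seven listed factors and checks that the product equals x^31 + 1 (which is the same as x^31 − 1 over GF(2)).

#include <cstdio>
#include <cstdint>

// Multiply two GF(2) polynomials given as bit masks (bit i = coefficient of x^i).
uint64_t poly_mul(uint64_t a, uint64_t b) {
    uint64_t r = 0;
    for (int i = 0; i < 64; ++i)
        if ((b >> i) & 1ull) r ^= a << i;
    return r;
}

int main() {
    // The seven factors of x^31 - 1 listed in the text, as bit masks.
    const uint64_t factors[7] = {
        (1u<<5)|(1u<<2)|1u,                    // x^5 + x^2 + 1
        (1u<<5)|(1u<<4)|(1u<<2)|(1u<<1)|1u,    // x^5 + x^4 + x^2 + x + 1
        (1u<<5)|(1u<<3)|(1u<<2)|(1u<<1)|1u,    // x^5 + x^3 + x^2 + x + 1
        (1u<<5)|(1u<<4)|(1u<<3)|(1u<<2)|1u,    // x^5 + x^4 + x^3 + x^2 + 1
        (1u<<5)|(1u<<3)|1u,                    // x^5 + x^3 + 1
        (1u<<5)|(1u<<4)|(1u<<3)|(1u<<1)|1u,    // x^5 + x^4 + x^3 + x + 1
        (1u<<1)|1u                             // x + 1
    };
    uint64_t product = 1;
    for (uint64_t f : factors) product = poly_mul(product, f);
    std::printf("product == x^31 + 1 ? %s\n",
                product == ((1ull << 31) | 1ull) ? "yes" : "no");
    return 0;
}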

Because the minimum Hamming distance of both C23 and C31 is d = 7, the inequality 2v + 1 ≤ 7 holds, where v is the actual number of errors to be corrected. Hence, the error-correcting capability is t = ⌊(d − 1)/2⌋ = 3, where ⌊x⌋ denotes the greatest integer less than or equal to x. A codeword polynomial is a multiple of the generator polynomial g(x); that is, c(x) = c0 + c1x + … + cn−1x^(n−1), where ci ∈ GF(2) for 0 ≤ i ≤ n−1. Let m(x) = m0 + m1x + … + mk−1x^(k−1) denote the message polynomial, where mi ∈ GF(2) for 0 ≤ i ≤ k−1, and let p(x) = p0 + p1x + … + pn−k−1x^(n−k−1) be the parity polynomial, where pi ∈ GF(2) for 0 ≤ i ≤ n−k−1. Let p(x) ≡ m(x)x^(n−k) mod g(x); then one obtains m(x)x^(n−k) = q(x)g(x) + p(x). The term p(x) + m(x)x^(n−k), which is a multiple of g(x), is the codeword polynomial given by

  c(x) = p(x) + m(x)x^(n−k) = c0 + c1x + … + cn−1x^(n−1),    (3)

where ci ∈ GF(2) for 0 ≤ i ≤ n−1. In this paper, the systematic encoding method is utilized. Now, let a codeword be transmitted through an additive white Gaussian noise (AWGN) channel, so that the received word has the form r(x) = c(x) + e(x), where e(x) = e0 + e1x + … + en−1x^(n−1) is the error polynomial with ei ∈ GF(2) for 0 ≤ i ≤ n−1. The syndrome polynomial is expressed as s(x) ≡ r(x) mod g(x) = s0 + s1x + … + sn−k−1x^(n−k−1), with si ∈ GF(2).
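For concreteness, the sketch below carries out the systematic encoding p(x) ≡ m(x)x^(n−k) mod g(x) and the syndrome computation s(x) ≡ r(x) mod g(x) by plain polynomial long division over GF(2). It is parameterized with the (15, 5, 7) BCH generator polynomial quoted in Example 3; the message and the error positions are arbitrary values chosen only for illustration.

#include <cstdio>
#include <cstdint>

const int N = 15, K = 5, NK = 10;
// g(x) = x^10 + x^9 + x^8 + x^6 + x^5 + x^2 + 1, the C15 generator quoted in Example 3.
const uint32_t G = (1u<<10)|(1u<<9)|(1u<<8)|(1u<<6)|(1u<<5)|(1u<<2)|1u;

// Remainder of a(x) modulo g(x); polynomials are bit masks (bit i = coefficient of x^i).
uint32_t poly_mod(uint32_t a) {
    for (int d = N - 1; d >= NK; --d)
        if (a & (1u << d)) a ^= G << (d - NK);
    return a;
}

// Systematic encoding: bits 0..n-k-1 of the codeword carry p(x), bits n-k..n-1 carry m.
uint32_t encode(uint32_t m) {
    uint32_t p = poly_mod(m << NK);                // p(x) = m(x) x^(n-k) mod g(x)
    return p | (m << NK);                          // c(x) = p(x) + m(x) x^(n-k)
}

int main() {
    uint32_t m = 0b01101;                          // arbitrary 5-bit message (illustration only)
    uint32_t c = encode(m);
    uint32_t e = (1u << 3) | (1u << 12);           // two arbitrary channel errors
    uint32_t r = c ^ e;                            // received word r(x) = c(x) + e(x)
    std::printf("syndrome of c = %u (a codeword gives 0)\n", (unsigned)poly_mod(c));
    std::printf("syndrome of r = %u, e(x) mod g(x) = %u (the two agree)\n",
                (unsigned)poly_mod(r), (unsigned)poly_mod(e));
    return 0;
}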

To simplify the polynomial expressions mentioned above, let the message, codeword, error pattern, received word, and syndrome polynomials be expressed in the binary vector forms m = (m0, m1, …, mk−1), c = (c0, c1, …, cn−1), e = (e0, e1, …, en−1), r = (r0, r1, …, rn−1), and s = (s0, s1, …, sn−k−1), respectively. The systematic codeword in vector form is given by

  c = mG,    (4)

where G is called the systematic generator matrix. Let P be a k × (n−k) matrix and Ik be the k × k identity matrix; then G can be expressed as

  G = [P  Ik].    (5)

The parity-check matrix H can be expressed as H = [In-k  P^T], where P^T denotes the (n−k) × k transpose matrix of P. The vector form of the syndrome is defined by

  s = rH^T,    (6)

where H^T denotes the n × (n−k) transpose matrix of H; that is, H^T can be expressed as

  H^T = [ In-k ]
        [  P   ]    (7)

For C23, H^T has the following form:

  H^T = [ I11 ]
        [  P  ]    (8)

where the rows of the identity block I11 are h0, …, h10 and the rows of P are h11, …, h22; the row hj for 11 ≤ j ≤ 22 is the coefficient vector of x^j mod g(x), e.g., h11 = (11000111010) and h22 = (10001110101), the values used in the examples of Section 4.
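The rows of H^T in (7) and (8) can be generated directly from g(x): hj is the unit vector uj for j < n−k (the identity block), and for j ≥ n−k it is the coefficient vector of x^j mod g(x) (a row of P). The sketch below builds these rows for C23, assuming the degree-11 factor of x^23 − 1 whose residues reproduce the rows h11, h12, h13, h21, and h22 quoted in the examples of Section 4, and verifies numerically that r·H^T coincides with r(x) mod g(x).

#include <cstdio>
#include <cstdint>

const int N = 23, NK = 11;
// Assumption: g(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1, the degree-11 factor of
// x^23 - 1 whose residues x^j mod g(x) reproduce the rows h11, h12, h13, h21 and h22
// that appear in the worked examples of Section 4.
const uint32_t G = (1u<<11)|(1u<<9)|(1u<<7)|(1u<<6)|(1u<<5)|(1u<<1)|1u;

uint32_t H[N];                              // H[j] = jth row of H^T as an 11-bit mask

void build_ht() {                           // h_j = x^j mod g(x); for j < 11 this is u_j
    uint32_t v = 1;
    for (int j = 0; j < N; ++j) {
        H[j] = v;
        v <<= 1;
        if (v & (1u << NK)) v ^= G;         // reduce whenever degree 11 is reached
    }
}

uint32_t poly_mod(uint32_t a) {             // a(x) mod g(x) by long division
    for (int d = N - 1; d >= NK; --d)
        if (a & (1u << d)) a ^= G << (d - NK);
    return a;
}

int main() {
    build_ht();
    // Print two rows with the coefficient of x^0 first, as in the row vectors of (8).
    for (int j : {11, 22}) {
        std::printf("h%d = ", j);
        for (int b = 0; b < NK; ++b) std::printf("%u", (unsigned)((H[j] >> b) & 1u));
        std::printf("\n");
    }
    uint32_t r = 0x5ACE31 & ((1u << N) - 1);     // an arbitrary 23-bit received word
    uint32_t s = 0;
    for (int j = 0; j < N; ++j)                  // s = r * H^T, Eq. (6)
        if (r & (1u << j)) s ^= H[j];
    std::printf("r*H^T equals r(x) mod g(x)? %s\n", s == poly_mod(r) ? "yes" : "no");
    return 0;
}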

3. Decoding Algorithm and Theorems

In this section, the proposed ESWDA is used to decode the C23 and C31. For the development of the proposed ESWDA, the following definition, theorem and corollary given in Lin et al. [8] are needed.

Definition 1: The Hamming weight of a binary vector a is denoted by w(a), and the Hamming distance between a and b is denoted by d(a, b) = w(a + b).

Theorem 1: Let a = (a0, a1, …, an−1) and b = (b0, b1, …, bn−1) be two binary vectors. Then

  w(a + b) = w(a) + w(b) − 2w(a · b),    (9)

where a · b = (a0b0, a1b1, …, an−1bn−1) denotes the componentwise product of a and b.

Corollary 1: If aibi = 0 for 0 ≤ i ≤ n−1, then

  w(a + b) = w(a) + w(b).    (10)

The following Theorem 2 is useful for computing the syndrome of the received word when the received word is cyclically shifted one bit to the left. For a detailed proof of this theorem, see [13, p. 118].

Theorem 2: Let s(x) be the syndrome polynomial corresponding to a received polynomial r(x). Also, let r(1)(x) be the polynomial obtained by cyclically shifting the coefficients of r(x) one bit to the left. Then the remainder obtained when dividing xs(x) by g(x) is the syndrome s(1)(x) corresponding to r(1)(x).

However, for each cyclic shift of the received word, we have to divide xs(x) by g(x). If the syndrome is cyclically shifted many times, this computation becomes rather time-consuming because xs(x) must be divided by g(x) again and again. The following theorem provides an efficient method to compute s(i) for 0 ≤ i ≤ n−1, and it saves a large amount of computational time.

Theorem 3: For the binary QR codes, let rj be an element of r and hj be the jth row vector of H^T for 0 ≤ j ≤ n−1. Then the syndrome s(i) of r(i) for 0 ≤ i ≤ n−1 has the form

  s(i) = r(i)H^T = Σj rj h[j−i],  j = 0, 1, …, n−1,    (11)

where the suffix [x] of h denotes x mod n.

Proof: Let r = (r0,…, rn-1) and r(i) = (ri,…, rn-1, r0,…, ri-1) for 0 ≤ i ≤ n−1. By (7), we have s(i) = r(i)H^T = Σj r[i+j] hj = Σj rj h[j−i], where j runs from 0 to n−1. The proof is thus completed.
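As a numerical cross-check of Theorem 3, the sketch below computes s(i) in two ways for an arbitrary received word: once by explicitly shifting r and reducing r(i)(x) modulo g(x), and once by the row-vector sum of (11). The generator polynomial is again an assumption (the degree-11 factor of x^23 − 1 consistent with the h vectors quoted in Section 4), and the received word is chosen arbitrarily.

#include <cstdio>
#include <cstdint>

const int N = 23, NK = 11;
const uint32_t G = (1u<<11)|(1u<<9)|(1u<<7)|(1u<<6)|(1u<<5)|(1u<<1)|1u;  // assumed g(x)

uint32_t H[N];                                 // h_j = x^j mod g(x)
void build_ht() {
    uint32_t v = 1;
    for (int j = 0; j < N; ++j) { H[j] = v; v <<= 1; if (v & (1u << NK)) v ^= G; }
}
uint32_t poly_mod(uint32_t a) {                // a(x) mod g(x)
    for (int d = N - 1; d >= NK; --d)
        if (a & (1u << d)) a ^= G << (d - NK);
    return a;
}
uint32_t shift_left(uint32_t r, int i) {       // r(i): cyclic left shift of r by i positions
    return ((r >> i) | (r << (N - i))) & ((1u << N) - 1);
}

int main() {
    build_ht();
    uint32_t r = 0x3F0F3 & ((1u << N) - 1);    // an arbitrary received word for the check
    bool ok = true;
    for (int i = 0; i < N; ++i) {
        uint32_t direct = poly_mod(shift_left(r, i));   // syndrome of r(i) by division
        uint32_t via_rows = 0;
        for (int j = 0; j < N; ++j)                     // Eq. (11): sum of r_j * h_[j-i]
            if (r & (1u << j)) via_rows ^= H[((j - i) % N + N) % N];
        ok = ok && (direct == via_rows);
    }
    std::printf("Theorem 3 verified for every i: %s\n", ok ? "yes" : "no");
    return 0;
}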


Theorem 3 reveals that the syndrome of r(i) can be computed quickly by vector addition. Theorem 4 provides an efficient way to simplify the decoding steps by using the syndrome weight. For a detailed proof, see [8].

Theorem 4: For the binary QR codes, assume that there are v errors in the received word, where 1 ≤ v ≤ t. All v errors are located in the parity-check bits if and only if the weight of the syndrome satisfies w(s) = v.

By using Theorem 4, we can develop the following useful theorem.

Theorem 5: For the binary QR codes, if v errors are located in the information bits of the received word, where 1 ≤ v ≤ t and t = ⌊(d − 1)/2⌋, then the weight of the corresponding syndrome polynomial or syndrome vector satisfies

  w(s(x)) = w(s) ≥ d − v.    (12)

Proof: Let the error polynomial e(x) represent the v errors in the information bits; that is, w(e(x)) = v. Since s(x) ≡ r(x) ≡ e(x) (mod g(x)), we have e(x) + s(x) ≡ 0 (mod g(x)). This implies that e(x) + s(x) is a nonzero codeword, and hence it must satisfy w(e(x) + s(x)) ≥ d. Because the errors lie in the information positions while s(x) has degree at most n−k−1, the two polynomials have disjoint supports, so Corollary 1 applies and gives w(e(x) + s(x)) = w(e(x)) + w(s(x)) ≥ d, and then w(s(x)) ≥ d − w(e(x)). Thus, the weight of the syndrome polynomial satisfies w(s(x)) ≥ d − v, or w(s) ≥ d − v. The proof is thus completed.
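Theorem 5 can also be confirmed exhaustively for C23. The short sketch below, which again assumes the generator polynomial used in the other sketches of this section, enumerates every error pattern of weight v ≤ 3 confined to the information positions and checks that the syndrome weight is at least 7 − v.

#include <cstdio>
#include <cstdint>

const int N = 23, NK = 11, D = 7;
const uint32_t G = (1u<<11)|(1u<<9)|(1u<<7)|(1u<<6)|(1u<<5)|(1u<<1)|1u;  // assumed g(x)

uint32_t H[N];
void build_ht() {                                     // h_j = x^j mod g(x)
    uint32_t v = 1;
    for (int j = 0; j < N; ++j) { H[j] = v; v <<= 1; if (v & (1u << NK)) v ^= G; }
}
uint32_t syndrome(uint32_t e) {                       // s = e * H^T
    uint32_t s = 0;
    for (int j = 0; j < N; ++j) if (e & (1u << j)) s ^= H[j];
    return s;
}
int wt(uint32_t v) { int c = 0; while (v) { c += (int)(v & 1u); v >>= 1; } return c; }

int main() {
    build_ht();
    bool ok = true;
    // Every error pattern with 1 <= v <= 3 errors confined to information positions 11..22.
    for (int a = NK; a < N; ++a)
        for (int b = a; b < N; ++b)
            for (int c = b; c < N; ++c) {
                uint32_t e = (1u << a) | (1u << b) | (1u << c);
                int v = wt(e);                        // v = 1, 2 or 3 depending on a, b, c
                if (wt(syndrome(e)) < D - v) ok = false;
            }
    std::printf("w(s) >= d - v holds for all information-bit error patterns: %s\n",
                ok ? "yes" : "no");
    return 0;
}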

Given a received word r, the syndrome of r(i) can be computed quickly by Theorem 3. According to Theorem 4, if 1 ≤ w(s) ≤ 3, then the error positions are in the parity-check bits of r, and if 1 ≤ w(s(n-k)) ≤ 3, then the error positions are in the information bits of r. Let hi denote the ith row vector of H^T, where 0 ≤ i ≤ n−1. Also, let sdw denote the syndrome difference between the syndrome of r and hi in decoding step w. By using the weight of sdw, the error cases can be determined quickly. Let u0 = (1, 0,…, 0) be the k-tuple unit vector, and let uj denote the k-tuple unit vector whose only nonzero component is at the jth position, where 0 ≤ j ≤ k−1. With these properties, the proposed ESWDA can be constructed.

Let the upper-case letters P, C, and H denote an error position in the parity-check bits, the center bit, and the information bits of r, respectively. For C23 or C31, there are 15 error cases (P, PP, PPP, H, HH, HHH, C, PC, PH, PPC, PPH, PHH, CH, CHH, and PCH), which cover all 2,047 and 4,991 error patterns of weight at most three, respectively. If w(s) = 0, then r has no error. If 1 ≤ w(s) ≤ 3 or 1 ≤ w(s(n-k)) ≤ 3, then there are 6 error cases (P, PP, PPP, H, HH, and HHH). Let the syndrome difference sd3 = (s − hi) for n−k ≤ i ≤ n−1. If 0 ≤ w(sd3) ≤ 2, then there are 5 error cases (C, PC, PH, PPC, and PPH). Let the syndrome difference sd4 = (s(n-k) − hi) for n−k ≤ i ≤ n−1. If n−k ≤ i ≤ n−2 and w(sd4) = 2, then there is only 1 error case (PHH). If i = n−1 and 1 ≤ w(sd4) ≤ 2, then there are 2 error cases (CH and CHH). Let the syndrome difference sd5 = ((s − hn-k) − hi) for k ≤ i ≤ n−1. If w(sd5) = 1, then there is only 1 error case (PCH). The decoding steps of the proposed ESWDA work as follows (an illustrative code sketch of these steps is given after step 6):

1). (No error, P, PP, and PPP cases.) By Theorem 3, compute s and w(s). If 0 ≤ w(s) ≤ 3, then the information vector is m = (rn-k,…, rn-1). Go to step 6.

2). (H, HH, and HHH cases.) By Theorem 3, compute s(n-k) and w(s(n-k)). If 1 ≤ w(s(n-k)) ≤ 3, then the corrected information vector is m = (rn-k,…, rn-1) + (s(n-k) >> 1), where “>>” denotes the logical right shift operator; that is, the (n−k)-bit syndrome is placed in a k-bit register and shifted one position to the right, so that its first position becomes zero. Go to step 6.

3). (C, PC, PH, PPC, and PPH cases.) Compute the syndrome difference sd3 = (s − hi) for n−k ≤ i ≤ n−1 and w(sd3). If 0 ≤ w(sd3) ≤ 2, then the corrected information vector is m = (rn-k,…, rn-1) + ui-(n-k). Go to step 6.

4). (PHH, CH, and CHH cases.) Compute the syndrome difference sd4 = (s(n-k) − hi) for n−k ≤ i ≤ n−1 and w(sd4). If n−k ≤ i ≤ n−2 and w(sd4) = 2, then the corrected information vector is m = (rn-k,…, rn-1) + (sd4 >> 1). If i = n−1 and 1 ≤ w(sd4) ≤ 2, then the corrected information vector is m = (rn-k,…, rn-1) + u0 + (sd4 >> 1). Go to step 6.

5). (PCH case.) Compute the syndrome difference sd5 = ((s − hn-k) − hi) for k ≤ i ≤ n−1 and w(sd5). If w(sd5) = 1, then the corrected information vector is m = (rn-k,…, rn-1) + u0 + ui-(n-k). Go to step 6.

6). Stop.
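To make the control flow of steps 1) through 6) concrete, the following self-contained C++ sketch implements them for C23 under one fixed set of conventions chosen for this illustration: bit j of a 23-bit word is position j, positions 0-10 are the parity-check bits, position 11 is the center bit, positions 12-22 are the remaining information bits, the generator polynomial is the assumed degree-11 factor of x^23 − 1 used in the earlier sketches, and s(n-k) is realized as the syndrome of the received word with its information bits rotated into the parity-check positions. The sketch is therefore illustrative rather than a bit-exact reproduction of the tables in this paper; its main() checks, using linearity over the all-zero codeword, that every error pattern of weight at most three is corrected.

#include <cstdio>
#include <cstdint>

const int N = 23, K = 12, NK = 11;
// Assumption: g(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1, as in the earlier sketches.
const uint32_t G = (1u<<11)|(1u<<9)|(1u<<7)|(1u<<6)|(1u<<5)|(1u<<1)|1u;
const uint32_t MASK_N = (1u << N) - 1;

uint32_t H[N];                                       // rows of H^T as 11-bit masks
void build_ht() {                                    // h_j = x^j mod g(x)
    uint32_t v = 1;
    for (int j = 0; j < N; ++j) { H[j] = v; v <<= 1; if (v & (1u << NK)) v ^= G; }
}
uint32_t syndrome(uint32_t r) {                      // s = r * H^T
    uint32_t s = 0;
    for (int j = 0; j < N; ++j) if (r & (1u << j)) s ^= H[j];
    return s;
}
int wt(uint32_t v) { int c = 0; while (v) { c += (int)(v & 1u); v >>= 1; } return c; }

// ESWDA steps 1)-5) for C23; returns the corrected 12-bit information word.
uint32_t eswda_decode(uint32_t r) {
    uint32_t m = r >> NK;                            // information part (r_11, ..., r_22)
    uint32_t s = syndrome(r);
    if (wt(s) <= 3) return m;                        // step 1: no error, P, PP, PPP
    // Syndrome taken after rotating the information bits into the parity-check
    // positions; this plays the role of s(n-k) in the decoding steps above.
    uint32_t s2 = syndrome(((r >> K) | (r << NK)) & MASK_N);
    if (wt(s2) <= 3) return m ^ (s2 << 1);           // step 2: H, HH, HHH
    for (int i = NK; i < N; ++i)                     // step 3: C, PC, PH, PPC, PPH
        if (wt(s ^ H[i]) <= 2) return m ^ (1u << (i - NK));
    for (int i = NK; i < N; ++i) {                   // step 4: PHH, CH, CHH
        uint32_t sd4 = s2 ^ H[i];
        if (i <= N - 2 && wt(sd4) == 2) return m ^ (sd4 << 1);
        if (i == N - 1 && wt(sd4) <= 2) return m ^ 1u ^ (sd4 << 1);
    }
    for (int i = K; i < N; ++i)                      // step 5: PCH
        if (wt(s ^ H[NK] ^ H[i]) == 1) return m ^ 1u ^ (1u << (i - NK));
    return m;                                        // more than three errors: give up
}

int main() {
    build_ht();
    // The decoder uses only the syndrome and the received information part, so by
    // linearity it suffices to check every error pattern of weight <= 3 added to the
    // all-zero codeword: the decoded message must be the all-zero message.
    bool ok = true;
    for (int a = 0; a < N; ++a)
        for (int b = a; b < N; ++b)
            for (int c = b; c < N; ++c)
                if (eswda_decode((1u << a) | (1u << b) | (1u << c)) != 0) ok = false;
    std::printf("all error patterns of weight <= 3 corrected: %s\n", ok ? "yes" : "no");
    return 0;
}

Note that the sketch uses only XORs, shifts, and Hamming-weight counts; there are no finite-field multiplications, divisions, or table lookups, which is precisely the property of the ESWDA emphasized in this section.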

Table 3 lists all 15 error cases and the number of error patterns in each decoding step of the proposed ESWDA. The flowchart of the proposed ESWDA is shown in Figure 1.

Table 3. The number of error patterns in each decoding step of C23 and C31

Figure 1. Flowchart of the proposed ESWDA

4. Examples

In this section, three examples are presented to illustrate the proposed ESWDA. Examples 1 and 2 show the decoding steps for the binary systematic Golay code. Example 3 shows the decoding steps for the binary systematic (15, 5, 7) BCH code, denoted by C15; for this code, the decoding steps of the proposed ESWDA require some slight adjustments.

Example 1: Let the message m = (000110101010) be encoded into the C23 codeword c = (11011010100000110101010). If the received word is r = (11011010100010111001010), then the error pattern is e = (00000000000010001100000), which corresponds to an HHH error case. The decoding steps are shown below.

1). Compute s = (10101100100) and w(s) = 5. Since w(s) > 3, go to step 2.

2). Compute s(11) = (10001100000) and w(s(11)) = 3 ≤ 3. The corrected information word m = (010111001010) + (010001100000) = (000110101010). Go to stop.

Example 2: This example demonstrates the worst decoding case. Let the message m = (000110101010) be encoded into the C23 codeword c = (11011010100000110101010). If the received word is r = (01011010100100110101011), then the error pattern is e = (10000000000100000000001), which corresponds to a PCH error case. The decoding steps are shown below.

1). Compute s = (11001001111) and w(s) = 7. Since w(s) > 3, go to step 2.

2). Compute s(11) = (01001001110) and w(s(11)) = 5. Since w(s(11)) > 3, go to step 3.

3). Compute sd3 = (s − hi) for 11 ≤ i ≤ 22 and w(sd3).

sd3 = (s − h11) = (11001001111) – (11000111010) = (00001110101). w(sd3) = 5.

sd3 = (s − h12) = (11001001111) – (01100011101) = (10101010010). w(sd3) = 5.

sd3 = (s − h21) = (11001001111) – (10010011111) = (01011010000). w(sd3) = 4.

sd3 = (s − h22) = (11001001111) – (10001110101) = (01000111010). w(sd3) = 5.

Since every w(sd3) > 2, go to step 4.

4). Compute sd4 = (s(11) − hi) for 11 ≤ i ≤ 22 and w(sd4).

sd4 = (s(11) − h11) = (01001001110) – (11000111010) = (10001110100). w(sd4) = 5.

sd4 = (s(11) − h12) = (01001001110) – (01100011101) = (00101010011). w(sd4) = 5.

sd4 = (s(11) − h21) = (01001001110) – (10010011111) = (11011010001). w(sd4) = 6.

sd4 = (s(11) − h22) = (01001001110) – (10001110101) = (11000111011). w(sd4) = 7.

Since every w(sd4) > 2, go to step 5.

5). Compute sd5 = ((s − h11) − hi) = (00001110101) – hi for 12 ≤ i ≤ 22 and w(sd5).

sd5 = (s − h11) − h12 = (00001110101) – (01100011101) = (01101101000). w(sd5) = 5.

sd5 = (s − h11) − h13 = (00001110101) – (11110110100) = (11111000001). w(sd5) = 6.

sd5 = (s − h11) − h21 = (00001110101) – (10010011111) = (10011101010). w(sd5) = 6.

sd5 = (s − h11) − h22 = (00001110101) – (10001110101) = (10000000000). w(sd5) = 1.

Since w(sd5) = 1, the corrected information word m = (r11,…, r22) + u0 + u22-11 = (100110101011) + (100000000000) + (000000000001) = (000110101010). Go to stop.

The PCH case is the worst case; however, only 121 error patterns, namely 5.91% of all correctable patterns, enter step 5.

Example 3: Let the message m = (00011) be encoded into a codeword of C15 with g(x) = x^10 + x^9 + x^8 + x^6 + x^5 + x^2 + 1, which yields the codeword c = (101100101000011). If the received word is r = (101110101011011), then the error pattern is e = (000010000011000), which corresponds to a PHH error case. First, the decoding steps of the proposed ESWDA given in Section 3 are slightly adjusted as follows:

1). (No error, P, PP, and PPP cases.) By Theorem 3, compute s and w(s). If 0 ≤ w(s) ≤ 3, then the information vector is m = (rn-k,…, rn-1), and go to step 5.

2). (H, HH, HHH, PH, PPH, and PHH cases.) By Theorem 3, compute s(n-k) and w(s(n-k)). If 1 ≤ w(s(n-k)) ≤ 3, then the corrected information vector is m = (rn-k,…, rn-1) + (s(n-k) & (11111)), where the notation “&” denotes the bitwise AND operator, and go to step 5.

3). (PH and PPH cases.) Compute the syndrome difference sd3 = (s − hi) for n−k ≤ i ≤ n−1 and w(sd3). If 0 ≤ w(sd3) ≤ 2, then the corrected information vector is m = (rn-k,…, rn-1) + ui-(n-k), and go to step 5.

4). (PHH case.) Compute the syndrome difference sd4 = (s(n-k) − hi) for n−k ≤ i ≤ n−1 and w(sd4). If w(sd4) = 2, then the corrected information vector is m = (rn-k,…, rn-1) + (sd4 & (11111)), and go to step 5.

5). Stop.

For this code, there are 9 error cases. Table 4 lists all 9 error cases and the number of error patterns in each decoding step.

Table 4. The number of error patterns in each decoding step of C15

The decoding steps are shown below.

1). Compute s = (1001001011) and w(s) = 5. Since w(s) > 3, go to step 2.

2). Compute s(5) = (1101111101) and w(s(5)) = 8. Since w(s(5)) > 3, go to step 3.

3). Compute sd3 = (s − hi) for 10 ≤ i ≤ 14 and w(sd3).

sd3 = (s − h11) = (1001001011) – (1101100101) = (0100101110). w(sd3) = 5.

sd3 = (s − h12) = (1001001011) – (0110101111) = (1111100100). w(sd3) = 6.

sd3 = (s − h13) = (1001001011) – (1101011110) = (0100010101). w(sd3) = 4.

sd3 = (s − h14) = (1001001011) – (0111011001) = (1110010010). w(sd3) = 5.

sd3 = (s − h15) = (1001001011) – (1110110010) = (0111111001). w(sd3) = 7.

Since every w(sd3) > 2, go to step 4.

4). Compute sd4 = (s(5) − hi) for 10 ≤ i ≤ 14 and w(sd4).

sd4 = (s(5) − h11) = (1101111101) – (1101100101) = (0000011000). w(sd4) = 2.

Since w(sd4) = 2, the corrected information word m = (r10,…, r14) + (sd4 & (11111)) = (11011) + ((0000011000) & (11111)) = (00011). Go to stop.

5. Simulation Results

The proposed ESWDA has been programmed in the C++ language. On an Intel Q6600 PC running the Windows XP operating system, all 2^k codewords, each combined with all error patterns of weight at most three, were created to check every possible error pattern of C15, C23, and C31. In other words, the numbers of error patterns per codeword for C15, C23, and C31 are 575, 2,047, and 4,991, respectively. The decoding times of the proposed ESWDA and the SWDA are shown in Table 5, Table 6, and Table 7, respectively. In these tables, v = 1 means that error patterns containing one error are input to the decoder, and the average decoding time is measured over all error patterns of that code. The average decoding time of the proposed ESWDA is about 10.6 times, 19.6 times, and 4 times faster than that of the SWDA, respectively. The memory requirements of the two algorithms are also shown in Table 5, Table 6, and Table 7. It is clear that the advantage of the proposed ESWDA in decoding time grows as the code length increases.

Table 5. Comparison of the decoding time (in µs) and memory requirement (in bytes) for the Golay code between two algorithms

Table 6. Comparison of the decoding time (in µs) and memory requirement (in bytes) for the (31, 16, 7) QR code between two algorithms

Table 7. Comparison of the decoding time (in µs) and memory requirement (in bytes) for the (15, 5, 7) BCH code between two algorithms

6. Conclusions

Binary QR codes are well known for their good properties. A high-speed and efficient ESWDA has been developed to decode C15, C23, and C31. The proposed ESWDA neither stores a large lookup table in memory nor performs complicated algebraic computations. By using Theorem 3, Theorem 4, Theorem 5, and the weight of sdw, the error cases can be identified and corrected quickly. Therefore, the proposed ESWDA is a very efficient and low-cost decoder for the triple-error-correcting QR codes. The proposed ESWDA can be extended to decode other QR codes or BCH codes; however, its decoding steps then require some slight adjustments.

References

[1] Chien, R.T., “Cyclic decoding procedure for the Bose-Chaudhuri-Hocquenghem codes,” IEEE Trans. Inform. Theory, 10(4). 357-363. Oct. 1964.

[2] Chen, Y.H., Chien, C.H., Huang, C.H., Truong, T.K., and Jing, M.H., “Efficient decoding of systematic (23, 12, 7) and (41, 21, 9) quadratic residue codes,” J. Inform. Sci. and Eng., 26(5). 1831-1843. Sept. 2010.

[3] Elia, M., “Algebraic decoding of the (23, 12, 7) Golay codes,” IEEE Trans. Inform. Theory, 33(1). 150-151. Jan. 1987.

[4] Golay, M.J.E., “Notes on digital coding,” Proc. IRE, 37, 657. 1949.

[5] Lee, C.D., “Weak general error locator polynomials for triple-error-correcting binary Golay code,” IEEE Comm. Letters, 15(8). 857-859. Aug. 2011.

[6] Lin, T.C., Chang, H.C., Lee, H.P., Chu, S.I., and Truong, T.K., “Decoding of the (31, 16, 7) quadratic residue code,” J. Chinese Institute of Engineers, 33(4). 573-580. June 2010.

[7] Lin, T.C., Chang, H.C., Lee, H.P., and Truong, T.K., “On the decoding of the (24, 12, 8) Golay code,” Inform. Sci., 180(23). 4729-4736. Dec. 2010.

[8] Lin, T.C., Lee, H.P., Chang, H.C., Chu, S.I., and Truong, T.K., “High speed decoding of the binary (47, 24, 11) quadratic residue code,” Inform. Sci., 180(20). 4060-4068. Oct. 2010.

[9] Lin, T.C., Lee, H.P., Chang, H.C., and Truong, T.K., “A cyclic weight algorithm of decoding the (47, 24, 11) quadratic residue code,” Inform. Sci., 197. 215-222. Aug. 2012.

[10] Reed, I.S., Shih, M.T., and Truong, T.K., “VLSI design of inverse-free Berlekamp-Massey algorithm,” IEE Proc. Comput. Digit. Tech., 138(5). 295-298. Sept. 1991.

[11] Reed, I.S., Yin, X., and Truong, T.K., “Algebraic decoding of the (32, 16, 8) quadratic residue code,” IEEE Trans. Inform. Theory, 36(4). 876-880. July 1990.

[12] Reed, I.S., Yin, X., Truong, T.K., and Holmes, J.K., “Decoding the (24, 12, 8) Golay code,” IEE Proc. Comput. Digit. Tech., 137(3). 202-206. May 1990.

[13] Wicker, S.B., Error Control Systems for Digital Communication and Storage, Prentice Hall, New Jersey, 1995.
 