We are graduate students at the University of Texas at Austin, enrolled in the Communication Networks and Systems track. For our introductory Wireless Lab class project, we developed several VIs adding LDPC error correcting codes to an existing single carrier transceiver system. We collected bit error rate statistics in an AWGN channel simulator and tested our code on a live wireless link using RF hardware from National Instruments. Below, we provide a description of our implementation and a brief summary of some results.
December 7, 2007
Low density parity check (LDPC) codes are error correcting codes first discovered by Robert Gallager in the 1960s, but they were largely forgotten because of the encoding and decoding complexity at the time. Although some work was done in the 1980s, it was not until they were independently rediscovered, starting with work by David MacKay, that their use began to proliferate.
At its core, an LDPC code consists of a sparse matrix with very few ones in each row and column, called the "parity check matrix". Usually each row of the matrix is designed to be linearly independent of the other rows, an assumption we have made for our project. Regular codes have an equal number of non-zero elements in each column and an equal number in each row, while irregular codes can have a varying number.
An example of an LDPC parity check matrix with three 1's in each column and four 1's in each row, created by Gallager.
When multiplied by a column vector "codeword" in modulo 2 arithmetic, the result should be a column vector of 0's. The codeword consists of a block of parity check bits, whose length equals the number of rows, and a block of data bits. The rate of the code is the number of data bits divided by the number of columns in the matrix. Encoding is the process of determining the set of parity bits that, when concatenated with the data bits, multiplies against the parity check matrix to produce a column of zeros. Decoding is the process of determining, from the received packet, the set of transmitted bits with this property.
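To make the codeword condition concrete, here is a minimal sketch in Python with NumPy (the project itself uses LabVIEW and Matlab; the tiny 3×6 matrix below is invented for illustration, not one of our codes):

```python
import numpy as np

# Toy 3x6 parity check matrix (rate 1/2); real LDPC matrices
# are far larger and much sparser.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def is_codeword(H, x):
    """x is a valid codeword iff H times x is all zeros mod 2."""
    return not np.any(H.dot(x) % 2)

x = np.array([1, 0, 1, 1, 1, 0])   # candidate codeword
print(is_codeword(H, x))
```

Flipping any single bit of a valid codeword violates at least one parity check, which is what gives the decoder something to work with.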
LDPC codes are powerful because, with a sufficiently long block length, one can approach Shannon's capacity limit for a noisy channel with arbitrary precision. In some cases, LDPC codes can be constructed to have better performance than Turbo codes, another widely used class of error correction codes.
Additionally, LDPC codes are advantageous in that they use a lower complexity iterative "belief propagation" decoding algorithm which can be implemented in parallel in hardware. The decoder is also very good at detecting errors in the received codeword and at determining when it is unable to correctly decode the packet. Although the encoding complexity is somewhat high, the powerful properties of LDPC codes have warranted their inclusion in many standards such as IEEE 802.16, 802.20, 802.3, and DVB-S2.
Suppose we would like to impose a series of M parity checks on N bits. Once sent through the channel and corrupted by additive white Gaussian noise (AWGN), the decoder, knowing the same parity check matrix, can verify the M parity checks and attempt to determine the correct set of bits that were sent. If H is the M×N parity check matrix with M < N, x is a column vector bit-sequence of length N, and 0 is a length-M column of 0's, this amounts to satisfying Hx = 0 (mod 2).
x is composed of an M-length parity block c and an (N−M)-length data block d. The goal of encoding is to find the M parity bits such that, when concatenated with the data block, the relationship above holds under modulo 2 arithmetic. A brute force approach is to divide H and x into two parts, H = [A | B] and x = [c; d], where A is the M×M block of H acting on the parity bits and B is the M×(N−M) block acting on the data bits. The parity condition then becomes Ac + Bd = 0 (mod 2), or equivalently Ac = Bd.
Solving for the parity bits c requires time proportional to M·(N−M). A faster approach is to perform the lower and upper triangular (LU) decomposition of A ahead of time, so that encoding only involves solving two triangular systems using forward and backward substitution. This requires time proportional to M², which is usually a nice improvement given N ≈ 2M.
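As an illustration of the brute-force scheme (not the LabVIEW implementation itself), the Python sketch below splits H into [A | B] and solves Ac = Bd over GF(2) by Gaussian elimination; `gf2_solve` and `encode` are hypothetical helper names:

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination.
    Assumes A is square with linearly independent rows mod 2."""
    n = len(b)
    M = np.hstack([A.copy() % 2, (b % 2).reshape(-1, 1)])
    for col in range(n):
        # find a pivot row with a 1 in this column and swap it up
        pivot = next(r for r in range(col, n) if M[r, col])
        M[[col, pivot]] = M[[pivot, col]]
        # XOR the pivot row into every other row with a 1 in this column
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return M[:, -1]

def encode(H, data):
    """Brute-force LDPC encode: H = [A | B], codeword = [parity | data],
    where the parity bits solve A c = B d (mod 2)."""
    M = H.shape[0]
    A, B = H[:, :M], H[:, M:]
    parity = gf2_solve(A, B.dot(data) % 2)
    return np.concatenate([parity, data])
```

The parity block comes first in the codeword, matching the layout described above; the LU-decomposition speedup simply replaces the elimination inside `gf2_solve` with two precomputed triangular solves.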
I implemented the min-col algorithm for LU decomposition described on Radford Neal's website. While preprocessing a large matrix can take a long time, the preprocessed output can be reused each time a message is encoded, which reduces the actual encoding time.
Using the sparse matrix representation in Matlab greatly increases the speed of the algorithm. I created two versions of the algorithm in LabVIEW, one using regular matrices and a second using sparse matrices. Although both work, the efficiency of the sparse implementation was reduced by the fact that LabVIEW does not use linked lists to represent its arrays.
I also implemented in Matlab the efficient encoding algorithm described by Richardson and Urbanke. The paper describes a method to form an approximate lower triangular H matrix which can be used for encoding in near linear time. Although the algorithm I created worked well for small matrices, when testing with large matrices I saw erroneous results and did not have time to fully debug the issues. The program is also posted as a reference to an alternative approach.
December 6, 2007
This is transmitter.vi, which creates a source message and encodes it using the parity check matrix defined in GlobalParams.vi. If the newH matrix is not defined, the encoder performs the LU decomposition and caches the result. Although the decomposition may take a long time for large matrices, it need only be done once. GlobalParams.vi has a predefined 1000×2000 parity check matrix forming a rate-1/2 code.
Our implemented decoding algorithm is the one presented in T. K. Moon's book Error Correcting Coding: Mathematical Methods and Algorithms. The main idea is to use the log ratio of the conditional probabilities along with the parity check matrix used by the LDPC encoder to estimate the received bit sequence. The decoding process can be represented by a graph that alternates between parity checks (check nodes) and the estimates of the received bits (bit nodes).
The bit nodes are computed using the formula

(1)   λ[n] = Lc·r[n] + Σ_{m ∈ M(n)} η[m, n]

where r[n] is the received symbol, Lc is the channel reliability (SNR estimate), and M(n) is the set of check nodes connected to bit n.
There are four stages to the decoding algorithm:
1. Computing the probabilities
2. Updating the check nodes
3. Updating the bit nodes
4. Computing the decoded sequence
The algorithm iterates between stages 2 through 4.
Relevant VIs: "bpskProb.vi", "LDPCdecode.vi"
The posterior probabilities for BPSK can be computed using the following formula (with bit 0 sent as +A and bit 1 as −A):

Pr(x[n] = 0 | y[n]) = 1 / (1 + e^(−2A·y[n]/N0))

The log likelihood probabilities are output as an array and were computed using the following formula:

λ[n] = 2A·y[n] / N0

where A is the symbol amplitude (fixed at 1), N0 is the noise power, and y[n] are the received symbols.
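For reference, this LLR step is a one-liner; the Python sketch below mirrors the computation done in bpskProb.vi under the assumption that N0 denotes the noise variance (the function name is illustrative):

```python
import numpy as np

def bpsk_llr(y, amp=1.0, n0=1.0):
    """Per-bit log-likelihood ratios for BPSK over AWGN.
    Assumes 0 -> +amp, 1 -> -amp and n0 = noise variance, so
    LLR[n] = 2*amp*y[n]/n0; positive values favor bit 0."""
    return 2.0 * amp * np.asarray(y, dtype=float) / n0
```

The scale factor 2A/N0 is exactly the channel reliability Lc passed into the decoder.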
Once the probabilities are computed, they are passed to LDPCdecode.vi. The first part of the VI initializes values for the algorithm.
Relevant VIs: "LDPCdecode.vi", "CheckNodeUpdate.vi"
Eta represents the information, or "message", that is passed between the check nodes and the bit nodes. For check node m and a connected bit n, it can be computed using the tanh rule:

η[m, n] = 2·atanh( Π_{n' ∈ N(m), n' ≠ n} tanh( (λ[n'] − η[m, n']) / 2 ) )

where N(m) is the set of bits participating in check m (sign conventions vary between texts).
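A direct, unoptimized Python sketch of one check-node pass using the tanh rule; the function name and dense-matrix loops are illustrative, not the CheckNodeUpdate.vi implementation:

```python
import numpy as np

def check_node_update(H, lam, eta):
    """One check-node pass of tanh-rule belief propagation.
    lam: current bit-node LLRs; eta: last iteration's messages.
    Returns the new check-to-bit messages."""
    new_eta = np.zeros_like(eta)
    M, N = H.shape
    for m in range(M):
        idx = np.flatnonzero(H[m])              # bits in this check
        # extrinsic information: subtract the message previously sent
        t = np.tanh((lam[idx] - eta[m, idx]) / 2.0)
        for j, n in enumerate(idx):
            others = np.prod(np.delete(t, j))   # product over n' != n
            # clip to keep atanh finite
            new_eta[m, n] = 2.0 * np.arctanh(np.clip(others, -0.999999, 0.999999))
    return new_eta
```

With only two bits in a check, each bit simply receives the other bit's extrinsic LLR, which is a useful sanity check on the implementation.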
Relevant VIs: LDPCdecode.vi, BitNodeUpdate.vi
BitNodeUpdate.vi updates the bit nodes according to formula (1) of the LDPC decoding algorithm. Its inputs are the check node messages eta (generated by CheckNodeUpdate.vi) and the current array of probabilities of the received signal (lambda). For each column in the parity check matrix, the bit node value (lambda) is updated by adding to it the sum of the eta values located in that column.
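In matrix form, the column-wise sum is a single line; a minimal Python sketch (illustrative names, assuming eta is zero wherever H is zero):

```python
import numpy as np

def bit_node_update(H, llr_prior, eta):
    """Each bit's LLR is its channel prior plus the sum of the
    incoming check-node messages (eta) in that bit's column of H."""
    return llr_prior + (eta * H).sum(axis=0)
```

Multiplying by H masks out entries where no edge exists, so only the checks actually connected to a bit contribute to its update.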
Relevant VIs: LDPCdecode.vi
Once the bit nodes have been updated for the current iteration, the decoded sequence can be easily computed by thresholding the lambda values about zero. The transmitted message is located after the first M bits, where M is the number of rows in the parity check matrix. Whether another iteration should take place is determined using the property of LDPC codes that if the message d is successfully decoded, the following condition must be met: mod(H·d′, 2) = 0.
LDPCdecode.vi computes this metric on every iteration of the while loop; if the parity condition is met, the loop ends, otherwise it continues with the next iteration, up to the predetermined maximum number of iterations. This is the final stage of LDPCdecode.vi, and once the loop terminates, dataout is passed to the error detect VI in receiver.vi.
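The hard decision and the early-exit test can be sketched as follows (toy matrix for illustration; the sign convention matches an LLR where positive favors bit 0):

```python
import numpy as np

def hard_decision(lam):
    """Threshold LLRs about zero: positive -> bit 0, negative -> bit 1."""
    return (np.asarray(lam) < 0).astype(int)

def parity_satisfied(H, d):
    """Decoding can stop early once mod(H*d, 2) == 0 holds."""
    return not np.any(H.dot(d) % 2)

# toy example, invented for illustration
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
d = np.array([1, 0, 1, 1, 1, 0])
print(parity_satisfied(H, d))
```

This check is also what makes the decoder good at flagging failures: if the loop hits the iteration cap without satisfying all parity checks, the packet is known to be undecodable rather than silently wrong.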
The iterative log likelihood LDPC decoder was integrated into the EE381V Wireless Lab baseband simulator and the actual wireless lab software (Top_rx.vi). The new block diagram for receiver.vi is below:
Error statistics are collected for both decoding methods for comparison purposes. Stage 1 of the algorithm is computed in BPSKProb.vi and the log probabilities are sent to LDPCDecode.vi. The number of iterations is also set and passed to the VI. The parity check matrix is set as a global variable and is the same one used in the encoder.
The plot above shows results for a 1000×2000 rate-1/2 LDPC code. Clearly, the system using LDPC error correction outperforms the system without it. However, this is not without cost: the LDPC system incurs additional complexity encoding and decoding the message, as well as half the throughput of the system without error correction.
These are the results we collected using National Instruments RF hardware in the lab. As you can see, a rate-1/2 LDPC code can completely correct a raw error rate of approximately 4% in the data stream. Moreover, even when the errors are the result of ISI instead of AWGN, the error recovery properties of the LDPC code still hold up well.
Important LabVIEW Files
| File | Inputs | Outputs | Description |
| --- | --- | --- | --- |
| ldpcEncode.vi | source row: array of message bits; others are self-describing | Encoded: the encoded message; others are self-describing | Encodes the message using the provided LDPC H matrix. Performs LU decomposition and caches the results if necessary. |
| readHMatrix.vi | path to the A-list format H matrix file | H matrix defined by the A-list file | Reads H-matrix data files from MacKay's Encyclopedia of Sparse Graph Codes. |
| getLU.vi | parity check H matrix | LU-decomposed H, L, and U matrices | Performs the LU decomposition of H using the min-col criteria described by Neal. |
| LDPCdecode.vi | MaxIterations: max number of decoding iterations allowed; Noise Power: used for the SNR estimate Lc; H: the parity check matrix used by the encoder; logprobabilities: conditional probabilities | dataout: decoded bit sequence | Performs the log likelihood iterative LDPC decoding algorithm for BPSK LDPC encoding. Outputs the decoded bit sequence. |
| baseband_sim.vi | None | None | Performs the baseband simulation with LDPC error correction enabled. |
| GlobalParams.vi | None | None | Caches the LU-decomposed parity check matrix, the packet length, and whether LDPC is enabled. |
| fastLDPCEnc.m | msg: k-length binary message to be encoded; H: the (n−k)×n parity check matrix of an LDPC code of length n and dimension k; newH: the H matrix in approximate lower triangular form; g: the gap of the newH matrix (number of rows away from being lower triangular) | encoded: the encoded message with n−k parity checks prepended; g: the gap of the newH matrix | An unoptimized encoder based on the work of T. J. Richardson and R. L. Urbanke. Works for small matrices only. |
| makeLowerTriangular.m | H: the (n−k)×n parity check matrix of an LDPC code of length n and dimension k | newH: the H matrix in approximate lower triangular form | Greedy algorithm for triangularizing a matrix based on Richardson and Urbanke's work. |
| LDPCdecode.m | H: parity check matrix (size M×N); r: vector of log likelihood probabilities of the received symbols; Lc: SNR estimate; MaxLoop: max number of decoding iterations to attempt | x: decoded bit sequence | Iterative log-likelihood LDPC decoding algorithm as presented in Todd K. Moon's text Error Correcting Coding: Mathematical Methods and Algorithms. |
| priorProb.m | rcv: received BPSK packet; N0: noise power estimate; amp: amplitude of the BPSK signal | prior probabilities of the Gaussian channel | Calculates the prior probabilities of the transmitted symbols through a Gaussian channel for BPSK modulation. Used with LDPCdecode.m. |
1. T. J. Richardson and R. L. Urbanke, "Efficient encoding of low-density parity-check codes," IEEE Transactions on Information Theory, vol. 47, no. 2, pp. 638-656, Feb. 2001.
2. T. K. Moon, Error Correcting Coding: Mathematical Methods and Algorithms. New York: Wiley, 2005.
3. D. J. C. MacKay, Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
4. T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge University Press, 2008.
5. R. M. Neal, "Software for Low Density Parity Check Codes," University of Toronto, 8 Feb. 2006. Accessed 15 Nov. 2007. <http://www.cs.utoronto.ca/~radford/ftp/LDPC-2006-02-08/index.html>
6. D. J. C. MacKay, "Encyclopedia of Sparse Graph Codes," University of Cambridge. Accessed 15 Nov. 2007. <http://www.inference.phy.cam.ac.uk/mackay/codes/data.html>