Chapter 7
7.1 Introduction
Apart from better transmitter and receiver technology, mobile communications
require signal processing techniques that improve the link performance. Equalization,
diversity and channel coding are three such channel impairment mitigation techniques.
Equalization compensates for Inter Symbol Interference (ISI) created by multipath
within time dispersive channels. An equalizer within a receiver compensates for
the average range of expected channel amplitude and delay characteristics. In other
words, an equalizer is a filter at the mobile receiver whose impulse response is the inverse
of the channel impulse response. As such, equalizers find their use in frequency selective
fading channels. Diversity is another technique, used to compensate for fast fading,
and is usually implemented using two or more receiving antennas. It is usually
employed to reduce the depth and duration of the fades experienced by a receiver in
a flat fading channel. Channel coding improves mobile communication link performance
by adding redundant data bits to the transmitted message. At the baseband
portion of the transmitter, a channel coder maps a digital message sequence into
another specific code sequence containing a greater number of bits than the original
message. Channel coding helps the receiver recover data lost to deep fades or spectral
nulls. We discuss all three of these techniques in this chapter. A general framework
of the fading effects and their mitigation techniques is shown in Figure 7.1.
Figure 7.1: A general framework of fading effects and their mitigation techniques.
7.2 Equalization
ISI has been identified as one of the major obstacles to high speed data transmission
over mobile radio channels. If the modulation bandwidth exceeds the coherence
bandwidth of the radio channel (i.e., frequency selective fading), modulation pulses
are spread in time, causing ISI. An equalizer at the front end of a receiver compen-
sates for the average range of expected channel amplitude and delay characteristics.
As the mobile fading channels are random and time varying, equalizers must track
the time-varying characteristics of the mobile channel and therefore should be time-
varying or adaptive. An adaptive equalizer has two phases of operation: training
and tracking. These are as follows.
Training Mode:
• Initially, a known, fixed-length training sequence is sent by the transmitter so
that the receiver's equalizer can probe the channel and estimate filter coefficients to compensate for the channel.
Tracking Mode:
• When the training sequence is finished, the filter coefficients are near optimal.
• When user data are received, the adaptive algorithm of the equalizer tracks
the changing channel.
The signal received by the equalizer may be expressed as
y(t) = d(t) ∗ h(t) + nb(t) (7.1)
where d(t) is the transmitted signal, h(t) is the combined impulse response of the
transmitter, the channel and the RF/IF section of the receiver, and nb(t) denotes the
baseband noise.
If the impulse response of the equalizer is heq (t), the output of the equalizer is
ŷ (t) = d (t) ∗ h (t) ∗ heq (t) + nb (t) ∗ heq (t) = d (t) ∗ g (t) + nb (t) ∗ heq (t) . (7.2)
However, the desired output of the equalizer is d(t), the original source data.
Assuming nb(t) = 0, we require ŷ(t) = d(t), which leads to the following
equation:
g (t) = h (t) ∗ heq (t) = δ (t) (7.3)
The main goal of any equalization process is to satisfy this equation optimally. In
the frequency domain it can be written as
Heq (f ) H (f ) = 1 (7.4)
which indicates that an equalizer is actually an inverse filter of the channel. If the
channel is frequency selective, the equalizer enhances the frequency components with
small amplitudes and attenuates the strong frequencies in the received frequency
131
spectrum in order to provide a flat, composite received frequency response and
linear phase response. For a time varying channel, the equalizer is designed to track
the channel variations so that the above equation is approximately satisfied.
In a zero forcing equalizer, the equalizer coefficients cn are chosen to force the samples
of the combined channel and equalizer impulse response to zero. When each of the
delay elements provides a time delay equal to the symbol duration T, the frequency
response Heq (f ) of the equalizer is periodic with a period equal to the symbol rate
1/T. The combined response of the channel with the equalizer must satisfy Nyquist’s
criterion
Hch (f ) Heq (f ) = 1, |f | < 1/2T (7.5)
where Hch (f ) is the folded frequency response of the channel. Thus, an infinite
length zero-forcing ISI equalizer is simply an inverse filter which inverts the folded
frequency response of the channel.
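To make the inverse-filter idea concrete, the following sketch designs a zero-forcing equalizer in the frequency domain for a hypothetical three-tap channel; the channel taps and FFT size are illustrative assumptions, not values from the text.

```python
import numpy as np

h = np.array([1.0, 0.4, 0.2])       # assumed channel impulse response (no spectral nulls)
N_fft = 64                          # FFT length approximating the folded spectrum

H = np.fft.fft(h, N_fft)            # channel frequency response Hch(f)
Heq = 1.0 / H                       # zero-forcing condition: Hch(f) Heq(f) = 1
heq = np.real(np.fft.ifft(Heq))     # truncated equalizer impulse response

# The combined response g = h * heq should approximate a delta function.
g = np.convolve(h, heq)[:N_fft]
print(np.round(g[:5], 3))           # approximately [1, 0, 0, 0, 0]
```

Note that the division by H(f) is exactly where the noise-enhancement problem discussed below arises: any frequency where |H(f)| is small produces a large equalizer gain.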
Disadvantage: Since Heq (f ) is the inverse of Hch (f ), the inverse filter may excessively
amplify the noise at frequencies where the folded channel spectrum has high attenuation.
It is therefore rarely used for wireless links, except for static channels with high SNR
such as local wired telephone lines. The usual equalizer model follows a time varying or
adaptive structure, which is given next.
The basic structure of an adaptive filter is shown in Figure 7.2. This filter is called
the transversal filter, and in this case it has N delay elements, N+1 taps and N+1
tunable complex multipliers, called weights. These weights are updated continuously
by an adaptive algorithm. In the figure, the subscript k represents the discrete time
index. The adaptive algorithm is controlled by the error signal ek, which is
derived by comparing the output of the equalizer with some signal dk that is a
replica of the transmitted signal. The adaptive algorithm uses ek to iteratively adjust
the equalizer weights in a manner that minimizes the cost function. Let us denote
the received sequence vector at the receiver (the equalizer input) as
xk = [xk, xk−1, . . . , xk−N]^T and the tap weight vector as wk = [w0k, w1k, . . . , wNk]^T.
Figure 7.2: A generic adaptive equalizer.
Now, the output sequence of the equalizer yk is the inner product of xk and wk, i.e.,
yk = xk^T wk. (7.8)
The error signal is then
ek = dk − yk = dk − xk^T wk. (7.9)
Assuming dk and xk to be jointly stationary, the Mean Square Error (MSE) is given
as
MSE = E[ek^2] = E[(dk − yk)^2] = E[(dk − xk^T wk)^2]
= E[dk^2] + w^T E[xk xk^T] w − 2 E[dk xk^T] w (7.10)
where the weight vector w is assumed to be held at its optimum value and therefore
has been taken out of the E(·) operator. The MSE can then be expressed as
MSE = ξ = σd^2 + w^T R w − 2 p^T w (7.11)
where the signal variance is σd^2 = E[dk^2] and the cross correlation vector p between the
desired response and the input signal is defined as
p = E[dk xk] = E[dk xk   dk xk−1   dk xk−2 · · · dk xk−N]^T, (7.12)
and the input correlation matrix is defined as
R = E[xk xk^T]. (7.13)
Minimizing the MSE with respect to the weight vector then yields the optimum
Wiener solution
wk = R^(−1) p. (7.14)
Since an adaptive equalizer compensates for an unknown and time varying channel,
it requires a specific algorithm to update the equalizer coefficients and track the
channel variations. Factors which determine an algorithm's performance are:
Rate of convergence: The number of iterations required for the algorithm, in
response to stationary inputs, to converge close enough to the optimal solution. A fast
rate of convergence allows the algorithm to adapt rapidly to a stationary environment
of unknown statistics.
Misadjustment: Provides a quantitative measure of the amount by which the
final value of mean square error, averaged over an ensemble of adaptive filters,
deviates from an optimal mean square error.
Computational complexity: Number of operations required to make one com-
plete iteration of the algorithm.
Numerical properties: Inaccuracies like round-off noise and representation
errors in the computer, which influence the stability of the algorithm.
Three classic equalizer algorithms form the basis of most of today's wireless
standards. These include the Zero Forcing (ZF) algorithm, the Least Mean Square
(LMS) algorithm, and the Recursive Least Squares (RLS) algorithm. Below, we discuss
a few of these adaptive algorithms.
Least Mean Square (LMS) Algorithm
The LMS algorithm is the simplest algorithm, based on minimization of the MSE between
the desired equalizer output and the actual equalizer output, as discussed earlier.
Here the system error, the MSE and the optimal Wiener solution remain the same
as given in the adaptive equalization framework above.
In practice, the minimization of the MSE is carried out recursively, and may be
performed by use of the stochastic gradient algorithm. It is the simplest equalization
algorithm and requires only 2N+1 operations per iteration. Letting the variable n
denote the iteration index, the error and the filter weights are computed iteratively as
e(n) = d(n) − y(n) (7.15)
wk(n + 1) = wk(n) + µ e(n) x(n − k) (7.16)
where the subscript k denotes the kth delay stage in the equalizer and µ is the step
size which controls the convergence rate and stability of the algorithm.
The LMS equalizer maximizes the signal to distortion ratio at its output within
the constraints of the equalizer filter length. If an input signal has a time dispersion
characteristic that is greater than the propagation delay through the equalizer, then
the equalizer will be unable to reduce distortion. The convergence rate of the LMS
algorithm is slow because there is only one parameter, the step size, that
controls the adaptation rate. To prevent the adaptation from becoming unstable,
the value of µ is chosen such that
0 < µ < 2 / (λ1 + λ2 + · · · + λN) (7.17)
where λi is the i-th eigenvalue of the covariance matrix R.
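As a concrete illustration, the sketch below trains an LMS equalizer with the update (7.16) on a known training sequence; the three-tap channel, noise level, equalizer length and step size are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training scenario: BPSK symbols through a 3-tap channel.
h = np.array([1.0, 0.4, 0.2])                # channel impulse response (assumed)
d = rng.choice([-1.0, 1.0], size=5000)       # training sequence d_k known at receiver
x = np.convolve(d, h)[: len(d)]              # received samples
x += 0.05 * rng.standard_normal(len(d))      # baseband noise n_b

N = 10                                       # number of delay stages (assumed)
w = np.zeros(N + 1)                          # equalizer tap weights
mu = 0.01                                    # step size, within the bound (7.17)

for n in range(N, len(d)):
    xn = x[n - N : n + 1][::-1]              # input vector [x_n, ..., x_(n-N)]
    y = w @ xn                               # equalizer output y_k = x_k^T w_k
    e = d[n] - y                             # error against the training symbol
    w += mu * e * xn                         # LMS weight update, equation (7.16)

y_all = np.convolve(x, w)[: len(d)]
print("training MSE:", np.mean((d[N:] - y_all[N:]) ** 2))
```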
Normalized LMS (NLMS) Algorithm
In the NLMS algorithm, the step size is normalized by the instantaneous energy of
the input vector, which makes the convergence behavior largely insensitive to the
input signal level. The step size at iteration n is
µ(n) = β / (‖x(n)‖^2 + ε) (7.18)
and the weight update becomes
wk(n + 1) = wk(n) + [β / (‖x(n)‖^2 + ε)] e(n) x(n − k) (7.19)
where β is a normalized step size and ε is a small positive constant that prevents
division by zero when the input energy is small.
7.3 Diversity
Figure 7.3: Receiver selection diversity, with M receivers.
Space Diversity
(a) Selection Diversity:
In selection diversity, shown in Figure 7.3, the receiver monitors M independent
fading branches and selects the branch with the highest instantaneous SNR. Consider
M independent Rayleigh fading channels available at the receiver, each with the same
average SNR
SNR = Γ = (Eb/N0) α^2 (7.20)
where Eb is the average carrier energy, N0 is the noise PSD, and α is a random variable
used to represent the amplitude values of the fading channel.
The instantaneous SNR (γi) is usually defined as γi = (instantaneous signal power
per branch)/(mean noise power per branch). For Rayleigh fading channels, α has a
Rayleigh distribution, so α^2 and consequently γi have a chi-square distribution
with two degrees of freedom. The probability density function for such a channel is
p(γi) = (1/Γ) e^(−γi/Γ). (7.21)
The probability that any single branch has an instantaneous SNR less than some
defined threshold γ is
Pr[γi ≤ γ] = ∫₀^γ p(γi) dγi = ∫₀^γ (1/Γ) e^(−γi/Γ) dγi = 1 − e^(−γ/Γ) = P(Γ). (7.22)
Similarly, the probability that all M independent diversity branches receive signals
which are simultaneously less than some specific SNR threshold γ is
Pr[γ1, γ2, . . . , γM ≤ γ] = (1 − e^(−γ/Γ))^M = PM(γ), (7.23)
and the probability that at least one branch achieves an SNR above the threshold is
Pr[γi > γ for some i] = 1 − (1 − e^(−γ/Γ))^M, (7.24)
which is higher than the corresponding probability for a single branch receiver. This
expression shows the advantage obtained when selection diversity is used.
To determine the average signal to noise ratio, we first find the pdf of γ as
pM(γ) = (d/dγ) PM(γ) = (M/Γ) (1 − e^(−γ/Γ))^(M−1) e^(−γ/Γ). (7.25)
The average SNR is then
γ̄ = ∫₀^∞ γ pM(γ) dγ = Γ ∫₀^∞ M x (1 − e^(−x))^(M−1) e^(−x) dx (7.26)
where x = γ/Γ and Γ is the average SNR for a single branch, when no diversity is
used.
Evaluating this integral gives γ̄ = Γ (1 + 1/2 + · · · + 1/M), which shows an average
improvement in the link margin without requiring extra transmitter power or complex
circuitry. Selection diversity is also easy to implement, as it needs only a monitoring
station and an antenna switch at the receiver. It is not an optimal diversity technique,
however, since it does not use all of the possible branches simultaneously.
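A short Monte Carlo check of this result, under assumed illustrative parameters: each branch SNR is drawn from the exponential (chi-square with two degrees of freedom) distribution of (7.21), and the simulated mean of the strongest branch is compared against Γ(1 + 1/2 + · · · + 1/M).

```python
import numpy as np

rng = np.random.default_rng(1)

Gamma = 1.0                      # mean SNR per branch, linear scale (assumed)
M = 4                            # number of diversity branches (assumed)
trials = 200_000

# Branch SNRs are exponential with mean Gamma, per equation (7.21).
gamma_i = rng.exponential(Gamma, size=(trials, M))
gamma_sel = gamma_i.max(axis=1)  # selection diversity keeps the best branch

print("simulated mean SNR   :", gamma_sel.mean())
print("Gamma*(1+1/2+...+1/M):", Gamma * sum(1.0 / k for k in range(1, M + 1)))
```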
(b) Feedback or Scanning Diversity:
Feedback or scanning diversity is very similar to selection diversity except that,
instead of always using the best of the M signals, the signals are scanned in a fixed
sequence until one is found to be above a predetermined threshold. This signal is then
received until it falls below the threshold, at which point the scanning process is
reinitiated. The resulting fading statistics are somewhat inferior to those of selection
diversity, but the advantage is that feedback diversity is very simple to implement
(only one receiver is required).
(c) Maximal Ratio Combining:
In maximal ratio combining, the signals from all of the M branches are weighted
according to their individual signal voltage to noise power ratios and then summed.
The individual signals must be co-phased before being summed, which generally
requires an individual receiver and phasing circuit for each antenna element. This
technique produces an output SNR equal to the sum of the individual SNRs, and
thus has the advantage of producing an output with an acceptable SNR even when
none of the individual signals is itself acceptable. Modern DSP techniques and digital
receivers are making this optimal form of diversity practical, as it gives the best
statistical reduction of fading of any known linear diversity combiner. In terms of
voltage signals,
rm = Σ_{i=1}^{M} Gi ri (7.27)
where Gi is the gain and ri is the voltage signal from each branch.
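As a small illustration of (7.27), the sketch below combines co-phased branch voltages with gains Gi proportional to the branch amplitude divided by the noise power; the number of branches, fading scale and noise variance are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

M = 4                                     # diversity branches (assumed)
noise_var = 0.1                           # noise power per branch (assumed)
alpha = rng.rayleigh(scale=0.7, size=M)   # branch fading amplitudes (illustrative)

# Co-phased received branch voltages for a unit transmitted symbol.
r = alpha + np.sqrt(noise_var) * rng.standard_normal(M)

G = alpha / noise_var                     # MRC gains: signal voltage / noise power
r_m = np.sum(G * r)                       # combined output, equation (7.27)

branch_snr = alpha**2 / noise_var
print("combined voltage:", r_m)
print("output SNR = sum of branch SNRs:", branch_snr.sum())
```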
(d) Equal Gain Combining:
In some cases it is not convenient to provide the variable weighting capability
required for true maximal ratio combining. In such cases, the branch weights are
all set to unity, but the signals from each branch are co-phased to provide equal gain
combining diversity. This allows the receiver to exploit signals that are simultaneously
received on each branch. The performance of this method is marginally inferior to
maximal ratio combining and superior to selection diversity.
Figure 7.4: Maximal ratio combining technique.
Assuming all the Gi to be unity, we have
rm = Σ_{i=1}^{M} ri. (7.28)
Polarization Diversity
Polarization diversity relies on the decorrelation of the two receive ports to achieve
diversity gain. The two receiver ports must remain cross-polarized. Polarization
diversity at a base station does not require antenna spacing. It combines pairs of
antennas with orthogonal polarizations (i.e. horizontal/vertical, ±45° slant,
left-hand/right-hand circular polarization, etc.). Reflected signals can undergo
polarization changes depending on the channel. By pairing two complementary
polarizations, this scheme can immunize a system from polarization mismatches that
would otherwise cause signal fades. Polarization diversity has proven valuable at
radio and mobile communication base stations since it is less susceptible to the near
random orientations of transmitting antennas.
Frequency Diversity
In frequency diversity, the same information signal is transmitted on carriers whose
frequency separation exceeds the coherence bandwidth of the channel, so that the
different frequency copies fade independently. Its disadvantage is that it requires
spare bandwidth, and as many receivers as there are channels used for the frequency
diversity.
Time Diversity
In time diversity, signals representing the same information are sent over the
same channel at different times. Time diversity repeatedly transmits the information
at time spacings that exceed the coherence time of the channel, so that multiple
repetitions of the signal are received with independent fading conditions, thereby
providing diversity. A modern implementation of time diversity involves the use of the
RAKE receiver for spread spectrum CDMA, where the multipath channel provides
redundancy in the transmitted message. Two important applications of time diversity
are discussed below.
Application 1: RAKE Receiver
In CDMA spread spectrum systems, the spreading codes are designed to provide
very low correlation between successive chips. Thus, propagation delay spread in the
radio channel merely provides multiple versions of the transmitted signal at the
receiver. Multipath components delayed by more than a chip duration appear like
uncorrelated noise at a CDMA receiver, and the receiver may combine the time
delayed versions of the original signal to improve the signal to noise ratio.
Figure 7.5: RAKE receiver.
A RAKE receiver collects the time shifted versions of the original signal by providing
a separate correlation receiver for each of the M strongest multipath components.
The output of each correlator is weighted to provide a better estimate of the
transmitted signal than is provided by a single component. Demodulation and bit
decisions are based on the weighted outputs of the correlators. A schematic of a
RAKE receiver is shown in Figure 7.5.
Application 2: Interleaver
In the encoded data bits, some source bits are more important than others and
must be protected from errors. Many speech coders produce several important bits
in succession. An interleaver spreads these bits out in time so that if there is a deep
fade or noise burst, the important bits from a block of source data are not corrupted
at the same time. By spreading the source bits over time, it becomes possible to make
effective use of error control coding. An interleaver can take one of two forms, a block
structure or a convolutional structure.
A block interleaver formats the encoded data into a rectangular array of m rows
and n columns, and interleaves nm bits at a time. Each row contains a word of
source data having n bits. An interleaver of degree m consists of m rows. Source bits
are placed into the interleaver by sequentially increasing the row number for each
successive bit, thereby filling the columns. The interleaved source data is then read
out row-wise and transmitted over the channel. This has the effect of separating
the original source bits by m bit periods. At the receiver, the de-interleaver stores the
received data by sequentially increasing the row number of each successive bit, and
then clocks out the data row-wise, one word at a time. Convolutional interleavers
are ideally suited for use with convolutional codes.
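A minimal sketch of this block interleaver and de-interleaver, assuming illustrative values of m and n: successive source bits are written down the columns and the array is read out row-wise, so bits adjacent in the transmitted stream were m positions apart in the source.

```python
import numpy as np

m, n = 4, 5                             # degree (rows) and word length (columns), assumed
bits = np.arange(m * n)                 # stand-in for nm encoded source bits

# Write bits by sequentially increasing the row number (filling columns).
table = bits.reshape(n, m).T            # m x n array; column j holds bits j*m .. j*m+m-1
interleaved = table.reshape(-1)         # read out row-wise for transmission

# The de-interleaver reverses the operation at the receiver.
deinterleaved = interleaved.reshape(m, n).T.reshape(-1)
assert (deinterleaved == bits).all()

print(interleaved)                      # adjacent transmitted bits were m apart in the source
```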
7.4 Channel Coding
In channel coding, redundant data bits are added to the transmitted message so
that if an instantaneous fade occurs in the channel, the data may still be recovered
at the receiver without a retransmission request. A channel coder maps
the transmitted message into another specific code sequence containing more bits.
The coded message is then modulated for transmission over the wireless channel.
Channel coding is used by the receiver to detect or correct errors introduced by the
channel. Codes that are used only to detect errors are error detection codes, while
error correction codes can both detect and correct errors.
In 1948, Shannon showed that by proper encoding of the information, errors induced
by a noisy channel can be reduced to any desired level without sacrificing the rate
of information transfer. Shannon's channel capacity formula is applicable to the
AWGN channel and is given by
C = B log2(1 + S/N) = B log2(1 + P/(N0 B)) = B log2(1 + Eb Rb/(N0 B)) (7.29)
where C is the channel capacity (bit/s), B is the channel bandwidth (Hz), P is the
received signal power (W), N0 is the single sided noise power spectral density (W/Hz),
Eb is the average bit energy and Rb is the transmission bit rate.
Equation (7.29) can be normalized by the bandwidth B and is given as
C/B = log2(1 + Eb Rb/(N0 B)) (7.30)
where the ratio C/B is denoted the bandwidth efficiency. The introduction of redundant
bits increases the transmission bit rate and hence the bandwidth requirement,
which reduces the bandwidth efficiency of the link in high SNR conditions, but
provides excellent BER performance at low SNR values. This leads to the following
two inferences.
Corollary 1: While operating within the maximum channel capacity, the introduction
of redundant bits increases the transmission rate and hence the bandwidth
requirement, decreasing the bandwidth efficiency, but it also decreases the BER.
Corollary 2: If data redundancy is not introduced in a wideband noisy environment,
error free performance is not possible (for example, CDMA communication in 3G
mobile phones).
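A small numerical illustration of (7.29) and (7.30); the bandwidth, Eb/N0 and bit rate below are assumed values chosen for the example, not figures from the text.

```python
import math

B = 1.25e6        # channel bandwidth in Hz (assumed)
Eb_N0 = 4.0       # Eb/N0 as a linear ratio (assumed)
Rb = 1.0e6        # transmission bit rate in bit/s (assumed)

C = B * math.log2(1.0 + Eb_N0 * Rb / B)   # channel capacity, equation (7.29)
print("capacity C = %.2f Mbit/s" % (C / 1e6))
print("bandwidth efficiency C/B = %.2f bit/s/Hz" % (C / B))
```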
A channel coder operates on digital message (or source) data by encoding the source
information into a code sequence for transmission through the channel. The error
correction and detection codes are classified into three groups based on their struc-
ture.
1. Block Code
2. Convolutional Code
3. Concatenated Code.
Block codes are forward error correction (FEC) codes that enable a limited number
of errors to be detected and corrected without retransmission. Block codes can be
used to improve the performance of a communications system when other means of
improvement (such as increasing transmitter power or using a more sophisticated
demodulator) are impractical.
In block codes, parity bits are added to blocks of message bits to make codewords
or code blocks. In a block encoder, k information bits are encoded into n code bits.
A total of n−k redundant bits are added to the k information bits for the purpose of
detecting and correcting errors. The block code is referred to as an (n, k) code, and
the rate of the code is defined as Rc = k/n and is equal to the rate of information
divided by the raw channel rate.
(b) Code Distance (d): The distance of a code is the number of elements in which
two codewords Ci and Cj differ, denoted by d(Ci, Cj). If the code used is binary, the
distance is known as the 'Hamming distance'. For example, d(10110, 11011) = 3. If
the code C consists of a set of codewords, then the minimum distance of the code is
given by dmin = min{d(Ci, Cj)}.
(c) Code Weight (w): The weight of a codeword is given by the number of nonzero
elements in it. For a binary code, the weight is basically the number of 1s in the
codeword. For example, the weight of the codeword 101101 is 4.
Ex 1: The block code C = {00000, 10100, 11110, 11001} can be used to represent two
bit binary numbers as follows:
• 00 – 00000
• 01 – 10100
• 10 – 11110
• 11 – 11001
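The distance measures defined above are easy to compute directly; the sketch below reproduces the d(10110, 11011) example and finds the minimum distance of the code of Ex 1.

```python
# Hamming distance between two equal-length binary codewords, d(Ci, Cj).
def hamming_distance(ci: str, cj: str) -> int:
    return sum(a != b for a, b in zip(ci, cj))

print(hamming_distance("10110", "11011"))        # 3, as in the example above

# Minimum distance: smallest pairwise distance over distinct codewords.
code = ["00000", "10100", "11110", "11001"]      # the code of Ex 1
dmin = min(hamming_distance(a, b) for a in code for b in code if a != b)
print(dmin)                                      # 2 for this code
```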
(a) Linearity: Suppose Ci and Cj are two codewords in an (n, k) block code, and let
α1 and α2 be any two elements selected from the alphabet. The code is said to
be linear if and only if α1 Ci + α2 Cj is also a codeword. A linear code must contain
the all-zero codeword.
(b) Systematic: A systematic code is one in which the parity bits are appended
to the end of the information bits. For an (n, k) code, the first k bits are identical
to the information bits, and the remaining n − k bits of each code word are linear
combinations of the k information bits.
(c) Cyclic: Cyclic codes are a subset of the class of linear codes which satisfy the
following cyclic shift property: If C = [Cn−1 , Cn−2 , ..., C0 ] is a code word of a cyclic
code, then [Cn−2 , Cn−3 , ..., C0 , Cn−1 ], obtained by a cyclic shift of the elements of C,
is also a code word. That is, all cyclic shifts of C are code words.
In this context, it is important to know about Finite Field or Galois Field.
Let F be a finite set of elements on which two binary operations – addition (+) and
multiplication (.) are defined. The set F together with the two binary operations is
called a field if the following conditions are satisfied:
1. F is a commutative group under addition.
2. The set of nonzero elements in F is a commutative group under multiplication.
3. Multiplication is distributive over addition; that is, for any three elements a, b,
and c in F, a(b + c) = ab + ac
4. Identity elements 0 and 1 must exist in F satisfying a + 0 = a and a.1 = a.
5. For any a in F, there exists an additive inverse (−a) such that a + (−a) = 0.
6. For any a in F, there exists a multiplicative inverse a−1 such that a.a−1 = 1.
Depending upon the number of elements in it, a field is called either a finite or an
infinite field. The examples of infinite field include Q (set of all rational numbers),
R (set of all real numbers), C (set of all complex numbers) etc. A field with a finite
number of elements (say q) is called a ’Galois Field’ and is denoted by GF(q). A
finite field entity p(x), called a polynomial, is introduced to map all symbols (with
several bits) to the element of the finite field. A polynomial is a mathematical
expression
p (x) = p0 + p1 x + ... + pm xm (7.31)
where the symbol x is called the indeterminate and the coefficients p0 , p1 , ..., pm are
the elements of GF(q). The coefficient pm is called the leading coefficient. If pm
is not equal to zero, then m is called the degree of the polynomial, denoted as deg
p(x). A polynomial is called monic if its leading coefficient is unity. The division
algorithm states that for every pair of polynomials a(x) and b(x) in F(x), there
exists a unique pair of polynomials q(x), the quotient, and r(x), the remainder,
such that a(x) = q(x)b(x) + r(x), where deg r(x) < deg b(x). A polynomial p(x) in
F(x) is said to be reducible if p(x) = a(x)b(x) with both factors of degree at least
one; otherwise it is called irreducible. A
monic irreducible polynomial of degree at least one is called a prime polynomial.
An irreducible polynomial p(x) of degree ‘m’ is said to be primitive if the smallest
integer ‘n’ for which p(x) divides xn +1 is n = 2m −1. A typical primitive polynomial
is given by p(x) = xm + x + 1.
Cyclic codes are the specific type of code which obeys both the cyclic property
and polynomial operations. They are a subset of the class of linear codes which
satisfy the cyclic property, and they possess a considerable amount of structure which
can be exploited. A cyclic code can be generated by using a generator polynomial
g(p) of degree (n−k). The generator polynomial of an (n, k) cyclic code is a factor of
pn + 1 and has the form
g(p) = p^(n−k) + g_(n−k−1) p^(n−k−1) + · · · + g1 p + 1. (7.32)
The message polynomial is formed from the information bits as
x(p) = x_(k−1) p^(k−1) + · · · + x1 p + x0 (7.33)
where (xk−1, . . . , x0) represents the k information bits. The resultant codeword c(p)
can be written as
c (p) = x (p) g (p) (7.34)
(a) Single Parity Check Code: In single parity check codes (example: ASCII code),
an overall single parity check bit is appended to k information bits. Let the
information bit word be (b1, b2, . . . , bk); then the parity check bit p = b1 + b2 + · · · + bk
(modulo 2) is appended at the (k+1)th position, making the overall codeword C =
(b1, b2, . . . , bk, p). The parity bit may follow an even parity or an odd parity pattern.
All error patterns that change an odd number of bits are detectable, while error
patterns that change an even number of bits are not. Moreover, such codes can only
detect errors; they cannot correct them.
Ex. 2: Consider an (8,7) ASCII code with information codeword (0, 1, 0, 1, 1, 0, 0)
encoded with an overall even parity pattern. The overall codeword is (0, 1, 0,
1, 1, 0, 0, 1), where the last bit is the parity bit. If there is a single error in bit 3,
giving (0, 1, 1, 1, 1, 0, 0, 1), then the receiver can easily check that there is now an
odd number of 1s in the codeword and hence that there is an error. On the other
hand, if there are two errors, say in bits 3 and 5, giving (0, 1, 1, 1, 0, 0, 0, 1), then the
error will not be detected.
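The sketch below reproduces Ex. 2: a single error violates even parity and is detected, while a second error restores even parity and goes undetected.

```python
def add_even_parity(bits):
    return bits + [sum(bits) % 2]          # append p = b1 + ... + bk (modulo 2)

def parity_ok(codeword):
    return sum(codeword) % 2 == 0          # even parity must hold at the receiver

word = [0, 1, 0, 1, 1, 0, 0]
sent = add_even_parity(word)               # [0, 1, 0, 1, 1, 0, 0, 1]

one_err = sent.copy()
one_err[2] ^= 1                            # single error in bit 3: detectable
two_err = one_err.copy()
two_err[4] ^= 1                            # second error in bit 5: undetectable

print(parity_ok(sent), parity_ok(one_err), parity_ok(two_err))   # True False True
```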
After decoding a received codeword, let pc be the probability that the decoder
gives the correct codeword C, pe the probability that the decoder gives an incorrect
codeword C′ ≠ C, and pf the probability that the decoder fails to give a codeword.
In this case, we can write pc + pe + pf = 1.
If in an n-bit codeword there are j errors and p is the bit error probability,
then the probability of obtaining j errors in this codeword is Pj = nCj p^j (1 − p)^(n−j).
Using this formula, for any (n, n − 1) single parity check block code, we get
• pc = P0,
• pf = P1 + P3 + P5 + · · · (an odd number of errors is detected, so decoding fails),
• pe = P2 + P4 + P6 + · · · (an even number of errors passes undetected).
(b) Product Codes: In a product code, parity bits are added along both the rows
and the columns of a rectangular array of information bits, so that a single error can
be located by the row and the column in which the parity checks fail. Ex: Suppose
the transmitted codewords are
C1 = [1 0 1 0]
C2 = [0 0 1 1]
C3 = [1 1 0 0]
C4 = [0 1 0 1].
Now if an error occurs in the second bit of the second codeword, the received code-
words at the receiver would then be
C1 = [1 0 1 0]
C2 = [0 1 1 1] ←
C3 = [1 1 0 0]
C4 = [0 1 0 1]
and these would indicate the corresponding row and column position of the erroneous
bit with vertical and horizontal parity check. Thus the bit can be corrected. Here
we get a horizontal (4, 3) codeword and a vertical (4, 3) codeword and concatenating
them we get a (16, 9) product code. In general, a product code can be formed as
(n1 , k1 ) & (n2 , k2 ) → (n1 n2 , k1 k2 ).
(c) Repetition Codes: In an (n,1) repetition code, each information bit is repeated
n times (n should be odd) and transmitted. At the receiver, the majority decoding
principle is used to obtain the information bit: if, in a group of n received bits, 1
occurs more times than 0, the information bit is decoded as 1. Such a majority
scheme works properly only if the noise corrupts fewer than n/2 of the bits.
Ex 3: Consider a (3,1) binary repetition code.
• For input bit 0, the codeword is (0 0 0) and for input bit 1, the codeword is
(1 1 1).
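A minimal majority decoder for this (3,1) repetition code:

```python
def repeat_encode(bit, n=3):
    return [bit] * n                     # (n,1) repetition encoding

def majority_decode(received):
    return 1 if sum(received) > len(received) / 2 else 0

print(majority_decode([1, 0, 1]))        # one corrupted bit: still decodes to 1
print(majority_decode([1, 0, 0]))        # two corrupted bits (> n/2): decodes wrongly to 0
```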
(d) Hamming Codes: A binary (n, k) Hamming code has a block length n = 2^m − 1
and k = 2^m − 1 − m information bits used to form the n bit codeword, where m is
any positive integer. The number of parity symbols is n − k = m. Thus, a
codeword is represented by C = [i1, . . . , ik, p1, . . . , pn−k]. This is quite a useful code in
communication which is illustrated via the following example.
Ex 4: Consider a (7, 4) Hamming code. With three parity bits we can correct exactly
one error. The parity bits may follow such a modulo 2 arithmetic:
p1 = i1 + i2 + i3,
p2 = i2 + i3 + i4,
p3 = i1 + i2 + i4,
so that a valid codeword C = [i1, i2, i3, i4, p1, p2, p3] satisfies
p1 + i1 + i2 + i3 = 0
p2 + i2 + i3 + i4 = 0
p3 + i1 + i2 + i4 = 0.
For a received vector V = [v1, v2, . . . , v7], the decoder computes the syndrome
S = [S1 S2 S3] as
S1 = v1 + v2 + v3 + v5
S2 = v2 + v3 + v4 + v6
S3 = v1 + v2 + v4 + v7.
It is obvious that in case of no error, the syndrome vector is equal to zero. Corre-
sponding to this syndrome vector, there is an error vector e which can be obtained
from a syndrome table and finally the required codeword is taken as C = V + e. In
a nutshell, to obtain the required codeword, we perform the following steps:
1. Calculate S from decoder input V.
2. From syndrome table, obtain e corresponding to S.
3. The required codeword is then C = V + e.
A few cases are given below to illustrate the syndrome decoding.
1. Let C = [0 1 1 1 0 1 0] and V = [0 1 1 1 0 1 0]. This implies S = [0 0 0], and it
corresponds to e = [0 0 0 0 0 0 0]. Thus, C = V + e = [0 1 1 1 0 1 0].
2. Let C = [1 1 0 0 0 1 0] and V = [1 1 0 1 0 1 0]. This means S = [0 1 1], from which
we get e = [0 0 0 1 0 0 0], indicating a single bit error in the received bit v4. This is
corrected by performing the operation C = V + e.
3. Another interesting case: let C = [0 1 0 1 1 0 0] and V = [0 0 1 1 1 0 1] (V differs
from C in the second, third and seventh bits). This makes S = [0 0 0] and as a result
e = [0 0 0 0 0 0 0]: V is itself a valid codeword, so the errors go undetected. Since
C ≠ V, the operation C = V + e fails to recover the transmitted codeword, and the
errors cannot be corrected. Therefore a (7,4) Hamming code can correct only single
bit errors.
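The syndrome decoding steps above can be written compactly in matrix form, with the rows of the check matrix H taken from the three syndrome equations; the sketch below reproduces case 2.

```python
import numpy as np

# Check matrix for the (7,4) Hamming code of Ex 4.
# Codeword layout: C = [i1, i2, i3, i4, p1, p2, p3].
H = np.array([[1, 1, 1, 0, 1, 0, 0],     # S1 = v1 + v2 + v3 + v5
              [0, 1, 1, 1, 0, 1, 0],     # S2 = v2 + v3 + v4 + v6
              [1, 1, 0, 1, 0, 0, 1]])    # S3 = v1 + v2 + v4 + v7

def decode(v):
    s = H @ v % 2                        # syndrome S = [S1, S2, S3]
    if not s.any():
        return v                         # S = 0: accept the received word
    e = np.zeros(7, dtype=int)
    pos = np.where((H.T == s).all(axis=1))[0][0]   # syndrome-table lookup
    e[pos] = 1
    return (v + e) % 2                   # corrected codeword C = V + e

V = np.array([1, 1, 0, 1, 0, 1, 0])      # case 2: single error in v4
print(decode(V))                         # recovers [1 1 0 0 0 1 0]
```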
(e) Golay Codes: Golay codes are linear binary (23,12) codes with a minimum
distance of seven and an error correction capability of three bits. This is a special,
one of a kind code in that it is the only nontrivial example of a perfect binary code:
every possible received word lies within Hamming distance three of exactly one
codeword, which makes maximum likelihood decoding straightforward.
(f) BCH Codes: BCH codes form one of the most powerful known classes of linear
cyclic block codes, known for their multiple error correcting ability and the ease
of encoding and decoding. Their block length is n = 2^m − 1 for m ≥ 3, and the
number of errors that they can correct is bounded by t < (2^m − 1)/2. Binary BCH
codes can be generalized to create classes of non binary codes which use m bits per
code symbol.
(g) Reed Solomon (RS) Codes: Reed-Solomon codes are an important subset of
the BCH codes with a wide range of applications in digital communication and data
storage. Typical application areas are storage devices (CD, DVD etc.), wireless
communications, digital TV and high speed modems. The RS coding system is based
on groups of bits, such as bytes, rather than individual 0s and 1s. This feature makes
it particularly good at dealing with bursts of errors: six consecutive bit errors, for
example, can affect at most two symbols. The block length of these codes is n = 2^m − 1,
and can be extended to 2^m or 2^m + 1. The number of parity symbols that must be
used to correct e errors is n − k = 2e, and the minimum distance is dmin = 2e + 1,
which is the largest possible dmin of any linear code with the same encoder input
and output block lengths.
For US-CDPD, the RS code is used with m = 6. So each of the 64 field elements
is represented by a 6 bit symbol. For this case, we get the primitive polynomial as
p(x) = x6 + x + 1. Equating p(x) to 0 implies x6 = x + 1.
The 6 bit representation of the finite field elements is given in Table 7.1. The table
elements continue up to α62. Together with the all-zero element, required for the
additive (linearity) property of the field, these 63 nonzero elements make up the 64
elements of GF(64).
Table 7.1: Finite field elements for US-CDPD

             α5  α4  α3  α2  α1  α0
1             0   0   0   0   0   1
α1            0   0   0   0   1   0
α2            0   0   0   1   0   0
·             ·   ·   ·   ·   ·   ·
α6 = α + 1    0   0   0   0   1   1
·             ·   ·   ·   ·   ·   ·
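The entries of Table 7.1 can be generated directly from the reduction rule x^6 = x + 1: each successive power of α is a left shift of the previous one, with any overflowing x^6 term replaced by x + 1. A minimal sketch:

```python
# Generate the 6-bit representations of 1, alpha, alpha^2, ... in GF(64),
# using p(x) = x^6 + x + 1, i.e. the reduction x^6 = x + 1.
elem = 1                                   # alpha^0, bits b5..b0 packed in an int
for i in range(8):                         # first few powers; the table runs to alpha^62
    print("alpha^%d = %s" % (i, format(elem, "06b")))
    elem <<= 1                             # multiply by alpha
    if elem & 0b1000000:                   # a degree-6 term appeared:
        elem = (elem ^ 0b1000000) ^ 0b11   # replace x^6 by x + 1
```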
Convolutional Codes
Figure 7.6: A convolutional encoder with n=2 and k=1.
Figure 7.8: Tree diagram representation of a convolutional encoder.
Each transition is labeled with the input bit b and the corresponding
output bits (c1 c2). Again, it is not difficult to verify from the state diagram that an
input information sequence b = (1011) generates an encoded sequence c = (11, 10,
00, 01).
Tree Diagram:
The tree diagram shows the structure of the encoder in the form of a tree with the
branches representing the various states and the outputs of the coder. The encoded
bits are labeled on the branches of the tree. Given an input sequence, the encoded
sequence can be directly read from the tree. As an example, an input sequence
(1011) results in the encoded sequence (11, 10, 00, 01).
Figure 7.10: Block diagram of a turbo encoder.
Trellis Diagram:
The tree diagram reveals that the structure repeats itself once the number of stages
is greater than the constraint length. It is observed that all branches emanating from
two nodes having the same state are identical in the sense that they generate identical
output sequences. This means that the two nodes having the same label can be
merged. Doing this throughout the tree diagram, we obtain another diagram
called a trellis diagram, which is a more compact representation.
Turbo Codes:
A turbo encoder, shown in Figure 7.10, is formed from two component encoders
separated by an interleaver. The component encoders may be binary or m-binary
encoders, and they may be systematic or non-systematic. If the component encoders
are not identical, then the code is called an asymmetric turbo code.
7.5 Conclusion
Although many advanced and powerful techniques for mitigating fading effects,
such as space diversity in MIMO systems, space-time block coding, MIMO
equalization and BLAST architectures, have emerged in modern wireless
communication, the topics discussed in this chapter are the basic building
blocks of all such techniques, which is what motivates their discussion here.
The effectiveness of the discussed topics will become clearer in the next chapter,
in the context of different multiple access techniques.