
Digital Signal Processing

Content

1. Introduction to DSP
2. Discrete-time Signals and Systems
3. z-transform
4. Sampling Continuous Time Signals
5. Discrete Fourier Transform
6. Transform Analysis of LTI Systems
7. Filter Design
Introduction to DSP
A signal is any variable that carries information. Examples of the types of
signals of interest are
 Speech (telephony, radio, everyday communication)
 Biomedical signals (EEG brain signals)
 Sound and music
 Video and image
 Radar signals (range and bearing).
Digital signal processing (DSP) is concerned with the digital representation of
signals and the use of digital processors to analyse, modify, or extract
information from signals.
Many signals in DSP are derived from analogue signals which have been
sampled at regular intervals and converted into digital form. The key
advantages of DSP over analogue processing are
 Guaranteed accuracy (determined by the number of bits used)
 Perfect reproducibility
 No drift in performance due to temperature or age
 Takes advantage of advances in semiconductor technology
 Greater flexibility (can be reprogrammed without modifying hardware)
 Superior performance (linear phase response possible, and filtering
algorithms can be made adaptive)
 Sometimes information may already be in digital form.
There are however (still) some disadvantages
 Speed and cost (DSP design and hardware may be expensive, especially
with high-bandwidth signals)
 Finite wordlength problems (a limited number of bits may cause
degradation).
Application areas of DSP are considerable:
 Image processing (pattern recognition, robotic vision, image
enhancement, facsimile, satellite weather maps, animation)
 Instrumentation and control (spectrum analysis, position and rate control,
noise reduction, data compression)
 Speech and audio (speech recognition, speech synthesis, text to speech,
digital audio, equalisation)
 Military (secure communication, radar processing, sonar processing,
missile guidance)
 Telecommunications (echo cancellation, adaptive equalisation, spread
spectrum, video conferencing, data communication)
 Biomedical (patient monitoring, scanners, EEG brain mappers, ECG
analysis, X-ray storage and enhancement).

Example: audio signal reconstruction in CDs


Information on a compact disc is recorded on a spiral track as a succession of
pits (about 10^6 bits/mm^2).
The recording or mastering process is depicted below:

[Block diagram: each analogue stereo channel passes through an analogue LPF,
a sample-and-hold (44.1 kHz sampling frequency) and a 16-bit ADC
(1.41 Mbit/s); the two channels are multiplexed, Reed-Solomon encoded, EFM
modulated and synchronised (4.32 Mbit/s), and the resulting bit stream drives
the laser during mastering.]

 The analogue signal in each stereo channel is sampled at 44.1 kHz and
digitised to 16 bits (90 dB dynamic range), resulting in 32 bits per
sampling instant.
 The samples are encoded using a two-level Reed-Solomon code to enable
errors to be corrected or concealed during reproduction.
 An EFM (eight-to-fourteen) modulation scheme translates each byte in the
stream to a 14-bit code, which is more suitable for disc storage (eliminates
adjacent 1's, etc.)
 The resulting bit stream is used to control a laser beam, which records
information on the disc.
The audio signal reconstruction process is demonstrated below:

[Block diagram: laser optical pickup → EFM demodulation → error correction
and concealment → time-base correction → 4× oversampling digital filter →
two 14-bit DACs running at 4 × 44.1 kHz = 176.4 kHz → analogue lowpass
filters for the two channels.]

 The track is optically scanned at 1.2 m/s.
 The signal is demodulated, errors detected and (if possible) corrected. If
correction is not possible, errors are concealed by interpolation or muting.
 This results in a series of 16 bit words, each representing a single audio
sample. These samples could be applied directly to a DAC and analogue
lowpass filtered
 However, this would require high specification lowpass filters (20 kHz
frequencies must be reduced by 50 dB), and the filter should have linear
phase. To avoid this, signals are upsampled by a factor of 4. This makes
the output of the DAC smoother, simplifying the analogue filtering
requirements.
 The use of a digital filter also allows a linear phase response, reduces
chances of intermodulation, and yields a filter that varies with clock rate.

Discrete-time signals and systems
See Oppenheim and Schafer, Second Edition pages 8–93, or First Edition
pages 8–79.

1 Discrete-time signals

A discrete-time signal is represented as a sequence of numbers:

x = {x[n]},   −∞ < n < ∞.

Here n is an integer, and x[n] is the nth sample in the sequence.

Discrete-time signals are often obtained by sampling continuous-time signals.
In this case the nth sample of the sequence is equal to the value of the analogue
signal x_a(t) at time t = nT:

x[n] = x_a(nT),   −∞ < n < ∞.

The sampling period is then equal to T, and the sampling frequency is
f_s = 1/T.

[Figure: an analogue signal x_a(t) with its samples marked at t = nT; the
sample x[1] takes the value x_a(1T).]

For this reason, although x[n] is strictly the nth number in the sequence, we
often refer to it as the nth sample. We also often refer to "the sequence x[n]"
when we mean the entire sequence.
Discrete-time signals are often depicted graphically as follows:

[Figure: stem plot of a sequence x[n] for n = −4, . . . , 4, with each sample
value x[n] drawn as a vertical line at integer n.]
(This can be plotted using the MATLAB function stem.) The value x[n] is
undefined for noninteger values of n.
Sequences can be manipulated in several ways. The sum and product of two
sequences x[n] and y[n] are defined as the sample-by-sample sum and product
respectively. Multiplication of x[n] by a is defined as the multiplication of
each sample value by a.
A sequence y[n] is a delayed or shifted version of x[n] if

y[n] = x[n − n_0],

with n_0 an integer.
The unit sample sequence

[Figure: δ[n], a single unit sample at n = 0.]

is defined as

δ[n] = 0 for n ≠ 0,   δ[n] = 1 for n = 0.

This sequence is often referred to as a discrete-time impulse, or just impulse.
It plays the same role for discrete-time signals as the Dirac delta function does
for continuous-time signals. However, there are no mathematical
complications in its definition.
An important aspect of the impulse sequence is that an arbitrary sequence can
be represented as a sum of scaled, delayed impulses. For example, the
sequence

[Figure: stem plot of a sequence with sample values a_{−4}, . . . , a_4 at
n = −4, . . . , 4.]

can be represented as

x[n] = a_{−4}δ[n + 4] + a_{−3}δ[n + 3] + a_{−2}δ[n + 2] + a_{−1}δ[n + 1] + a_0 δ[n]
       + a_1 δ[n − 1] + a_2 δ[n − 2] + a_3 δ[n − 3] + a_4 δ[n − 4].

In general, any sequence can be expressed as

x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k].

The unit step sequence

[Figure: u[n], equal to 1 for n ≥ 0 and 0 for n < 0.]

is defined as

u[n] = 1 for n ≥ 0,   u[n] = 0 for n < 0.

The unit step is related to the impulse by

u[n] = Σ_{k=−∞}^{n} δ[k].

Alternatively, this can be expressed as

u[n] = δ[n] + δ[n − 1] + δ[n − 2] + · · · = Σ_{k=0}^{∞} δ[n − k].

Conversely, the unit sample sequence can be expressed as the first backward
difference of the unit step sequence

δ[n] = u[n] − u[n − 1].

Exponential sequences are important for analysing and representing
discrete-time systems. The general form is

x[n] = A α^n.

If A and α are real numbers then the sequence is real. If 0 < α < 1 and A is
positive, then the sequence values are positive and decrease with increasing n:

[Figure: a decaying exponential sequence.]

For −1 < α < 0 the sequence alternates in sign, but decreases in magnitude.
For |α| > 1 the sequence grows in magnitude as n increases.
A sinusoidal sequence

[Figure: a sampled sinusoid.]

has the form

x[n] = A cos(ω_0 n + φ)   for all n,

with A and φ real constants. The exponential sequence A α^n with complex
α = |α|e^{jω_0} and A = |A|e^{jφ} can be expressed as

x[n] = A α^n = |A|e^{jφ} |α|^n e^{jω_0 n} = |A||α|^n e^{j(ω_0 n + φ)}
             = |A||α|^n cos(ω_0 n + φ) + j|A||α|^n sin(ω_0 n + φ),

so the real and imaginary parts are exponentially weighted sinusoids.

When |α| = 1 the sequence is called the complex exponential sequence:

x[n] = |A|e^{j(ω_0 n + φ)} = |A| cos(ω_0 n + φ) + j|A| sin(ω_0 n + φ).

The frequency of this complex sinusoid is ω_0, and is measured in radians per
sample. The phase of the signal is φ.
The index n is always an integer. This leads to some important differences
between the properties of discrete-time and continuous-time complex
exponentials:
• Consider the complex exponential with frequency (ω_0 + 2π):

x[n] = A e^{j(ω_0 + 2π)n} = A e^{jω_0 n} e^{j2πn} = A e^{jω_0 n}.

Thus the sequence for the complex exponential with frequency ω_0 is
exactly the same as that for the complex exponential with frequency
(ω_0 + 2π). More generally, complex exponential sequences with
frequencies (ω_0 + 2πr), where r is an integer, are indistinguishable from
one another. Similarly, for sinusoidal sequences

x[n] = A cos[(ω_0 + 2πr)n + φ] = A cos(ω_0 n + φ).

• In the continuous-time case, sinusoidal and complex exponential
sequences are always periodic. Discrete-time sequences are periodic (with
period N) if

x[n] = x[n + N]   for all n.

Thus the discrete-time sinusoid is only periodic if

A cos(ω_0 n + φ) = A cos(ω_0 n + ω_0 N + φ),

which requires that

ω_0 N = 2πk   for k an integer.

The same condition is required for the complex exponential sequence
C e^{jω_0 n} to be periodic.
The two factors just described can be combined to reach the conclusion that
there are only N distinguishable frequencies for which the corresponding
sequences are periodic with period N. One such set is

ω_k = 2πk/N,   k = 0, 1, . . . , N − 1.

Additionally, for discrete-time sequences the interpretation of high and low
frequencies has to be modified: the discrete-time sinusoidal sequence
x[n] = A cos(ω_0 n + φ) oscillates more rapidly as ω_0 increases from 0 to π,
but the oscillations become slower as it increases further from π to 2π.

[Figure: plots of cos(ω_0 n) for n = −8, . . . , 8 and for ω_0 = 0, π/8, π/4, π,
7π/4 and 15π/8; the oscillation rate increases as ω_0 goes from 0 to π and
decreases again as ω_0 goes from π to 2π.]

The sequence corresponding to ω_0 = 0 is indistinguishable from that with
ω_0 = 2π. In general, any frequencies in the vicinity of ω_0 = 2πk for integer
k are typically referred to as low frequencies, and those in the vicinity of
ω_0 = (π + 2πk) are high frequencies.
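This frequency ambiguity is easy to confirm numerically. The following is a minimal NumPy sketch (not part of the original notes; the choice ω_0 = π/4 is arbitrary) showing that cos(ω_0 n) and cos((ω_0 + 2π)n) produce identical samples:

```python
import numpy as np

n = np.arange(-8, 9)                      # integer sample indices
w0 = np.pi / 4                            # any frequency will do

x1 = np.cos(w0 * n)                       # frequency w0
x2 = np.cos((w0 + 2 * np.pi) * n)         # frequency w0 + 2*pi

# The two sampled sequences agree to machine precision.
print(np.allclose(x1, x2))                # True
```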

2 Discrete-time systems

A discrete-time system is defined as a transformation or mapping operator that
maps an input signal x[n] to an output signal y[n]. This can be denoted as

y[n] = T{x[n]}.

[Figure: block diagram of a system T{·} with input x[n] and output y[n].]

Example: Ideal delay

y[n] = x[n − n_d].

[Figure: a sequence x[n] and its delayed version y[n] = x[n − 2].]

This operation shifts the input sequence later by n_d samples.

Example: Moving average

y[n] = (1/(M_1 + M_2 + 1)) Σ_{k=−M_1}^{M_2} x[n − k].

For M_1 = 1 and M_2 = 1, the input sequence

[Figure: a sequence x[n] with the three-sample windows used to form y[2] and
y[3] indicated.]

yields an output with

y[2] = (1/3)(x[1] + x[2] + x[3])
y[3] = (1/3)(x[2] + x[3] + x[4])

and so on.

In general, systems can be classified by placing constraints on the
transformation T{·}.

2.1 Memoryless systems

A system is memoryless if the output y[n] depends only on x[n] at the same n.
For example, y[n] = (x[n])^2 is memoryless, but the ideal delay
y[n] = x[n − n_d] is not unless n_d = 0.

2.2 Linear systems

A system is linear if the principle of superposition applies. Thus if y_1[n] is the
response of the system to the input x_1[n], and y_2[n] the response to x_2[n], then
linearity implies
• Additivity:

T{x_1[n] + x_2[n]} = T{x_1[n]} + T{x_2[n]} = y_1[n] + y_2[n]

• Scaling:

T{a x_1[n]} = a T{x_1[n]} = a y_1[n].

These properties combine to form the general principle of superposition

T{a x_1[n] + b x_2[n]} = a T{x_1[n]} + b T{x_2[n]} = a y_1[n] + b y_2[n].

In all cases a and b are arbitrary constants.

This property generalises to many inputs, so the response of a linear system to
x[n] = Σ_k a_k x_k[n] will be y[n] = Σ_k a_k y_k[n].

2.3 Time-invariant systems

A system is time invariant if a time shift or delay of the input sequence causes
a corresponding shift in the output sequence. That is, if y[n] is the response to
x[n], then y[n − n_0] is the response to x[n − n_0].
For example, the accumulator system

y[n] = Σ_{k=−∞}^{n} x[k]

is time invariant, but the compressor system

y[n] = x[Mn]

for M a positive integer (which selects every Mth sample from a sequence) is
not.

2.4 Causality

A system is causal if the output at n depends only on the input at n and earlier
inputs.
For example, the backward difference system

y[n] = x[n] − x[n − 1]

is causal, but the forward difference system

y[n] = x[n + 1] − x[n]

is not.

2.5 Stability

A system is stable if every bounded input sequence produces a bounded output
sequence:
• Bounded input: |x[n]| ≤ B_x < ∞
• Bounded output: |y[n]| ≤ B_y < ∞.

For example, the accumulator

y[n] = Σ_{k=−∞}^{n} x[k]

is an example of an unbounded system, since its response to the unit step u[n]
is

y[n] = Σ_{k=−∞}^{n} u[k] = 0 for n < 0,   and   n + 1 for n ≥ 0,

which has no finite upper bound.

3 Linear time-invariant systems

If the linearity property is combined with the representation of a general
sequence as a linear combination of delayed impulses, then it follows that a
linear time-invariant (LTI) system can be completely characterised by its
impulse response.
Suppose h_k[n] is the response of a linear system to the impulse δ[n − k] at
n = k. Since

y[n] = T{ Σ_{k=−∞}^{∞} x[k] δ[n − k] },

the principle of superposition means that

y[n] = Σ_{k=−∞}^{∞} x[k] T{δ[n − k]} = Σ_{k=−∞}^{∞} x[k] h_k[n].

If the system is additionally time invariant, then the response to δ[n − k] is
h[n − k]. The previous equation then becomes

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k].

This expression is called the convolution sum. Therefore, an LTI system has
the property that given h[n], we can find y[n] for any input x[n]. Alternatively,
y[n] is the convolution of x[n] with h[n], denoted as follows:

y[n] = x[n] ∗ h[n].

The previous derivation suggests the interpretation that the input sample at
n = k, represented by x[k]δ[n − k], is transformed by the system into an
output sequence x[k]h[n − k]. For each k, these sequences are superimposed
to yield the overall output sequence:

[Figure: an input with two nonzero samples, x[−1] and x[1], is decomposed into
x[−1]δ[n + 1] and x[1]δ[n − 1]; each impulse produces a scaled, shifted copy of
h[n], and the copies sum to give y[n] = x[−1]h[n + 1] + x[1]h[n − 1].]

A slightly different interpretation, however, leads to a convenient
computational form: the nth value of the output, namely y[n], is obtained by
multiplying the input sequence (expressed as a function of k) by the sequence
with values h[n − k], and then summing all the values of the products
x[k]h[n − k]. The key to this method is in understanding how to form the
sequence h[n − k] for all values of n of interest.
To this end, note that h[n − k] = h[−(k − n)]. The sequence h[−k] is seen to
be equivalent to the sequence h[k] reflected around the origin:

[Figure: h[k], nonzero for −2 ≤ k ≤ 5; reflecting gives h[−k], nonzero for
−5 ≤ k ≤ 2; shifting gives h[n − k], nonzero for n − 5 ≤ k ≤ n + 2.]

The sequence h[n − k] is then obtained by shifting the origin of the sequence
to k = n.
To implement discrete-time convolution, the sequences x[k] and h[n − k] are
multiplied together for −∞ < k < ∞, and the products summed to obtain the
value of the output sample y[n]. To obtain another output sample, the
procedure is repeated with the origin shifted to the new sample position.
Example: analytical evaluation of the convolution sum
Consider the output of a system with impulse response

h[n] = 1 for 0 ≤ n ≤ N − 1, and 0 otherwise,

to the input x[n] = a^n u[n]. To find the output at n, we must form the sum over
all k of the product x[k]h[n − k].

[Figure: x[k] = a^k u[k] plotted against k, together with h[n − k], which is
nonzero for n − (N − 1) ≤ k ≤ n.]

Since the sequences are non-overlapping for all negative n, the output must be
zero:

y[n] = 0,   n < 0.

For 0 ≤ n ≤ N − 1 the product terms in the sum are x[k]h[n − k] = a^k, so it
follows that

y[n] = Σ_{k=0}^{n} a^k,   0 ≤ n ≤ N − 1.

Finally, for n > N − 1 the product terms are x[k]h[n − k] = a^k as before, but
the lower limit on the sum is now n − N + 1. Therefore

y[n] = Σ_{k=n−N+1}^{n} a^k,   n > N − 1.
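This piecewise result can be checked numerically. The sketch below (a NumPy check, not part of the original notes; a = 0.8 and N = 5 are arbitrary) compares np.convolve against the two nonzero regions derived above, using the closed form of the geometric series for each sum:

```python
import numpy as np

a, N = 0.8, 5
L = 30                                   # number of input samples to keep
x = a ** np.arange(L)                    # x[n] = a^n u[n], truncated to L samples
h = np.ones(N)                           # h[n] = 1 for 0 <= n <= N-1

y_direct = np.convolve(x, h)[:L]         # convolution sum evaluated numerically

n = np.arange(L)
y_formula = np.where(
    n < N,
    (1 - a ** (n + 1)) / (1 - a),                  # sum_{k=0}^{n} a^k
    (a ** (n - N + 1) - a ** (n + 1)) / (1 - a),   # sum_{k=n-N+1}^{n} a^k
)
print(np.allclose(y_direct, y_formula))  # True
```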

4 Properties of LTI systems

All LTI systems are described by the convolution sum

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k].

Some properties of LTI systems can therefore be found by considering the
properties of the convolution operation:
• Commutative: x[n] ∗ h[n] = h[n] ∗ x[n]
• Distributive over addition:

x[n] ∗ (h_1[n] + h_2[n]) = x[n] ∗ h_1[n] + x[n] ∗ h_2[n].

• Cascade connection:

[Figure: x[n] → h_1[n] → h_2[n] → y[n] is equivalent to
x[n] → h_2[n] → h_1[n] → y[n].]

y[n] = h[n] ∗ x[n] = h_1[n] ∗ h_2[n] ∗ x[n] = h_2[n] ∗ h_1[n] ∗ x[n].

• Parallel connection:

[Figure: x[n] applied to h_1[n] and h_2[n] in parallel, with the outputs summed
to form y[n].]

y[n] = (h_1[n] + h_2[n]) ∗ x[n] = h_p[n] ∗ x[n].

Additional important properties are:
• An LTI system is stable if and only if S = Σ_{k=−∞}^{∞} |h[k]| < ∞. The ideal
delay system h[n] = δ[n − n_d] is stable since S = 1 < ∞; the moving
average system

h[n] = (1/(M_1 + M_2 + 1)) Σ_{k=−M_1}^{M_2} δ[n − k]
     = 1/(M_1 + M_2 + 1) for −M_1 ≤ n ≤ M_2, and 0 otherwise,

the forward difference system h[n] = δ[n + 1] − δ[n], and the backward
difference system h[n] = δ[n] − δ[n − 1] are stable since S is the sum of a
finite number of finite samples, and is therefore less than ∞; the
accumulator system

h[n] = Σ_{k=−∞}^{n} δ[k] = u[n]   (1 for n ≥ 0, 0 for n < 0)

is unstable since S = Σ_{n=0}^{∞} u[n] = ∞.
• An LTI system is causal if and only if h[n] = 0 for n < 0. The ideal delay
system is causal if n_d ≥ 0; the moving average system is causal if
M_1 ≤ 0 and M_2 ≥ 0; the accumulator and backward difference systems
are causal; the forward difference system is noncausal.
Systems with only a finite number of nonzero values in h[n] are called finite
duration impulse response (FIR) systems. FIR systems are stable if each
impulse response value is finite. The ideal delay, the moving average, and the
forward and backward difference described above fall into this class. Infinite
impulse response (IIR) systems, such as the accumulator system, are more
difficult to analyse. For example, the accumulator system is unstable, but the
IIR system

h[n] = a^n u[n],   |a| < 1,

is stable since

S = Σ_{n=0}^{∞} |a^n| ≤ Σ_{n=0}^{∞} |a|^n = 1/(1 − |a|) < ∞

(it is the sum of an infinite geometric series).


Consider the system

[Figure: a forward difference in cascade with a one-sample delay.]

which has

h[n] = (δ[n + 1] − δ[n]) ∗ δ[n − 1]
     = δ[n − 1] ∗ δ[n + 1] − δ[n − 1] ∗ δ[n]
     = δ[n] − δ[n − 1].

This is the impulse response of a backward difference system:

[Figure: a one-sample delay followed by a forward difference is equivalent to a
backward difference.]

Here a non-causal system has been converted to a causal one by cascading with
a delay. In general, any non-causal FIR system can be made causal by
cascading with a sufficiently long delay.
Consider the system consisting of an accumulator followed by a backward
difference:

[Figure: an accumulator in cascade with a backward difference.]

The impulse response of this system is

h[n] = u[n] ∗ (δ[n] − δ[n − 1]) = u[n] − u[n − 1] = δ[n].

The output is therefore equal to the input because x[n] ∗ δ[n] = x[n]. Thus the
backward difference exactly compensates for (or inverts) the effect of the
accumulator — the backward difference system is the inverse system for the
accumulator, and vice versa. We define this inverse relationship for all LTI
systems:

h[n] ∗ h_i[n] = δ[n].

5 Linear constant coefficient difference equations

Some LTI systems can be represented in terms of linear constant coefficient
difference (LCCD) equations

Σ_{k=0}^{N} a_k y[n − k] = Σ_{m=0}^{M} b_m x[n − m].

Example: difference equation representation of the accumulator

Take for example the accumulator

[Figure: x[n] → accumulator → y[n]; a backward difference applied to y[n]
recovers x[n].]

Here y[n] − y[n − 1] = x[n], which can be written in the desired form with
N = 1, a_0 = 1, a_1 = −1, M = 0, and b_0 = 1. Rewriting as

y[n] = y[n − 1] + x[n]

we obtain the recursion representation

[Figure: recursive implementation in which x[n] is added to the one-sample
delayed output y[n − 1] to form y[n].]

where at n we add the current input x[n] to the previously accumulated sum
y[n − 1].
Example: difference equation representation of moving average
Consider now the moving average system with M_1 = 0:

h[n] = (1/(M_2 + 1)) (u[n] − u[n − M_2 − 1]).

The output of the system is

y[n] = (1/(M_2 + 1)) Σ_{k=0}^{M_2} x[n − k],

which is an LCCDE with N = 0, a_0 = 1, M = M_2, and b_k = 1/(M_2 + 1).

Using the sifting property of δ[n],

h[n] = (1/(M_2 + 1)) (δ[n] − δ[n − M_2 − 1]) ∗ u[n]

so

[Figure: implementation as an attenuator 1/(M_2 + 1) applied to
x[n] − x[n − M_2 − 1] (using an (M_2 + 1)-sample delay), giving x_1[n], followed
by an accumulator.]

Here x_1[n] = (1/(M_2 + 1))(x[n] − x[n − M_2 − 1]) and for the accumulator
y[n] − y[n − 1] = x_1[n]. Therefore

y[n] − y[n − 1] = (1/(M_2 + 1))(x[n] − x[n − M_2 − 1]),

which is again a (different) LCCD equation with N = 1, a_0 = 1, a_1 = −1,
b_0 = −b_{M_2+1} = 1/(M_2 + 1).
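The two representations describe the same system, which is easy to confirm numerically. The sketch below (assuming NumPy and SciPy, with M_2 = 4 chosen arbitrarily; not part of the original notes) runs the same input through the non-recursive form and the recursive form using scipy.signal.lfilter:

```python
import numpy as np
from scipy.signal import lfilter

M2 = 4
x = np.random.randn(50)                        # arbitrary test input

# Non-recursive form: y[n] = (1/(M2+1)) * sum_{k=0}^{M2} x[n-k]
b_fir = np.ones(M2 + 1) / (M2 + 1)
y_fir = lfilter(b_fir, [1.0], x)

# Recursive form: y[n] - y[n-1] = (x[n] - x[n-M2-1]) / (M2+1)
b_iir = np.zeros(M2 + 2)
b_iir[0], b_iir[M2 + 1] = 1 / (M2 + 1), -1 / (M2 + 1)
a_iir = [1.0, -1.0]
y_iir = lfilter(b_iir, a_iir, x)

print(np.allclose(y_fir, y_iir))               # True: both realise the same system
```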
As for constant coefficient differential equations in the continuous case,
without additional information or constraints a LCCDE does not provide a
unique solution for the output given an input. Specifically, suppose we have
the particular output y_p[n] for the input x_p[n]. The same equation then has the
solution

y[n] = y_p[n] + y_h[n],

where y_h[n] is any solution with x[n] = 0. That is, y_h[n] is a homogeneous
solution to the homogeneous equation

Σ_{k=0}^{N} a_k y_h[n − k] = 0.

It can be shown that there are N nonzero solutions to this equation, so a set of
N auxiliary conditions is required for a unique specification of y[n] for a
given x[n].
If a system is LTI and causal, then the initial conditions are initial rest
conditions, and a unique solution can be obtained.

6 Frequency-domain representation of discrete-time signals and systems

The Fourier transform considered here is strictly speaking the discrete-time
Fourier transform (DTFT), although Oppenheim and Schafer call it just the
Fourier transform. Its properties are recapped here (with examples) to show
nomenclature.
Complex exponentials

x[n] = e^{jωn},   −∞ < n < ∞,

are eigenfunctions of LTI systems:

y[n] = Σ_{k=−∞}^{∞} h[k] e^{jω(n−k)} = e^{jωn} ( Σ_{k=−∞}^{∞} h[k] e^{−jωk} ).

Defining

H(e^{jω}) = Σ_{k=−∞}^{∞} h[k] e^{−jωk},

we have that y[n] = H(e^{jω}) e^{jωn} = H(e^{jω}) x[n]. Therefore, e^{jωn} is an
eigenfunction of the system, and H(e^{jω}) is the associated eigenvalue.
The quantity H(e^{jω}) is called the frequency response of the system, and

H(e^{jω}) = H_R(e^{jω}) + jH_I(e^{jω}) = |H(e^{jω})| e^{j∠H(e^{jω})}.

Example: frequency response of ideal delay

Consider the input x[n] = e^{jωn} to the ideal delay system y[n] = x[n − n_d]:
the output is

y[n] = e^{jω(n−n_d)} = e^{−jωn_d} e^{jωn}.

The frequency response is therefore

H(e^{jω}) = e^{−jωn_d}.

Alternatively, for the ideal delay h[n] = δ[n − n_d],

H(e^{jω}) = Σ_{n=−∞}^{∞} δ[n − n_d] e^{−jωn} = e^{−jωn_d}.

The real and imaginary parts of the frequency response are
H_R(e^{jω}) = cos(ωn_d) and H_I(e^{jω}) = −sin(ωn_d), or alternatively

|H(e^{jω})| = 1
∠H(e^{jω}) = −ωn_d.

The frequency response of an LTI system is essentially the same for continuous
and discrete time systems. However, an important distinction is that in the
discrete case it is always periodic in frequency with a period 2π:

H(e^{j(ω+2π)}) = Σ_{n=−∞}^{∞} h[n] e^{−j(ω+2π)n}
               = Σ_{n=−∞}^{∞} h[n] e^{−jωn} e^{−j2πn}
               = Σ_{n=−∞}^{∞} h[n] e^{−jωn} = H(e^{jω}).

This last result holds since e^{±j2πn} = 1 for integer n.

The reason for this periodicity is related to the observation that the sequence

{e^{jωn}},   −∞ < n < ∞,

has exactly the same values as the sequence

{e^{j(ω+2π)n}},   −∞ < n < ∞.

A system will therefore respond in exactly the same way to both sequences.
Example: ideal frequency selective filters
The frequency response of an ideal lowpass filter is as follows:

[Figure: H_lp(e^{jω}) equal to 1 for |ω| < ω_c and 0 for ω_c < |ω| ≤ π,
repeating with period 2π; only the range −π to π is required.]

Due to the periodicity in the response, it is only necessary to consider one
frequency cycle, usually chosen to be the range −π to π. Other examples of
ideal filters are:

[Figure: ideal highpass H_hp(e^{jω}), bandstop H_bs(e^{jω}) and bandpass
H_bp(e^{jω}) responses on the interval −π to π, with band edges ω_a and ω_b.]

In these cases it is implied that the frequency response repeats with period 2π
outside of the plotted interval.
Example: frequency response of the moving-average system

The frequency response of the moving average system

h[n] = 1/(M_1 + M_2 + 1) for −M_1 ≤ n ≤ M_2, and 0 otherwise,

is given by

H(e^{jω}) = (1/(M_1 + M_2 + 1)) [ (e^{jω(M_2+M_1+1)/2} − e^{−jω(M_2+M_1+1)/2}) / (1 − e^{−jω}) ] e^{−jω(M_2−M_1+1)/2}
          = (1/(M_1 + M_2 + 1)) [ (e^{jω(M_2+M_1+1)/2} − e^{−jω(M_2+M_1+1)/2}) / (e^{jω/2} − e^{−jω/2}) ] e^{−jω(M_2−M_1)/2}
          = (1/(M_1 + M_2 + 1)) [ sin(ω(M_1 + M_2 + 1)/2) / sin(ω/2) ] e^{−jω(M_2−M_1)/2}.

For M_1 = 0 and M_2 = 4,

[Figure: magnitude |H(e^{jω})| and phase ∠H(e^{jω}) of the five-point moving
average over −2π ≤ ω ≤ 2π; the magnitude has nulls at ω = ±2π/5 and ±4π/5.]

This system attenuates high frequencies (at around ω = π), and therefore has
the behaviour of a lowpass filter.
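The sin-ratio expression can be checked against a direct evaluation of the frequency response. The following is a SciPy sketch (not part of the original notes) using scipy.signal.freqz for the M_1 = 0, M_2 = 4 case:

```python
import numpy as np
from scipy.signal import freqz

M1, M2 = 0, 4
h = np.ones(M1 + M2 + 1) / (M1 + M2 + 1)       # impulse response of the moving average

w, H = freqz(h, worN=512)                      # H(e^{jw}) on a grid 0 <= w < pi

# Closed-form magnitude: |sin(w(M1+M2+1)/2)| / ((M1+M2+1) |sin(w/2)|)
with np.errstate(invalid="ignore", divide="ignore"):
    mag = np.abs(np.sin(w * (M1 + M2 + 1) / 2) /
                 ((M1 + M2 + 1) * np.sin(w / 2)))
mag[0] = 1.0                                   # limiting value as w -> 0

print(np.allclose(np.abs(H), mag))             # True
```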

7 Fourier transforms of discrete sequences

The discrete time Fourier transform (DTFT) of the sequence x[n] is

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}.

This is also called the forward transform or analysis equation. The inverse
Fourier transform, or synthesis formula, is given by the Fourier integral

x[n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω.

The Fourier transform is generally a complex-valued function of ω:

X(e^{jω}) = X_R(e^{jω}) + jX_I(e^{jω}) = |X(e^{jω})| e^{j∠X(e^{jω})}.

The quantities |X(e^{jω})| and ∠X(e^{jω}) are referred to as the magnitude and
phase of the Fourier transform. The Fourier transform is often referred to as
the Fourier spectrum.
Since the frequency response of an LTI system is given by

H(e^{jω}) = Σ_{k=−∞}^{∞} h[k] e^{−jωk},

it is clear that the frequency response is equivalent to the Fourier transform of
the impulse response, and the impulse response is

h[n] = (1/2π) ∫_{−π}^{π} H(e^{jω}) e^{jωn} dω.

A sufficient condition for the existence of the Fourier transform of a sequence
x[n] is that it be absolutely summable: Σ_{n=−∞}^{∞} |x[n]| < ∞. In other words,
the Fourier transform exists if the sum Σ_{n=−∞}^{∞} |x[n]| converges. The Fourier
transform may however exist for sequences where this is not true — a rigorous
mathematical treatment can be found in the theory of generalised functions.

8 Symmetry properties of the Fourier transform

Any sequence x[n] can be expressed as

x[n] = x_e[n] + x_o[n],

where x_e[n] is conjugate symmetric (x_e[n] = x_e*[−n]) and x_o[n] is conjugate
antisymmetric (x_o[n] = −x_o*[−n]). These two components of the sequence
can be obtained as:

x_e[n] = (1/2)(x[n] + x*[−n]) = x_e*[−n]
x_o[n] = (1/2)(x[n] − x*[−n]) = −x_o*[−n].

If a real sequence is conjugate symmetric, then it is an even sequence, and if
conjugate antisymmetric, then it is odd.
Similarly, the Fourier transform X(e^{jω}) can be decomposed into a sum of
conjugate symmetric and antisymmetric parts:

X(e^{jω}) = X_e(e^{jω}) + X_o(e^{jω}),

where

X_e(e^{jω}) = (1/2)[X(e^{jω}) + X*(e^{−jω})]
X_o(e^{jω}) = (1/2)[X(e^{jω}) − X*(e^{−jω})].

With these definitions, and letting

X(e^{jω}) = X_R(e^{jω}) + jX_I(e^{jω}),

the symmetry properties of the Fourier transform can be summarised as
follows:

Sequence x[n]        Transform X(e^{jω})
x*[n]                X*(e^{−jω})
x*[−n]               X*(e^{jω})
Re{x[n]}             X_e(e^{jω})
j Im{x[n]}           X_o(e^{jω})
x_e[n]               X_R(e^{jω})
x_o[n]               jX_I(e^{jω})

Most of these properties can be proved by substituting into the expression for
the Fourier transform. Additionally, for real x[n] the following also hold:

Real sequence x[n]   Transform X(e^{jω})
x[n]                 X(e^{jω}) = X*(e^{−jω})
x[n]                 X_R(e^{jω}) = X_R(e^{−jω})
x[n]                 X_I(e^{jω}) = −X_I(e^{−jω})
x[n]                 |X(e^{jω})| = |X(e^{−jω})|
x[n]                 ∠X(e^{jω}) = −∠X(e^{−jω})
x_e[n]               X_R(e^{jω})
x_o[n]               jX_I(e^{jω})

9 Fourier transform theorems

Let X(e^{jω}) be the Fourier transform of x[n]. The following theorems then
apply:

Sequences x[n], y[n]   Transforms X(e^{jω}), Y(e^{jω})                       Property
ax[n] + by[n]          aX(e^{jω}) + bY(e^{jω})                               Linearity
x[n − n_d]             e^{−jωn_d} X(e^{jω})                                  Time shift
e^{jω_0 n} x[n]        X(e^{j(ω−ω_0)})                                       Frequency shift
x[−n]                  X(e^{−jω})                                            Time reversal
n x[n]                 j dX(e^{jω})/dω                                       Frequency diff.
x[n] ∗ y[n]            X(e^{jω}) Y(e^{jω})                                   Convolution
x[n] y[n]              (1/2π) ∫_{−π}^{π} X(e^{jθ}) Y(e^{j(ω−θ)}) dθ          Modulation

Some useful Fourier transform pairs are:

Sequence                                 Fourier transform
δ[n]                                     1
δ[n − n_0]                               e^{−jωn_0}
1   (−∞ < n < ∞)                         Σ_{k=−∞}^{∞} 2π δ(ω + 2πk)
a^n u[n]   (|a| < 1)                     1/(1 − a e^{−jω})
u[n]                                     1/(1 − e^{−jω}) + Σ_{k=−∞}^{∞} π δ(ω + 2πk)
(n + 1) a^n u[n]   (|a| < 1)             1/(1 − a e^{−jω})^2
sin(ω_c n)/(πn)                          X(e^{jω}) = 1 for |ω| < ω_c, 0 for ω_c < |ω| ≤ π
x[n] = 1 for 0 ≤ n ≤ M, 0 otherwise      [sin(ω(M + 1)/2)/sin(ω/2)] e^{−jωM/2}
e^{jω_0 n}                               Σ_{k=−∞}^{∞} 2π δ(ω − ω_0 + 2πk)
The z-transform
See Oppenheim and Schafer, Second Edition pages 94–139, or First Edition
pages 149–201.

1 Introduction

The z-transform of a sequence x[n] is

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}.

The z-transform can also be thought of as an operator Z{·} that transforms a
sequence to a function:

Z{x[n]} = Σ_{n=−∞}^{∞} x[n] z^{−n} = X(z).

In both cases z is a continuous complex variable.


We may obtain the Fourier transform from the z-transform by making the
substitution z = ejω . This corresponds to restricting |z| = 1. Also, with
z = rejω ,

X ∞
X
jω jω −n
x[n]r−n e−jωn .

X(re ) = x[n](re ) =
n=−∞ n=−∞

That is, the z-transform is the Fourier transform of the sequence x[n]r−n . For
r = 1 this becomes the Fourier transform of x[n]. The Fourier transform
therefore corresponds to the z-transform evaluated on the unit circle:

z−plane Im z = ejω

ω
Re

Unit circle

The inherent periodicity in frequency of the Fourier transform is captured


naturally under this interpretation.
The Fourier transform does not converge for all sequences — the infinite sum
may not always be finite. Similarly, the z-transform does not converge for all
sequences or for all values of z. The set of values of z for which the
z-transform converges is called the region of convergence (ROC).
P∞
The Fourier transform of x[n] exists if the sum n=−∞ |x[n]| converges.
However, the z-transform of x[n] is just the Fourier transform of the sequence
x[n]r−n . The z-transform therefore exists (or converges) if

X
X(z) = |x[n]r−n | < ∞.
n=−∞

This leads to the condition



X
|x[n]||z|−n < ∞
n=−∞

for the existence of the z-transform. The ROC therefore consists of a ring in
the z-plane:

z−plane Im
Region of
convergence

Re

In specific cases the inner radius of this ring may include the origin, and the
outer radius may extend to infinity. If the ROC includes the unit circle |z| = 1,
then the Fourier transform will converge.
Most useful z-transforms can be expressed in the form
P (z)
X(z) = ,
Q(z)
where P (z) and Q(z) are polynomials in z. The values of z for which
P (z) = 0 are called the zeros of X(z), and the values with Q(z) = 0 are
called the poles. The zeros and poles completely specify X(z) to within a
multiplicative constant.
Example: right-sided exponential sequence
Consider the signal x[n] = an u[n]. This has the z-transform

X ∞
X
n
X(z) = a u[n]z −n
= (az −1 )n .
n=−∞ n=0

Convergence requires that



X
|az −1 |n < ∞,
n=0

which is only the case if |az −1 | < 1, or equivalently |z| > |a|. In the ROC, the

series converges to

X 1 z
X(z) = (az −1 )n = = , |z| > |a|,
n=0
1 − az −1 z−a

since it is just a geometric series. The z-transform has a region of convergence


for any finite value of a.
z−plane Im
unit circle

a
Re
1

The Fourier transform of x[n] only exists if the ROC includes the unit circle,
which requires that |a| < 1. On the other hand, if |a| > 1 then the ROC does
not include the unit circle, and the Fourier transform does not exist. This is
consistent with the fact that for these values of a the sequence an u[n] is
exponentially growing, and the sum therefore does not converge.
Example: left-sided exponential sequence
Now consider the sequence x[n] = −an u[−n − 1]. This sequence is left-sided
because it is nonzero only for n ≤ −1. The z-transform is

X −1
X
n
X(z) = −a u[−n − 1]z −n
=− an z −n
n=−∞ n=−∞

X ∞
X
−n n
=− a z =1− (a−1 z)n .
n=1 n=0

For |a−1 z| < 1, or |z| < |a|, the series converges to
1 1 z
X(z) = 1 − = = , |z| < |a|.
1 − a−1 z 1 − az −1 z−a
z−plane Im
unit circle

a
Re
1

Note that the expression for the z-transform (and the pole zero plot) is exactly
the same as for the right-handed exponential sequence — only the region of
convergence is different. Specifying the ROC is therefore critical when dealing
with the z-transform.
Example: sum of two exponentials
The signal x[n] = (1/2)^n u[n] + (−1/3)^n u[n] is the sum of two real
exponentials. The z-transform is

X(z) = Σ_{n=−∞}^{∞} [ (1/2)^n u[n] + (−1/3)^n u[n] ] z^{−n}
     = Σ_{n=−∞}^{∞} (1/2)^n u[n] z^{−n} + Σ_{n=−∞}^{∞} (−1/3)^n u[n] z^{−n}
     = Σ_{n=0}^{∞} ((1/2) z^{−1})^n + Σ_{n=0}^{∞} (−(1/3) z^{−1})^n.

From the example for the right-handed exponential sequence, the first term in
this sum converges for |z| > 1/2, and the second for |z| > 1/3. The combined
transform X(z) therefore converges in the intersection of these regions,
namely when |z| > 1/2. In this case

X(z) = 1/(1 − (1/2)z^{−1}) + 1/(1 + (1/3)z^{−1}) = 2z(z − 1/12)/((z − 1/2)(z + 1/3)).

The pole-zero plot and region of convergence of the signal is

[Figure: pole-zero plot with poles at z = 1/2 and z = −1/3, zeros at z = 0 and
z = 1/12, and ROC |z| > 1/2, which includes the unit circle.]

Example: finite length sequence


The signal 
an 0≤n≤N −1
x[n] =
0 otherwise
has z-transform
N
X −1 N
X −1
n −n
X(z) = a z = (az −1 )n
n=0 n=0
1 − (az −1 )N 1 z N − aN
= = .
1 − az −1 z N −1 z − a
Since there are only a finite number of nonzero terms the sum always
converges when az −1 is finite. There are no restrictions on a (|a| < ∞), and
the ROC is the entire z-plane with the exception of the origin z = 0 (where the
terms in the sum are infinite). The N roots of the numerator polynomial are at

zk = aej(2πk/N ) , k = 0, 1, . . . , N − 1,

since these values satisfy the equation z N = aN . The zero at k = 0 cancels the
pole at z = a, so there are no poles except at the origin, and the zeros are at

zk = aej(2πk/N ) , k = 1, . . . , N − 1.

2 Properties of the region of convergence

The properties of the ROC depend on the nature of the signal. Assuming that
the signal has a finite amplitude and that the z-transform is a rational function:

• The ROC is a ring or disk in the z-plane, centered on the origin


(0 ≤ rR < |z| < rL ≤ ∞).
• The Fourier transform of x[n] converges absolutely if and only if the ROC
of the z-transform includes the unit circle.

• The ROC cannot contain any poles.


• If x[n] is finite duration (ie. zero except on finite interval
−∞ < N1 ≤ n ≤ N2 < ∞), then the ROC is the entire z-plane except
perhaps at z = 0 or z = ∞.

• If x[n] is a right-sided sequence then the ROC extends outward from the
outermost finite pole to infinity.
• If x[n] is left-sided then the ROC extends inward from the innermost
nonzero pole to z = 0.

• A two-sided sequence (neither left nor right-sided) has a ROC consisting


of a ring in the z-plane, bounded on the interior and exterior by a pole (and
not containing any poles).
• The ROC is a connected region.

3 The inverse z-transform
Formally, the inverse z-transform can be performed by evaluating a Cauchy
integral. However, for discrete LTI systems simpler methods are often
sufficient.

3.1 Inspection method

If one is familiar with (or has a table of) common z-transform pairs, the inverse
can be found by inspection. For example, one can invert the z-transform
 
1 1
X(z) = , |z| > ,
1 − 12 z −1 2
using the z-transform pair
Z 1
an u[n]←→ , for |z| > |a|.
1 − az −1
By inspection we recognise that
 n
1
x[n] = u[n].
2
Also, if X(z) is a sum of terms then one may be able to do a term-by-term
inversion by inspection, yielding x[n] as a sum of terms.

3.2 Partial fraction expansion

For any rational function we can obtain a partial fraction expansion, and
identify the z-transform of each term. Assume that X(z) is expressed as a ratio
of polynomials in z −1 :
PM −k
k=0 bk z
X(z) = PN .
a z −k
k=0 k

It is always possible to factor X(z) as
QM
b0 k=1 (1 − ck z −1 )
X(z) = ,
a0 N
Q −1 )
k=1 (1 − dk z
where the ck ’s and dk ’s are the nonzero zeros and poles of X(z).
• If M < N and the poles are all first order, then X(z) can be expressed as
N
X Ak
X(z) = .
1 − dk z −1
k=1

In this case the coefficients Ak are given by

Ak = (1 − dk z −1 )X(z) z=d

k.

• If M ≥ N and the poles are all first order, then an expansion of the form
M −N N
X
−r
X Ak
X(z) = Br z +
r=0
1 − dk z −1
k=1

can be used, and the Br ’s be obtained by long division of the numerator


by the denominator. The Ak ’s can be obtained using the same equation as
for M < N .
• The most general form for the partial fraction expansion, which can also
deal with multiple-order poles, is
M −N N s
X
−r
X Ak X Cm
X(z) = Br z + + .
r=0
1 − dk z −1 m=1 (1 − di z −1 )m
k=1,k6=i

Ways of finding the Cm ’s can be found in most standard DSP texts.


The terms Br z −r correspond to shifted and scaled impulse sequences, and
invert to terms of the form Br δ[n − r]. The fractional terms
Ak
1 − dk z −1

correspond to exponential sequences. For these terms the ROC properties must
be used to decide whether the sequences are left-sided or right-sided.
Example: inverse by partial fractions
Consider the sequence x[n] with z-transform

X(z) = (1 + 2z^{−1} + z^{−2}) / (1 − (3/2)z^{−1} + (1/2)z^{−2})
     = (1 + z^{−1})^2 / ((1 − (1/2)z^{−1})(1 − z^{−1})),   |z| > 1.

Since M = N = 2 this can be expressed as

X(z) = B_0 + A_1/(1 − (1/2)z^{−1}) + A_2/(1 − z^{−1}).

The value B_0 can be found by long division:

                              2
(1/2)z^{−2} − (3/2)z^{−1} + 1 ) z^{−2} + 2z^{−1} + 1
                                z^{−2} − 3z^{−1} + 2
                                         5z^{−1} − 1

so

X(z) = 2 + (−1 + 5z^{−1}) / ((1 − (1/2)z^{−1})(1 − z^{−1})).

The coefficients A_1 and A_2 can be found using

A_k = (1 − d_k z^{−1}) X(z) |_{z=d_k},

so

A_1 = (1 + 2z^{−1} + z^{−2}) / (1 − z^{−1}) |_{z^{−1}=2} = (1 + 4 + 4)/(1 − 2) = −9

and

A_2 = (1 + 2z^{−1} + z^{−2}) / (1 − (1/2)z^{−1}) |_{z^{−1}=1} = (1 + 2 + 1)/(1/2) = 8.

Therefore

X(z) = 2 − 9/(1 − (1/2)z^{−1}) + 8/(1 − z^{−1}).

Using the fact that the ROC is |z| > 1, the terms can be inverted one at a time
by inspection to give

x[n] = 2δ[n] − 9(1/2)^n u[n] + 8u[n].
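This expansion can be cross-checked numerically. The sketch below (assuming SciPy; not part of the original notes) uses scipy.signal.residuez, which computes a partial fraction expansion in powers of z^{-1}, and scipy.signal.lfilter, whose impulse response gives the right-sided inverse transform of B(z)/A(z):

```python
import numpy as np
from scipy.signal import residuez, lfilter

b = [1.0, 2.0, 1.0]          # numerator   1 + 2 z^-1 + z^-2
a = [1.0, -1.5, 0.5]         # denominator 1 - (3/2) z^-1 + (1/2) z^-2

# Partial fraction expansion: residues r, poles p, direct terms k
r, p, k = residuez(b, a)
print(r, p, k)               # residues {8, -9} at poles {1, 0.5}, direct term [2] (ordering may vary)

# Impulse response of B(z)/A(z) is the right-sided inverse z-transform
n = np.arange(20)
imp = np.zeros(20); imp[0] = 1.0
x_num = lfilter(b, a, imp)

x_formula = 2.0 * (n == 0) - 9.0 * 0.5 ** n + 8.0
print(np.allclose(x_num, x_formula))   # True
```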

3.3 Power series expansion

If the z-transform is given as a power series in the form



X
X(z) = x[n]z −n
n=−∞

= . . . + x[−2]z 2 + x[−1]z 1 + x[0] + x[1]z −1 + x[2]z −2 + . . . ,

then any value in the sequence can be found by identifying the coefficient of
the appropriate power of z −1 .
Example: finite-length sequence
The z-transform
1
X(z) = z 2 (1 − z −1 )(1 + z −1 )(1 − z −1 )
2
can be multiplied out to give
1 1
X(z) = z 2 − z − 1 + z −1 .
2 2
By inspection, the corresponding sequence is therefore



 1 n = −2

 1
− 2 n = −1



x[n] = −1 n=0

1

n=1




 2

0 otherwise

or equivalently
1 1
x[n] = 1δ[n + 2] − δ[n + 1] − 1δ[n] + δ[n − 1].
2 2
Example: power series expansion
Consider the z-transform

X(z) = log(1 + az −1 ), |z| > |a|.


Using the power series expansion for log(1 + x), with |x| < 1, gives

X (−1)n+1 an z −n
X(z) = .
n=1
n

The corresponding sequence is therefore



(−1)n+1 an n≥1
n
x[n] =
0 n ≤ 0.

Example: power series expansion by long division


Consider the transform
1
X(z) = , |z| > |a|.
1 − az −1
Since the ROC is the exterior of a circle, the sequence is right-sided. We
therefore divide to get a power series in powers of z −1 :

1+az −1 +a2 z −2 + · · ·
1 − az −1 1


1−az −1
az −1
az −1 −a2 z −2
a2 z −2 + · · ·
or
1
−1
= 1 + az −1 + a2 z −2 + · · · .
1 − az

Therefore x[n] = an u[n].
Example: power series expansion for left-sided sequence
Consider instead the z-transform
1
X(z) = , |z| < |a|.
1 − az −1
Because of the ROC, the sequence is now a left-sided one. Thus we divide to
obtain a series in powers of z:

−a−1 z−a−2 z 2 − · · ·

−a + z z
z−a−1 z 2
az −1
Thus x[n] = −an u[−n − 1].

4 Properties of the z-transform


In this section, if X(z) denotes the z-transform of a sequence x[n] and the
ROC of X(z) is indicated by Rx , then this relationship is indicated as
Z
x[n]←→X(z), ROC = Rx .

Furthermore, with regard to nomenclature, we have two sequences such that


Z
x1 [n]←→X1 (z), ROC = Rx1
Z
x2 [n]←→X2 (z), ROC = Rx2 .

4.1 Linearity

The linearity property is as follows:

ax_1[n] + bx_2[n] ←Z→ aX_1(z) + bX_2(z),   ROC contains R_{x1} ∩ R_{x2}.

4.2 Time shifting

The time-shifting property is as follows:


Z
x[n − n0 ]←→z −n0 X(z), ROC = Rx .

(The ROC may change by the possible addition or deletion of z = 0 or


z = ∞.) This is easily shown:

X ∞
X
Y (z) = x[n − n0 ]z −n
= x[m]z −(m+n0 )
n=−∞ m=−∞

X
= z −n0 x[m]z −m = z −n0 X(z).
m=−∞

Example: shifted exponential sequence


Consider the z-transform
1 1
X(z) = , |z| > .
z − 14 4

From the ROC, this is a right-sided sequence. Rewriting,


z −1
 
−1 1 1
X(z) = = z , |z| > .
1 − 14 z −1 1 − 14 z −1 4

The term in brackets corresponds to an exponential sequence (1/4)n u[n]. The


factor z −1 shifts this sequence one sample to the right. The inverse z-transform
is therefore
x[n] = (1/4)n−1 u[n − 1].
Note that this result could also have been easily obtained using a partial
fraction expansion.

4.3 Multiplication by an exponential sequence

The exponential multiplication property is


Z
z0n x[n]←→X(z/z0 ), ROC = |z0 |Rx ,

where the notation |z0 |Rx indicates that the ROC is scaled by |z0 | (that is,
inner and outer radii of the ROC scale by |z0 |). All pole-zero locations are
similarly scaled by a factor z0 : if X(z) had a pole at z = z1 , then X(z/z0 )
will have a pole at z = z0 z1 .
• If z0 is positive and real, this operation can be interpreted as a shrinking or
expanding of the z-plane — poles and zeros change along radial lines in
the z-plane.
• If z0 is complex with unit magnitude (z0 = ejω0 ) then the scaling
operation corresponds to a rotation in the z-plane by an angle ω0 . That is,
the poles and zeros rotate along circles centered on the origin. This can be
interpreted as a shift in the frequency domain, associated with modulation
in the time domain by ejω0 n . If the Fourier transform exists, this becomes
F
ejω0 n x[n]←→X(ej(ω−ω0 ) ).

Example: exponential multiplication


The z-transform pair
Z 1
u[n]←→ , |z| > 1
1 − z −1
can be used to determine the z-transform of x[n] = rn cos(ω0 n)u[n]. Since
cos(ω0 n) = 1/2ejω0 n + 1/2e−jω0 n , the signal can be rewritten as
1 jω0 n 1
x[n] = (re ) u[n] + (re−jω0 )n u[n].
2 2

From the exponential multiplication property,
1 jω0 n Z 1/2
(re ) u[n]←→ , |z| > r
2 1 − rejω0 z −1
1 −jω0 n Z 1/2
(re ) u[n]←→ , |z| > r,
2 1 − re−jω0 z −1
so
1/2 1/2
X(z) = + , |z| > r
1 − rejω0 z −1 1 − re−jω0 z −1
1 − r cos ω0 z −1
= , |z| > r.
1 − 2r cos ω0 z −1 + r2 z −2

4.4 Differentiation

The differentiation property states that


Z dX(z)
nx[n]←→ − z , ROC = Rx .
dz
This can be seen as follows: since

X
X(z) = x[n]z −n ,
n=−∞

we have
∞ ∞
dX(z) X
−n−1
X
−z = −z (−n)x[n]z = nx[n]z −n = Z{nx[n]}.
dz n=−∞ n=−∞

Example: second order pole


The z-transform of the sequence

x[n] = nan u[n]

can be found using


Z 1
an u[n]←→ , |z| > a,
1 − az −1

to be
az −1
 
d 1
X(z) = −z = , |z| > a.
dz 1 − az −1 (1 − az −1 )2

4.5 Conjugation

This property is
Z
x∗ [n]←→X ∗ (z ∗ ), ROC = Rx .

4.6 Time reversal

Here
Z 1
x∗ [−n]←→X ∗ (1/z ∗ ), ROC = .
Rx
The notation 1/Rx means that the ROC is inverted, so if Rx is the set of values
such that rR < |z| < rL , then the ROC is the set of values of z such that
1/rL < |z| < 1/rR .
Example: time-reversed exponential sequence
The signal x[n] = a−n u[−n] is a time-reversed version of an u[n]. The
z-transform is therefore
1 −a−1 z −1
X(z) = = , |z| < |a−1 |.
1 − az 1 − a−1 z −1

4.7 Convolution

This property states that


Z
x1 [n] ∗ x2 [n]←→X1 (z)X2 (z), ROC containsRx1 ∩ Rx2 .

Example: evaluating a convolution using the z-transform
The z-transforms of the signals x1 [n] = an u[n] and x2 [n] = u[n] are

X 1
X1 (z) = an z −n = , |z| > |a|
n=0
1 − az −1

and

X 1
X2 (z) = z −n = , |z| > 1.
n=0
1 − z −1
For |a| < 1, the z-transform of the convolution y[n] = x1 [n] ∗ x2 [n] is

1 z2
Y (z) = = , |z| > 1.
(1 − az −1 )(1 − z −1 ) (z − a)(z − 1)
Using a partial fraction expansion,
 
1 1 a
Y (z) = − , |z| > 1,
1 − a 1 − z −1 1 − az −1
so
1
y[n] = (u[n] − an+1 u[n]).
1−a
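The closed form can be checked numerically. A short NumPy sketch (not part of the original notes; a = 0.5 is arbitrary) compares np.convolve of the two truncated sequences with the formula just derived:

```python
import numpy as np

a, L = 0.5, 40
n = np.arange(L)

x1 = a ** n                                          # a^n u[n], truncated
x2 = np.ones(L)                                      # u[n], truncated

y_conv = np.convolve(x1, x2)[:L]                     # first L samples are exact
y_formula = (1.0 / (1 - a)) * (1 - a ** (n + 1))     # (1/(1-a))(u[n] - a^{n+1} u[n])

print(np.allclose(y_conv, y_formula))                # True
```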

4.8 Initial value theorem

If x[n] is zero for n < 0, then

x[0] = lim X(z).


z→∞

Some common z-transform pairs are:

Sequence Transform ROC


δ[n] 1 All z
1
u[n] 1−z −1 |z| > 1
1
−u[−n − 1] 1−z −1 |z| < 1
−m
δ[n − m] z All z except 0 or ∞
1
an u[n] 1−az −1
|z| > |a|
1
−an u[−n − 1] 1−az −1
|z| < |a|
az −1
nan u[n] (1−az −1 )2 |z| > |a|
az −1
−nan u[−n − 1] (1−az −1 )2 |z| < |a|

an 0 ≤ n ≤ N − 1, 1−aN z −N
1−az −1 |z| > 0
0 otherwise
1−cos(ω0 )z −1
cos(ω0 n)u[n] 1−2 cos(ω0 )z −1 +z −2 |z| > 1
1−r cos(ω0 )z −1
rn cos(ω0 n)u[n] 1−2r cos(ω0 )z −1 +r 2 z −2 |z| > r

4.9 Relationship with the Laplace transform

Continuous-time systems and signals are usually described by the Laplace


transform. Letting z = esT , where s is the complex Laplace variable

s = d + jω,

we have
z = e(d+jω)T = edT ejωT .
Therefore

|z| = edT and ∢z = ωT = 2πf /fs = 2πω/ωs ,

where ωs is the sampling frequency. As ω varies from −∞ to ∞, the s-plane is
mapped to the z-plane:
• The jω axis in the s-plane is mapped to the unit circle in the z-plane.
• The left-hand s-plane is mapped to the inside of the unit circle.
• The right-hand s-plane maps to the outside of the unit circle.

Sampling of continuous-time signals
See Oppenheim and Schafer, Second Edition pages 140–239, or First Edition
pages 80–148.

1 Periodic sampling

A discrete-time signal x[n] often arises from periodic sampling of a
continuous-time signal x_c(t):

x[n] = x_c(nT),   −∞ < n < ∞.

This system is called an ideal continuous-to-discrete-time (C/D) converter or
sampler,

[Figure: C/D converter taking x_c(t) to x[n] = x_c(nT).]

and is described by the following:

• Sampling period: T seconds.
• Sampling frequency: f_s = 1/T samples per second, or Ω_s = 2π/T
radians per second.

In practice, sampling is usually approximately implemented using an
analog-to-digital (A/D) converter.
The sampling process is not generally invertible: one cannot always
reconstruct x_c(t) unambiguously from x[n]. However, the ambiguity can be
removed by restricting the input signals to the sampler.

2 Frequency-domain representation of sampling

What is the frequency-domain relation between the input and output of the C/D
converter?
Consider converting x_c(t) to x_s(t) by modulating it with the periodic impulse
train

s(t) = Σ_{n=−∞}^{∞} δ(t − nT),

which has frequency representation

S(jΩ) = (2π/T) Σ_{k=−∞}^{∞} δ(Ω − kΩ_s).

[Figure: x_c(t) and the impulse-train modulated signal x_s(t).]

Through the sifting property of the impulse function,

x_s(t) = x_c(t) s(t) = x_c(t) Σ_{n=−∞}^{∞} δ(t − nT)
       = Σ_{n=−∞}^{∞} x_c(nT) δ(t − nT).

The Fourier transform X_s(jΩ) of x_s(t) = x_c(t)s(t) is the continuous-time
convolution of the Fourier transforms X_c(jΩ) and S(jΩ), so

X_s(jΩ) = (1/2π) X_c(jΩ) ∗ S(jΩ) = (1/T) Σ_{k=−∞}^{∞} X_c(j(Ω − kΩ_s)).

Therefore, the Fourier transform of x_s(t) consists of copies of X_c(jΩ), shifted
by integer multiples of the sampling frequency Ω_s, and then superimposed:

[Figure: X_c(jΩ), bandlimited to |Ω| ≤ Ω_N; the impulse-train spectrum S(jΩ);
and X_s(jΩ), consisting of replicas of X_c(jΩ) centred at multiples of Ω_s.]

If x_c(t) is bandlimited, with highest nonzero frequency at Ω_N, then the
replicas do not overlap when

Ω_s > 2Ω_N.

Then we can recover x_c(t) from x_s(t) using an ideal lowpass filter. Otherwise,
X_c(jΩ) cannot be recovered using lowpass filtering — aliasing results:

[Figure: X_s(jΩ) with overlapping replicas when Ω_s < 2Ω_N.]

The frequency Ω_N is referred to as the Nyquist frequency, and the frequency
2Ω_N that must be exceeded in the sampling is the Nyquist rate.
The objective now is to express the Fourier transform X(e^{jω}) of x[n] in terms
of X_c(jΩ) and X_s(jΩ). Taking the Fourier transform of the relationship

x_s(t) = Σ_{n=−∞}^{∞} x_c(nT) δ(t − nT)

yields the following:

X_s(jΩ) = Σ_{n=−∞}^{∞} x_c(nT) e^{−jΩTn}.

Now, since x[n] = x_c(nT) and

X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn},

it follows that

X_s(jΩ) = X(e^{jω})|_{ω=ΩT} = X(e^{jΩT}).

Consequently,

X(e^{jΩT}) = (1/T) Σ_{k=−∞}^{∞} X_c(j(Ω − kΩ_s)),

and

X(e^{jω}) = (1/T) Σ_{k=−∞}^{∞} X_c(j(ω/T − 2πk/T)).

Thus X(e^{jω}) is just a frequency-scaled version of X_s(jΩ), with the scaling
specified by ω = ΩT. Alternatively, the effect of sampling may be thought of
as a normalisation of the frequency axis, so that the frequency Ω = Ω_s of
X_s(jΩ) is normalised to ω = 2π for X(e^{jω}).
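Aliasing is easy to demonstrate numerically. In the sketch below (NumPy assumed; the frequencies are chosen for illustration), a 7 kHz cosine sampled at 8 kHz — below its Nyquist rate of 14 kHz — produces exactly the same samples as a 1 kHz cosine:

```python
import numpy as np

fs = 8000.0                                   # sampling frequency (Hz)
T = 1.0 / fs
n = np.arange(64)

x_high = np.cos(2 * np.pi * 7000.0 * n * T)   # 7 kHz sampled at 8 kHz
x_low = np.cos(2 * np.pi * 1000.0 * n * T)    # 1 kHz alias (8 kHz - 7 kHz)

print(np.allclose(x_high, x_low))             # True: the samples are indistinguishable
```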

3 Reconstruction of bandlimited signal from samples

If samples of a bandlimited continuous-time signal are taken frequently
enough, then they are sufficient to represent the signal exactly. The
continuous-time signal can then be recovered from the samples. This task is
ideally performed by a discrete-to-continuous-time (D/C) converter. The form
and behaviour of such a converter is discussed in this section.
Given the sequence of samples x[n], we can form the impulse train

x_s(t) = Σ_{n=−∞}^{∞} x[n] δ(t − nT).

The nth sample corresponds to the impulse at time t = nT.

If appropriate sampling conditions are met, namely the signal is bandlimited
and the Fourier transform replicas do not overlap, then x_c(t) can be
reconstructed from x_s(t) by ideal continuous-time lowpass filtering:

x_r(t) = Σ_{n=−∞}^{∞} x[n] h_r(t − nT).

Here h_r(t) is the impulse response of an ideal LPF with cutoff frequency Ω_c:

[Figure: X_s(jΩ) with the ideal reconstruction filter H_r(jΩ) selecting the
baseband copy, with cutoff Ω_c.]

A convenient choice for the cutoff frequency is Ω_c = Ω_s/2 = π/T,
corresponding to the ideal reconstruction filter

H_r(jΩ) = T for |Ω| ≤ π/T, and 0 for |Ω| > π/T,

and reconstructed signal

X_r(jΩ) = H_r(jΩ) X(e^{jΩT})
        = T X(e^{jΩT}) for |Ω| ≤ π/T, and 0 for |Ω| > π/T.

In the time domain the ideal reconstruction filter has impulse response

h_r(t) = sin(πt/T)/(πt/T),

so the reconstructed signal is

x_r(t) = Σ_{n=−∞}^{∞} x[n] sin(π(t − nT)/T)/(π(t − nT)/T).

From the previous frequency-domain argument, if x[n] = x_c(nT) with
X_c(jΩ) = 0 for |Ω| ≥ π/T, then x_r(t) = x_c(t). Note that the filter h_r(t) is
not realisable since it has infinite duration.
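In practice the reconstruction sum can be approximated by truncating it. The sketch below (NumPy assumed; np.sinc(x) = sin(πx)/(πx), so h_r(t) = np.sinc(t/T); signal and truncation length chosen for illustration) reconstructs a bandlimited cosine between its sample points:

```python
import numpy as np

fs = 10.0                          # sampling rate (Hz), so T = 0.1 s
T = 1.0 / fs
f0 = 1.0                           # 1 Hz cosine, well below fs/2

n = np.arange(-200, 201)           # truncated set of sample indices
x = np.cos(2 * np.pi * f0 * n * T)

def reconstruct(t):
    # x_r(t) = sum_n x[n] sinc((t - nT)/T)
    return np.sum(x * np.sinc((t - n * T) / T))

t_test = 0.237                     # an arbitrary instant between samples
print(reconstruct(t_test), np.cos(2 * np.pi * f0 * t_test))   # nearly equal
```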
An ideal discrete-to-continuous (D/C) reconstruction system therefore has the
form

[Figure: D/C converter: the sequence x[n] is converted to the impulse train
x_s(t), which is passed through the reconstruction filter H_r(jΩ) to give x_r(t).]

4 Discrete-time processing of continuous-time signals

Discrete-time systems are often used to process continuous-time signals. This
can be accomplished by a system of the form:

[Figure: x_c(t) → C/D → x[n] → discrete-time system → y[n] → D/C → y_r(t),
with both converters using sampling period T; the overall effective response is
H_eff(jΩ).]

For now it is assumed that the C/D and D/C converters have the same sampling
rate.
The C/D converter produces the discrete-time signal

x[n] = x_c(nT),

with Fourier transform

X(e^{jω}) = (1/T) Σ_{k=−∞}^{∞} X_c(j(ω/T − 2πk/T)).

The D/C converter creates a continuous-time output of the form

y_r(t) = Σ_{n=−∞}^{∞} y[n] sin(π(t − nT)/T)/(π(t − nT)/T).

The continuous-time Fourier transform of y_r(t), namely Y_r(jΩ), and the
discrete-time Fourier transform of y[n], namely Y(e^{jω}), are related by

Y_r(jΩ) = H_r(jΩ) Y(e^{jΩT})
        = T Y(e^{jΩT}) for |Ω| < π/T, and 0 otherwise.

If the discrete-time system is LTI, then

Y(e^{jω}) = H(e^{jω}) X(e^{jω}),

where H(e^{jω}) is the frequency response of the system. Therefore

Y_r(jΩ) = H_r(jΩ) H(e^{jΩT}) X(e^{jΩT})
        = H_r(jΩ) H(e^{jΩT}) (1/T) Σ_{k=−∞}^{∞} X_c(j(Ω − 2πk/T)).

If X_c(jΩ) = 0 for |Ω| ≥ π/T, then the ideal LPF H_r(jΩ) selects only the
term for k = 0 in the sum, and scales the result:

Y_r(jΩ) = H(e^{jΩT}) X_c(jΩ) for |Ω| < π/T, and 0 for |Ω| ≥ π/T.
Thus if X_c(jΩ) is bandlimited and sampled above the Nyquist rate, then the
output is related to the input by

Y_r(jΩ) = H_eff(jΩ) X_c(jΩ),

where

H_eff(jΩ) = H(e^{jΩT}) for |Ω| < π/T, and 0 for |Ω| ≥ π/T,

is the effective frequency response of the system.

5 Continuous-time processing of discrete-time signals

It is conceptually useful to consider continuous-time processing of
discrete-time signals. A system to perform this task is:

[Figure: x[n] → D/C → x_c(t) → H_c(jΩ) → y_c(t) → C/D → y[n], with both
converters using sampling period T; the overall discrete-time response is h[n],
H(e^{jω}).]

Since the D/C converter includes an ideal LPF, X_c(jΩ) and therefore also
Y_c(jΩ) will be zero for |Ω| ≥ π/T. Thus the C/D converter samples y_c(t)
without aliasing and we have

x_c(t) = Σ_{n=−∞}^{∞} x[n] sin(π(t − nT)/T)/(π(t − nT)/T)

and

y_c(t) = Σ_{n=−∞}^{∞} y[n] sin(π(t − nT)/T)/(π(t − nT)/T),

where x[n] = x_c(nT) and y[n] = y_c(nT). In the frequency domain,

X_c(jΩ) = T X(e^{jΩT}),   |Ω| < π/T,
Y_c(jΩ) = H_c(jΩ) X_c(jΩ),   |Ω| < π/T,
Y(e^{jω}) = (1/T) Y_c(jω/T),   |ω| < π.

The overall system therefore behaves like a discrete-time system with
frequency response

H(e^{jω}) = H_c(jω/T),   |ω| < π.

Equivalently, the overall frequency response of the system will be equal to a
given H(e^{jω}) if the frequency response of the continuous-time system is

H_c(jΩ) = H(e^{jΩT}),   |Ω| < π/T.

Since X_c(jΩ) = 0 for |Ω| ≥ π/T, H_c(jΩ) may be chosen arbitrarily above
π/T.

6 Changing sampling rate using discrete-time processing

Given the sequence

x[n] = x_c(nT)

obtained by sampling (with period T) the signal x_c(t), we often want to
change the sampling rate (to period T′):

x′[n] = x_c(nT′).

One approach is to reconstruct x_c(t) from x[n], and then resample with the new
period T′. However, we want to do this using only discrete-time operations.

6.1 Sampling rate reduction by integer factor

[Figure: a compressor (↓M) taking x[n] (sampling period T) to
x_d[n] = x[nM] (sampling period T′ = MT).]

The sampling rate compressor implements the following function:

x_d[n] = x[nM] = x_c(nMT).

Here x_d[n] is exactly the sequence that would be obtained by sampling x_c(t)
with period T′ = MT.
If X_c(jΩ) = 0 for |Ω| ≥ Ω_N, then x_d[n] is an exact (unaliased)
representation of x_c(t) if π/(MT) ≥ Ω_N.
In the frequency domain we have

X(e^{jω}) = (1/T) Σ_{k=−∞}^{∞} X_c(j(ω/T − 2πk/T))

and

X_d(e^{jω}) = (1/T′) Σ_{r=−∞}^{∞} X_c(j(ω/T′ − 2πr/T′)).

Since T′ = MT, and noting that with r = i + kM we can write the
summation over r as a summation over −∞ < k < ∞ and 0 ≤ i ≤ M − 1, we
obtain

X_d(e^{jω}) = (1/M) Σ_{i=0}^{M−1} [ (1/T) Σ_{k=−∞}^{∞} X_c(j(ω/(MT) − 2πk/T − 2πi/(MT))) ]
            = (1/M) Σ_{i=0}^{M−1} X(e^{j(ω/M − 2πi/M)}).

[Figure: frequency-domain illustration of compression by M = 2, showing
X_c(jΩ), X_s(jΩ), X(e^{jω}), X_d(jΩ) and X_d(e^{jω}); the spectral copies in
X_d(e^{jω}) are spread out by the factor M.]

Applying a compressor to a signal can result in aliasing. This can be avoided
(at the cost of some information) by prefiltering with a lowpass filter, and then
compressing the sampling rate:

[Figure: x[n] → LPF (gain 1, cutoff ω_c = π/M) → x̃[n] → compressor (↓M) →
x̃_d[n] = x̃[nM]; the sampling period changes from T to T′ = MT.]

This is referred to as downsampling (or decimation) by a factor M.
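A minimal decimation sketch follows (SciPy assumed, not part of the original notes; the anti-aliasing filter here is a simple FIR lowpass designed with firwin rather than the ideal filter described above, and the tap count is arbitrary):

```python
import numpy as np
from scipy.signal import firwin, lfilter

def downsample(x, M, ntaps=61):
    """Lowpass filter to roughly pi/M, then keep every Mth sample."""
    h = firwin(ntaps, 1.0 / M)        # cutoff as a fraction of the Nyquist frequency
    x_filt = lfilter(h, [1.0], x)     # anti-aliasing prefilter (approximately unit gain in the passband)
    return x_filt[::M]                # sampling rate compressor

x = np.random.randn(1000)
xd = downsample(x, M=4)
print(len(xd))                        # 250
```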

6.2 Increasing sampling rate by integer factor

With underlying continuous-time signal x_c(t), we want to obtain samples

x_i[n] = x_c(nT′)

from

x[n] = x_c(nT),

where T′ = T/L. Therefore

x_i[n] = x[n/L] = x_c(nT/L),   n = 0, ±L, ±2L, . . . .

This is referred to as upsampling (or interpolation) by a factor L, and is
performed by expanding the sampling rate, and then lowpass filtering:

[Figure: x[n] (period T) → expander (↑L) → x_e[n] (period T′ = T/L) → LPF
(gain L, cutoff ω_c = π/L) → x_i[n] (period T′ = T/L).]

The expanded signal is

x_e[n] = x[n/L] for n = 0, ±L, ±2L, . . . , and 0 otherwise
       = Σ_{k=−∞}^{∞} x[k] δ[n − kL].

An example of upsampling in the discrete-time domain is shown below:

[Figure: a sequence x[n]; the expanded sequence x_e[n] with L − 1 zeros
inserted between samples; and the interpolated sequence x_i[n].]

The Fourier transform of the expanded signal is

X_e(e^{jω}) = Σ_{n=−∞}^{∞} ( Σ_{k=−∞}^{∞} x[k] δ[n − kL] ) e^{−jωn}
            = Σ_{k=−∞}^{∞} x[k] e^{−jωLk} = X(e^{jωL}).

Final upsampling is obtained by lowpass filtering the expanded signal.

[Figure: frequency-domain illustration of expansion by L = 2, showing
X_c(jΩ), X(e^{jω}), X_e(e^{jω}) (compressed in frequency by L), the
interpolation filter H_i(e^{jω}) with gain L and cutoff π/L, and the result
X_i(e^{jω}).]

We can obtain an interpolation formula for x_i[n] in terms of x[n]: since the
LPF has impulse response

h_i[n] = sin(πn/L)/(πn/L),

we have

x_i[n] = Σ_{k=−∞}^{∞} x[k] sin(π(n − kL)/L)/(π(n − kL)/L).
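A corresponding upsampling sketch (SciPy assumed, not part of the original notes): the expander inserts L − 1 zeros between samples, and an FIR lowpass with gain L and cutoff π/L approximates the ideal h_i[n]:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def upsample(x, L, ntaps=61):
    """Expand by L (insert zeros), then lowpass filter with gain L and cutoff pi/L."""
    xe = np.zeros(len(x) * L)
    xe[::L] = x                          # expander: x_e[n] = x[n/L] at multiples of L
    h = L * firwin(ntaps, 1.0 / L)       # interpolation filter (gain L in the passband)
    return lfilter(h, [1.0], xe)

x = np.cos(2 * np.pi * 0.05 * np.arange(100))
xi = upsample(x, L=4)
print(len(xi))                           # 400
```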

6.3 Changing the sampling rate by a noninteger factor

By cascading upsampling (by a factor L) and downsampling (by a factor M), the
sampling rate can be changed by a noninteger factor.

[Figure: x[n] (period T) → expander ↑L → interpolation LPF (gain L, cutoff
ω_c = π/L) → LPF (cutoff ω_c = π/M) → compressor ↓M → x̃_d[n]
(period TM/L).]

This forms the basis of multirate signal processing, where highly efficient
structures are developed for implementing complicated signal processing
operations. The discrete wavelet transform (DWT) can be developed in this
framework.
The Discrete Fourier Transform

The discrete-time Fourier transform (DTFT) of a sequence is a continuous
function of ω, and repeats with period 2π. In practice we usually want to
obtain the Fourier components using digital computation, and can only
evaluate them for a discrete set of frequencies. The discrete Fourier transform
(DFT) provides a means for achieving this.
The DFT is itself a sequence, and it corresponds roughly to samples, equally
spaced in frequency, of the Fourier transform of the signal. The discrete
Fourier transform of a length N signal x[n], n = 0, 1, . . . , N − 1 is given by

X[k] = Σ_{n=0}^{N−1} x[n] e^{−j(2π/N)kn}.

This is the analysis equation. The corresponding synthesis equation is

x[n] = (1/N) Σ_{k=0}^{N−1} X[k] e^{j(2π/N)kn}.

When dealing with the DFT, it is common to define the complex quantity

W_N = e^{−j(2π/N)}.

With this notation the DFT analysis-synthesis pair becomes

X[k] = Σ_{n=0}^{N−1} x[n] W_N^{kn}
x[n] = (1/N) Σ_{k=0}^{N−1} X[k] W_N^{−kn}.
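The analysis equation can be implemented directly and checked against a library FFT routine, which computes the same sum. A NumPy sketch (not part of the original notes):

```python
import numpy as np

def dft(x):
    """Direct evaluation of X[k] = sum_n x[n] W_N^{kn}, with W_N = exp(-j 2 pi / N)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    W = np.exp(-2j * np.pi * k * n / N)   # N x N matrix of W_N^{kn}
    return W @ x

x = np.random.randn(16)
print(np.allclose(dft(x), np.fft.fft(x)))   # True
```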

An important property of the DFT is that it is cyclic, with period N , both in the

discrete-time and discrete-frequency domains. For example, for any integer r,

X[k + rN] = Σ_{n=0}^{N−1} x[n] W_N^{(k+rN)n} = Σ_{n=0}^{N−1} x[n] W_N^{kn} (W_N^N)^{rn}
          = Σ_{n=0}^{N−1} x[n] W_N^{kn} = X[k],

since W_N^N = e^{−j(2π/N)N} = e^{−j2π} = 1. Similarly, it is easy to show that x[n + rN] = x[n], implying periodicity of the synthesis equation. This is important — even though the DFT only depends on samples in the interval 0 to N−1, it is implicitly assumed that the signals repeat with period N in both the time and frequency domains.

To this end, it is sometimes useful to define the periodic extension of the signal x[n] to be

x̃[n] = x[n mod N] = x[((n))_N].

Here n mod N and ((n))_N are taken to mean n modulo N, which has the value of the remainder after n is divided by N. Alternatively, if n is written in the form n = kN + l for 0 ≤ l < N, then

n mod N = ((n))_N = l.

[Figure: a length-N signal x[n] and its periodic extension x̃[n].]

Similarly, the periodic extension of X[k] is defined to be

X̃[k] = X[k mod N] = X[((k))_N].

It is sometimes better to reason in terms of these periodic extensions when dealing with the DFT. Specifically, if X[k] is the DFT of x[n], then the inverse DFT of X[k], evaluated for arbitrary n from the synthesis equation, is x̃[n]. The signals x[n] and x̃[n] are identical over the interval 0 to N−1, but may differ outside of this range. Similar statements can be made regarding the transform X[k].

1 Properties of the DFT


Many of the properties of the DFT are analogous to those of the discrete-time Fourier transform, with the notable exception that all shifts involved must be considered to be circular, or modulo N.

Defining the DFT pairs x[n] ↔ X[k], x1[n] ↔ X1[k], and x2[n] ↔ X2[k], the following are properties of the DFT:

• Symmetry (for real-valued x[n]):

  X[k] = X*[((−k))_N]
  Re{X[k]} = Re{X[((−k))_N]}
  Im{X[k]} = −Im{X[((−k))_N]}
  |X[k]| = |X[((−k))_N]|
  ∠X[k] = −∠X[((−k))_N]

• Linearity: a x1[n] + b x2[n] ↔ a X1[k] + b X2[k].

• Circular time shift: x[((n − m))_N] ↔ W_N^{km} X[k].

• Circular convolution:

  Σ_{m=0}^{N−1} x1[m] x2[((n − m))_N] ↔ X1[k] X2[k].

  Circular convolution between two N-point signals is sometimes denoted by x1[n] ⊛_N x2[n].

• Modulation:

  x1[n] x2[n] ↔ (1/N) Σ_{l=0}^{N−1} X1[l] X2[((k − l))_N].

Some of these properties, such as linearity, are easy to prove. The properties
involving time shifts can be quite confusing notationally, but are otherwise
quite simple. For example, consider the 4-point DFT
X[k] = Σ_{n=0}^{3} x[n] W_4^{kn}

of the length-4 signal x[n]. This can be written as

X[k] = x[0] W_4^{0k} + x[1] W_4^{1k} + x[2] W_4^{2k} + x[3] W_4^{3k}.

The product W_4^{1k} X[k] can therefore be written as

W_4^{1k} X[k] = x[0] W_4^{1k} + x[1] W_4^{2k} + x[2] W_4^{3k} + x[3] W_4^{4k}
             = x[3] W_4^{0k} + x[0] W_4^{1k} + x[1] W_4^{2k} + x[2] W_4^{3k},

since W_4^{4k} = W_4^{0k}. This can be seen to be the DFT of the sequence x[3], x[0], x[1], x[2], which is precisely the sequence x[n] circularly shifted to the right by one sample. This proves the time-shift property for a shift of length 1. In general, multiplying the DFT of a sequence by W_N^{km} results in an N-point circular shift of the sequence by m samples. The convolution properties can be similarly demonstrated.

It is useful to note that the circularly shifted signal x[((n − m))_N] is the same as the linearly shifted signal x̃[n − m], where x̃[n] is the N-point periodic extension of x[n].
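The circular time-shift property is easy to verify numerically; the sketch below uses np.roll to perform the modulo-N shift:

    import numpy as np

    N, m = 8, 3
    x = np.random.randn(N)
    k = np.arange(N)

    lhs = np.fft.fft(np.roll(x, m))                          # DFT of x[((n - m))_N]
    rhs = np.exp(-2j * np.pi * k * m / N) * np.fft.fft(x)    # W_N^{km} X[k]
    assert np.allclose(lhs, rhs)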
[Figure: x[n], its periodic extension x̃[n], the linearly shifted x̃[n − m], and the circular shift x[((n − m))_N] over 0 to N−1.]

On the interval 0 to N−1, the circular convolution

x3[n] = x1[n] ⊛_N x2[n] = Σ_{m=0}^{N−1} x1[m] x2[((n − m))_N]

can therefore be calculated using the linear convolution product

x3[n] = Σ_{m=0}^{N−1} x1[m] x̃2[n − m].

Circular convolution is really just periodic convolution.


Example: Circular convolution with a delayed impulse sequence
Given the sequences x1[n] and x2[n] shown below, where x2[n] is an impulse delayed by two samples,

[Figure: x1[n] and the delayed impulse x2[n], both of length N.]

the circular convolution x3[n] = x1[n] ⊛_N x2[n] is the signal x̃1[n] delayed by two samples, evaluated over the range 0 to N−1:

[Figure: the resulting sequence x3[n].]

Example: Circular convolution of two rectangular pulses

Let

x1[n] = x2[n] = { 1,  0 ≤ n ≤ L−1
                { 0,  otherwise.

If N = L, then the N-point DFTs are

X1[k] = X2[k] = Σ_{n=0}^{N−1} W_N^{kn} = { N,  k = 0
                                          { 0,  otherwise.

Since the product is

X3[k] = X1[k] X2[k] = { N²,  k = 0
                      { 0,   otherwise,

it follows that the N-point circular convolution of x1[n] and x2[n] is

x3[n] = x1[n] ⊛_N x2[n] = N,  0 ≤ n ≤ N−1.

Suppose now that x1[n] and x2[n] are considered to be length-2L sequences by augmenting with zeros. The N = 2L-point circular convolution is then seen to be the same as the linear convolution of the finite-duration sequences x1[n] and x2[n]:

[Figure: the zero-padded pulses x1[n] = x2[n] and their 2L-point circular convolution, a triangular sequence identical to the linear convolution.]

2 Linear convolution using the DFT


Using the DFT we can compute the circular convolution as follows:

• Compute the N-point DFTs X1[k] and X2[k] of the two sequences x1[n] and x2[n].
• Compute the product X3[k] = X1[k] X2[k] for 0 ≤ k ≤ N−1.
• Compute the sequence x3[n] = x1[n] ⊛_N x2[n] as the inverse DFT of X3[k].

This is computationally useful due to efficient algorithms for calculating the DFT. The question that now arises is this: how do we get the linear convolution (required in speech, radar, sonar and image processing) from this procedure?

2.1 Linear convolution of two finite-length sequences

Consider a sequence x1[n] with length L points, and x2[n] with length P points. The linear convolution of the sequences,

x3[n] = Σ_{m=−∞}^{∞} x1[m] x2[n − m],

is nonzero over a maximum length of L + P − 1 points:

[Figure: x1[n] of length L, x2[n] of length P, and their linear convolution x3[n] of length L + P − 1.]

Therefore L + P − 1 is the maximum length of x3[n] resulting from the linear convolution. The N-point circular convolution of x1[n] and x2[n] is

x1[n] ⊛_N x2[n] = Σ_{m=0}^{N−1} x1[m] x2[((n − m))_N] = Σ_{m=0}^{N−1} x1[m] x̃2[n − m],  0 ≤ n ≤ N−1.

It is easy to see that the circular convolution product will be equal to the linear convolution product on the interval 0 to N−1 as long as we choose N ≥ L + P − 1. The process of augmenting a sequence with zeros to make it of a required length is called zero padding.
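A minimal sketch of this zero-padding recipe, using numpy's FFT:

    import numpy as np

    def fft_linear_convolution(x1, x2):
        L, P = len(x1), len(x2)
        N = L + P - 1                  # any N >= L + P - 1 avoids circular wrap-around
        X1 = np.fft.fft(x1, N)         # fft(x, N) zero-pads x to length N
        X2 = np.fft.fft(x2, N)
        return np.real(np.fft.ifft(X1 * X2))

    x1 = np.array([1.0, 2.0, 3.0, 4.0])
    x2 = np.array([1.0, -1.0, 1.0])
    assert np.allclose(fft_linear_convolution(x1, x2), np.convolve(x1, x2))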

2.2 Convolution by sectioning

Suppose that for computational efficiency we want to implement a FIR system


using DFTs. It cannot in general be assumed that the input signal has a finite
duration, so the methods described up to now cannot be applied directly:

[Figure: a P-point impulse response h[n] and a long input signal x[n] spanning many blocks of length L.]

The solution is to use block convolution, where the signal to be filtered is segmented into sections of length L. The input signal x[n], here assumed to be causal, can be decomposed into blocks of length L as follows:

x[n] = Σ_{r=0}^{∞} x_r[n − rL],

where

x_r[n] = { x[n + rL],  0 ≤ n ≤ L−1
         { 0,          otherwise.

[Figure: the successive length-L blocks x_0[n], x_1[n], x_2[n] extracted from x[n].]

The convolution product can therefore be written as

y[n] = x[n] ∗ h[n] = Σ_{r=0}^{∞} y_r[n − rL],

where y_r[n] is the response

y_r[n] = x_r[n] ∗ h[n].

[Figure: the block responses y_0[n], y_1[n], y_2[n], each of length L + P − 1, overlapping their neighbours by P − 1 samples.]

Since the sequences x_r[n] have only L nonzero points and h[n] is of length P, each response term y_r[n] has length L + P − 1. Thus linear convolution can be obtained using N-point DFTs with N ≥ L + P − 1. Since the final result is obtained by summing the overlapping output regions, this is called the overlap-add method.

[Figure: the output y[n] formed by adding the overlapping block responses.]

An alternative block convolution procedure, called the overlap-save method, corresponds to implementing an L-point circular convolution of a P-point impulse response h[n] with an L-point segment x_r[n]. The portion of the output that corresponds to linear convolution is then identified (consisting of L − (P − 1) points), and the resulting segments patched together to form the output.
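A sketch of the overlap-add procedure (the block length here is an arbitrary choice; scipy.signal provides oaconvolve as a production implementation):

    import numpy as np

    def overlap_add(x, h, L=64):
        P = len(h)
        N = L + P - 1                          # DFT length, N >= L + P - 1
        H = np.fft.fft(h, N)
        y = np.zeros(len(x) + P - 1)
        for r in range(0, len(x), L):
            xr = x[r:r + L]                    # r-th block (the last one may be shorter)
            yr = np.real(np.fft.ifft(np.fft.fft(xr, N) * H))
            end = min(r + N, len(y))
            y[r:end] += yr[:end - r]           # overlapping output regions are added
        return y

    x = np.random.randn(1000)
    h = np.random.randn(31)
    assert np.allclose(overlap_add(x, h), np.convolve(x, h))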

3 Spectrum estimation using the DFT


Spectrum estimation is the task of estimating the DTFT of a signal xŒn. The
DTFT of a discrete-time signal xŒn is
X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn}.

The signal xŒn is generally of infinite duration, and X.e j! / is a continuous


function of !. The DTFT can therefore not be calculated using a computer.
Consider now that we truncate the signal xŒn by multiplying with the
rectangular window wr Œn:
[Figure: the length-N rectangular window w_r[n].]

The windowed signal is then x_w[n] = x[n] w_r[n]. The DTFT of this windowed signal is given by

X_w(e^{jω}) = Σ_{n=−∞}^{∞} x_w[n] e^{−jωn} = Σ_{n=0}^{N−1} x_w[n] e^{−jωn}.

Noting that the DFT of x_w[n] is

X_w[k] = Σ_{n=0}^{N−1} x_w[n] e^{−j2πkn/N},

it is evident that

X_w[k] = X_w(e^{jω})|_{ω = 2πk/N}.

The values of the DFT X_w[k] of the signal x_w[n] are therefore periodic samples of the DTFT X_w(e^{jω}), where the spacing between the samples is 2π/N. Since the relationship between the discrete-time frequency variable and the continuous-time frequency variable is ω = ΩT, the DFT frequencies correspond to continuous-time frequencies

Ω_k = 2πk/(NT).

The DFT can therefore only be used to find points on the DTFT of the windowed signal x_w[n] of x[n].

The operation of windowing involves multiplication in the discrete-time domain, which corresponds to periodic convolution in the DTFT frequency domain. The DTFT of the windowed signal is therefore

X_w(e^{jω}) = (1/2π) ∫_{−π}^{π} X(e^{jθ}) W(e^{j(ω−θ)}) dθ,

where W(e^{jω}) is the frequency response of the window function. For a simple rectangular window, the frequency response is as follows:

[Figure: the rectangular window w_r[n] and the magnitude of its frequency response |W_r(e^{jω})|.]
The DFT therefore effectively samples the DTFT of the signal convolved with
the frequency response of the window.

Example: Spectrum analysis of sinusoidal signals

Suppose we have the sinusoidal signal combination

x[n] = cos(πn/3) + 0.75 cos(2πn/3),  −∞ < n < ∞.

Since the signal is infinite in duration, the DTFT cannot be computed


numerically. We therefore window the signal in order to make the duration
finite:

[Figure: eight samples of x[n], the rectangular window w_r[n], and the windowed signal x_w[n] = x[n] w_r[n].]

The operation of windowing modifies the signal. This is reflected in the


discrete-time Fourier transform domain by a spreading of the frequency
components:

[Figure: |X(e^{jω})|, |W_r(e^{jω})|, and |X_w(e^{jω})|, showing how windowing spreads the two spectral lines at ω = π/3 and 2π/3.]

The operation of windowing therefore limits the ability of the Fourier transform to resolve closely-spaced frequency components. When the DFT is used for spectrum estimation, it effectively samples the spectrum of this modified signal at the locations indicated by the crosses:

[Figure: the 8-point DFT magnitude |X[k]|, k = 0, ..., 7.]

Note that since k = 0 corresponds to ω = 0, there is a corresponding shift in the sampled values.
In general, the elements of the N-point DFT of x_w[n] contain N evenly-spaced samples from the DTFT X_w(e^{jω}). These samples span an entire period of the DTFT, and therefore correspond to frequencies at spacings of 2π/N. We can obtain samples with a closer spacing by performing more computation.

Suppose we form the zero-padded length-M signal x_M[n] as follows:

x_M[n] = { x_w[n],  0 ≤ n ≤ N−1
         { 0,       N ≤ n ≤ M−1.

The M-point DFT of this signal is

X_M[k] = Σ_{n=0}^{M−1} x_M[n] e^{−j(2π/M)kn} = Σ_{n=0}^{N−1} x_w[n] e^{−j(2π/M)kn}
       = Σ_{n=−∞}^{∞} x_w[n] e^{−j(2π/M)kn}.

The sample X_M[k] can therefore be seen to correspond to the DTFT of the windowed signal x_w[n] at frequency ω_k = 2πk/M. Since M is chosen to be larger than N, the transform values correspond to regular samples of X_w(e^{jω}) with a closer spacing of 2π/M. The following figure shows the magnitude of the DFT transform values for the 8-point signal shown previously, but zero-padded to use a 32-point DFT:

[Figure: the 32-point zero-padded signal x_M[n] and the magnitude of its 32-point DFT |X[k]|.]

Note that this process increases the density of the samples, but has no effect on
the resolution of the spectrum.
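This is easily demonstrated numerically: zero padding an 8-point signal to 32 points simply interpolates additional samples of the same windowed DTFT (the signal below mirrors the example above):

    import numpy as np

    n = np.arange(8)
    xw = np.cos(np.pi * n / 3) + 0.75 * np.cos(2 * np.pi * n / 3)   # rectangular-windowed signal

    X8 = np.fft.fft(xw)                # 8 DTFT samples, spacing 2*pi/8
    X32 = np.fft.fft(xw, 32)           # 32 samples of the same DTFT, spacing 2*pi/32
    assert np.allclose(X32[::4], X8)   # the original samples are embedded in the denser grid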
If W(e^{jω}) is sharply peaked, and approximates a Dirac delta function at the origin, then X_w(e^{jω}) ≈ X(e^{jω}). The values of the DFT then correspond quite accurately to samples of the DTFT of x[n]. For a rectangular window, the approximation improves as N increases:

[Figure: a length-32 rectangular window w_r[n], its spectrum magnitude |W_r(e^{jω})|, and the resulting |X_w(e^{jω})| for the two-sinusoid signal.]
The magnitude of the DFT of the windowed signal is

[Figure: |X[k]| for the 32-point windowed signal.]

which is clearly easier to interpret than for the case of the shorter signal. As the window length tends to infinity, the relationship becomes exact.
The rectangular window inherent in the DFT has the disadvantage that the peak
sidelobe of Wr .e j! / is high relative to the mainlobe. This limits the ability of
the DFT to resolve frequencies. Alternative windows may be used which have
preferred behaviour — the only requirement is that in the time domain the

window function is of finite duration. For example, the triangular window

[Figure: a length-32 triangular window w[n], its spectrum magnitude |W(e^{jω})|, and the resulting |X_w(e^{jω})|.]

leads to DFT samples with magnitude

[Figure: |X[k]| for the triangular-windowed signal.]

Here the sidelobes have been reduced at the cost of diminished resolution —
the mainlobe has become wider.
The method just described forms the basis for the periodogram spectrum
estimate. It is often used in practice on account of its perceived simplicity.
However, it has poor statistical properties — model-based spectrum
estimates generally have higher resolution and more predictable performance.
Finally, note that the discrete samples of the spectrum are only a complete

17
representation if the sampling criterion is met. The samples therefore have to
be sufficiently closely spaced.

4 Fast Fourier transforms

The widespread application of the DFT to convolution and spectrum analysis


is due to the existence of fast algorithms for its implementation. The class of
methods are referred to as fast Fourier transforms (FFTs).
Consider a direct implementation of an 8-point DFT:
X[k] = Σ_{n=0}^{7} x[n] W_8^{kn},  k = 0, ..., 7.

If the factors W_8^{kn} have been calculated in advance (and perhaps stored in a lookup table), then the calculation of X[k] for each value of k requires 8 complex multiplications and 7 complex additions. The 8-point DFT therefore requires 8 × 8 multiplications and 8 × 7 additions. For an N-point DFT these become N² and N(N − 1) respectively. If N = 1024, then approximately one million complex multiplications and one million complex additions are required.

The key to reducing the computational complexity lies in the observation that the same values of x[n] W_8^{kn} are effectively calculated many times as the computation proceeds — particularly if the transform is long.

The conventional decomposition involves decimation-in-time, where at each stage an N-point transform is decomposed into two N/2-point transforms. That is, X[k] can be written as

X[k] = Σ_{r=0}^{N/2−1} x[2r] W_N^{2rk} + Σ_{r=0}^{N/2−1} x[2r+1] W_N^{(2r+1)k}
     = Σ_{r=0}^{N/2−1} x[2r] (W_N²)^{rk} + W_N^k Σ_{r=0}^{N/2−1} x[2r+1] (W_N²)^{rk}.

Noting that W_N² = W_{N/2}, this becomes

X[k] = Σ_{r=0}^{N/2−1} x[2r] (W_{N/2})^{rk} + W_N^k Σ_{r=0}^{N/2−1} x[2r+1] (W_{N/2})^{rk}
     = G[k] + W_N^k H[k].

The original N-point DFT can therefore be expressed in terms of two N/2-point DFTs.

The N/2-point transforms can again be decomposed, and the process repeated until only 2-point transforms remain. In general this requires log₂N stages of decomposition. Since each stage requires approximately N complex multiplications, the complexity of the resulting algorithm is of the order of N log₂N.

The difference between N² and N log₂N complex multiplications can become considerable for large values of N. For example, if N = 2048 then N²/(N log₂N) ≈ 200.
There are numerous variations of FFT algorithms, and all exploit the basic
redundancy in the computation of the DFT. In almost all cases an off-the-shelf
implementation of the FFT will be sufficient — there is seldom any reason to
implement a FFT yourself.
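For reference, the decimation-in-time idea maps directly onto a short recursive routine (a teaching sketch only — the length must be a power of two, and a library FFT should always be preferred):

    import numpy as np

    def fft_dit(x):
        N = len(x)
        if N == 1:
            return np.asarray(x, dtype=complex)
        G = fft_dit(x[0::2])                        # N/2-point DFT of even-indexed samples
        H = fft_dit(x[1::2])                        # N/2-point DFT of odd-indexed samples
        W = np.exp(-2j * np.pi * np.arange(N // 2) / N)
        return np.concatenate([G + W * H, G - W * H])

    x = np.random.randn(1024)
    assert np.allclose(fft_dit(x), np.fft.fft(x))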

19
Transform analysis of LTI systems
Oppenheim and Schafer, Second edition pp. 240–339.
For LTI systems we can write

y[n] = x[n] ∗ h[n] = Σ_{k=−∞}^{∞} x[k] h[n − k].

Alternatively, this relationship can be expressed in the z-transform domain as

Y(z) = H(z)X(z),

where H(z) is the system function, or the z-transform of the system impulse response.

Recall that a LTI system is completely characterised by its impulse response, or equivalently, by its system function.

1 Frequency response of LTI systems

The frequency response H(e^{jω}) of a system is defined as the gain that the system applies to the complex exponential input e^{jωn}. The Fourier transforms of the system input and output are therefore related by

Y(e^{jω}) = H(e^{jω})X(e^{jω}).

In terms of magnitude and phase,

|Y(e^{jω})| = |H(e^{jω})| |X(e^{jω})|
∠Y(e^{jω}) = ∠H(e^{jω}) + ∠X(e^{jω}).

In this case |H(e^{jω})| is referred to as the magnitude response or gain of the system, and ∠H(e^{jω}) is the phase response or phase shift.

1
1.1 Ideal frequency-selective filters

Frequency components of the input are suppressed in the output if |H(e^{jω})| is small at those frequencies. The ideal lowpass filter is defined as the LTI system with frequency response

H_lp(e^{jω}) = { 1,  |ω| ≤ ω_c
              { 0,  ω_c < |ω| ≤ π.

[Figure: the magnitude response (unity for |ω| ≤ ω_c, zero elsewhere) and zero phase response of the ideal lowpass filter.]

This response, as for all discrete-time frequency responses, is periodic with period 2π. Its impulse response (for −∞ < n < ∞) is

h_lp[n] = (1/2π) ∫_{−ω_c}^{ω_c} e^{jωn} dω = (1/2π) [ e^{jωn}/(jn) ]_{−ω_c}^{ω_c}
        = (1/(πn)) (e^{jω_c n} − e^{−jω_c n})/(2j) = sin(ω_c n)/(πn),

which for ω_c = π/4 is

[Figure: the sinc-shaped impulse response h_lp[n] for ω_c = π/4.]

The ideal lowpass filter is noncausal, and its impulse response extends over −∞ < n < ∞. The system is therefore not computationally realisable. Also, the phase response of the ideal lowpass filter is specified to be zero — this is a problem in that causal ideal filters have nonzero phase responses.

The ideal highpass filter is

H_hp(e^{jω}) = { 0,  |ω| ≤ ω_c
              { 1,  ω_c < |ω| ≤ π.

Since H_hp(e^{jω}) = 1 − H_lp(e^{jω}), its impulse response is

h_hp[n] = δ[n] − h_lp[n] = δ[n] − sin(ω_c n)/(πn).

1.2 Phase distortion and delay

Consider the ideal delay, with impulse response

h_id[n] = δ[n − n_d]

and frequency response

H_id(e^{jω}) = e^{−jωn_d}.

The magnitude and phase of this response are

|H_id(e^{jω})| = 1
∠H_id(e^{jω}) = −ωn_d,  |ω| < π.

The phase distortion of the ideal delay is therefore a linear function of ω. This is considered to be a rather mild (and therefore acceptable) form of phase distortion, since the only effect is to shift the sequence in time. In other words, a filter with linear phase response can be viewed as a cascade of a zero-phase filter, followed by a time shift or delay.

In designing approximations to ideal filters, we are therefore frequently willing to accept linear phase distortion. The ideal lowpass filter with phase distortion would be defined as

H_lp(e^{jω}) = { e^{−jωn_d},  |ω| ≤ ω_c
              { 0,            ω_c < |ω| ≤ π,

with impulse response

h_lp[n] = sin(ω_c(n − n_d)) / (π(n − n_d)).

A convenient measure of the linearity of the phase is the group delay, which relates to the effect of the phase on a narrowband signal. Consider the narrowband input x[n] = s[n] cos(ω_0 n), where s[n] is the envelope of the signal. Since X(e^{jω}) is nonzero only around ω = ω_0, the effect of the phase of the system can be approximated around ω = ω_0 by

∠H(e^{jω}) ≈ −φ_0 − ωn_d.

Thus the response of the system to x[n] = s[n] cos(ω_0 n) is approximately y[n] = s[n − n_d] cos(ω_0 n − φ_0 − ω_0 n_d). The time delay of the envelope s[n] of the narrowband signal x[n] with Fourier transform centred at ω_0 is therefore given by the negative of the slope of the phase at ω_0. The group delay of a system is therefore defined as

τ(ω) = grd[H(e^{jω})] = −(d/dω){ arg[H(e^{jω})] }.

The deviation of the group delay away from a constant indicates the degree of nonlinearity of the phase. Note that the phase here must be considered as a continuous function of ω.
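Group delay is easy to evaluate numerically; the sketch below (an arbitrary symmetric FIR filter) uses scipy.signal.group_delay and confirms the constant delay of (N − 1)/2 samples expected of a linear-phase filter:

    import numpy as np
    from scipy import signal

    h = np.array([1.0, 2.0, 5.0, 2.0, 1.0])     # symmetric: h[n] = h[N-1-n], N = 5
    w, gd = signal.group_delay((h, 1.0))
    assert np.allclose(gd, (len(h) - 1) / 2)    # constant group delay of 2 samples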

2 System response for LCCD systems


Ideal filters cannot be implemented with finite computation. Therefore we
need approximations to ideal filters. Systems described by LCCD equations
Σ_{k=0}^{N} a_k y[n − k] = Σ_{k=0}^{M} b_k x[n − k]

are useful for providing one class of approximation.

The properties of this class of system are best developed in the z-transform domain. The z-transform of the equation is

Σ_{k=0}^{N} a_k z^{−k} Y(z) = Σ_{k=0}^{M} b_k z^{−k} X(z),

or equivalently

( Σ_{k=0}^{N} a_k z^{−k} ) Y(z) = ( Σ_{k=0}^{M} b_k z^{−k} ) X(z).

The system function for a system that satisfies a difference equation of the required form is therefore

H(z) = Y(z)/X(z) = ( Σ_{k=0}^{M} b_k z^{−k} ) / ( Σ_{k=0}^{N} a_k z^{−k} )
     = (b_0/a_0) Π_{k=1}^{M} (1 − c_k z^{−1}) / Π_{k=1}^{N} (1 − d_k z^{−1}).

Each factor (1 − c_k z^{−1}) in the numerator contributes a zero at z = c_k and a pole at z = 0. Each factor (1 − d_k z^{−1}) in the denominator contributes a zero at z = 0 and a pole at z = d_k.
The difference equation and the algebraic expression for the system function
are equivalent, as demonstrated by the next example.
Example: second-order system
Given the system function

H(z) = (1 + z^{−1})² / [ (1 − (1/2)z^{−1})(1 + (3/4)z^{−1}) ],

we can find the corresponding difference equation by noting that

H(z) = (1 + 2z^{−1} + z^{−2}) / (1 + (1/4)z^{−1} − (3/8)z^{−2}) = Y(z)/X(z).

Therefore

(1 + (1/4)z^{−1} − (3/8)z^{−2}) Y(z) = (1 + 2z^{−1} + z^{−2}) X(z),

and the difference equation is

y[n] + (1/4)y[n−1] − (3/8)y[n−2] = x[n] + 2x[n−1] + x[n−2].
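The equivalence of the factored and expanded forms is easily checked numerically, for example by comparing frequency responses with scipy.signal.freqz:

    import numpy as np
    from scipy import signal

    # Factored form: numerator (1 + z^-1)^2, denominator (1 - 0.5 z^-1)(1 + 0.75 z^-1)
    b_fact = np.convolve([1, 1], [1, 1])          # [1, 2, 1]
    a_fact = np.convolve([1, -0.5], [1, 0.75])    # [1, 0.25, -0.375]

    # Coefficients read directly from the difference equation
    b, a = [1.0, 2.0, 1.0], [1.0, 0.25, -0.375]

    w, H1 = signal.freqz(b_fact, a_fact)
    _, H2 = signal.freqz(b, a)
    assert np.allclose(H1, H2)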

2.1 Stability and causality

A difference equation does not uniquely specify the impulse response of a LTI
system. For a given system function (expressed as a ratio of polynomials),
each possible choice of ROC will lead to a different impulse response.
However, they will all correspond to the same difference equation.
If a system is causal, it follows that the impulse response is a right-sided
sequence, and the region of convergence of H.z/ must be outside of the
outermost pole.

6
Alternatively, if we require that the system be stable, then we must have

Σ_{n=−∞}^{∞} |h[n]| < ∞.

For |z| = 1 this is identical to the condition

Σ_{n=−∞}^{∞} |h[n] z^{−n}| < ∞,

so the condition for stability is equivalent to the condition that the ROC of H(z) include the unit circle.

Example: determining the ROC

The system function of the LTI system with difference equation

y[n] − (5/2)y[n−1] + y[n−2] = x[n]

is

H(z) = 1 / (1 − (5/2)z^{−1} + z^{−2}) = 1 / [ (1 − (1/2)z^{−1})(1 − 2z^{−1}) ].

There are three choices for the ROC:

• Causal: ROC outside of the outermost pole, |z| > 2 (but then not stable).
• Stable: ROC such that 1/2 < |z| < 2 (but then not causal).
• If |z| < 1/2 then the system is neither causal nor stable.

For a causal and stable system the ROC must be outside the outermost pole and include the unit circle. This is only possible if all the poles are inside the unit circle.

2.2 Inverse systems

The system H_i(z) is the inverse system to H(z) if

G(z) = H(z)H_i(z) = 1,

which implies that

H_i(z) = 1/H(z).

The time-domain equivalent is

g[n] = h[n] ∗ h_i[n] = δ[n].

The question of which ROC to associate with H_i(z) is answered by the convolution theorem — for the previous equation to hold, the regions of convergence of H(z) and H_i(z) must overlap.

Example: inverse system for first-order system

Let H(z) be

H(z) = (1 − 0.5z^{−1}) / (1 − 0.9z^{−1})

with ROC |z| > 0.9. Then H_i(z) is

H_i(z) = (1 − 0.9z^{−1}) / (1 − 0.5z^{−1}).

Since there is only one pole, there are only two possible ROCs. The choice of ROC for H_i(z) that overlaps with |z| > 0.9 is |z| > 0.5. Therefore, the impulse response of the inverse system is

h_i[n] = (0.5)^n u[n] − 0.9 (0.5)^{n−1} u[n−1].

In this case the inverse is both causal and stable.

A LTI system is stable and causal with a stable and causal inverse if and only if both the poles and zeros of H(z) are inside the unit circle — such systems are called minimum phase systems.

The frequency response of the inverse system, if it exists, is

H_i(e^{jω}) = 1/H(e^{jω}).

Not all systems have an inverse. For example, there is no way to recover the frequency components above the cutoff frequency that were set to zero by the action of a lowpass filter.

2.3 Impulse response for rational system functions

If a system has a rational transfer function, with at least one pole that is not
cancelled by a zero, then there will always be a term corresponding to an
infinite length sequence in the impulse response. Such systems are called
infinite impulse response (IIR) systems.
On the other hand, if a system has no poles except at z = 0 (that is, N = 0 in the canonical LCCDE expression), then

H(z) = Σ_{k=0}^{M} b_k z^{−k}.

In this case the system is determined to within a constant multiplier by its zeros, so the impulse response has a finite length:

h[n] = Σ_{k=0}^{M} b_k δ[n − k] = { b_n,  0 ≤ n ≤ M
                                  { 0,    otherwise.

In this case the impulse response is finite in length, and the system is called a finite impulse response (FIR) system.

Example: a first-order IIR system

Given a causal system satisfying the difference equation

y[n] − a y[n−1] = x[n],

the system function is

H(z) = 1/(1 − az^{−1}) = z/(z − a),  |z| > |a|.

The condition for stability is |a| < 1. The inverse z-transform is

h[n] = a^n u[n].

Example: a simple FIR system


Consider the truncated impulse response

h[n] = { a^n,  0 ≤ n ≤ M
       { 0,    otherwise.

The system function is

H(z) = Σ_{n=0}^{M} a^n z^{−n} = (1 − a^{M+1} z^{−M−1}) / (1 − az^{−1}).

The zeros of the numerator are at

z_k = a e^{j2πk/(M+1)},  k = 0, 1, ..., M.

With a assumed real and positive, the pole at z = a is cancelled by a zero. The pole-zero plot for the case of M = 7 is therefore given by

[Figure: pole-zero plot for M = 7, showing a 7th-order pole at the origin and the remaining zeros equally spaced on a circle of radius a.]

The difference equation satisfied by the input and output of the LTI system is the convolution

y[n] = Σ_{k=0}^{M} a^k x[n − k].

The input and output also satisfy the difference equation

y[n] − a y[n−1] = x[n] − a^{M+1} x[n − M − 1].

3 Frequency response for rational system


functions

If a stable LTI system has a rational system function, then its frequency response has the form

H(e^{jω}) = ( Σ_{k=0}^{M} b_k e^{−jωk} ) / ( Σ_{k=0}^{N} a_k e^{−jωk} ).

We want to know the magnitude and phase associated with the frequency response. To this end, it is useful to express H(e^{jω}) in terms of the poles and zeros of H(z):

H(e^{jω}) = (b_0/a_0) Π_{k=1}^{M} (1 − c_k e^{−jω}) / Π_{k=1}^{N} (1 − d_k e^{−jω}).

It follows that

|H(e^{jω})| = |b_0/a_0| Π_{k=1}^{M} |1 − c_k e^{−jω}| / Π_{k=1}^{N} |1 − d_k e^{−jω}|.

Therefore |H(e^{jω})| is the product of the magnitudes of all the zero factors of H(z) evaluated on the unit circle, divided by the product of the magnitudes of all the pole factors evaluated on the unit circle.

The gain in dB of H(e^{jω}), also called the log magnitude, is given by

Gain in dB = 20 log₁₀ |H(e^{jω})|,

which for a rational system function takes the form

20 log₁₀ |H(e^{jω})| = 20 log₁₀ |b_0/a_0| + Σ_{k=1}^{M} 20 log₁₀ |1 − c_k e^{−jω}| − Σ_{k=1}^{N} 20 log₁₀ |1 − d_k e^{−jω}|.

Also

Attenuation in dB = −(Gain in dB).

Thus a 60 dB attenuation at frequency ω corresponds to |H(e^{jω})| = 0.001. Also,

20 log₁₀ |Y(e^{jω})| = 20 log₁₀ |H(e^{jω})| + 20 log₁₀ |X(e^{jω})|.

The phase response for a rational system function is

∠H(e^{jω}) = ∠(b_0/a_0) + Σ_{k=1}^{M} ∠[1 − c_k e^{−jω}] − Σ_{k=1}^{N} ∠[1 − d_k e^{−jω}].

The zero factors contribute with a plus sign and the pole factors with a minus.

In the above equation, the phase of each term is ambiguous, since any integer multiple of 2π can be added at each value of ω without changing the value of the complex number. When calculating the phase with a computer, the angle returned will generally be the principal value ARG[H(e^{jω})], which lies in the range −π to π. This phase will generally be a discontinuous function, containing jumps of 2π radians whenever the phase wraps. Appropriate multiples of 2π can be added or subtracted, if required, to yield the continuous phase function arg[H(e^{jω})].
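In Python these quantities are typically obtained with scipy.signal.freqz together with np.unwrap (the coefficients below are just the earlier second-order example):

    import numpy as np
    from scipy import signal

    b = [1.0, 2.0, 1.0]                    # numerator coefficients b_k
    a = [1.0, 0.25, -0.375]                # denominator coefficients a_k
    w, H = signal.freqz(b, a)

    gain_db = 20 * np.log10(np.abs(H))     # log magnitude in dB
    arg_principal = np.angle(H)            # ARG[H(e^jw)], in (-pi, pi]
    arg_continuous = np.unwrap(arg_principal)   # arg[H(e^jw)], continuous phase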

[Figure: the principal-value phase ARG[H(e^{jω})], showing jumps of 2π, and the unwrapped continuous phase arg[H(e^{jω})].]
3.1 Frequency response of a single pole or zero

Consider a single zero factor of the form

(1 − re^{jθ} e^{−jω})

in the frequency response. The magnitude squared of this factor is

|1 − re^{jθ}e^{−jω}|² = (1 − re^{jθ}e^{−jω})(1 − re^{−jθ}e^{jω}) = 1 + r² − 2r cos(ω − θ),

so the log magnitude in dB is

20 log₁₀ |1 − re^{jθ}e^{−jω}| = 10 log₁₀ [1 + r² − 2r cos(ω − θ)].

The principal value of the phase for the factor is

ARG[1 − re^{jθ}e^{−jω}] = arctan[ r sin(ω − θ) / (1 − r cos(ω − θ)) ].

These functions are periodic in ω with period 2π.

The following plot shows the frequency response for r = 0.9 and three different values of θ:

[Figure: gain (dB) and phase (radians) of the single zero factor for r = 0.9 and θ = 0, π/2, π.]

Note that

• The gain dips at ω = θ. As θ changes, the frequency at which the dip occurs changes.
• The gain is maximised for ω − θ = π, and for r = 0.9 the magnitude of the resulting gain is 10 log₁₀(1 + r² + 2r) = 20 log₁₀(1 + r) = 5.57 dB.
• The gain is minimised for ω = θ, and for r = 0.9 the resulting gain is 10 log₁₀(1 + r² − 2r) = 20 log₁₀ |1 − r| = −20 dB.
• The phase is zero at ω = θ.

Note that if the factor (1 − re^{jθ}e^{−jω}) occurs in the denominator, thereby representing a pole factor, then the entire analysis holds with the exception that the signs of the log magnitude and the phase change.

14
The frequency response can be sketched from the pole-zero plot using a simple
geometric construction. Note firstly that the frequency response corresponds to
an evaluation of H.z/ on the unit circle. Secondly, the complex value of each
pole and zero can be represented by a vector in the z-plane from the pole or
zero to a point on the unit circle.
Take for example the case of a single zero factor
H(z) = 1 − re^{jθ} z^{−1} = (z − re^{jθ})/z,  r < 1,

which corresponds to a pole at z = 0 and a zero at z = re^{jθ}.

[Figure: z-plane showing the vectors v1 (from the origin to e^{jω}), v2 (from the origin to the zero re^{jθ}), and v3 = v1 − v2 (from the zero to the point e^{jω} on the unit circle).]

If the vectors v1, v2, and v3 = v1 − v2 represent respectively the complex numbers e^{jω}, re^{jθ}, and e^{jω} − re^{jθ}, then

|H(e^{jω})| = |1 − re^{jθ}e^{−jω}| = |(e^{jω} − re^{jθ})/e^{jω}| = |v3|/|v1| = |v3|.

The phase is

∠H(e^{jω}) = ∠(1 − re^{jθ}e^{−jω}) = ∠(e^{jω} − re^{jθ}) − ∠(e^{jω})
           = ∠(v3) − ∠(v1) = φ3 − ω.

A vector such as v3 from a zero to the unit circle is referred to as a zero vector, and a vector from a pole to the unit circle is called a pole vector. Consider now the pole-zero system depicted below:

[Figure: z-plane diagram of the pole-zero system, showing the pole and zero vectors v1, v2, v3 and the angles involved.]
The frequency response for the single zero at different values of r and θ = π is

[Figure: gain (dB) and phase (radians) of the single zero factor for θ = π and r = 0.5, 0.7, 0.9, 1.]

Note that the log magnitude dips more sharply as r approaches 1 (and at ω = π tends to −∞ as r tends towards 1). The phase function has positive slope around ω = π. This slope increases as r approaches 1, and becomes infinite for r = 1. In this case the phase function is discontinuous, with a jump of π radians at ω = π.

If r increases still further, to lie outside of the unit circle,

[Figure: z-plane with the zero moved outside the unit circle.]

then the frequency response becomes

[Figure: gain (dB) and phase (radians) of the single zero factor for θ = π and r = 1/0.9, 1.25, 2.0.]

17
3.2 Frequency response with multiple poles and zeros

In general, the z-transform of a LTI system can be factorised as

H(z) = (b_0/a_0) Π_{k=1}^{M} (1 − c_k z^{−1}) / Π_{k=1}^{N} (1 − d_k z^{−1})
     = z^{N−M} (b_0/a_0) Π_{k=1}^{M} (z − c_k) / Π_{k=1}^{N} (z − d_k).

Depending on whether N is greater than or less than M, the factor z^{N−M} represents either N − M zeros at the origin, or M − N poles at the origin. In either case, the z-transform can be written in the form

H(z) = K Π_{k=1}^{M_0} (z − z_k) / Π_{k=1}^{N_0} (z − p_k),

where z_1, ..., z_{M_0} are the zeros, and p_1, ..., p_{N_0} the poles, of H(z). This representation could also be obtained by merely factorising H(z) in terms of z rather than z^{−1}.

The frequency response of this system is

H(e^{jω}) = K Π_{k=1}^{M_0} (e^{jω} − z_k) / Π_{k=1}^{N_0} (e^{jω} − p_k).

The magnitude is therefore

|H(e^{jω})| = |K| Π_{k=1}^{M_0} |e^{jω} − z_k| / Π_{k=1}^{N_0} |e^{jω} − p_k|,

and the phase is

∠H(e^{jω}) = Σ_{k=1}^{M_0} ∠(e^{jω} − z_k) − Σ_{k=1}^{N_0} ∠(e^{jω} − p_k).

In the z-plane, (e^{jω} − z_k) is simply the vector from the zero z_k to the point e^{jω} on the unit circle. The term |e^{jω} − z_k| is the length of this vector, and ∠(e^{jω} − z_k) is the angle that it makes with the positive real axis. Similarly, the term (e^{jω} − p_k) corresponds to the vector from the pole p_k to the point e^{jω} on the unit circle.

It follows then that the magnitude response is the product of the lengths of the zero vectors, divided by the product of the lengths of the pole vectors. The phase response is the sum of the angles of the zero vectors, minus the sum of the angles of the pole vectors. Thus, for the two-pole, two-zero system

[Figure: z-plane with zeros z1, z2 and poles p1, p2; the zero vectors have lengths U1, U2 and angles θ1, θ2, and the pole vectors have lengths V1, V2 and angles φ1, φ2.]

the frequency response is

|H(e^{jω})| = |K| (U1 U2)/(V1 V2)
∠H(e^{jω}) = θ1 + θ2 − (φ1 + φ2).

Here K is a constant factor which cannot be determined from the pole-zero diagram alone, but only serves to scale the magnitude.

4 Realisation structures for digital filters


The difference equation, the impulse response, and the system function are all
equivalent characterisations of the input-output relation for a LTI discrete-time
system. For implementation purposes, systems described by LCCDEs can be

19
implemented by structures consisting of an interconnection of the basic
operations of addition, multiplication by a constant, and delay. The desired
form for the interconnections depends on the technology to be used.
Discrete-time filters are often represented in the form of block or signal flow
diagrams, which are convenient for representing the difference equations or
transfer functions.
For example, the system with the difference equation

y[n] = x[n−1] − b1 y[n−1] + b2 y[n−2] + b3 y[n−3]

can be represented in block diagram form as

[Figure: block diagram with a unit delay z^{−1} producing x[n−1], and a feedback chain of unit delays producing y[n−1], y[n−2], y[n−3], each scaled by the corresponding coefficient and summed into y[n].]

The symbol z^{−1} represents a delay of one unit of time, and the arrows represent multipliers (with the constant multiplication factors next to them). The equivalent signal flow diagram is

[Figure: the equivalent signal flow graph of the same difference equation.]
The relationship between the diagrams and the difference equation is clear.
Many alternative filter structures can be developed, and they differ mainly with
respect to their numerical stability and the effects of quantisation on their
performance. A discussion of these effects can be found in many DSP texts.

21
Filter design

1 Design considerations: a framework


[Figure: lowpass filter tolerance scheme |H(f)|: passband ripple between 1 − δp and 1 + δp up to the passband edge fp, a transition band, and stopband ripple below δs beyond the stopband edge fs.]
The design of a digital filter involves five steps:
 Specification: The characteristics of the filter often have to be specified in
the frequency domain. For example, for frequency selective filters
(lowpass, highpass, bandpass, etc.) the specification usually involves
tolerance limits as shown above.
 Coefficient calculation: Approximation methods have to be used to
calculate the values hŒk for a FIR implementation, or ak , bk for an IIR
implementation. Equivalently, this involves finding a filter which has
H.z/ satisfying the requirements.
 Realisation: This involves converting H.z/ into a suitable filter structure.
Block or flow diagrams are often used to depict filter structures, and show
the computational procedure for implementing the digital filter.

1
 Analysis of finite wordlength effects: In practice one should check that
the quantisation used in the implementation does not degrade the
performance of the filter to a point where it is unusable.
 Implementation: The filter is implemented in software or hardware. The
criteria for selecting the implementation method involve issues such as
real-time performance, complexity, processing requirements, and
availability of equipment.

2 Finite impulse response (FIR) filter design


A FIR filter is characterised by the equations

y[n] = Σ_{k=0}^{N−1} h[k] x[n − k]

H(z) = Σ_{k=0}^{N−1} h[k] z^{−k}.

The following are useful properties of FIR filters:


 They are always stable — the system function contains no poles. This is
particularly useful for adaptive filters.
 They can have an exactly linear phase response. The result is no frequency
dispersion, which is good for pulse and data transmission.
 Finite length register effects are simpler to analyse and of less
consequence than for IIR filters.
 They are very simple to implement, and all DSP processors have
architectures that are suited to FIR filtering.
 For large N (many filter taps), the FFT can be used to improve
performance.

2
Of these, the linear phase property is probably the most important. A filter is
said to have a generalised linear phase response if its frequency response can
be expressed in the form

H(e^{jω}) = A(e^{jω}) e^{−jαω + jβ},

where α and β are constants, and A(e^{jω}) is a real function of ω. If this is the case, then

• If A is positive, then the phase is

  ∠H(e^{jω}) = β − αω.

• If A is negative, then

  ∠H(e^{jω}) = π + β − αω.

In either case, the phase is a linear function of ω.

It is common to restrict the filter to having a real-valued impulse response h[n], since this greatly simplifies the computational complexity in the implementation of the filter.

A FIR system has linear phase if the impulse response satisfies either the even symmetric condition

h[n] = h[N − 1 − n],

or the odd symmetric condition

h[n] = −h[N − 1 − n].

The system has different characteristics depending on whether N is even or odd. Furthermore, it can be shown that all linear phase filters must satisfy one of these conditions. Thus there are exactly four types of linear phase filters.

3
Consider for example the case of an odd number of samples in h[n], and even symmetry. The frequency response for N = 7 is

H(e^{jω}) = Σ_{n=0}^{6} h[n] e^{−jωn}
          = h[0] + h[1]e^{−jω} + h[2]e^{−j2ω} + h[3]e^{−j3ω} + h[4]e^{−j4ω} + h[5]e^{−j5ω} + h[6]e^{−j6ω}
          = e^{−j3ω} ( h[0]e^{j3ω} + h[1]e^{j2ω} + h[2]e^{jω} + h[3] + h[4]e^{−jω} + h[5]e^{−j2ω} + h[6]e^{−j3ω} ).

The specified symmetry property means that h[0] = h[6], h[1] = h[5], and h[2] = h[4], so

H(e^{jω}) = e^{−j3ω} ( h[0](e^{j3ω} + e^{−j3ω}) + h[1](e^{j2ω} + e^{−j2ω}) + h[2](e^{jω} + e^{−jω}) + h[3] )
          = e^{−j3ω} ( 2h[0]cos(3ω) + 2h[1]cos(2ω) + 2h[2]cos(ω) + h[3] )
          = e^{−j3ω} Σ_{n=0}^{3} a[n] cos(ωn),

where a[0] = h[3], and a[n] = 2h[3 − n] for n = 1, 2, 3. The resulting filter clearly has a linear phase response for real h[n]. It is quite simple to show that in general, for odd values of N, the frequency response is

H(e^{jω}) = e^{−jω(N−1)/2} Σ_{n=0}^{(N−1)/2} a[n] cos(ωn),

for a set of real-valued coefficients a[0], ..., a[(N−1)/2]. As different values for a[n] are selected, different linear-phase filters are obtained.

4
The remaining cases (N even, and h[n] antisymmetric) are similar to that presented, and the frequency responses are summarised in the following table:

Symmetry | N    | H(e^{jω})                                                  | Type
Even     | Odd  | e^{−jω(N−1)/2} Σ_{n=0}^{(N−1)/2} a[n] cos(ωn)              | 1
Even     | Even | e^{−jω(N−1)/2} Σ_{n=1}^{N/2} b[n] cos(ω(n − 1/2))          | 2
Odd      | Odd  | e^{−j[ω(N−1)/2 − π/2]} Σ_{n=0}^{(N−1)/2} a[n] sin(ωn)      | 3
Odd      | Even | e^{−j[ω(N−1)/2 − π/2]} Σ_{n=1}^{N/2} b[n] sin(ω(n − 1/2))  | 4

Recall that even symmetry implies h[n] = h[N − 1 − n] and odd symmetry h[n] = −h[N − 1 − n]. Examples of filters satisfying each of these symmetry conditions are:

[Figure: example impulse responses h1[n]–h4[n] for the four linear-phase filter types, with the centre of symmetry indicated by a dotted line.]


The process of linear-phase filter design involves choosing the a[n] values to obtain a filter with a desired frequency response. This is not always possible, however — the frequency response of a type 2 filter, for example, is always zero at ω = π, and is therefore not appropriate for a highpass filter. Similarly, filters of types 3 and 4 introduce a 90° phase shift, and have a frequency response that is always zero at ω = 0, which makes them unsuitable as lowpass filters. Additionally, the type 3 response is always zero at ω = π, making it unsuitable as a highpass filter. The type 1 filter is the most versatile of the four.
Linear phase filters can be thought of in a different way. Recall that a linear
phase characteristic simply corresponds to a time shift or delay. Consider now
a real FIR filter with an impulse response that satisfies the even symmetry
condition hŒn D hŒ n:

6
2

hŒn
1
0
−1
−5 0 5
n
6
H.e j! /

4
2
0
−2
−4
 0 
ω

Recall from the properties of the Fourier transform this filter has a real-valued
frequency response A.e j! /. Delaying this impulse response by .N 1/=2
results in a causal filter with frequency response

H.e j! / D A.e j! /e j!.N 1/=2


:

This filter therefore has linear phase.


2
hŒn

1
0
−1
−2 0 2 4 6 8
n
/j

5
j!
jH.e

0
 0 
ω
^H.e j! /

2
0

2
 0 
ω

7
2.1 Window method for FIR filter design

Assume that the desired filter response H_d(e^{jω}) is known. Using the inverse Fourier transform we can determine h_d[n], the desired unit sample response.

In the window method, a FIR filter is obtained by multiplying a window w[n] with h_d[n] to obtain a finite-duration h[n] of length N. This is required since h_d[n] will in general be an infinite-duration sequence, and the corresponding filter would therefore not be realisable. If h_d[n] is even or odd symmetric and w[n] is even symmetric, then h_d[n]w[n] is a linear phase filter.

Two important design criteria are the length and the shape of the window w[n]. To see how these factors influence the design, consider the multiplication operation in the frequency domain: since h[n] = h_d[n]w[n],

H(e^{jω}) = (1/2π) ∫_{−π}^{π} H_d(e^{jθ}) W(e^{j(ω−θ)}) dθ.

The following plot demonstrates the convolution operation. In each case the dotted line indicates the desired response H_d(e^{jω}).

[Figure: the window spectrum W(e^{j(ω−θ)}) sliding across the desired response, and the resulting smoothed response H(e^{jω}).]

From this, note that

• The mainlobe width of W(e^{jω}) affects the transition width of H(e^{jω}). Increasing the length N of h[n] reduces the mainlobe width and hence the transition width of the overall response.

• The sidelobes of W(e^{jω}) affect the passband and stopband tolerance of H(e^{jω}). This can be controlled by changing the shape of the window. Changing N does not affect the sidelobe behaviour.

Some commonly used windows for filter design are

• Rectangular:
  w[n] = { 1,  0 ≤ n ≤ N
         { 0,  otherwise

• Bartlett (triangular):
  w[n] = { 2n/N,      0 ≤ n ≤ N/2
         { 2 − 2n/N,  N/2 < n ≤ N
         { 0,         otherwise

• Hanning:
  w[n] = { 0.5 − 0.5 cos(2πn/N),  0 ≤ n ≤ N
         { 0,                     otherwise

• Hamming:
  w[n] = { 0.54 − 0.46 cos(2πn/N),  0 ≤ n ≤ N
         { 0,                       otherwise

• Kaiser:
  w[n] = { I_0[ β(1 − [(n − α)/α]²)^{1/2} ] / I_0(β),  0 ≤ n ≤ N
         { 0,                                          otherwise
  (with α = N/2, and I_0 the zeroth-order modified Bessel function).

Examples of five of these windows are shown below:

9
1
Rectangular
Triangular
wŒn
0.5

0
0 N/2 N
n

1
Hanning
Hamming
wŒn

0.5 Blackman

0
0 N/2 N
n

All windows trade off a reduction in sidelobe level against an increase in


mainlobe width. This is demonstrated below in a plot of the frequency
response of each of the windows:
20 log10 jW .e j! /j

0
Rectangular
Triangular
−50

−100
0 
ω
20 log10 jW .e j! /j

0
Hanning
Hamming
−50 Blackman

−100
0 
ω

Some important window characteristics are compared in the following table:

10
Window Peak sidelobe Mainlobe Peak approximation
amplitude (dB) transition width error (dB)
Rectangular 13 4=.N C 1/ 21
Bartlett 25 8=N 25
Hanning 31 8=N 44
Hamming 41 8=N 53
The Kaiser window has a number of parameters that can be used to explicitly
tune the characteristics.
In practice, the window shape is chosen first based on passband and stopband
tolerance requirements. The window size is then determined based on
transition width requirements. To determine hd Œn from Hd .e j! / one can
sample Hd .e j! / closely and use a large inverse DFT.
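A sketch of the window method for a lowpass design (the cutoff, length, and choice of a Hamming window are illustrative; scipy.signal.firwin packages essentially the same procedure):

    import numpy as np
    from scipy import signal

    def lowpass_window_design(N, wc):
        # Ideal lowpass impulse response sin(wc(n - alpha)) / (pi(n - alpha)),
        # delayed by alpha = (N - 1)/2 so that the truncated filter is causal
        alpha = (N - 1) / 2
        n = np.arange(N)
        hd = (wc / np.pi) * np.sinc(wc * (n - alpha) / np.pi)
        return hd * np.hamming(N)              # multiply by the window

    h = lowpass_window_design(N=41, wc=0.3 * np.pi)
    w, H = signal.freqz(h)                     # inspect the resulting response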

2.2 Frequency sampling method for FIR filter design

In this design method, the desired frequency response H_d(e^{jω}) is sampled at equally-spaced points, and the result is inverse discrete Fourier transformed. Specifically, letting

H[k] = H_d(e^{jω})|_{ω = 2πk/N},  k = 0, ..., N−1,

the unit sample response of the filter is h[n] = IDFT(H[k]), so

h[n] = (1/N) Σ_{k=0}^{N−1} H[k] e^{j2πnk/N}.

The resulting filter will have a frequency response that is exactly the same as
the original response at the sampling instants. Note that it is also necessary to
specify the phase of the desired response Hd .e j! /, and it is usually chosen to
be a linear function of frequency to ensure a linear phase filter. Additionally, if

11
a filter with real-valued coefficients is required, then additional constraints
have to be enforced.
The actual frequency response H(e^{jω}) of the filter h[n] still has to be determined. The z-transform of the impulse response is

H(z) = Σ_{n=0}^{N−1} h[n] z^{−n} = Σ_{n=0}^{N−1} [ (1/N) Σ_{k=0}^{N−1} H[k] e^{j2πnk/N} ] z^{−n}
     = (1/N) Σ_{k=0}^{N−1} H[k] Σ_{n=0}^{N−1} e^{j2πnk/N} z^{−n}
     = (1/N) Σ_{k=0}^{N−1} H[k] (1 − z^{−N}) / (1 − e^{j2πk/N} z^{−1}).

Evaluating on the unit circle z = e^{jω} gives the frequency response

H(e^{jω}) = ((1 − e^{−jωN})/N) Σ_{k=0}^{N−1} H[k] / (1 − e^{j2πk/N} e^{−jω}).

This expression can be used to find the actual frequency response of the filter
obtained, which can be compared with the desired response.
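A minimal sketch of the frequency sampling recipe for a lowpass design (the length, band edge, and linear-phase choice are all illustrative assumptions):

    import numpy as np

    N = 33                                    # odd length, so a type 1 linear-phase design
    k = np.arange(N)
    A = np.where(k <= 6, 1.0, 0.0)            # desired magnitude samples |H[k]| (lowpass)

    # Attach a linear phase of -pi*k*(N-1)/N, then enforce conjugate symmetry so h[n] is real
    H = A * np.exp(-1j * np.pi * k * (N - 1) / N)
    H[(N + 1) // 2:] = np.conj(H[1:(N + 1) // 2][::-1])

    h = np.real(np.fft.ifft(H))               # the length-N impulse response
    assert np.allclose(h, h[::-1])            # even symmetry: a linear-phase filter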
The method described only guarantees correct frequency response values at the
points that were sampled. This sometimes leads to excessive ripple at
intermediate points:

12
1.2

0.8
jH.e j! j

0.6

0.4

0.2 Actual
Desired
0
0  2
ω

One way of addressing this problem is to allow transition samples in the


region where discontinuities in Hd .e j! / occur:

T1
T2
T3

Passband Transition band Stopband

This effectively increases the transition width and can decrease the ripple, as
observed below:

13
1.2

0.8
jH.e j! j

0.6

0.4

0.2 Actual
Desired
0
0  2
ω

By leaving the value of the transition sample unconstrained, one can to some
extent optimise the filter to minimise the ripple. Empirically, with three
transition samples a stopband attenuation of 100dB is achievable. Recall
however that for hŒn real we require even or odd symmetry in the impulse
response, so the values are not entirely unconstrained.

2.3 Optimum approximations of FIR filters

This method of filter design attempts to find the filter of length N that
optimises a given design objective. In this case the objective is chosen to be the
minimisation of

max_{0 ≤ ω ≤ 2π} |E(e^{jω})|,

where E(e^{jω}) is a weighted error function

E(e^{jω}) = W(e^{jω}) [ H_d(e^{jω}) − H(e^{jω}) ].

The minimisation is performed over the filter coefficients h[n].


In practice, the design problem can be specified as follows: given ıp , ıs , fp ,
and fs , determine hŒn such that the design specification is satisfied with the
smallest possible N . The optimal (or minimax) design method therefore yields

14
the shortest filter that meets a required frequency response over the entire
frequency range. It is widely used in practice.
Solutions to this optimisation problem have been explored in the literature, and
many implementations of the method are available. It turns out that when
max jE.e j! /j is minimised, the resulting filter response will have equiripple
passband and stopband, with the ripple alternating in sign between two equal
amplitude levels:

[Figure: magnitude response (dB) of an equiripple (minimax) lowpass design, showing stopband ripple alternating between two equal levels.]

The maxima and minima are known as extrema. For linear phase lowpass
filters, for example, there are either r C 1 or r C 2 extrema, where
r D .N C 1/=2 (for type 1 filters) or r D N=2 (for type 2 filters).
For a given set of filter specifications, the locations of the extremal
frequencies, apart from those at band edges, are not known a priori. Thus the
main problem in the optimal method is to find the locations of the extremal
frequencies. Numerous algorithms exist to do this. Once the locations of the
extremal frequencies are known, it is simple to specify the actual frequency
response, and hence find the impulse response for the filter.
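The Parks–McClellan (Remez exchange) algorithm is the standard way of solving this problem; the sketch below (band edges, weights, and length chosen purely for illustration) designs an equiripple lowpass filter with scipy.signal.remez:

    import numpy as np
    from scipy import signal

    numtaps = 55
    bands = [0.0, 0.28, 0.36, 1.0]        # passband edge 0.28, stopband edge 0.36 (1.0 = Nyquist)
    desired = [1.0, 0.0]                  # desired gain in the passband and stopband
    weight = [1.0, 10.0]                  # weight stopband error 10x more heavily

    h = signal.remez(numtaps, bands, desired, weight=weight, fs=2.0)
    w, H = signal.freqz(h)                # inspect the equiripple response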

15
3 Infinite impulse response (IIR) filter design

An IIR filter has nonzero values of the impulse response for all values of n, even as n → ∞. To implement such a filter using a FIR structure therefore requires an infinite number of calculations.

However, in many cases IIR filters can be realised using LCCDEs and computed recursively.

Example:
A filter with the infinite impulse response h[n] = (1/2)^n u[n] has z-transform

H(z) = 1 / (1 − (1/2)z^{−1}) = Y(z)/X(z).

Therefore y[n] = (1/2)y[n−1] + x[n], and y[n] is easy to calculate.
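The recursion maps directly to code; the sketch below computes y[n] = (1/2)y[n−1] + x[n] sample by sample and checks the impulse response against (1/2)^n:

    import numpy as np

    def first_order_iir(x, a=0.5):
        # y[n] = a*y[n-1] + x[n], computed recursively
        y = np.zeros(len(x))
        prev = 0.0
        for n, xn in enumerate(x):
            prev = a * prev + xn
            y[n] = prev
        return y

    impulse = np.zeros(20)
    impulse[0] = 1.0
    assert np.allclose(first_order_iir(impulse), 0.5 ** np.arange(20))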
IIR filter structures can therefore be far more computationally efficient than
FIR filters, particularly for long impulse responses.
FIR filters are stable for hŒn bounded, and can be made to have a linear phase
response. IIR filters, on the other hand, are stable if the poles are inside the
unit circle, and have a phase response that is difficult to specify. The general
approach taken is to specify the magnitude response, and regard the phase as
acceptable. This is a disadvantage of IIR filters.
IIR filter design is discussed in most DSP texts.
