
DIGITAL SIGNAL PROCESSING
· A Computer-Based Approach

Second Edition

Sanjit K. Mitra
Department of Electrical and Computer Engineering
University of California, Santa Barbara

McGraw-Hill


About the Author

Sanjit K. Mitra received his M.S. and Ph.D. in electrical engineering from the University of California,
Berkeley, and an Honorary Doctorate of Technology from Tampere University of Technology in Finland.
After holding the position of assistant professor at Cornell University until 1965 and working at AT&T
Bell Laboratories, Holmdel, New Jersey, until 1967, he joined the faculty of the University of California
at Davis. Dr. Mitra then transferred to the Santa Barbara campus in 1977, where he served as department
chairman from 1979 to 1982 and is now a Professor of Electrical and Computer Engineering. Dr. Mitra
has published more than 500 journal and conference papers and 11 books, and holds 5 patents. He served
as President of the IEEE Circuits and Systems Society in 1986 and is currently a member of the editorial
boards for four journals: Multidimensional Systems and Signal Processing; Signal Processing; Journal of
the Franklin Institute; and Automatika. Dr. Mitra has received many distinguished industry and academic
awards, including the 1973 F. E. Terman Award, the 1985 AT&T Foundation Award of the American Society
of Engineering Education, the 1989 Education Award of the IEEE Circuits and Systems Society, the 1989
Distinguished Senior U.S. Scientist Award from the Alexander von Humboldt Foundation of Germany, the
1995 Technical Achievement Award of the IEEE Signal Processing Society, the 1999 Mac Van Valkenburg
Society Award and the CAS Golden Jubilee Medal of the IEEE Circuits & Systems Society, and the IEEE
Millennium Medal in 2000. He is an Academician of the Academy of Finland. Dr. Mitra is a Fellow of
the IEEE, AAAS, and SPIE and is a member of EURASIP and the ASEE.
Preface

The field of digital signal processing (DSP) has seen explosive growth during the past three decades,
as phenomenal advances both in research and application have been made. Fueling this growth have
been the advances in digital computer technology and software development. Almost every electrical and
computer engineering department in this country and abroad now offers one or more courses in digital
signal processing, with the first course usually being offered at the senior level. This book is intended for
a two-semester course on digital signal processing for seniors or first-year graduate students. It is also
written at a level suitable for self-study by the practicing engineer or scientist.
Even though the first edition of this book was published barely two years ago, based on the feedback
received from professors who adopted this book for their courses and many readers, it was clear that a new
edition was needed to incorporate the suggested changes to the contents. A number of new topics have
been included in the second edition. Likewise, a number of topics that are interesting but not practically
useful have been removed because of size limitations. It was also felt that more worked-out examples were
needed to explain new and difficult concepts.
The new topics included in the second edition are: calculation of total solution, zero-input response,
zero-state response, and impulse response of finite-dimensional discrete-time systems (Sections 2.6.1-
2.6.3), correlation of signals and its applications (Section 2.7), inverse systems (Section 4.9), system
identification (Section 4.10), matched filter and its application (Section 4.14), sampling of bandpass signals
(Section 5.3), design of highpass, bandpass, and bandstop analog filters (Section 5.5), effect of sample-and-
hold operation (Section 5.11), design of highpass, bandpass, and bandstop IIR digital filters (Section 7.4),
design of FIR digital filters with least-mean-square error (Section 7.8), constrained least-square design of
FIR digital filters (Section 7.9), perfect reconstruction two-channel FIR filter banks (Section 10.9), cosine-
modulated L-channel filter banks (Section 10.11), spectral analysis of random signals (Section 11.4), and
sparse antenna array design (Section 11.14). The topics that have been removed from the first edition
are as follows: state-space representation of LTI discrete-time systems from Chapter 2, signal flow-graph
representation and state-space structures from Chapter 6, impulse invariance method of IIR filter design
and FIR filter design based on the frequency-sampling approach from Chapter 7, reduction of product
round-off errors from state-space structures from Chapter 9, and voice privacy system from Chapter 11.
The fractional sampling rate conversion using the Lagrange interpolation has been moved to Chapter 10.
Materials in each chapter are now organized more logically.

A key feature of this book is the extensive use of MATLAB®-based¹ examples that illustrate the pro-
gram's powerful capability to solve signal processing problems. The book uses a three-stage pedagogical
structure designed to take full advantage of MATLAB and to avoid the pitfalls of a "cookbook" approach to
problem solving. First, each chapter begins by developing the essential theory and algorithms. Second,
this material is illustrated with examples solved by hand calculation. And third, solutions are derived using
MATLAB. From the beginning, MATLAB codes are provided with enough details to permit the students to
repeat the examples on their computers. In addition to conventional theoretical problems requiring ana-
lytical solutions, each chapter also includes a large number of problems requiring solution via MATLAB.
This book requires a minimal knowledge of MATLAB. I believe students learn the intricacies of problem
solving with MATLAB faster by using tested, complete programs, and then writing simple programs to
solve specific problems that are included at the ends of Chapters 2 to 11.
Because computer verification enhances the understanding of the underlying theories, a large library of
worked-out MATLAB programs is included in the second edition, as in the first edition. The original
MATLAB programs of the first edition have been updated to run on the newer versions of MATLAB and the
Signal Processing Toolbox. In addition, new MATLAB programs and code fragments have been added in
this edition. The reader can run these programs to verify the results included in the book. Altogether there
are 90 MATLAB programs in the text that have been tested under version 5.3 of MATLAB and version 4.2
of the Signal Processing Toolbox. Some of the programs listed in this book are not necessarily the fastest
with regard to their execution speeds, nor are they the shortest. They have been written for maximum
clarity without detailed explanations.
A second attractive feature of this book is the inclusion of 231 simple but practical examples that
expose the reader to real-life signal processing problems, which has been made possible by the use of
computers in solving practical design problems. This book also covers many topics of current interest
not normally found in an upper-division text. Additional topics are also introduced to the reader through
problems at the end of each chapter. Finally, the book concludes with a chapter that focuses on several
important, practical applications of digital signal processing. These applications are easy to follow and do
not require knowledge of other advanced-level courses.
The prerequisite for this book is a junior-level course in linear continuous-time and discrete-time
systems, which is usually required in most universities. A minimal review of linear systems and transforms
is provided in the text, and basic materials from linear system theory are included, with important materials
summarized in tables. This approach permits the inclusion of more advanced materials without significantly
increasing the length of the book.
The book is divided into 11 chapters. Chapter 1 presents an introduction to the field of signal processing
and provides an overview of signals and signal processing methods. Chapter 2 discusses the time-domain
representations of discrete-time signals and discrete-time systems as sequences of numbers and describes
classes of such signals and systems commonly encountered. Several basic discrete-time signals that play
important roles in the time-domain characterization of arbitrary discrete-time signals and discrete-time
systems are then introduced. Next, a number of basic operations to generate other sequences from one or
more sequences are described. A combination of these operations is also used in developing a discrete-time
system. The problem of representing a continuous-time signal by a discrete-time sequence is examined
for a simple case. Finally, the time-domain characterization of discrete-time random signals is discussed.
Chapter 3 is devoted to the transform-domain representations of a discrete-time sequence. Specifically
discussed are the discrete-time Fourier transform (DTFT), the discrete Fourier transform (DFT), and the
z-transform. Properties of each of these transforms are reviewed and a few simple applications outlined.
The chapter ends with a discussion of the transform-domain representation of a random signal.
This book concentrates almost exclusively on linear time-invariant discrete-time systems, and
¹MATLAB is a registered trademark of The MathWorks, Inc., 24 Prime Park Way, Natick, MA 01760-1500, Phone: 508-647-7000,
http://www.mathworks.com.

Chapter 4 discusses their transform-domain representations. Specific properties of such transform-domain
representations are investigated, and several simple applications are considered.
Chapter 5 is concerned primarily with the discrete-time processing of continuous-time signals. The
conditions for discrete-time representation of a bandlimited continuous-time signal under ideal sampling
and its exact recovery from the sampled version are first derived. Several interface circuits are used for the
discrete-time processing of continuous-time signals. Two of these circuits are the anti-aliasing filter and the
reconstruction filter, which are analog lowpass filters. As a result, a brief review of the basic theory behind
some commonly used analog filter design methods is included, and their use is illustrated with MATLAB.
Other interface circuits discussed in this chapter are the sample-and-hold circuit, the analog-to-digital
converter, and the digital-to-analog converter.
A structural representation using interconnected basic building blocks is the first step in the hardware
or software implementation of an LTI digital filter. The structural representation provides the relations
between some pertinent internal variables with the input and the output, which in turn provides the keys
to the implementation. There are various forms of the structural representation of a digital filter, and two
such representations are reviewed in Chapter 6, followed by a discussion of some popular schemes for the
realization of real causal IIR and FIR digital filters. In addition, it describes a method for the realization of
IIR digital filter structures that can be used for the generation of a pair of orthogonal sinusoidal sequences.
Chapter 7 considers the digital filter design problem. First, it discusses the issues associated with
the filter design problem. Then it describes the most popular approach to IIR filter design, based on the
conversion of a prototype analog transfer function to a digital transfer function. The spectral transformation
of one type of IIR transfer function into another type is discussed. Then a very simple approach to FIR
filter design is described. Finally, the chapter reviews computer-aided design of both IIR and FIR digital
filters. The use of MATLAB in digital filter design is illustrated.
Chapter 8 is concerned with the implementation aspects of DSP algorithms. Two major issues con-
cerning implementation are discussed first. The software implementations of digital filtering and DFT
algorithms on a computer using MATLAB are reviewed to illustrate the main points. This is followed by a
discussion of various schemes for the representation of number and signal variables on digital machines,
which is basic to the development of methods for the analysis of finite wordlength effects considered in
Chapter 9. Algorithms used to implement addition and multiplication, the two key arithmetic operations
in digital signal processing, are reviewed next, along with operations developed to handle overflow. Fi-
nally, the chapter outlines two general methods for the design and implementation of tunable digital filters,
followed by a discussion of algorithms for the approximation of certain special functions.
Chapter 9 is devoted to the analysis of the effects of the various sources of quantization errors; it describes
structures that are less sensitive to these effects. Included here are discussions on the effect of coefficient
quantization.
Chapter 10 discusses multirate discrete-time systems with unequal sampling rates at various parts.
The chapter includes a review of the basic concepts and properties of sampling rate alteration, design of
decimation and interpolation digital filters, and multirate filter bank design.
The final chapter, Chapter 11, reviews a few simple practical applications of digital signal processing
to provide a glimpse of its potential.
The materials in this book have been used in a two-quarter course sequence on digital signal processing
at the University of California, Santa Barbara, and have been extensively tested in the classroom for over
10 years. Basically, Chapters 1 through 6 form the basis of an upper-division course, while Chapters 7
through 10 form the basis of a graduate-level course.
Many topics included in this text can be omitted from class discussion, depending on the coverage of
other courses in the curriculum. Because a senior-level course on random signals and systems is required
of all electrical and computer engineering majors in most universities, materials in Sections 2.8, 3.11, and
4.13 can be excluded from an upper-division course on digital signal processing. However, these topics
are important in the analysis of wordlength effects discussed in Chapter 9, and readers not familiar with
this subject are encouraged to review these sections before reading Chapter 9. Likewise, Section 8.4 on
number representation and Section 8.5 on arithmetic operations can similarly be omitted from discussion
since most students taking a digital signal processing course usually take a course on digital hardware
design.
This text contains 211 examples, 90 MATLAB programs, over 600 problems, and 186 MATLAB exercises.
Every attempt has been made to ensure the accuracy of all materials in this book, including the MATLAB
programs. I would, however, appreciate readers bringing to my attention any errors that may appear in
the printed version for reasons beyond my control and that of the publisher. These errors and any other
comments can be communicated to me by e-mail addressed to: mitra@ece.ucsb.edu.
Finally, I have been particularly fortunate to have had the opportunity to work with the outstanding
students who were in my research group during my teaching career, which spans over 35 years. I have
benefited immensely, and continue to do so, both professionally and personally, from my friendship and
association with them, and to them I dedicate this book.

Sanjit K. Mitra

Acknowledgements
The preliminary versions of the complete manuscript for the first edition were reviewed by Dr. Hrvoje
Babic of the University of Zagreb, Croatia; Dr. James F. Kaiser of Duke University; Dr. Wolfgang F. G.
Mecklenbräuker of the Technical University of Vienna, Austria; and Dr. P. P. Vaidyanathan of the California
Institute of Technology. A later version was reviewed by Dr. Roberto H. Bamberger of Microsoft; Dr.
Charles Bouman of Purdue University; Dr. Kevin Buckley of the University of Minnesota; Dr. John A.
Fleming of the Texas A&M University; Dr. Jerry D. Gibson of the Southern Methodist University; Dr.
John Gowdy of Clemson University; Drs. James Harris and Mahmood Nahvi of the California Polytechnic
University, San Luis Obispo; Dr. Yih-Chyun Jenq of Portland State University; Dr. Truong Q. Nguyen of
Boston University; and Dr. Andreas Spanias of Arizona State University. Various parts of the manuscript
were reviewed by Dr. C. Sidney Burrus of Rice University; Dr. Richard V. Cox of the AT&T Laboratories;
Dr. Ian Galton of the University of California, San Diego; Dr. Nikil S. Jayant of the Georgia Institute
of Technology; Dr. Tor Ramstad of the Norwegian University of Science and Technology, Trondheim,
Norway; Dr. B. Ananth Shenoi of Wright State University; Dr. Hans W. Schüssler of the University of
Erlangen-Nuremberg, Germany; Dr. Richard Schreier of Analog Devices; and Dr. Gabor C. Temes of
Oregon State University.
Reviews for the second edition were provided by Dr. Winser E. Alexander of North Carolina State
University; Dr. Sohail A. Dianat of the Rochester Institute of Technology; Dr. Suhash Dutta Roy of the
Indian Institute of Technology, New Delhi; Dr. David C. Farden of North Dakota State University; Dr.
Abdulnasir Y. Hossen of Sultan Qaboos University, Sultanate of Oman; Dr. James F. Kaiser of Duke
University; Dr. Ramakrishna Kakarala of the Agilent Laboratories; Dr. Wolfgang F. G. Mecklenbräuker of
the Technical University of Vienna, Austria; Dr. Antonio Ortega of the University of Southern California;
Dr. Stanley J. Reeves of Auburn University; Dr. George Syrtos of the University of Maryland, College Park;
and Dr. Gregory W. Wornell of the Massachusetts Institute of Technology. Various parts of the manuscript
for the second edition were reviewed by Dr. Dimitris Anastassiou of Columbia University; Dr. Rajendra
K. Arora of the Florida State University; Dr. Ramdas Kumaresan of the University of Rhode Island; Dr.
Upamanyu Madhow of the University of California, Santa Barbara; Drs. Urbashi Mitra and Randy Moses
of Ohio State University; Dr. Ivan Selesnick of Polytechnic University, Brooklyn, New York; and Dr. Gabor
C. Temes of Oregon State University.
I thank all of them for their valuable comments, which have improved the book tremendously.
Many of my former and present research students reviewed various portions of the manuscript of both
editions and tested a number of the MATLAB programs. In particular, I would like to thank Drs. Charles
D. Creusere, Rajeev Gandhi, Michael Lightstone, Ing-Song Lin, Luca Lucchese, Debargha Mukherjee,
Norbert Strobel, and Stefan Thurnhofer, and Messrs. Serkan Hatipoglu, Zhihai He, Eric Leipnik, Michael
Moore, and Mylene Queiroz de Farias. I am also indebted to all former students in my ECE 158 and ECE
258A classes at the University of California, Santa Barbara, for their feedback over the years, which helped
refine the book.
I thank Goutam K. Mitra and Alicia Rodriguez for the cover design of the book. Finally, I thank
Patricia Monohon for her assistance in the preparation of the LaTeX files of the second edition.

Supplements
All MATLAB programs included in this book are available via anonymous file transfer protocol (FTP) from
the Internet site iplserv.ece.ucsb.edu in the directory /pub/mitra/Book_2e.
A solutions manual prepared by Rajeev Gandhi, Serkan Hatipoglu, Zhihai He, Luca Lucchese, Michael
Moore, and Mylene Queiroz de Farias and containing the solutions to all problems and MATLAB exercises
is available to instructors from the publisher.
A companion book, Digital Signal Processing Laboratory Using MATLAB, by the author is also available
from McGraw-Hill.

Contents

Preface xiii

1 Signals and Signal Processing 1
1.1 Characterization and Classification of Signals 1
1.2 Typical Signal Processing Operations 3
1.3 Examples of Typical Signals 12
1.4 Typical Signal Processing Applications 22
1.5 Why Digital Signal Processing? 37

2 Discrete-Time Signals and Systems in the Time-Domain 41


2.1 Discrete-Time Signals 42
2.2 Typical Sequences and Sequence Representation 53
2.3 The Sampling Process 60
2.4 Discrete-Time Systems 63
2.5 Time-Domain Characterization of LTI Discrete-Time Systems 71
2.6 Finite-Dimensional LTI Discrete-Time Systems 80
2.7 Correlation of Signals 85
2.8 Random Signals 94
2.9 Summary 105
2.10 Problems 106
2.11 MATLAB Exercises 115

3 Discrete-Time Signals in the Transform-Domain 117


3.1 The Discrete-Time Fourier Transform 117
3.2 The Discrete Fourier Transform 131
3.3 Relation between the DTFT and the DFT, and Their Inverses 137
3.4 Discrete Fourier Transform Properties 140
3.5 Computation of the DFT of Real Sequences 146
3.6 Linear Convolution Using the DFT 149
3.7 The z-Transform 155
3.8 Region of Convergence of a Rational z-Transform 159
3.9 Inverse z-Transform 167
3.10 z-Transform Properties 173
3.11 Transform-Domain Representations of Random Signals 176

3.12 Summary 179
3.13 Problems 180
3.14 MATLAB Exercises 199

4 LTI Discrete-Time Systems in the Transform-Domain 203


4.1 Finite-Dimensional Discrete-Time Systems 203
4.2 The Frequency Response 204
4.3 The Transfer Function 215
4.4 Types of Transfer Functions 222
4.5 Simple Digital Filters 234
4.6 Allpass Transfer Function 243
4.7 Minimum-Phase and Maximum-Phase Transfer Functions 246
4.8 Complementary Transfer Functions 248
4.9 Inverse Systems 251
4.10 System Identification 256
4.11 Digital Two-Pairs 259
4.12 Algebraic Stability Test 261
4.13 Discrete-Time Processing of Random Signals 267
4.14 Matched Filter 272
4.15 Summary 275
4.16 Problems 277
4.17 MATLAB Exercises 295

5 Digital Processing of Continuous-Time Signals 299


5.1 Introduction 299
5.2 Sampling of Continuous-Time Signals 300
5.3 Sampling of Bandpass Signals 310
5.4 Analog Lowpass Filter Design 313
5.5 Design of Analog Highpass, Bandpass, and Bandstop Filters 329
5.6 Anti-Aliasing Filter Design 335
5.7 Sample-and-Hold Circuit 337
5.8 Analog-to-Digital Converter 338
5.9 Digital-to-Analog Converter 344
5.10 Reconstruction Filter Design 348
5.11 Effect of Sample-and-Hold Operation 351
5.12 Summary 352
5.13 Problems 353
5.14 MATLAB Exercises 356

6 Digital Filter Structures 359


6.1 Block Diagram Representation 359
6.2 Equivalent Structures 363
6.3 Basic FIR Digital Filter Structures 364
6.4 Basic IIR Digital Filter Structures 368
6.5 Realization of Basic Structures Using MATLAB 374
6.6 Allpass Filters 378
6.7 Tunable IIR Digital Filters 387
6.8 IIR Tapped Cascaded Lattice Structures 389

6.9 FIR Cascaded Lattice Structures 395
6.10 Parallel Allpass Realization of IIR Transfer Functions 401
6.11 Digital Sine-Cosine Generator 405
6.12 Computational Complexity of Digital Filter Structures 408
6.13 Summary 408
6.14 Problems 409
6.15 MATLAB Exercises 421

7 Digital Filter Design 423


7.1 Preliminary Considerations 423
7.2 Bilinear Transformation Method of IIR Filter Design 430
7.3 Design of Lowpass IIR Digital Filters 435
7.4 Design of Highpass, Bandpass, and Bandstop IIR Digital Filters 437
7.5 Spectral Transformations of IIR Filters 441
7.6 FIR Filter Design Based on Windowed Fourier Series 446
7.7 Computer-Aided Design of Digital Filters 460
7.8 Design of FIR Digital Filters with Least-Mean-Square Error 468
7.9 Constrained Least-Square Design of FIR Digital Filters 469
7.10 Digital Filter Design Using MATLAB 472
7.11 Summary 497
7.12 Problems 498
7.13 MATLAB Exercises 510

8 DSP Algorithm Implementation 515


8.1 Basic Issues 515
8.2 Structure Simulation and Verification Using MATLAB 523
8.3 Computation of the Discrete Fourier Transform 535
8.4 Number Representation 552
8.5 Arithmetic Operations 556
8.6 Handling of Overflow 562
8.7 Tunable Digital Filters 562
8.8 Function Approximation 568
8.9 Summary 571
8.10 Problems 572
8.11 MATLAB Exercises 581

9 Analysis of Finite Wordlength Effects 583


9.1 The Quantization Process and Errors 584
9.2 Quantization of Fixed-Point Numbers 585
9.3 Quantization of Floating-Point Numbers 587
9.4 Analysis of Coefficient Quantization Effects 588
9.5 A/D Conversion Noise Analysis 600
9.6 Analysis of Arithmetic Round-Off Errors 611
9.7 Dynamic Range Scaling 614
9.8 Signal-to-Noise Ratio in Low-Order IIR Filters 625
9.9 Low-Sensitivity Digital Filters 629
9.10 Reduction of Product Round-Off Errors Using Error Feedback 635
9.11 Limit Cycles in IIR Digital Filters 639

9.12 Round-Off Errors in FFT Algorithms 646
9.13 Summary 649
9.14 Problems 650
9.15 MATLAB Exercises 657

10 Multirate Digital Signal Processing 659


10.1 The Basic Sample Rate Alteration Devices 660
10.2 Filters in Sampling Rate Alteration Systems 671
10.3 Multistage Design of Decimator and Interpolator 680
10.4 The Polyphase Decomposition 684
10.5 Arbitrary-Rate Sampling Rate Converter 690
10.6 Digital Filter Banks 696
10.7 Nyquist Filters 700
10.8 Two-Channel Quadrature-Mirror Filter Bank 705
10.9 Perfect Reconstruction Two-Channel FIR Filter Banks 714
10.10 L-Channel QMF Banks 722
10.11 Cosine-Modulated L-Channel Filter Banks 730
10.12 Multilevel Filter Banks 734
10.13 Summary 738
10.14 Problems 739
10.15 MATLAB Exercises 750

11 Applications of Digital Signal Processing 753


11.1 Dual-Tone Multifrequency Signal Detection 753
11.2 Spectral Analysis of Sinusoidal Signals 758
11.3 Spectral Analysis of Nonstationary Signals 764
11.4 Spectral Analysis of Random Signals 771
11.5 Musical Sound Processing 780
11.6 Digital FM Stereo Generation 790
11.7 Discrete-Time Analytic Signal Generation 794
11.8 Subband Coding of Speech and Audio Signals 800
11.9 Transmultiplexers 803
11.10 Discrete Multitone Transmission of Digital Data 807
11.11 Digital Audio Sampling Rate Conversion 810
11.12 Oversampling A/D Converter 812
11.13 Oversampling D/A Converter 822
11.14 Sparse Antenna Array Design 826
11.15 Summary 829
11.16 Problems 830
11.17 MATLAB Exercises 834

Bibliography 837
Index 855
1 Signals and Signal Processing
Signals play an important role in our daily life. Examples of signals that we encounter frequently are
speech, music, picture, and video signals. A signal is a function of independent variables such as time,
distance, position, temperature, and pressure. For example, speech and music signals represent air pressure
as a function of time at a point in space. A black-and-white picture is a representation of light intensity
as a function of two spatial coordinates. The video signal in television consists of a sequence of images,
called frames, and is a function of three variables: two spatial coordinates and time.
Most signals we encounter are generated by natural means. However, a signal can also be generated
synthetically or by computer simulation. A signal carries information, and the objective of signal processing
is to extract useful information carried by the signal. The method of information extraction depends on the
type of signal and the nature of the information being carried by the signal. Thus, roughly speaking, signal
processing is concerned with the mathematical representation of the signal and the algorithmic operation
carried out on it to extract the information present. The representation of the signal can be in terms of
basis functions in the domain of the original independent variable(s), or it can be in terms of basis functions
in a transformed domain. Likewise, the information extraction process may be carried out in the original
domain of the signal or in a transformed domain. This book is concerned with discrete-time representation
of signals and their discrete-time processing.
This chapter provides an overview of signals and signal processing methods. The mathematical char-
acterization of the signal is first discussed along with a classification of signals. Next, some typical signals
are discussed in detail and the type of information carried by them is described. Then a review of some
commonly used signal processing operations is provided and illustrated through examples. Advantages
and disadvantages of digital processing of signals are then discussed. Finally, a brief review of some
typical signal processing applications is included.

1.1 Characterization and Classification of Signals


Depending on the nature of the independent variables and the value of the function defining the signal,
various types of signals can be defined. For example, independent variables can be continuous or dis-
crete. Likewise, the signal can either be a continuous or a discrete function of the independent variables.
Moreover, the signal can be either a real-valued function or a complex-valued function.
A signal can be generated by a single source or by multiple sources. In the former case, it is a scalar
signal, and in the latter case it is a vector signal, often called a multichannel signal.
A one-dimensional (1-D) signal is a function of a single independent variable. A two-dimensional (2-D)
signal is a function of two independent variables. A multidimensional (M-D) signal is a function of more
than one variable. The speech signal is an example of a 1-D signal where the independent variable is time.
An image signal, such as a photograph, is an example of a 2-D signal where the two independent variables
are the two spatial variables. Each frame of a black-and-white video signal is a 2-D image signal that is

a function of two discrete spatial variables, with each frame occurring sequentially at discrete instants of
time. Hence, the black-and-white video signal can be considered an example of a three-dimensional (3-D)
signal where the three independent variables are the two spatial variables and time. A color video signal
is a three-channel signal composed of three 3-D signals representing the three primary colors: red, green,
and blue (RGB). For transmission purposes, the RGB television signal is transformed into another type of
three-channel signal composed of a luminance component and two chrominance components.
The value of the signal at a specific value(s) of the independent variable(s) is called its amplitude. The
variation of the amplitude as a function of the independent variable(s) is called its waveform.
For a 1-D signal, the independent variable is usually labeled as time. If the independent variable is
continuous, the signal is called a continuous-time signal. If the independent variable is discrete, the signal
is called a discrete-time signal. A continuous-time signal is defined at every instant of time. On the other
hand, a discrete-time signal is defined at discrete instants of time, and hence, it is a sequence of numbers.
A continuous-time signal with a continuous amplitude is usually called an analog signal. A speech
signal is an example of an analog signal. Analog signals are commonly encountered in our daily life and are
usually generated by natural means. A discrete-time signal with discrete-valued amplitudes represented
by a finite number of digits is referred to as a digital signal. An example of a digital signal is the digitized
music signal stored in a CD-ROM disk. A discrete-time signal with continuous-valued amplitudes is called
a sampled-data signal. This last type of signal occurs in switched-capacitor (SC) circuits. A digital signal
is thus a quantized sampled-data signal. Finally, a continuous-time signal with discrete-valued amplitudes
has been referred to as a quantized boxcar signal [Ste93]. Figure 1.1 illustrates the four types of signals.
The functional dependence of a signal in its mathematical representation is often explicitly shown. For
a continuous-time 1-D signal, the continuous independent variable is usually denoted by t, whereas for
a discrete-time 1-D signal, the discrete independent variable is usually denoted by n. For example, u(t)
represents a continuous-time 1-D signal and {v[n]} represents a discrete-time 1-D signal. Each member,
v[n], of a discrete-time signal is called a sample. In many applications, a discrete-time signal is generated
from a parent continuous-time signal by sampling the latter at uniform intervals of time. If the discrete
instants of time at which a discrete-time signal is defined are uniformly spaced, the independent discrete
variable n can be normalized to assume integer values.
In the case of a continuous-time 2-D signal, the two independent variables are the spatial coordinates,
which are usually denoted by x and y. For example, the intensity of a black-and-white image can be
expressed as u(x, y). On the other hand, a digitized image is a 2-D discrete-time signal, and its two
independent variables are discretized spatial variables often denoted by m and n. Hence, a digitized image
can be represented as v[m, n]. Likewise, a black-and-white video sequence is a 3-D signal and can be
represented as u(x, y, t) where x and y denote the two spatial variables and t denotes the temporal variable
time. A color video signal is a vector signal composed of three signals representing the three primary
colors: red, green, and blue:

    u(x, y, t) = [ r(x, y, t)
                   g(x, y, t)
                   b(x, y, t) ].
There is another classification of signals that depends on the certainty by which the signal can be
uniquely described. A signal that can be uniquely determined by a well-defined process such as a math-
ematical expression or rule, or table look-up, is called a deterministic signal. A signal that is generated
in a random fashion and cannot be predicted ahead of time is called a random signal. In this text we are
primarily concerned with the processing of discrete-time deterministic signals. However, since practical
discrete-time systems employ finite wordlengths for the storing of signals and the implementation of the
signal processing algorithms, it is necessary to develop tools for the analysis of finite wordlength effects
on the performance of discrete-time systems. To this end, it has been found convenient to represent certain
pertinent signals as random signals and employ statistical techniques for their analysis.
Some typical signal processing operations are reviewed in the following section.

Figure 1.1: (a) A continuous-time signal, (b) a digital signal, (c) a sampled-data signal, and (d) a quantized boxcar
signal.

1.2 Typical Signal Processing Operations


Various types of signal processing operations are employed in practice. In the case of analog signals,
most signal processing operations are usually carried out in the time-domain, whereas, in the case of
discrete-time signals, both time-domain and frequency-domain operations are employed. In either case,
the desired operations are implemented by a combination of some elementary operations. These operations
are also usually implemented in real-time or near real-time, even though, in certain applications, they may
be implemented off-line.

1.2.1 Elementary Time-Domain Operations


The three most basic time-domain signal operations are scaling, delay, and addition. Scaling is simply the
multiplication of the signal by a positive or a negative constant. In the case of analog signals, this operation
is usually called amplification if the magnitude of the multiplying constant, called gain, is greater than
one. If the magnitude of the multiplying constant is less than one, the operation is called attenuation.
Thus, if x(t) is an analog signal, the scaling operation generates a signal y(t) = a x(t), where a is the
multiplying constant. Two other elementary operations are integration and differentiation. The integration
of an analog signal x(t) generates a signal y(t) = ∫_{−∞}^{t} x(τ) dτ, while its differentiation results in a signal
w(t) = dx(t)/dt.
The delay operation generates a signal that is a delayed replica of the original signal. For an analog
signal x(t), y(t) = x(t − t0) is the signal obtained by delaying x(t) by the amount t0, which is assumed to
be a positive number. If t0 is negative, then it is an advance operation.

Many applications require operations involving two or more signals to generate a new signal. For
example, y(t) = x1(t) + x2(t) − x3(t) is the signal generated by the addition of the three analog signals
x1(t), x2(t), and x3(t). Another elementary operation is the product of two signals. Thus, the product of
two signals x1(t) and x2(t) generates a signal y(t) = x1(t) x2(t).
The elementary operations mentioned above are also carried out on discrete-time signals and are
discussed in later parts of this text. Next we review some commonly used complex signal processing
operations that are implemented by combining two or more of the elementary operations.
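As a quick illustration (this fragment is not one of the book's numbered programs), the scaling, delay, addition, and product operations can be tried in MATLAB on sampled versions of two sinusoidal signals; the sampling interval, frequencies, gain, and variable names below are arbitrary choices made only for the example.

    t = 0:0.001:0.1;                        % sampling instants (1 ms spacing)
    x1 = cos(2*pi*50*t);                    % a 50-Hz sinusoid
    x2 = 0.5*cos(2*pi*110*t);               % a 110-Hz sinusoid
    a = 2.5;                                % multiplying constant (gain)
    y_scaled = a*x1;                        % scaling (amplification, since |a| > 1)
    y_sum = x1 + x2;                        % addition of two signals
    y_prod = x1.*x2;                        % product of two signals
    k = 10;                                 % delay of 10 samples
    y_delayed = [zeros(1,k) x1(1:end-k)];   % delayed replica of x1
    plot(t, y_sum), xlabel('Time, sec'), ylabel('Amplitude')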

1.2.2 Filtering
One of the most widely used complex signal processing operations is filtering. Filtering is used to pass
certain frequency components in a signal through the system without any distortion and to block other
frequency components. The system implementing this operation is called a filter. The range of frequencies
that is allowed to pass through the filter is called the passband, and the range of frequencies that is blocked
by the filter is called the stopband. Various types of filters can be defined, depending on the nature of the
filtering operation. In most cases, the filtering operation for analog signals is linear and is described by the
convolution integral

    y(t) = ∫_{−∞}^{∞} h(t − τ) x(τ) dτ,                (1.1)

where x(t) is the input signal and y(t) is the output of the filter characterized by an impulse response h(t).
A lowpass filter passes all low-frequency components below a certain specified frequency fc, called
the cutoff frequency, and blocks all high-frequency components above fc. A highpass filter passes all
high-frequency components above a certain cutoff frequency fc and blocks all low-frequency components
below fc. A bandpass filter passes all frequency components between two cutoff frequencies fc1 and fc2,
where fc1 < fc2, and blocks all frequency components below the frequency fc1 and above the frequency
fc2. A bandstop filter blocks all frequency components between two cutoff frequencies fc1 and fc2, and
passes all frequency components below the frequency fc1 and above the frequency fc2. Figure 1.2(a)
shows a signal composed of three sinusoidal components of frequencies 50 Hz, 110 Hz, and 210 Hz,
respectively. Figure 1.2(b) to (e) shows the results of the above four types of filtering operations with
appropriately chosen cutoff frequencies.
A bandstop filter designed to block a single frequency component is called a notch filter. A multiband
filter has more than one passband and more than one stopband. A comb filter is designed to block
frequencies that are integral multiples of a low frequency.
A signal may get corrupted unintentionally by an interfering signal called interference or noise. In
many applications the desired signal occupies a low-frequency band from dc to some frequency fL Hz,
and it is corrupted by a high-frequency noise with frequency components above fH Hz with fH > fL.
In such cases, the desired signal can be recovered from the noise-corrupted signal by passing the latter
through a lowpass filter with a cutoff frequency fc where fL < fc < fH. A common source of noise is
power lines radiating electric and magnetic fields. The noise generated by power lines appears as a 60-Hz
sinusoidal signal corrupting the desired signal and can be removed by passing the corrupted signal through
a notch filter with a notch frequency at 60 Hz.¹
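The lowpass filtering operation of Figure 1.2(b) can be reproduced approximately with a short MATLAB fragment (the Signal Processing Toolbox is assumed for the filter design); the sampling rate, filter order, and variable names are illustrative choices and not part of the book's programs.

    Fs = 1000;                           % sampling frequency in Hz (assumed)
    t = 0:1/Fs:0.1;
    x = cos(2*pi*50*t) + cos(2*pi*110*t) + cos(2*pi*210*t);   % input of Figure 1.2(a)
    fc = 80;                             % cutoff frequency in Hz
    [b, a] = butter(6, fc/(Fs/2));       % 6th-order Butterworth lowpass filter
    y = filter(b, a, x);                 % output retains essentially the 50-Hz component
    plot(t, x, t, y), xlabel('Time, sec'), legend('input', 'lowpass output')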

1.2.3 Generation of Complex Signals


A" indicated earlier, a signal can be real-valued or complex-valued, For convenience, the former is usually
called a real signal while the latter is caUed a complex signal. AU naturally generated signaL<; are real-valued
ln smne applications., it is desirable to develop a complex s1gnal from a real signal having more desirable
¹In many countries, power lines generate 50-Hz noise.

Figure 1.2: (a) Input signal, (b) output of a lowpass filter with a cutoff at 80 Hz, (c) output of a highpass filter with a
cutoff at 150 Hz, (d) output of a bandpass filter with cutoffs at 80 Hz and 150 Hz, and (e) output of a bandstop filter
with cutoffs at 80 Hz and 150 Hz.

properties. A complex signal can be generated from a real signal by employing a Hilbert transformer that
is characterized by an impulse response hHT(t) given by [Fre94], [Opp83]

    hHT(t) = 1/(πt).                (1.2)

To illustrate the method, consider a real analog signal x(t) with a continuous-time Fourier transform
(CTFT) X(jΩ) given by

    X(jΩ) = ∫_{−∞}^{∞} x(t) e^{−jΩt} dt.                (1.3)

X(jΩ) is called the spectrum of x(t). The magnitude spectrum of a real signal exhibits even symmetry,
while the phase spectrum exhibits odd symmetry. Thus, the spectrum X(jΩ) of a real signal x(t) contains
both positive and negative frequencies and can therefore be expressed as

    X(jΩ) = Xp(jΩ) + Xn(jΩ),                (1.4)

where Xp(jΩ) is the portion of X(jΩ) occupying the positive frequency range and Xn(jΩ) is the portion
of X(jΩ) occupying the negative frequency range. If x(t) is passed through a Hilbert transformer, its
output x̂(t) is given by the linear convolution of x(t) with hHT(t):

    x̂(t) = ∫_{−∞}^{∞} hHT(t − τ) x(τ) dτ.                (1.5)

The spectrum X̂(jΩ) of x̂(t) is given by the product of the continuous-time Fourier transforms of x(t) and
hHT(t). Now the continuous-time Fourier transform HHT(jΩ) of hHT(t) of Eq. (1.2) is given by

    HHT(jΩ) = { −j,  Ω > 0,
              {  j,  Ω < 0.                (1.6)

Hence,

    X̂(jΩ) = −j Xp(jΩ) + j Xn(jΩ).                (1.7)

As the magnitude and the phase of X(jΩ) are an even and odd function, respectively, it follows from
Eq. (1.7) that x̂(t) is also a real signal. Consider the complex signal y(t) formed by the sum of x(t) and
jx̂(t):

    y(t) = x(t) + jx̂(t).                (1.8)

The signals x(t) and x̂(t) are called, respectively, the in-phase and quadrature components of y(t). By
making use of Eqs. (1.4) and (1.7) in the continuous-time Fourier transform of y(t), we obtain

    Y(jΩ) = 2 Xp(jΩ).                (1.9)

In other words, the complex signal y(t), called an analytic signal, has only positive frequency components.
A block diagram representation of the scheme for the analytic signal generation from a real signal is
sketched in Figure 1.3.
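A discrete-time counterpart of the scheme of Figure 1.3 can be sketched in MATLAB using the Signal Processing Toolbox function hilbert, which returns the analytic signal directly; the sampling rate and test signal below are arbitrary choices for illustration.

    Fs = 1000;  t = 0:1/Fs:0.1;
    x = cos(2*pi*50*t);                  % real signal x(t)
    y = hilbert(x);                      % analytic signal y = x + j*xhat
    xhat = imag(y);                      % quadrature component (close to sin(2*pi*50*t))
    Y = fft(y);                          % spectrum with essentially no negative-frequency content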

1.2.4 Modulation and Demodulation


For transmission of signals over long distances, a transmission medium such as cable, optical fiber, or
the atmosphere is employed. Each such medium has a bandwidth that is more suitable for the efficient
transmission of signals in the high-frequency range. As a result, for the transmission of a low-frequency

Figure 1.3: Generation of an analytic signal using a Hilbert transformer.

signal over a channel, it is necessary to transform the signal to a high-frequency signal by means of a
modulation operation. At the receiving end, the modulated high-frequency signal is demodulated, and
the desired low-frequency signal is then extracted by further processing. There are four major types of
modulation of analog signals: amplitude modulation, frequency modulation, phase modulation, and pulse
amplitude modulation. Of these schemes, amplitude modulation is conceptually simple and is discussed
here [Fre94], [Opp83].
In the amplitude modulation scheme, the amplitude of a high-frequency sinusoidal signal A cos(Ω0t),
called the carrier signal, is varied by the low-frequency bandlimited signal x(t), called the modulating
signal, generating a high-frequency signal, called the modulated signal, y(t), according to

    y(t) = A x(t) cos(Ω0t).                (1.10)

Thus, amplitude modulation can be implemented by forming the product of the modulating signal with
the carrier signal. To demonstrate the frequency-translation property of the amplitude modulation process,
let x(t) = cos(Ω1t), where Ω1 is much smaller than the carrier frequency Ω0, i.e., Ω1 << Ω0. From
Eq. (1.10) we therefore obtain

    y(t) = A cos(Ω1t) · cos(Ω0t)
         = (A/2) cos((Ω0 + Ω1)t) + (A/2) cos((Ω0 − Ω1)t).                (1.11)

Thus, the modulated signal y(t) is composed of two sinusoidal signals of frequencies Ω0 + Ω1 and Ω0 − Ω1,
which are close to Ω0 as Ω1 has been assumed to be much smaller than the carrier frequency Ω0.
It is instructive to examine the spectrum of y(t). From the properties of the continuous-time Fourier
transform it follows that the spectrum Y(jΩ) of y(t) is given by

    Y(jΩ) = (A/2) [X(j(Ω − Ω0)) + X(j(Ω + Ω0))],                (1.12)

where X(jΩ) is the spectrum of the modulating signal x(t). Figure 1.4 shows the spectra of the modulating
signal and that of the modulated signal under the assumption that the carrier frequency Ω0 is greater than
Ωm, the highest frequency contained in x(t). As seen from this figure, y(t) is now a bandlimited high-
frequency signal with a bandwidth 2Ωm centered at Ω0.
The portion of the amplitude-modulated signal between Ω0 and Ω0 + Ωm is called the upper sideband,
whereas the portion between Ω0 and Ω0 − Ωm is called the lower sideband. Because of the generation
of two sidebands and the absence of a carrier component in the modulated signal, the process is called
double-sideband suppressed carrier (DSB-SC) modulation.
The demodulation of y(t), assuming Ω0 > Ωm, is carried out in two stages. First, the product of y(t)
with a sinusoidal signal of the same frequency as the carrier is formed. This results in

    r(t) = y(t) cos(Ω0t) = A x(t) cos²(Ω0t),                (1.13)

Figure 1.4: (a) Spectrum of the modulating signal x(t), and (b) spectrum of the modulated signal y(t). For convenience,
both spectra are shown as real functions.

Figure 1.5: Spectrum of the product of the modulated signal and the carrier.

which can be rewritten as

    r(t) = y(t) cos(Ω0t) = (A/2) x(t) + (A/2) x(t) cos(2Ω0t).                (1.14)

This result indicates that the product signal is composed of the original modulating signal scaled by a
factor 1/2 and an amplitude-modulated signal with a carrier frequency 2Ω0. The spectrum R(jΩ) of r(t)
is as indicated in Figure 1.5. The original modulating signal can now be recovered from r(t) by passing it
through a lowpass filter with a cutoff frequency Ωc satisfying the relation Ωm < Ωc < 2Ω0 − Ωm. The
output of the filter is then a scaled replica of the modulating signal.
Figure 1.6 shows the block diagram representations of the amplitude modulation and demodulation
schemes. The underlying assumption in the demodulation process outlined above is that a sinusoidal signal
identical to the carrier signal can be generated at the receiving end. In general, it is difficult to ensure
that the demodulating sinusoidal signal has a frequency identical to that of the carrier all the time. To get
around this problem, in the transmission of amplitude-modulated radio signals, the modulation process
is modified so that the transmitted signal includes the carrier signal. This is achieved by redefining the
amplitude modulation operation as follows:

    y(t) = A[1 + m x(t)] cos(Ω0t),                (1.15)

Figure 1.6: Schematic representations of the amplitude modulation and demodulation schemes: (a) modulator, and
(b) demodulator.

Figure 1.7: (a) A sinusoidal modulating signal of frequency 20 Hz, and (b) modulated carrier with a carrier frequency
of 400 Hz based on the DSB modulation.

where m is a number chosen to ensure that [1 + m x(t)] is positive for all t. Figure 1.7 shows the waveforms
of a modulating sinusoidal signal of frequency 20 Hz and the amplitude-modulated carrier obtained ac-
cording to Eq. (1.15) for a carrier frequency of 400 Hz and m = 0.5. Note that the envelope of the
modulated carrier is essentially the waveform of the modulating signal. As here the carrier is also present
in the modulated signal, the process is called simply double-sideband (DSB) modulation. At the receiving
end, the carrier signal is separated first and then used for demodulation.
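The DSB-SC modulation and coherent demodulation steps of Eqs. (1.10) and (1.14) can be sketched in MATLAB as follows (the Signal Processing Toolbox is assumed for the lowpass filter design); the carrier and modulating frequencies mirror Figure 1.7, and all other values and names are illustrative assumptions.

    Fs = 8000;  t = 0:1/Fs:0.1;
    f0 = 400;  A = 1;                    % carrier frequency and amplitude (illustrative)
    x = cos(2*pi*20*t);                  % 20-Hz modulating signal
    y = A*x.*cos(2*pi*f0*t);             % modulated signal, Eq. (1.10)
    r = y.*cos(2*pi*f0*t);               % product with the carrier, rewritten in Eq. (1.14)
    [b, a] = butter(6, 100/(Fs/2));      % lowpass filter with a 100-Hz cutoff
    xr = (2/A)*filter(b, a, r);          % recovered replica of x(t), apart from the filter transient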

1.2.5 Multiplexing and Demultiplexing


For an efficient utilization of a wideband transmission channel, many narrow-bandwidth low-frequency
signals are combined to form a composite wideband signal that is transmitted as a single signal. The
process of combining these signals is called multiplexing, which is implemented to ensure that a replica of
the original narrow-bandwidth low-frequency signals can be recovered at the receiving end. The recovery
process is called demultiplexing.
One widely used method of combining different voice signals in a telephone communication system is
the frequency-division multiplexing (FDM) scheme [Cou83], [Opp83]. Here, each voice signal, typically
bandlimited to a low-frequency band of width 2Ωm, is frequency-translated into a higher frequency band
using the amplitude modulation method of Eq. (1.10). The carrier frequency of adjacent amplitude-
modulated signals is separated by Ω0, with Ω0 > 2Ωm, to ensure that there is no overlap in the spectra of
the individual modulated signals after they are added to form a baseband composite signal. This signal is
then modulated onto the main carrier, developing the FDM signal, and transmitted. Figure 1.8 illustrates
the frequency-division multiplexing scheme.

Figure 1.8: Illustration of the frequency-division multiplexing operation. (a) Spectra of three low-frequency signals,
and (b) spectra of the modulated composite signal.

Figure 1.9: Single-sideband modulation scheme employing a Hilbert transformer.

At the receiving end, the composite baseband signal is first derived from the FDM signal by demod-
ulation. Then each individual frequency-translated signal is first demultiplexed by passing the composite
signal through a bandpass filter with a center frequency of identical value as that of the corresponding
carrier frequency and a bandwidth slightly greater than 2Ωm. The output of the bandpass filter is then
demodulated using the method of Figure 1.6(b) to recover a scaled replica of the original voice signal.
In the case of the conventional amplitude modulation, as can be seen from Figure 1.4, the modulated
signal has a bandwidth of 2Ωm, whereas the bandwidth of the modulating signal is Ωm. To increase the
capacity of the transmission medium, a modified form of the amplitude modulation is often employed
in which either the upper sideband or the lower sideband of the modulated signal is transmitted. The
corresponding procedure is called single-sideband (SSB) modulation to distinguish it from the double-
sideband modulation scheme of Figure 1.6(a).
One way to implement single-sideband amplitude modulation is indicated in Figure 1.9, where the
Hilbert transformer is defined by Eq. (1.6). The spectra of pertinent signals in Figure 1.9 are shown in
Figure 1.10.

Figure 1.10: Spectra of pertinent signals in Figure 1.9.
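The single-sideband modulator of Figure 1.9 can be sketched in discrete time with the Signal Processing Toolbox function hilbert; the parameter values below are arbitrary, and the sign of the second term determines which sideband is retained.

    Fs = 8000;  t = 0:1/Fs:0.2;
    f0 = 400;  A = 1;                    % illustrative carrier frequency and amplitude
    x = cos(2*pi*20*t);                  % modulating signal
    xhat = imag(hilbert(x));             % Hilbert transform of x(t)
    y = A*x.*cos(2*pi*f0*t) + A*xhat.*sin(2*pi*f0*t);   % single-sideband modulated signal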

1.2.6 Quadrature Amplitude Modulation


We observed earlier that DSB amplitude modulation is half as efficient as SSB amplitude modulation with
regard to utilization of the spectrum. The quadrature amplitude modulation (QAM) method uses DSB
modulation to modulate two different signals so that they both occupy the same bandwidth; thus QAM
takes up only as much bandwidth as the SSB modulation method. To understand the basic idea behind
the QAM approach, let x1(t) and x2(t) be two bandlimited low-frequency signals with a bandwidth of Ωm
as indicated in Figure 1.4(a). The two modulating signals are individually modulated by the two carrier
signals A cos(Ω0t) and A sin(Ω0t), respectively, and are summed, resulting in a composite signal y(t)
given by

    y(t) = A x1(t) cos(Ω0t) + A x2(t) sin(Ω0t).                (1.16)

Note that the two carrier signals have the same carrier frequency Ω0 but have a phase difference of 90°. In
general, the carrier A cos(Ω0t) is called the in-phase component and the carrier A sin(Ω0t) is called the
quadrature component. The spectrum Y(jΩ) of the composite signal y(t) is now given by

    Y(jΩ) = (A/2) [X1(j(Ω − Ω0)) + X1(j(Ω + Ω0))]
            + (A/2j) [X2(j(Ω − Ω0)) − X2(j(Ω + Ω0))],                (1.17)

and is seen to occupy the same bandwidth as the modulated signal obtained by a DSB modulation.
To recover the original modulating signals, the composite signal is multiplied by both the in-phase and
the quadrature components of the carrier separately, resulting in two signals:

    r1(t) = y(t) cos(Ω0t),
    r2(t) = y(t) sin(Ω0t).                (1.18)

Substituting y(t) from Eq. (1.16) in Eq. (1.18), we obtain after some algebra

    r1(t) = (A/2) x1(t) + (A/2) x1(t) cos(2Ω0t) + (A/2) x2(t) sin(2Ω0t),
    r2(t) = (A/2) x2(t) + (A/2) x1(t) sin(2Ω0t) − (A/2) x2(t) cos(2Ω0t).

Lowpass filtering of r1(t) and r2(t) by filters with a cutoff at Ωm yields the two modulating signals. Figure
1.11 shows the block diagram representations of the quadrature amplitude modulation and demodulation
schemes.

Figure 1.11: Schematic representations of the quadrature amplitude modulation and demodulation schemes: (a)
modulator, and (b) demodulator.

As in the case of the DSB suppressed carrier modulation method, the QAM method also requires at the
receiver an exact replica of the carrier signal employed in the transmitting end for accurate demodulation.
It is therefore not employed in the direct transmission of analog signals, but finds applications in the
transmission of discrete-time data sequences and in the transmission of analog signals converted into
discrete-time sequences by sampling and analog-to-digital conversion.
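A minimal MATLAB sketch of the QAM modulator and demodulator of Figure 1.11, following Eqs. (1.16) and (1.18), is given below (the Signal Processing Toolbox is assumed for the lowpass filters); all numerical values and variable names are illustrative choices.

    Fs = 8000;  t = 0:1/Fs:0.2;
    f0 = 400;  A = 1;
    x1 = cos(2*pi*20*t);  x2 = cos(2*pi*30*t);           % two modulating signals
    y = A*x1.*cos(2*pi*f0*t) + A*x2.*sin(2*pi*f0*t);     % composite signal, Eq. (1.16)
    r1 = y.*cos(2*pi*f0*t);  r2 = y.*sin(2*pi*f0*t);     % products of Eq. (1.18)
    [b, a] = butter(6, 100/(Fs/2));                      % lowpass filters with a 100-Hz cutoff
    x1r = (2/A)*filter(b, a, r1);                        % recovered x1(t), after the filter transient
    x2r = (2/A)*filter(b, a, r2);                        % recovered x2(t)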

1.2.7 Signal Generation


An equally important part of signal processing is synthetic signal generation. One of the simplest such
signal generators is a device generating a sinusoidal signal, called an oscillator. Such a device is an integral
part of the amplitude-modulation and demodulation systems described in the previous two sections. It also
has various other signal processing applications.
There are applications that require the generation of other types of periodic signals such as square
waves and triangular waves. Certain types of random signals with a spectrum of constant amplitude for
all frequencies, called white noise, often find applications in practice. One such application is in the
generation of discrete-time synthetic speech signals.
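A few of these synthetic signals can be generated directly in MATLAB (the square and sawtooth generators belong to the Signal Processing Toolbox, while randn is part of the base language); the frequencies and durations below are arbitrary choices for illustration.

    Fs = 1000;  t = 0:1/Fs:0.5;
    s = sin(2*pi*10*t);                  % output of a 10-Hz sinusoidal "oscillator"
    sq = square(2*pi*10*t);              % 10-Hz square wave
    tr = sawtooth(2*pi*10*t, 0.5);       % 10-Hz triangular wave
    w = randn(size(t));                  % white noise sequence with a flat average spectrum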

1.3 Examples of Typical Signals²


To better understand the breadth of the signal processing task, we now examine a number of examples of
some typical signals and their subsequent processing in typical applications.

Electrocardiography (ECG) Signal


The electrical activity of the heart is represented by the ECG signal [Sha81]. A typical ECG signal trace is shown in Figure 1.12(a). The ECG trace is essentially a periodic waveform. One such period of the ECG waveform, as depicted in Figure 1.12(b), represents one cycle of the blood transfer process from the heart to the arteries. This part of the waveform is generated by an electrical impulse originating at the sinoatrial node in the right atrium of the heart. The impulse causes contraction of the atria, which forces the blood in each atrium to squeeze into its corresponding ventricle. The resulting signal is called the P-wave. The atrioventricular node delays the excitation impulse until the blood transfer from the atria to the ventricles is completed, resulting in the P-R interval of the ECG waveform. The excitation impulse then causes contraction of the ventricles, which squeezes the blood into the arteries. This generates the
²This section has been adapted from Handbook for Digital Signal Processing, Sanjit K. Mitra and James F. Kaiser, Eds., ©1993, John Wiley & Sons. Adapted by permission of John Wiley & Sons.

Figure 1.12: (a) A typical ECG trace, and (b) one cycle of an ECG waveform.

QRS part of the ECG waveform. During this phase the atria are relaxed and filled with blood. The T-wave of the waveform represents the relaxation of the ventricles. The complete process is repeated periodically, generating the ECG trace.
Each portion of the ECG waveform carries various types of information for the physician analyzing a patient's heart condition [Sha81]. For example, the amplitude and timing of the P and QRS portions indicate the condition of the cardiac muscle mass. Loss of amplitude indicates muscle damage, whereas increased amplitude indicates abnormal heart rates. Too long a delay in the atrioventricular node is indicated by a very long P-R interval. Likewise, blockage of some or all of the contraction impulses is reflected by intermittent synchronization between the P- and QRS-waves. Most of these abnormalities can be treated with various drugs, and the effectiveness of the drugs can again be monitored by observing the new ECG waveforms taken after the drug treatment.
In practice, there are various types of externally produced artifacts that appear in the ECG signal [Tom81]. Unless these interferences are removed, it is difficult for a physician to make a correct diagnosis. A common source of noise is the 60-Hz power lines whose radiated electric and magnetic fields are coupled to the ECG instrument through capacitive coupling and/or magnetic induction. Other sources of interference are the electromyographic signals that are the potentials developed by contracting muscles. These and other interferences can be removed with careful shielding and signal processing techniques.
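As one illustration of such a signal processing technique, the sketch below removes a simulated 60-Hz interference from a sampled waveform with a digital notch filter; the sampling rate, the synthetic stand-in for the ECG, and the use of SciPy's iirnotch design are assumptions made purely for illustration.

```python
# Sketch: removing 60-Hz power-line interference from a sampled ECG with a notch filter.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 360.0                                  # assumed ECG sampling rate in Hz
t = np.arange(0, 5, 1 / fs)
ecg_clean = np.sin(2 * np.pi * 1.2 * t)     # crude stand-in for an ECG trace
ecg = ecg_clean + 0.3 * np.sin(2 * np.pi * 60 * t)   # add 60-Hz hum

b, a = iirnotch(w0=60.0, Q=30.0, fs=fs)     # narrow notch centered at 60 Hz
ecg_filtered = filtfilt(b, a, ecg)          # zero-phase filtering preserves wave timing
```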

Figure 1.13: Multiple EEG signal traces.

Electroencephalogram (EEG) Signal


The summation of the electrical activity caused by the random firing of billions of individual neurons in the brain is represented by the EEG signal [Coh86], [Tom81]. In multiple EEG recordings, electrodes are placed at various positions on the scalp with two common electrodes placed on the earlobes, and potential differences between the various electrodes are recorded. A typical bandwidth of this type of EEG ranges from 0.5 to about 100 Hz, with the amplitudes ranging from 2 to 100 µV. An example of multiple EEG traces is shown in Figure 1.13.
Both frequency-domain and time-domain analyses of the EEG signal have been used for the diagnosis of epilepsy, sleep disorders, psychiatric malfunctions, etc. To this end, the EEG spectrum is subdivided into the following five bands: (1) the delta range, occupying the band from 0.5 to 4 Hz; (2) the theta range, occupying the band from 4 to 8 Hz; (3) the alpha range, occupying the band from 8 to 13 Hz; (4) the beta range, occupying the band from 13 to 22 Hz; and (5) the gamma range, occupying the band from 22 to 30 Hz.
The delta wave is normal in the EEG signals of children and sleeping adults. Since it is not common in alert adults, its presence indicates certain brain diseases. The theta wave is usually found in children even though it has been observed in alert adults. The alpha wave is common in all normal humans and is more pronounced in a relaxed and awake subject with closed eyes. Likewise, the beta activity is common in normal adults. The EEG exhibits rapid, low-voltage waves, called rapid-eye-movement (REM) waves, in a subject dreaming during sleep. Otherwise, in a sleeping subject, the EEG contains bursts of alpha-like waves, called sleep spindles. The EEG of an epileptic patient exhibits various types of abnormalities, depending on the type of epilepsy that is caused by uncontrolled neural discharges.
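A rough sketch of how the power in these five bands might be estimated from a sampled EEG trace is given below; the sampling rate, the synthetic data, and the use of Welch's method are illustrative assumptions, not part of the original discussion.

```python
# Sketch: estimating the power in the standard EEG bands from a sampled trace.
import numpy as np
from scipy.signal import welch

fs = 256                                   # assumed sampling rate in Hz
rng = np.random.default_rng(1)
eeg = rng.standard_normal(30 * fs)         # placeholder for a 30-second recording

f, psd = welch(eeg, fs=fs, nperseg=4 * fs) # averaged power spectral density estimate

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 22), "gamma": (22, 30)}
for name, (lo, hi) in bands.items():
    mask = (f >= lo) & (f < hi)
    power = psd[mask].sum() * (f[1] - f[0])   # approximate integral of the PSD over the band
    print(f"{name:>5s} band power: {power:.3f}")
```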

Seismic Signals
These types of signals are caused by the movement of rocks resulting from an earthquake, a volcanic eruption, or an underground explosion [Bol93]. The ground movement generates elastic waves that propagate through the body of the earth in all directions from the source of movement. Three basic types of elastic waves are generated by the earth movement. Two of these waves propagate through the body of the earth, one moving faster with respect to the other. The faster moving wave is called the primary or P-wave, while

the slower moving one is called the secondary or S-wave. The third type of wave is known as the surface wave, which moves along the ground surface. These seismic waves are converted into electrical signals by a seismograph and are recorded on a strip chart recorder or a magnetic tape.
Because of the three-dimensional nature of ground movement, a seismograph usually consists of three separate recording instruments that provide information about the movements in the two horizontal directions and one vertical direction and develops three records as indicated in Figure 1.14. Each such record is a one-dimensional signal. From the recorded signals it is possible to determine the magnitude of the earthquake or nuclear explosion and the location of the source of the original earth movement.
Seismic signals also play an important role in the geophysical exploration for oil and gas [Rob80]. In this type of application, linear arrays of seismic sources, such as high-energy explosives, are placed at regular intervals on the ground surface. The explosions cause seismic waves to propagate through the subsurface geological structures and reflect back to the surface from interfaces between geological strata. The reflected waves are converted into electrical signals by a composite array of geophones laid out in certain patterns and displayed as a two-dimensional signal that is a function of time and space, called a trace gather, as indicated in Figure 1.15. Before these signals are analyzed, some preliminary time and amplitude corrections are made on the data to compensate for different physical phenomena. From the corrected data, the time differences between reflected seismic signals are used to map structural deformations, whereas the amplitude changes usually indicate the presence of hydrocarbons.

Diesel Engine Signal


Signal processing is playing an important role in the precision adjustment of diesel engines during production [Jur81]. Efficient operation of the engine requires the accurate determination of the topmost point of piston travel (called the top dead center) inside the cylinder of the engine. Figure 1.16 shows the signals generated by a dual probe inserted into the combustion chamber of a diesel engine in place of the glow plug. The probe consists of a microwave antenna and a photodiode detector. The microwave probe captures signals reflected from the cylinder cavity caused by the up and down motion of the piston while the engine is running. Interestingly, the waveforms of these signals exhibit a symmetry around the top dead center independent of the engine speed, temperature, cylinder pressure, or air-fuel ratio. The point of symmetry is determined automatically by a microcomputer, and the fuel-injection pump position is then adjusted by the computer accurately to within ±0.5 degree using the luminosity signal sensed by the photodiode detector.

Speech Signals
The linear acoustic theory of speech production has led to mathematical models for the representation of speech signals. A speech signal is formed by exciting the vocal tract and is composed of two types of sounds: voiced and unvoiced [Rab78], [Slu84]. The voiced sound, which includes the vowels and a number of consonants such as B, D, L, M, N, and R, is excited by the pulsatile airflow resulting from the vibration of the vocal folds. On the other hand, the unvoiced sound is produced downstream in the forward part of the oral cavity (mouth) with the vocal cords at rest and includes sounds like F, S, and SH.
Figure 1.17(a) depicts the speech waveform of a male utterance "every salt breeze comes from the sea" [Fla79]. The total duration of the speech waveform is 2.5 seconds. Magnified versions of the "A" and "S" segments in the word "salt" are sketched in Figure 1.17(b) and (c), respectively. The slowly varying low-frequency voiced waveform of "A" and the high-frequency unvoiced fricative waveform of "S" are evident from the magnified waveforms. The voiced waveform in Figure 1.17(b) is seen to be quasi-periodic and can be modeled by a sum of a finite number of sinusoids. The lowest frequency of oscillation in this representation is called the fundamental frequency or pitch frequency. The unvoiced waveform in Figure 1.17(c) has no regular fine structure and is more noise-like.
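As a simple illustration of the quasi-periodic model, the sketch below estimates the pitch frequency of a synthetic voiced frame from the first peak of its autocorrelation; the sampling rate, frame length, harmonic content, and assumed pitch are invented for the example.

```python
# Sketch: pitch estimation of a voiced speech frame via the autocorrelation method.
import numpy as np

fs = 8000
t = np.arange(0, 0.04, 1 / fs)                     # a 40-ms voiced frame
f0_true = 120.0                                    # assumed pitch in Hz
frame = sum(np.sin(2 * np.pi * k * f0_true * t) / k for k in range(1, 6))

frame = frame - frame.mean()
acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # autocorrelation, lags >= 0

lo, hi = int(fs / 400), int(fs / 60)               # search the 60-400 Hz pitch range
lag = lo + np.argmax(acf[lo:hi])
print("estimated pitch:", fs / lag, "Hz")          # close to the assumed 120 Hz
```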

Figure 1.14: Seismograph record of the Northridge aftershock, January 29, 1994. Recorded at Stone Canyon Reservoir, Los Angeles, CA. (Courtesy of Institute for Crustal Research, University of California, Santa Barbara, CA.)

Figure 1.15: A typical seismic signal trace gather. (Courtesy of Institute for Crustal Research, University of California, Santa Barbara, CA.)

One of the major applications of digital signal processing techniques is in the general area of speech processing. Problems in this area are usually divided into three groups: (1) speech analysis, (2) speech synthesis, and (3) speech analysis and synthesis [Opp78]. Digital speech analysis methods are used in automatic speech recognition, speaker verification, and speaker identification. Applications of digital speech synthesis techniques include reading machines for the automatic conversion of written text into speech, and retrieval of data from computers in speech form by remote access through terminals or telephones. One example belonging to the third group is voice scrambling for secure transmission. Speech data compression for an efficient use of the transmission medium is another example of the use of speech analysis followed by synthesis. A typical speech signal after conversion into a digital form contains about 64,000 bits per second (bps). Depending on the desired quality of the synthesized speech, the original data can be compressed considerably, e.g., down to about 1000 bps.

Musical Sound Signal


The electronic synthesizer is an example of the use of modern signal processing techniques [Moo77], [Ler83]. The natural sound generated by most musical instruments is generally produced by mechanical vibrations caused by activating some form of oscillator that then causes other parts of the instrument to vibrate. All these vibrations together in a single instrument generate the musical sound. In a violin the primary oscillator is a stretched piece of string (catgut). Its movement is caused by drawing a bow across it; this sets the wooden body of the violin vibrating, which in turn sets up vibrations of the air inside as well as outside the instrument. In a piano the primary oscillator is a stretched steel wire that is set into vibratory motion by the hitting of a hammer, which in turn causes vibrations in the wooden body (sounding board) of the piano. In wind or brass instruments the vibration occurs in a column of air, and a mechanical change in the length of the air column by means of valves or keys regulates the rate of vibration.

Figure 1.16: Diesel engine signals. (Reproduced with permission from R. K. Jurgen, Detroit bets on electronics to stymie Japan, IEEE Spectrum, vol. 18, July 1981, pp. 29-32, ©1981 IEEE.)

The sound of orchestral instruments can be classified into two groups: quasi-periodic and aperiodic. Quasi-periodic sounds can be described by a sum of a finite number of sinusoids with independently varying amplitudes and frequencies. The sound waveforms of two different instruments, the cello and the bass drum, are indicated in Figure 1.18(a) and (b), respectively. In each figure, the top waveform is the plot of an entire isolated note, whereas the bottom plot shows an expanded version of a portion of the note: 10 ms for the cello and 80 ms for the bass drum. The waveform of the note from a cello is seen to be quasi-periodic. On the other hand, the bass drum waveform is clearly aperiodic. The tone of an orchestral instrument is commonly divided into three segments called the attack part, the steady-state part, and the decay part. Figure 1.18 illustrates this division for the two tones. Note that the bass drum tone of Figure 1.18(b) shows no steady-state part. A reasonable approximation of many tones is obtained by splicing together these parts. However, high-fidelity reproduction requires a more complex model.

Time Series
The signals described thus far are continuous functions with time as the independent variable. In many cases the signals of interest are naturally discrete functions of the independent variables. Often such signals are of finite duration. Examples of such signals are the yearly average number of sunspots, daily stock prices, the value of total monthly exports of a country, the yearly population of animal species in a certain geographical area, the annual yields per acre of crops in a country, and the monthly totals of international airline passengers over certain periods. This type of finite extent signal, usually called a time series, occurs in business, economics, physical sciences, social sciences, engineering, medicine, and many other fields. Plots of some typical time series are shown in Figures 1.19 to 1.21.


Figure 1.17: Speech waveform example: (a) sentence-length segment, (b) magnified version of the voiced segment (the letter A), and (c) magnified version of the unvoiced segment (the letter S). (Reproduced with permission from J. L. Flanagan et al., Speech coding, IEEE Trans. on Communications, vol. COM-27, April 1979, pp. 710-737, ©1979 IEEE.)

There are many reasons for analyzing a particular time series [Box70]. In some applications, there may be a need to develop a model to determine the nature of the dependence of the data on the independent variable and use it to forecast the future behavior of the series. As an example, in business planning, reasonably accurate sales forecasts are necessary. Some types of series possess seasonal or periodic components, and it is important to extract these components. The study of sunspot numbers is important for predicting climate variations. Invariably, the time series data are noisy, and their representations require models based on their statistical properties.
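As a small illustration of separating a slowly varying component from a noisy seasonal series, the sketch below applies a 12-month moving average to a synthetic monthly series; the series itself and the smoothing choice are assumptions made only for illustration.

```python
# Sketch: extracting a trend from a monthly time series with a 12-month moving average.
import numpy as np

rng = np.random.default_rng(2)
months = np.arange(120)                            # ten years of monthly data
series = 0.05 * months + np.sin(2 * np.pi * months / 12) + 0.2 * rng.standard_normal(120)

kernel = np.ones(12) / 12
trend = np.convolve(series, kernel, mode="valid")  # 12-month moving average (trend estimate)
# Approximately center the average before removing it to expose the seasonal component.
seasonal_plus_noise = series[6:6 + len(trend)] - trend
```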

Images
As indicated earlier, an image is a two-dimensional signal whose intensity at any point is a function of two spatial variables. Common examples are photographs, still video images, radar and sonar images, and chest and dental x-rays. An image sequence, such as that seen in a television, is essentially a three-dimensional signal for which the image intensity at any point is a function of three variables: two spatial variables and time. Figure 1.22(a) shows the photograph of a digital image.

Figure 1.18: Waveform of (a) the cello and (b) the bass drum. (Reproduced with permission from J. A. Moorer, Signal processing aspects of computer music: A survey, Proceedings of the IEEE, vol. 65, August 1977, pp. 1108-1137, ©1977 IEEE.)

Figure 1.19: Seasonally adjusted quarterly Gross National Product of United States in 1982 dollars from 1976 to 1986. (Adapted from [Lil91].)

The basic problems in image processing are image signal representation and modeling, enhancement, restoration, reconstruction from projections, analysis, and coding [Jai89].
Each picture element in a specific image represents a certain physical quantity; a characterization of the element is called the image representation. For example, a photograph represents the luminances of various objects as seen by the camera. An infrared image taken by a satellite or an airplane represents the temperature profile of a geographical area. Depending on the type of image and its applications, various

Figure 1.20: Monthly mean St. Louis, Missouri temperature in degrees Celsius for the years 1975 to 1978. (Adapted from [Mar87].)

Figure 1.21: Monthly gasoline demand in Ontario, Canada (in millions of gallons) from January 1971 to December 1975. (Adapted from [Abr83].)

types of image models are usually defined. Such models are also based on perception, and on local or global characteristics. The nature and performance of the image processing algorithms depend on the image model being used.
Image enhancement algorithms are used to emphasize specific image features to improve the quality of the image for visual perception or to aid in the analysis of the image for feature extraction. These include methods for contrast enhancement, edge detection, sharpening, linear and nonlinear filtering, zooming, and noise removal. Figure 1.22(b) shows the contrast-enhanced version of the image of Figure 1.22(a) developed using a nonlinear filter [Thu00].
The algorithms used for elimination or reduction of degradations in an image, such as blurring and geometric distortion caused by the imaging system and/or its surroundings, are known as image restoration.
Image reconstruction from projections involves the development of a two-dimensional image slice of a three-dimensional object from a number of planar projections obtained from various angles. By creating a number of contiguous slices, a three-dimensional image giving an inside view of the object is developed.
Image analysis methods are employed to develop a quantitative description and classification of one or more desired objects in an image.
For digital processing, an image needs to be sampled and quantized using an analog-to-digital converter. A reasonable size digital image in its original form takes a considerable amount of memory space for storage. For example, an image of size 512 x 512 samples with 8-bit resolution per sample contains over 2 million bits. Image coding methods are used to reduce the total number of bits in an image without any degradation in visual perception quality, as in speech coding, e.g., down to about 1 bit per sample on the average.

Figure 1.22: (a) A digital image, and (b) its contrast-enhanced version. (Reproduced with permission from Nonlinear Image Processing, S. K. Mitra and G. Sicuranza, Eds., Academic Press, New York, NY, ©2000.)

1.4 Typical Signal Processing Applications3


There are numerous applications of signal processing that we often encounter in our daily life without being
aware of them. Due to space limitations, it is not possible to discuss all of these applications. However, an overview of selected applications is presented.

1.4.1 Sound Recording Applications


The recording of most musical programs nowadays is usually made in an acoustically inert studio. The sound from each instrument is picked up by its own microphone closely placed to the instrument and is recorded on a single track in a multitrack tape recorder containing as many as 48 tracks. The signals from individual tracks in the master recording are then edited and combined by the sound engineer in a mix-down system to develop a two-track stereo recording. There are a number of reasons for following this approach. First, the closeness of each individual microphone to its assigned instrument provides a high degree of separation between the instruments and minimizes the background noise in the recording. Second, the sound part of one instrument can be rerecorded later if necessary. Third, during the mix-down process the sound engineer can manipulate individual signals by using a variety of signal processing devices to alter the musical balances between the sounds generated by the instruments, can change the timbre, and can add natural room acoustics effects and other special effects [Ble78], [Ear76].
Various types of signal processing techniques are utilized in the mix-down phase. Some are used to modify the spectral characteristics of the sound signal and to add special effects, whereas others are used to improve the quality of the transmission medium. The signal processing circuits most commonly used are: (1) compressors and limiters, (2) expanders and noise gates, (3) equalizers and filters, (4) noise reduction systems, (5) delay and reverberation systems, and (6) circuits for special effects [Ble78], [Ear76], [Hub89], [Wor89]. These operations are usually performed on the original analog audio signals and are implemented using analog circuit components. However, there is a growing trend toward all-digital implementation and its use in the processing of the digitized versions of the analog audio signals [Ble78].
³This section has been adapted from Handbook for Digital Signal Processing, Sanjit K. Mitra and James F. Kaiser, Eds., ©1993, John Wiley & Sons. Adapted by permission of John Wiley & Sons.

Figure 1.23: Transfer characteristic of a typical compressor.

Figure 1.24: Parameters characterizing a typical compressor.

Compressors and limiters. These devices are used for the compression of the dynamic range of an audio signal. The compressor can be considered as an amplifier with two gain levels: the gain is unity for input signal levels below a certain threshold and less than unity for signals with levels above the threshold. The threshold level is adjustable over a wide range of the input signal. Figure 1.23 shows the transfer characteristic of a typical compressor.
The parameters characterizing a compressor are its compression ratio, threshold level, attack time, and release time, which are illustrated in Figure 1.24.
When the input signal level suddenly rises above a prescribed threshold, the time taken by the compressor to adjust its normal unity gain to the lower value is called the attack time. Because of this effect, the output signal exhibits a slight degree of overshoot before the desired output level is reached. A zero attack time is desirable to protect the system from sudden high-level transients. However, in this case, the impact of sharp musical attacks is eliminated, resulting in a dull "lifeless" sound [Wor89]. A longer attack time causes the output to sound more percussive than normal.
Similarly, the time taken by the compressor to reach its normal unity gain value when the input level suddenly drops below the threshold is called the release time or recovery time. If the input signal fluctuates rapidly around the threshold in a small region, the compressor gain also fluctuates up and down. In such a situation, the rise and fall of background noise results in an audible effect called breathing or pumping, which can be minimized with a longer release time for the compressor gain.
There are various applications of the compressor unit in musical recording [Ear76]. For example, it can be used to eliminate variations in the peaks of an electric bass output signal by clamping them to a constant level, thus providing an even and solid bass line. To maintain the original character of the instrument, it is

Figure 1.25: Transfer characteristic of a typical expander.

necessary to use a compressor with a long recovery time compared to the natural decay rate of the electric bass. The device is also useful to compensate for the wide variations in the signal level produced by a singer who moves frequently, changing the distance from the microphone.
A compressor with a compression ratio of 10-to-1 or greater is called a limiter since its output levels are essentially clamped to the threshold level. The limiter is used to prevent overloading of amplifiers and other devices caused by signal peaks exceeding certain levels.
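A minimal sketch of the static (steady-state) compressor law described above is given below, working on signal levels in dB; the threshold and ratio values are arbitrary choices, and the attack and release dynamics are deliberately omitted.

```python
# Sketch of a compressor's static input/output law: unity gain below the threshold,
# gain reduced by the compression ratio above it.
import numpy as np

def compressor_static_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Return the output level in dB for a given input level in dB."""
    level_db = np.asarray(level_db, dtype=float)
    over = np.maximum(level_db - threshold_db, 0.0)       # amount above the threshold
    return level_db - over * (1.0 - 1.0 / ratio)          # compress only the excess

levels = np.array([-40.0, -20.0, -10.0, 0.0])
print(compressor_static_db(levels, ratio=4.0))    # 4-to-1 compressor
print(compressor_static_db(levels, ratio=20.0))   # a ratio of 10-to-1 or more acts as a limiter
```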

Expanders and noise gates. The expander's function is opposite that of the compressor. It is also an amplifier with two gain levels: the gain is unity for input signal levels above a certain threshold and less than unity for signals with levels below the threshold. The threshold level is again adjustable over a wide range of the input signal. Figure 1.25 shows the transfer characteristic of a typical expander. The expander is used to expand the dynamic range of an audio signal by boosting the high-level signals and attenuating the low-level signals. The device can also be used to reduce noise below a threshold level.
The expander is characterized by its expansion ratio, threshold level, attack time, and release time. Here, the time taken by the device to reach the normal unity gain for a sudden change in the input signal to a level above the threshold is defined as the attack time. Likewise, the time required by the device to lower the gain from its normal value of one for a sudden decrease in the input signal level is called the release time.
The noise gate is a special type of expander that heavily attenuates signals with levels below the threshold. It is used, for example, to totally cut off a microphone during a musical pause so as not to pass the noise being picked up by the microphone.

Equalizers and filters. Various types of filters are used to modify the frequency response of a recording or the monitoring channel. One such filter, called the shelving filter, provides boost (rise) or cut (drop) in the frequency response at either the low or at the high end of the audio frequency range while not affecting the frequency response in the remaining range of the audio spectrum, as shown in Figure 1.26. Peaking filters are used for midband equalization and are designed to have either a bandpass response to provide a boost or a bandstop response to provide a cut, as indicated in Figure 1.27.
The parameters characterizing a low-frequency shelving filter are the two frequencies f1L and f2L, where the magnitude response begins tapering up or down from a constant level, and the low-frequency gain levels in dB. Likewise, the parameters characterizing a high-frequency shelving filter are the two frequencies f1H and f2H, where the magnitude response begins tapering up or down from a constant level, and the high-frequency gain levels in dB. In the case of a peaking filter, the parameters of interest are the center frequency f0, the 3-dB bandwidth Δf of the bell-shaped curve, and the gain level at the center

Figure 1.26: Frequency responses of (a) low-frequency shelving filter and (b) high-frequency shelving filter.

Figure 1.27: Peaking filter frequency response.

frequency. Most often, the quality factor Q = f0/Δf is used to characterize the shape of the frequency response instead of the bandwidth Δf.
A typical equalizer consists of a cascade of a low-frequency shelving filter, a high-frequency shelving filter, and three or more peaking filters with adjustable parameters to provide adjustment of the overall equalizer frequency response over a broad range of frequencies in the audio spectrum. In a parametric equalizer, each individual parameter of its constituent filter blocks can be varied independently without affecting the parameters of the other filters in the equalizer.
The graphic equalizer consists of a cascade of peaking filters with fixed center frequencies but adjustable gain levels that are controlled by vertical slides in the front panel. The physical position of the slides reasonably approximates the overall equalizer magnitude response, as shown schematically in Figure 1.28.
Other types of filters that also find applications in the musical recording and transfer processes are the lowpass, highpass, and notch filters. Their corresponding frequency responses are indicated in Figure 1.29. The notch filter is designed to attenuate a particular frequency component and has a narrow notch width so as not to affect the rest of the musical program.
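For readers who want to experiment, one common digital counterpart of a boost/cut peaking section is a second-order ("biquad") filter parameterized by f0, Q, and gain in dB; the particular coefficient formulas below follow the widely used Audio EQ Cookbook form and are an outside assumption, not a design taken from the text.

```python
# Sketch: a peaking equalizer biquad defined by center frequency, Q = f0/delta_f, and gain.
import numpy as np
from scipy.signal import freqz

def peaking_biquad(f0, q, gain_db, fs):
    """Return (b, a) coefficients of a boost/cut peaking EQ biquad."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# A +6-dB boost centered at 1 kHz with Q = 2, at a 44.1-kHz sampling rate.
b, a = peaking_biquad(1000.0, 2.0, 6.0, 44100.0)
w, h = freqz(b, a, worN=2048, fs=44100.0)
print(np.max(20 * np.log10(np.abs(h))))   # approximately 6 dB at the center frequency
```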
Two major applications of equalizers and filters in recording are to correct certain types of problems that may have occurred during the recording or the transfer process and to alter the harmonic or timbral contents of a recorded sound purely for musical or creative purposes [Ear76]. For example, a direct transfer of a musical recording from old 78 rpm disks to a wideband playback system will be highly noisy due to the limited bandwidth of the old disks. To reduce this noise, a bandpass filter with a passband matching the bandwidth of the old records is utilized. Often, older recordings are made more pleasing by adding a

Figure 1.28: Graphic equalizer: (a) control panel settings, and (b) corresponding frequency response. (Adapted from [Ear76].)

Figure 1.29: Frequency responses of other types of filters: (a) lowpass filter, (b) highpass filter, and (c) notch filter.

broad high-frequency peak in the 5- to 10-kHz range and by shelving out some of the lower frequencies. The notch filter is particularly useful in removing 60-Hz power supply hum.
In creating a program by mixing down a multichannel recording, the recording engineer usually employs equalization of individual tracks for creative reasons [Ear76]. For example, a "fullness" effect can be added to weak instruments, such as the acoustical guitar, by boosting frequency components in the range 100 to 300 Hz. Similarly, by boosting the 2- to 4-kHz range, the transients caused by the fingers against the string of an acoustical guitar can be made more pronounced. A high-frequency shelving boost above the 1- to 2-kHz range increases the "crispness" in percussion instruments such as the bongo or snare drums.

Figure 1.30: The Dolby A-type noise reduction scheme for the recording mode: (a) block diagram, and (b) frequency responses of the four filters with cutoff frequencies as shown.

Noise reduction system. The overall dynamic range of human hearing is over 120 dB. However, most recording and transmission mediums have a much smaller dynamic range. The music to be recorded must be above the sound background, or noise. If the background noise is around 30 dB, the dynamic range available for the music is only 90 dB, requiring dynamic range compression for noise reduction.
A noise reduction system consists of two parts. The first part provides the compression during the recording mode while the second part provides the complementary expansion during the playback mode. To this end, the most popular methods in musical recording are the Dolby noise reduction schemes, of which there are several types [Ear76], [Hub89], [Wor89].
In the Dolby A-type method used in professional recording, for the recording mode, the audio signal is split into four frequency bands by a bank of four filters; separate compression is provided in each band and the outputs of the compressors are combined, as indicated in Figure 1.30(a). Moreover, the compression in each band is restricted to a 20-dB input range from -40 to -20 dB. Below the lower threshold (-40 dB), very low level signals are boosted by 10 dB, and above the upper threshold (-20 dB), the system has unity gain, passing the high-level signals unaffected. The transfer characteristic for the record mode is thus as shown in Figure 1.31.
In the playback mode, the scheme is essentially the same as that in the recording mode, except here the compressors are replaced by expanders with complementary transfer characteristics, as indicated in Figure 1.31. Here, the expansion is limited to a 10-dB input range from -30 to -20 dB. Above the upper
Figure 1.31: Compressor and expander transfer characteristics for the Dolby A-type noise reduction scheme.

threshold (-20 dB), very high level signals are cut by 10 dB, while below the lower threshold (-30 dB), the system has unity gain, passing the low-level signals unaffected.
Note that for each band, a 2-to-1 compression is followed by a 1-to-2 complementary expansion such that the dynamic range of the signal at the input of the compressor is exactly equal to that at the expander output. This type of overall signal processing operation is often called companding. Moreover, the companding operation in one band has no effect on a signal in another band and may often be masked by other bands with no companding.

Delay and reverberation systems. Music generated in an inert studio does not sound natural compared to the music performed inside a room, such as a concert hall. In the latter case, the sound waves propagate in all directions and reach the listener from various directions and at various times, depending on the distance traveled by the sound waves from the source to the listener. The sound wave coming directly to the listener, called the direct sound, reaches first and determines the listener's perception of the location, size, and nature of the sound source. This is followed by a few closely spaced echoes, called early reflections, generated by reflections of sound waves from all sides of the room and reaching the listener at irregular times. These echoes provide the listener's subconscious cues as to the size of the room. After these early reflections, more and more densely packed echoes reach the listener due to multiple reflections. The latter group of echoes is referred to as the reverberation. The amplitude of the echoes decays exponentially with time as a result of attenuation at each reflection. Figure 1.32 illustrates this concept. The period of time in which the reverberation falls by 60 dB is called the reverberation time. Since the absorption characteristics of different materials are not the same at different frequencies, the reverberation time varies from frequency to frequency.
Delay systems with adjustable delay factors are employed to artificially create the early reflections. Electronically generated reverberation combined with artificial echo reflections are usually added to the recordings made in a studio. The block diagram representation of a typical delay-reverberation system in a monophonic system is depicted in Figure 1.33.
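A crude sketch of these ideas in discrete time is shown below: a single delayed, attenuated copy models one early reflection, and a feedback (comb) delay produces an exponentially decaying train of echoes, a simple building block of reverberation. The delay lengths and gains are illustrative assumptions.

```python
# Sketch: artificial early reflection and a feedback-comb reverberation element.
import numpy as np

def echo(x, delay_samples, gain):
    """y[n] = x[n] + gain * x[n - delay]."""
    y = np.copy(x)
    y[delay_samples:] += gain * x[:-delay_samples]
    return y

def comb_reverb(x, delay_samples, feedback):
    """y[n] = x[n] + feedback * y[n - delay]; echoes decay exponentially."""
    y = np.copy(x)
    for n in range(delay_samples, len(x)):
        y[n] += feedback * y[n - delay_samples]
    return y

fs = 8000
x = np.zeros(fs); x[0] = 1.0                   # an impulsive "direct sound"
early = echo(x, int(0.02 * fs), 0.6)           # one reflection 20 ms later
rev = comb_reverb(early, int(0.05 * fs), 0.5)  # densely repeating, decaying echoes
```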
There are various other applications of electronic delay systems, some of which are described next [Ear76].

Figure 1.32: Various types of echoes generated by a single sound source in a room.

Figure 1.33: Block diagram of a complete delay-reverberation system in a monophonic system.

Figure 1.34: Localization of sound source using delay systems and attenuation network.

Special effects. By feeding in the same sound signal through an adjustable delay and gain control, as indicated in Figure 1.34, it is possible to vary the localization of the sound source from the left speaker to the right for a listener located on the plane of symmetry. For example, in Figure 1.34, a 0-dB loss in the left channel and a few milliseconds delay in the right channel give the impression of a localization of the sound source at the left. However, lowering of the left-channel signal level by a few-dB loss results in a phantom image of the sound source moving toward the center. This scheme can be further extended to provide a degree of sound broadening by phase shifting one channel with respect to the other through allpass networks,⁴ as shown in Figure 1.35.
⁴An allpass network is characterized by a magnitude spectrum which is equal to one for all frequencies.

Figure 1.35: Sound broadening using allpass networks.

Figure 1.36: A possible application of delay systems and reverberation in a stereophonic system.

Another application of the delay-reverberation system is in the processing of a single track into a pseudo-stereo format while simulating a natural acoustical environment, as illustrated in Figure 1.36.
The delay system can also be used to generate a chorus effect from the sound of a soloist. The basic scheme used is illustrated in Figure 1.37. Each of the delay units has a variable delay controlled by a low-frequency pseudo-random noise source to provide a random pitch variation [Ble78].
It should be pointed out here that additional signal processing is employed to make the stereo submaster developed by the sound engineer more suitable for the record-cutting lathe or the cassette tape duplicator.
1.4.2 Telephone Dialing Applications
Signal processing plays a key role in the detection and generation of signaling tones for push-button telephone dialing [Dar76]. In telephones equipped with TOUCH-TONE® dialing, the pressing of each button generates a unique set of two-tone signals, called dual-tone multifrequency (DTMF) signals, that are processed at the telephone central office to identify the number pressed by determining the two associated tone frequencies. Seven frequencies are used to code the 10 decimal digits and the two special buttons marked "*" and "#". The low-band frequencies are 697 Hz, 770 Hz, 852 Hz, and 941 Hz. The remaining three frequencies belonging to the high band are 1209 Hz, 1336 Hz, and 1477 Hz. The fourth high-band frequency of 1633 Hz is not presently in use and has been assigned for future applications to permit the use of four additional push-buttons for special services. The frequency assignments used in the TOUCH-TONE® dialing scheme are shown in Figure 1.38 [ITU84].

Figure 1.37: A scheme for implementing chorus effect.

Figure 1.38: The tone frequency assignments for TOUCH-TONE® dialing.

The scheme used to identify the two frequencies associated with the button that has been pressed is shown in Figure 1.39. Here, the two tones are first separated by a lowpass and a highpass filter. The passband cutoff frequency of the lowpass filter is slightly above 1000 Hz, whereas that of the highpass filter is slightly below 1200 Hz. The output of each filter is next converted into a square wave by a limiter and then processed by a bank of bandpass filters with narrow passbands. The four bandpass filters in the low-frequency channel have center frequencies at 697 Hz, 770 Hz, 852 Hz, and 941 Hz. The four bandpass filters in the high-frequency channel have center frequencies at 1209 Hz, 1336 Hz, 1477 Hz, and 1633 Hz. The detector following each bandpass filter develops the necessary dc switching signal if its input voltage is above a certain threshold.
All the signal processing functions described above are usually implemented in practice in the analog domain. However, increasingly, these functions are being implemented using digital techniques.⁵
⁵See Section 11.1.
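The sketch below generates a DTMF two-tone burst and identifies the pressed button by measuring the signal strength at each of the eight possible tone frequencies with single-frequency DFT evaluations; this digital approach is an illustrative stand-in for the analog filter bank of Figure 1.39, and the sampling rate and burst length are assumptions.

```python
# Sketch: DTMF tone generation and button identification by per-frequency DFT strengths.
import numpy as np

LOW = [697, 770, 852, 941]
HIGH = [1209, 1336, 1477, 1633]
KEYS = [["1", "2", "3", "A"], ["4", "5", "6", "B"],
        ["7", "8", "9", "C"], ["*", "0", "#", "D"]]

fs = 8000
t = np.arange(0, 0.05, 1 / fs)                    # a 50-ms tone burst

def dtmf_tone(row, col):
    return np.sin(2 * np.pi * LOW[row] * t) + np.sin(2 * np.pi * HIGH[col] * t)

def detect(x):
    def strength(f):                              # magnitude of the DFT at frequency f
        return abs(np.sum(x * np.exp(-2j * np.pi * f * t)))
    row = int(np.argmax([strength(f) for f in LOW]))
    col = int(np.argmax([strength(f) for f in HIGH]))
    return KEYS[row][col]

print(detect(dtmf_tone(2, 1)))                    # expected output: "8"
```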

Figure 1.39: The tone detection scheme for TOUCH-TONE® dialing.

1.4.3 FM Stereo Applications


For wireless transmission of a signal occupying a low-frequency range, such as an audio signal, it is necessary to transform the signal to a high-frequency range by modulating it onto a high-frequency carrier. At the receiver, the modulated signal is demodulated to recover the low-frequency signal. The signal processing operations used for wireless transmission are modulation, demodulation, and filtering. Two commonly used modulation schemes for radio are amplitude modulation (AM) and frequency modulation (FM).
We next review the basic idea behind the FM stereo broadcasting and reception scheme as used in the United States [Cou83]. An important feature of this scheme is that at the receiving end, the signal can be heard over a standard monaural FM radio with a single speaker or over a stereo FM radio with two speakers. The system is based on the frequency-division multiplexing (FDM) method described earlier in Section 1.2.5.
The block diagram representations of the FM stereo transmitter and the receiver are shown in Figure 1.40(a) and (b), respectively. At the transmitting end, the sum and the difference of the left and right channel audio signals, sL(t) and sR(t), are first formed. Note that the summed signal sL(t) + sR(t) is used in monaural FM radio. The difference signal sL(t) - sR(t) is modulated using the double-sideband suppressed carrier (DSB-SC) scheme⁶ using a subcarrier frequency fsc of 38 kHz. The summed signal, the modulated difference signal, and a 19-kHz pilot tone signal are then added, developing the composite baseband signal sB(t). The spectrum of the composite signal is shown in Figure 1.40(c). The baseband signal is next modulated onto the main carrier frequency fc using the frequency modulation method. At the receiving end, the FM signal is demodulated to derive the baseband signal sB(t), which is then separated into the low-frequency summed signal and the modulated difference signal using a lowpass filter and a bandpass filter. The cutoff frequency of the lowpass filter is around 15 kHz, whereas the center frequency of the bandpass filter is at 38 kHz. The 19-kHz pilot tone is used in the receiver to develop the 38-kHz
⁶See Section 1.2.4.

Figure 1.40: The FM stereo system: (a) transmitter, (b) receiver, and (c) spectrum of the composite baseband signal sB(t).

reference signal for a coherent subcarrier demodulation for recovering the audio difference signal. The sum and difference of the two audio signals create the desired left audio and right audio signals.
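A sketch of forming the composite baseband signal sB(t) described above is given below; the sampling rate, the audio test tones, and the amplitude scaling are illustrative assumptions rather than broadcast-standard values.

```python
# Sketch: composite FM stereo baseband = (L+R) + DSB-SC modulated (L-R) at 38 kHz + 19-kHz pilot.
import numpy as np

fs = 192_000                                   # high enough to represent the 38-kHz subcarrier
t = np.arange(0, 0.01, 1 / fs)
sL = np.sin(2 * np.pi * 1000 * t)              # left-channel test tone
sR = np.sin(2 * np.pi * 1500 * t)              # right-channel test tone

pilot = 0.1 * np.sin(2 * np.pi * 19_000 * t)   # pilot tone used to regenerate the subcarrier
sB = (sL + sR) + (sL - sR) * np.cos(2 * np.pi * 38_000 * t) + pilot

# At the receiver, the 19-kHz pilot would be frequency-doubled to 38 kHz for
# coherent demodulation of the difference signal.
```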

1.4.4 Electronic Music Synthesis


The generation of the sound of a musical instrument using electronic circuits is another example of the application of signal processing methods [Ali80], [Moo77]. The basis of such music synthesis is the

Figure 1.41: Perspective plot of the amplitude functions Ak(t) for an actual note from a clarinet. (Reproduced with permission from J. A. Moorer, Signal processing aspects of computer music: A survey, Proceedings of the IEEE, vol. 65, August 1977, pp. 1108-1137, ©1977 IEEE.)

following representation of the sound signal s(t):

s(t) = Σ_{k=1}^{N} Ak(t) sin(2π fk(t) t), (1.20)

where Ak(t) and fk(t) are the time-varying amplitude and frequency of the kth component of the signal. The frequency function fk(t) varies slowly with time. For an instrument playing an isolated tone, fk(t) = kf0, i.e.,

s(t) = Σ_{k=1}^{N} Ak(t) sin(2π k f0 t), (1.21)

where f0 is called the fundamental frequency. In a musical sound with many tones, all other frequencies are usually integer multiples of the fundamental and are called partial frequencies, also called harmonics. Figure 1.41 shows, for example, the perspective plot of the amplitude functions as a function of time of 17 partial frequency components of an actual note from a clarinet. The aim of the synthesis is to produce electronically the Ak(t) and fk(t) functions. To this end, the two most popular approaches followed are described next.

Subtractive synthesis. This approach, which nearly duplicates the sound generation mechanism of a musical instrument, is based on the generation of a periodic signal containing all required harmonics and the use of filters to selectively attenuate (i.e., subtract) unwanted partial frequency components. The frequency-dependent gain of the filters can also be used to boost certain frequencies. The desired variations in the amplitude functions are generated by an analog multiplier or a voltage-controlled amplifier. Additional variations in the amplitude functions can be provided by dynamically adjusting the frequency response characteristics of the filters.

Figure 1.42: A piecewise-linear approximation to the amplitude functions of Figure 1.41. (Reproduced with permission from J. A. Moorer, Signal processing aspects of computer music: A survey, Proceedings of the IEEE, vol. 65, August 1977, pp. 1108-1137, ©1977 IEEE.)

Additive synthesis. Here, partial frequency components are generated independently by oscillators with time-varying oscillation frequencies. The amplitudes of the required signals are then individually modified, approximating the actual variations obtained by analysis, and combined (i.e., added) to produce the desired sound signal. For example, a piecewise-linear approximation of the clarinet note of Figure 1.41 is sketched in Figure 1.42 and can be used to generate a reasonable replica of the note. Usually, some alterations to the amplitude and frequency functions may be needed before the music that is generated sounds as close as possible to that of the original instrument.
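A minimal additive-synthesis sketch based on Eq. (1.21) is given below; the fundamental frequency, the number of partials, and the attack/decay envelopes are invented for illustration and do not correspond to the measured clarinet data of Figure 1.41.

```python
# Sketch of additive synthesis: a tone built as a sum of harmonics of a fundamental f0,
# each with its own slowly varying amplitude envelope Ak(t), as in Eq. (1.21).
import numpy as np

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
f0 = 220.0                                        # assumed fundamental frequency

def envelope(attack, decay):
    """A simple attack/decay amplitude function Ak(t)."""
    return np.minimum(t / attack, 1.0) * np.exp(-t / decay)

tone = np.zeros_like(t)
for k in range(1, 9):                             # first eight partials
    Ak = envelope(attack=0.02, decay=0.4 / k) / k
    tone += Ak * np.sin(2 * np.pi * k * f0 * t)   # add the kth harmonic with envelope Ak(t)
```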

1.4.5 Echo Cancellation in Telephone Networks


In a telephone network the central offices perform the necessary switching to connect two subscribers [Dut80], [Fre78], [Mes82]. For economic reasons, a two-wire circuit is used to connect a subscriber to his/her central office, whereas the central offices are connected using four-wire circuits. A two-wire circuit is bidirectional and carries signals in both directions. A four-wire circuit uses two separate unidirectional paths for signal transmission in both directions. The latter is preferred for long-distance trunk connections since signals at intermediate points in the trunk can be equalized and amplified using repeaters and, if necessary, multiplexed easily. A hybrid coil in the central office provides the interface between a two-wire circuit and a four-wire circuit, as shown in Figure 1.43. The hybrid circuit ideally should provide a perfect impedance match to the two-wire circuit by impedance balancing so that the incoming four-wire receive signal is passed directly to the two-wire circuit connected to the hybrid with no portion appearing in the four-wire transmit path. However, to save cost, a hybrid coil is shared among several subscribers. Thus, it is not possible to provide a perfect impedance match in every case since the lengths of the subscriber lines vary. The resulting imbalance causes a large portion of the incoming receive signal from the distant talker to appear in the transmit path, and it is returned to the talker as an echo. Figure 1.44 illustrates the normal transmission between a talker and a listener as well as two possible major echo paths.

Figure 1.43: Basic 2/4-wire interconnection scheme.

Figure 1.44: Various signal paths in a telephone network: (a) transmission path from talker A to listener B, (b) echo path for talker A, and (c) echo path for listener B.

The effect of the echo can be annoying to the talker, depending on the amplitude and delay of the echo, i.e., on the length of the trunk circuit. The effect of the echo is worst for telephone networks involving geostationary satellite circuits, where the echo delay is about 540 ms.
Several methods are followed to reduce the effect of the echo. In trunk circuits up to 3000 km in length, adequate reduction of the echo is achieved by introducing additional signal loss in both directions of the four-wire circuit. In this scheme, an improvement in the signal-to-echo ratio is realized since the echo undergoes loss in both directions while the signals are attenuated only once.
For distances greater than 3000 km, echoes are controlled by means of an echo suppressor inserted in the trunk circuit, as indicated in Figure 1.45. The device is essentially a voice-activated switch implementing two functions. It first detects the direction of the conversation and then blocks the opposite path in the four-wire circuit. Even though it introduces distortion when both subscribers are talking by clipping parts of the speech signal, the echo suppressor has provided a reasonably acceptable solution for terrestrial transmission.
For telephone conversations involving satellite circuits, an elegant solution is based on the use of an echo canceler. The circuit generates a replica of the echo using the signal in the receive path and subtracts it from the signal in the transmit path, as indicated in Figure 1.46. Basically, it is an adaptive filter structure whose parameters are adjusted using certain adaptation algorithms until the residual signal is satisfactorily

Figure 1.45: Echo suppression scheme.

Figure 1.46: Echo cancellation scheme.

Figure 1.47: Scheme for the digital processing of an analog signal.

minimized.⁷ Typically, an echo reduction of about 40 dB is considered satisfactory in practice. To eliminate the problem generated when both subscribers are talking, the adaptation algorithm is disabled when the signal in the transmit path contains both the echo and the signal generated by the speaker closer to the hybrid coil.
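A rough sketch of such an adaptive echo canceler is given below, using an LMS coefficient update as one common adaptation algorithm; the echo-path impulse response, filter length, and step size are illustrative assumptions.

```python
# Sketch: an adaptive FIR filter driven by the receive-path signal learns a replica of the
# echo and subtracts it from the transmit path (LMS adaptation).
import numpy as np

rng = np.random.default_rng(3)
N = 5000
far = rng.standard_normal(N)                      # stand-in for the far-end (receive-path) signal
echo_path = np.array([0.0, 0.5, 0.3, -0.2, 0.1])  # assumed hybrid echo impulse response
echo = np.convolve(far, echo_path)[:N]            # echo appearing in the transmit path

L, mu = 8, 0.01                                   # adaptive filter length and LMS step size
w = np.zeros(L)
x_buf = np.zeros(L)
residual = np.zeros(N)
for n in range(N):
    x_buf = np.concatenate(([far[n]], x_buf[:-1]))  # most recent receive-path samples
    e = echo[n] - w @ x_buf                         # echo minus its current replica
    w += mu * e * x_buf                             # LMS coefficient update
    residual[n] = e

# Echo reduction in dB after convergence (large when the replica is accurate).
print(10 * np.log10(np.mean(echo[:500] ** 2) / np.mean(residual[-500:] ** 2)))
```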

1.5 Why Digital Signal Processing?


In some sense, the origin of digital signal processing techniques can be traced back to the seventeenth century when finite difference methods, numerical integration methods, and numerical interpolation methods were developed to solve physical problems involving continuous variables and functions. The more recent interest in digital signal processing arose in the 1950s with the availability of large digital computers. Initial applications were primarily concerned with the simulation of analog signal processing methods. Around the beginning of the 1960s, researchers began to consider digital signal processing as a separate field by itself. Since then, there have been significant and myriad developments and breakthroughs in both theory and applications of digital signal processing.
Digital processing of an analog signal consists basically of three steps: conversion of the analog signal into a digital form, processing of the digital version, and finally, conversion of the processed digital signal back into an analog form. Figure 1.47 shows the overall scheme in a block diagram form.

Figure 1.48: Typical waveforms of signals appearing at various stages in Figure 1.47: (a) analog input signal, (b) output of the S/H circuit, (c) A/D converter output, (d) output of the digital processor, (e) D/A converter output, and (f) analog output signal. In (c) and (d), the digital HIGH and LOW levels are shown as positive and negative pulses for clarity.

Since the amplitude of the analog input signal varies with time, a sample-and-hold (S/H) circuit is used first to sample the analog input at periodic intervals and hold the sampled value constant at the input of the analog-to-digital (A/D) converter to permit accurate digital conversion. The input to the A/D converter is a staircase-type analog signal if the S/H circuit holds the sampled value until the next sampling instant. The output of the A/D converter is a binary data stream that is next processed by the digital processor implementing the desired signal processing algorithm. The output of the digital processor, another binary data stream, is then converted into a staircase-type analog signal by the digital-to-analog (D/A) converter. The lowpass filter at the output of the D/A converter then removes all undesired high-frequency components and delivers at its output the desired processed analog signal. Figure 1.48 illustrates the waveforms of the pertinent signals at various stages in the above process, where for clarity the two levels of the binary signals are shown as a positive and a negative pulse, respectively.
In contrast to the above, a direct analog processing of an analog signal is conceptually much simpler since it involves only a single processor, as illustrated in Figure 1.49. It is therefore natural to ask what the advantages are of digital processing of an analog signal.
There are of course many advantages in choosing digital signal processing. The most important ones are discussed next [Bel84], [Pro92].

Figure 1.49: Analog processing of analog signals.

Unlike analog circuits, the operation of digital circuits does not depend on precise values of the digital signals. As a result, a digital circuit is less sensitive to tolerances of component values and is fairly independent of temperature, aging, and most other external parameters. A digital circuit can be reproduced easily in volume quantities and does not require any adjustments either during construction or later while in use. Moreover, it is amenable to full integration, and with the recent advances in very large scale integrated (VLSI) circuits, it has been possible to integrate highly sophisticated and complex digital signal processing systems on a single chip.
In a digital processor, the signals and the coefficients describing the processing operation are represented as binary words. Thus, any desirable accuracy can be achieved by simply increasing the wordlength, subject to cost limitations. Moreover, the dynamic ranges for signals and coefficients can be increased still further by using floating-point arithmetic if necessary.
Digital processing allows the sharing of a given processor among a number of signals by timesharing, thus reducing the cost of processing per signal. Figure 1.50 illustrates the concept of timesharing where two digital signals are combined into one by time-division multiplexing. The multiplexed signal can then be fed into a single processor. By switching the processor coefficients prior to the arrival of each signal at the input of the processor, the processor can be made to look like two different systems. Finally, by demultiplexing the output of the processor, the processed signals can be separated.
Digital implementation permits easy adjustment of processor characteristics during processing, such as that needed in implementing adaptive filters. Such adjustments can be simply carried out by periodically changing the coefficients of the algorithm representing the processor characteristics. Another application of the changing of coefficients is in the realization of systems with programmable characteristics, such as frequency selective filters with adjustable cutoff frequencies. Filter banks with guaranteed complementary frequency response characteristics are easily implemented in digital form.
Digital implementation allows the realization of certain characteristics not possible with analog implementation, such as exact linear phase and multirate processing. Digital circuits can be cascaded without any loading problems, unlike analog circuits. Digital signals can be stored almost indefinitely without any loss of information on various storage media such as magnetic tapes and disks, and optical disks. Such stored signals can later be processed off-line, such as in the compact disk player, the digital video disk player, the digital audio tape player, or simply by using a general purpose computer as in seismic data processing. On the other hand, stored analog signals deteriorate rapidly as time progresses and cannot be recovered in their original forms.
Another advantage is the applicability of digital processing to very low frequency signals, such as those occurring in seismic applications, where inductors and capacitors needed for analog processing would be physically very large in size.
Digital signal processing is also associated with some disadvantages. One obvious disadvantage is the increased system complexity in the digital processing of analog signals because of the need for additional pre- and postprocessing devices such as the A/D and D/A converters and their associated filters and complex digital circuitry.
A second disadvantage associated with digital signal processing is the limited range of frequencies available for processing. This property limits its application particularly in the digital processing of analog signals. As shown later, in general, an analog continuous-time signal must be sampled at a frequency that is

Figure 1.50: Illustration of the time-sharing concept. The signal shown in (c) has been obtained by time-multiplexing the signals shown in (a) and (b).

at least twice the highest frequency component present in the signal. If this condition is not satisfied, then signal components with frequencies above half the sampling frequency appear as signal components below this particular frequency, totally distorting the input analog signal waveform. The available frequency range of operation of a digital signal processor is primarily determined by the S/H circuit and the A/D converter, and as a result is limited by the state of the art of the technology. The highest sampling frequency reported in the literature presently is around 1 GHz [Pou87]. Such high sampling frequencies are not usually used in practice since the achievable resolution of the A/D converter, given by the wordlength of the digital equivalent of the analog sample, decreases with an increase in the speed of the converter. For example, the reported resolution of an A/D converter operating at 1 GHz is 6 bits [Pou87]. On the other hand, in most applications, the required resolution of an A/D converter is from 12 bits to around 16 bits. Consequently, a sampling frequency of at most 10 MHz is presently a practical upper limit. This upper limit, however, is getting larger and larger with advances in technology.
The third disadvantage stems from the fact that digital systems are constructed using active devices that consume electrical power. For example, the WE DSP32C Digital Signal Processor chip contains over 405,000 transistors and dissipates around 1 watt. On the other hand, a variety of analog processing algorithms can be implemented using passive circuits employing inductors, capacitors, and resistors that do not need power. Moreover, active devices are less reliable than passive components.
However, the advantages far outweigh the disadvantages in various applications, and with the continuing decrease in the cost of digital processor hardware, applications of digital signal processing are increasing rapidly.
2 Discrete-Time Signals and Systems in the Time-Domain
The signals arising in digital signal processing are basically discrete-time signals, and discrete-time systems are used to process these signals. As indicated in Figure 1.1(c), a discrete-time signal in its most basic form is defined at equally spaced discrete values of time, the independent variable, with the signal amplitude at these discrete times being continuous. Consequently, a discrete-time signal can be represented as a sequence of numbers, with the independent time variable represented as an integer in the range from −∞ to +∞. Discrete-time signal processing then involves the processing of a discrete-time signal by a discrete-time system to develop another discrete-time signal with more desirable properties or to extract certain information about the original discrete-time signal.
In many applications, it is increasingly becoming more attractive to process a continuous-time signal by discrete-time signal processing methods. To this end, the continuous-time signal is first converted into an "equivalent" discrete-time signal by periodic sampling; the discrete-time signal is then processed by a discrete-time system to generate another discrete-time signal, and the latter is converted into an equivalent continuous-time signal, if necessary. As we shall show later in this book, under certain (ideal) conditions, the conversion of a continuous-time signal can be carried out such that the discrete-time equivalent has all the information contained in the original continuous-time signal, and if necessary, can be converted back into the original continuous-time signal without any distortion.
Thus, to understand the theory of digital signal processing and the design of discrete-time systems, we need to know the characterization of discrete-time signals and systems in the time-domain, a subject we discuss in this chapter. It turns out that it is often convenient to characterize the discrete-time signals and systems in a transformed domain. This alternative representation is considered in the following two chapters.
In this chapter, we first discuss the time-domain representation of a discrete-time signal as a sequence of numbers and its various classifications. We then describe several basic discrete-time signals or sequences that play important roles in the time-domain characterization of arbitrary discrete-time signals and discrete-time systems. A number of basic operations that generate other sequences from one or more sequences are described next. As we show later, a discrete-time system is composed of a combination of these basic operations. The problem of representing a continuous-time signal by a discrete-time sequence is examined for a simple case. A more thorough mathematical treatment for the general case is deferred until Chapter 5 since it is based on a transform-domain representation of the discrete-time signal discussed in Chapter 3.
In the latter half of this chapter, we introduce the general concept of the processing of a discrete-time signal by a discrete-time system and the classification of such systems. Of these systems, the class of linear, time-invariant type is of exclusive interest in this book, and we describe its time-domain characterization in several different forms. We also introduce the concept of cross-correlation between a pair of discrete-time sequences, which provides a measure of the degree of similarity between the pair.


Figure 2.1: Graphical representation of a discrete-time sequence {x[n]}.

Most of the book deals with the processing of signals that are deterministic in nature. However, in some instances, the signals encountered could be random, and a discussion of the time-domain representation of discrete-time random signals is also included in this chapter.
Throughout this chapter and successive chapters, we make extensive use of MATLAB to illustrate through computer simulations the various concepts introduced.

2.1 Discrete-Time Signals


2.1.1 Time-Domain Representation
As indicated earlier, in digital signal processing, signals are represented as sequences of numbers called samples. A sample value of a typical discrete-time signal or sequence is denoted as x[n], with the argument n being an integer in the range −∞ to ∞. It should be noted that x[n] is defined only for integer values of n and is undefined for noninteger values of n. The discrete-time signal is represented by {x[n]}. If a discrete-time signal is written as a sequence of numbers inside braces, the location of the sample value associated with the time index n = 0 is indicated by an arrow ↑ under it. The sample values to its right are for positive values of n, and the sample values to its left are for negative values of n. An example of a discrete-time signal with real-valued samples is given by

{x[n]} = {..., 0.95, −0.2, 2.17, 1.1, 0.2, −3.67, 2.9, −0.8, 4.1, ...},          (2.1)
                      ↑

For the above signal, x[−1] = −0.2, x[0] = 2.17, x[1] = 1.1, and so on. The graphical representation of a sequence {x[n]} with real-valued samples is illustrated in Figure 2.1.
In some applications, a discrete-time sequence {x[n]} is generated by periodically sampling a continuous-time signal xa(t) at uniform time intervals:

x[n] = xa(nT),   n = ..., −2, −1, 0, 1, 2, ...,          (2.2)

as illustrated in Figure 2.2. The spacing T between two consecutive samples in Eq. (2.2) is called the sampling interval or sampling period. The reciprocal of the sampling interval T, denoted as FT, is called the sampling frequency:

FT = 1/T.          (2.3)

The unit of sampling frequency is cycles per second, or hertz (Hz), if the sampling period is in seconds (sec).

Figure 2.2: Sequence generated by sampling a continuous-time signal xa(t).

It should be noted that, whether or not a sequence {x[n]} has been obtained by sampling, the quantity x[n] is called the nth sample of the sequence. For a sequence {x[n]}, the nth sample value x[n] can, in general, take any real or complex value. If x[n] is real for all values of n, then {x[n]} is a real sequence. On the other hand, if the nth sample value is complex for one or more values of n, then it is a complex sequence. By separating the real and imaginary parts of x[n], we can write a complex sequence {x[n]} as

x[n] = xre[n] + j xim[n],          (2.4)

where xre[n] and xim[n] are the real part and the imaginary part of x[n], respectively, and are thus real sequences. The complex conjugate sequence of {x[n]} is usually denoted by {x*[n]} and written as {x*[n]} = {xre[n]} − j{xim[n]}.¹ Often the braces are omitted to denote a sequence if there is no ambiguity.
As defined in the previous chapter, there are basically two types of discrete-time signals: sampled-data signals in which the samples are continuous-valued and digital signals in which the samples are discrete-valued. The pertinent signals in a practical digital signal processing system are digital signals obtained by quantizing the sample values either by rounding or by truncation. For example, the digital signal {x̂[n]} obtained by rounding the sample values of the discrete-time sequence x[n] of Eq. (2.1) to the nearest integer values is given by

{x̂[n]} = {..., 1, 0, 2, 1, 0, −4, 3, −1, 4, ...}.

Figure 2.3 shows a digital signal with amplitudes taking discrete integer values in the range from −3 to 3. For digital processing of a continuous-time signal, it is first converted into an equivalent digital signal by means of a sample-and-hold circuit followed by an analog-to-digital converter. The processed digital signal is then converted back into an equivalent continuous-time signal by a digital-to-analog converter followed by an analog reconstruction filter. Chapter 5 is concerned with the digital processing of continuous-time signals. It develops the mathematical foundation of the sampling process and describes the operations of various interface circuits between the continuous-time domain and the digital domain. Chapter 9 considers the effect of discretization of the amplitudes.
The discrete-time signal may be a finite-length or an infinite-length sequence. A finite-length (also called finite-duration or finite-extent) sequence is defined only for a finite time interval:

N1 ≤ n ≤ N2,          (2.5)

where −∞ < N1 and N2 < ∞ with N2 ≥ N1. The length or duration N of the above finite-length sequence is

N = N2 − N1 + 1.          (2.6)

¹The complex conjugation operation is denoted by the symbol *.

Figure 2.3: A digital signal.

Figure 2.4: (a) A right-sided sequence, and (b) a left-sided sequence.

A length-N discrete-time sequence consists of N samples and is often referred to as an N-point sequence. A finite-length sequence can also be considered as an infinite-length sequence by assigning zero values to samples whose arguments are outside the above range. The process of lengthening a sequence by adding zero-valued samples is called appending with zeros or zero-padding.
There are three types of infinite-length sequences. A right-sided sequence x[n] has zero-valued samples for n < N1, i.e.,

x[n] = 0   for n < N1,          (2.7)

where N1 is a finite integer that can be positive or negative. If N1 ≥ 0, a right-sided sequence is usually called a causal sequence.² Likewise, a left-sided sequence x[n] has zero-valued samples for n > N2, i.e.,

x[n] = 0   for n > N2,          (2.8)

where N2 is a finite integer which can be positive or negative. If N2 ≤ 0, a left-sided sequence is usually called an anticausal sequence. A general two-sided sequence is defined for all values of n in the range −∞ < n < ∞. Figure 2.4 illustrates the above two types of one-sided sequences.
For simplicity, for finite-length sequences defined for positive values of the time index n beginning at n = 0, the first sample in the sequence will always be assumed to be the one associated with the time index n = 0, without the arrow being explicitly shown under it.

2.1.2 Operations on Sequences

A single-input, single-output discrete-time system operates on a sequence, called the input sequence, according to some prescribed rules and develops another sequence, called the output sequence, usually with more desirable properties. For example, the input may be a signal corrupted by an additive noise, and the discrete-time system is designed to remove the noise component from the input. In some applications, the discrete-time system can have more than one input and more than one output. An M-input, N-output discrete-time system operates on M input signals, generating N output signals. The FM stereo

Figure 2.5: Schematic representations of basic operations on sequences: (a) modulator, (b) adder, (c) multiplier, (d) unit delay, (e) unit advance, and (f) pick-off node.

transmission system is a two-input, single-output system since here the left and right channel audio signals are combined into a high-frequency composite baseband signal. In most cases, the operation defining a particular discrete-time system is composed of some basic operations that we describe next.

Basic Operations

Let x[n] and y[n] be two known sequences. By forming the product of the sample values of these two sequences at each instant, we form a new sequence w1[n]:

w1[n] = x[n] · y[n].          (2.9)

In some applications, the product operation is also known as modulation. The device implementing the modulation operation is called a modulator and its schematic representation is shown in Figure 2.5(a). An application of the product operation is in forming a finite-length sequence from an infinite-length sequence by multiplying the latter with a finite-length sequence called a window sequence. This process of forming the finite-length sequence is usually called windowing, which plays an important role in the design of certain types of digital filters (Section 7.6).
The second basic operation is the addition, by which a new sequence w2[n] is obtained by adding the sample values of two sequences x[n] and y[n]:

w2[n] = x[n] + y[n].          (2.10)

The device implementing the addition operation is called an adder and its schematic representation is shown in Figure 2.5(b).
The third basic operation is the scalar multiplication, whereby a new sequence is generated by multiplying each sample of a sequence x[n] by a scalar A:

w3[n] = A x[n].          (2.11)



The device implementing the multiplication operation is called a multiplier and its schematic representation is shown in Figure 2.5(c).
The time-shifting operation illustrated below in Eq. (2.12) shows the relation between x[n] and its time-shifted version w4[n]:

w4[n] = x[n − N],          (2.12)

where N is an integer. If N > 0, it is a delaying operation, and if N < 0, it is an advancing operation.
The device implementing the delay operation by one sample is called a unit delay and its schematic representation is shown in Figure 2.5(d). The reason for using the symbol z^(−1) will be clear after we have reviewed the z-transform of sequences in Chapter 3. The schematic representation of the unit advance operation is shown in Figure 2.5(e).
The time-reversal operation, also called the folding operation, is another useful scheme to develop a new sequence. An example is

y[n] = x[−n],          (2.13)

which is the time-reversed version of the sequence x[n].
In Figure 2.5(f) we also show a pick-off node, which is used to provide multiple copies of a sequence.

As indicated by the above example, operations on two or more sequences to generate a new sequence can be carried out if all sequences are of the same length and defined for the same range of the time index n. However, in some situations, this problem can be circumvented by appending zero-valued samples to the sequence(s) of smaller lengths to make all sequences have the same range of the time index n. This process is illustrated below.
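A minimal MATLAB sketch of this zero-padding idea is given below; the two sequences used are arbitrary illustrations (both assumed to start at n = 0), and the fragment is not one of the text's numbered programs.

% Adding two sequences of unequal lengths by zero-padding the shorter one
x = [1 2 3 4 5];                        % length-5 sequence, assumed to start at n = 0
y = [2 -1 3];                           % length-3 sequence, assumed to start at n = 0
y = [y zeros(1,length(x)-length(y))];   % append zero-valued samples to y
w = x + y;                              % both operands now have the same length
disp(w);                                % displays 3 1 6 4 5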

Figure 2.6: Discrete-time system of Example 2.3.

Figure 2.7: Discrete-time system of Example 2.4.

Combination of Basic Operations


In most applications, combinations of the above basic operations are used. Illustrations of such combinations are given in the following two examples.


Sampling Rate Alteration

Another quite useful operation is the sampling rate alteration that is employed to generate a new sequence with a sampling rate higher or lower than that of a given sequence. Thus, if x[n] is a sequence with a sampling rate of FT Hz and it is used to generate another sequence y[n] with a desired sampling rate of F'T Hz, then the sampling rate alteration ratio is given by

Figure 2.8: Representation of basic sampling rate alteration devices: (a) up-sampler, and (b) down-sampler.

Figure 2.9: Illustration of the up-sampling process.

F'T/FT = R.          (2.16)

If R > 1, the process is called interpolation and results in a sequence with a higher sampling rate. On the other hand, if R < 1, the sampling rate is decreased by a process called decimation.
The basic operations employed in the sampling rate alteration process are called up-sampling and down-sampling. These operations play important roles in multirate discrete-time systems and are considered in Chapter 10.
In up-sampling by an integer factor L > 1, L − 1 equidistant zero-valued samples are inserted by the up-sampler between each two consecutive samples of the input sequence x[n] to develop an output sequence xu[n] according to the relation

xu[n] = { x[n/L],   n = 0, ±L, ±2L, ...,
        { 0,        otherwise.          (2.17)

Note that the sampling rate of xu[n] is L times larger than that of the original sequence x[n].
The block-diagram representation of the up-sampler, also called a sampling rate expander, is shown in Figure 2.8(a). Figure 2.9 illustrates the up-sampling operation for an up-sampling factor of L = 3.
Conversely, the down-sampling operation by an integer factor M > 1 on a sequence x[n] consists of keeping every Mth sample of x[n] and removing M − 1 in-between samples, generating an output sequence y[n] according to the relation

y[n] = x[nM].          (2.18)

This results in a sequence y[n] whose sampling rate is (1/M)th that of x[n]. Basically, all input samples with indices equal to an integer multiple of M are retained at the output and all others are discarded.
The schematic representation of the down-sampler or sampling rate compressor is shown in Figure 2.8(b). Figure 2.10 illustrates the down-sampling operation for a down-sampling factor of M = 3.
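The up-sampling and down-sampling relations of Eqs. (2.17) and (2.18) can be verified with a few lines of MATLAB. The sketch below uses an arbitrary length-6 input and L = M = 3; it is an illustrative fragment, not one of the text's numbered programs.

% Up-sampling by L (Eq. (2.17)) and down-sampling by M (Eq. (2.18))
x = [1 2 3 4 5 6];            % arbitrary input sequence
L = 3;  M = 3;
xu = zeros(1,L*length(x));    % output of the up-sampler
xu(1:L:end) = x;              % insert L-1 zero-valued samples between input samples
y = x(1:M:end);               % keep every Mth input sample
disp(xu);                     % 1 0 0 2 0 0 3 0 0 4 0 0 5 0 0 6 0 0
disp(y);                      % 1 4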
Figure 2.10: Illustration of the down-sampling process.

Figure 2.11: (a) An even sequence, and (b) an odd sequence.

2.1.3 Classification of Sequences

A discrete-time signal can be classified in various ways. One classification discussed earlier is in terms of the number of samples defining the sequence. Another classification is with respect to the symmetry exhibited by the samples with respect to the time index n = 0. A discrete-time signal can also be classified in terms of its other properties such as periodicity, summability, energy, and power.

Classification Based on Symmetry


A sequence x[n] is called a conjugate-symmetric sequence if x[n] = x*[−n]. A real conjugate-symmetric sequence is called an even sequence. A sequence x[n] is called a conjugate-antisymmetric sequence if x[n] = −x*[−n]. A real conjugate-antisymmetric sequence is called an odd sequence. For a conjugate-antisymmetric sequence x[n], the sample value at n = 0 must be purely imaginary. Consequently, for an odd sequence x[0] = 0. Examples of even and odd sequences are shown in Figure 2.11.
Any complex sequence x[n] can be expressed as a sum of its conjugate-symmetric part xcs[n] and its conjugate-antisymmetric part xca[n]:

x[n] = xcs[n] + xca[n],          (2.19)



where

xcs[n] = ½ (x[n] + x*[−n]),          (2.20a)
xca[n] = ½ (x[n] − x*[−n]).          (2.20b)

As indicated by Eqs. (2.20a) and (2.20b), the computation of the conjugate-symmetric and conjugate-antisymmetric parts of a sequence involves conjugation, time-reversal, addition, and multiplication operations. Because of the time-reversal operation, the decomposition of a finite-length sequence into a sum of a conjugate-symmetric sequence and a conjugate-antisymmetric sequence is possible if the parent sequence is of odd length defined for a symmetric interval, −M ≤ n ≤ M.

Likewise, any real sequence x[n] can be expressed as a sum of its even part xev[n] and its odd part xod[n]:

x[n] = xev[n] + xod[n],          (2.21)

where

xev[n] = ½ (x[n] + x[−n]),          (2.22a)
xod[n] = ½ (x[n] − x[−n]).          (2.22b)

For a length-N sequence defined for 0 ≤ n ≤ N − 1, the above definitions of symmetry are not applicable. The definitions of symmetry in the case of finite-length sequences are given instead using a modulo operation, with all such symmetric and antisymmetric parts of a length-N sequence being also of length N and defined for the same range of values of the time index n. Thus, a length-N sequence x[n] can be expressed as

x[n] = xpcs[n] + xpca[n],   0 ≤ n ≤ N − 1,          (2.23)

where xpcs[n] and xpca[n] denote, respectively, the periodic conjugate-symmetric part and the periodic conjugate-antisymmetric part, defined by³

xpcs[n] = ½ (x[n] + x*[⟨−n⟩N]) = ½ (x[n] + x*[N − n]),   0 ≤ n ≤ N − 1,          (2.24a)
xpca[n] = ½ (x[n] − x*[⟨−n⟩N]) = ½ (x[n] − x*[N − n]),   0 ≤ n ≤ N − 1.          (2.24b)

For a real sequence x[n], the periodic conjugate-symmetric part is a real sequence, called the periodic even part, and denoted by xpe[n]. Likewise, for a real sequence x[n], the periodic conjugate-antisymmetric part is also a real sequence, called the periodic odd part, and denoted by xpo[n].
A length-N sequence x[n], defined for 0 ≤ n ≤ N − 1, is said to be periodic conjugate-symmetric if x[n] = x*[⟨−n⟩N] = x*[N − n], and is said to be periodic conjugate-antisymmetric if x[n] = −x*[⟨−n⟩N] = −x*[N − n]. A finite-length real periodic conjugate-symmetric sequence is called a symmetric sequence and a finite-length real periodic conjugate-antisymmetric sequence is called an antisymmetric sequence.


--
W--IIVIJ!II- 'W¥1J-~>:f1-1¥> · - W4'{wt#i ni M ·-li¥0¢111lf--Y)'l¥l!M04¥j!Mf¥#cWfw]

The symmetry properties of sequences often simplify their respective frequency-domain representations and can be exploited in signal analysis. Implications of the symmetry conditions are considered in Chapter 3.
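The decompositions of Eqs. (2.20a)-(2.20b) and (2.22a)-(2.22b) translate directly into MATLAB. The following sketch uses an arbitrary odd-length complex sequence defined on −2 ≤ n ≤ 2; it is illustrative only and not one of the text's numbered programs.

% Conjugate-symmetric/antisymmetric decomposition, Eqs. (2.20a)-(2.20b)
x = [1+2i, -3, 4i, 2, 5-1i];     % arbitrary length-5 sequence, n = -2:2
xr = conj(fliplr(x));            % x*[-n]: time-reversal followed by conjugation
xcs = 0.5*(x + xr);              % conjugate-symmetric part
xca = 0.5*(x - xr);              % conjugate-antisymmetric part
disp(max(abs(xcs + xca - x)));   % displays 0, confirming x[n] = xcs[n] + xca[n]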

Periodic and Aperiodic Signals

A sequence x̃[n] satisfying

x̃[n] = x̃[n + kN]   for all n,          (2.25)

is called a periodic sequence with a period N, where N is a positive integer and k is any integer. An example of a periodic sequence that has a period N = 7 samples is shown in Figure 2.12. A sequence is called an aperiodic sequence if it is not periodic. To distinguish a periodic sequence from an aperiodic

³⟨k⟩N = k modulo N.

Figure 2.12: An example of a periodic sequence.

sequence, we shall denote the former with a tilde (~) on top. The fundamental period Nf of a periodic signal is the smallest value of N for which Eq. (2.25) holds.

Energy and Power Signals


The total energy of a sequence x[n] is defined by

Ex = Σ_{n=−∞}^{∞} |x[n]|².          (2.26)

An infinite-length sequence with finite sample values may or may not have finite energy, as illustrated in the following example.

The average power of an aperiodic sequence x[n] is defined by

Px = lim_{K→∞} (1/(2K+1)) Σ_{n=−K}^{K} |x[n]|².          (2.29)

The average power of a sequence can be related to its energy by defining its energy over a finite interval −K ≤ n ≤ K as

Ex,K = Σ_{n=−K}^{K} |x[n]|².          (2.30)

Then

Px = lim_{K→∞} (1/(2K+1)) Ex,K.          (2.31)

The average power of a periodic sequence x̃[n] with a period N is given by

Px = (1/N) Σ_{n=0}^{N−1} |x̃[n]|².          (2.32)

The average power of an infinite-length sequence may be finite or infinite.

An infinite energy signal with finite average power is called a power signal. Likewise, a finite energy signal with zero average power is called an energy signal. An example of a power signal is a periodic sequence, which has a finite average power but infinite energy. An example of an energy signal is a finite-length sequence, which has finite energy but zero average power.
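The energy and power definitions above can be estimated numerically. The short MATLAB sketch below is illustrative only; the choice of K and of the two sequences is arbitrary.

% Energy and average power estimates, Eqs. (2.26), (2.30)-(2.32)
K = 200;  n = -K:K;
x = (0.8).^abs(n);                 % square-summable sequence with finite energy
ExK = sum(abs(x).^2);              % energy over -K <= n <= K, Eq. (2.30)
Px = ExK/(2*K+1);                  % average power estimate, Eq. (2.31)
xt = [2 2 1 0 -1 0 1];             % one period (N = 7) of a periodic sequence
Pxt = sum(abs(xt).^2)/length(xt);  % average power of the periodic sequence, Eq. (2.32)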

Other Types of Classification


A sequence x[n] is said to be bounded if each of its samples is of magnitude less than or equal to a finite positive number Bx, i.e.,

|x[n]| ≤ Bx < ∞.          (2.33)

The periodic sequence of Figure 2.12 is a bounded sequence with a bound Bx = 2.
A sequence x[n] is said to be absolutely summable if

Σ_{n=−∞}^{∞} |x[n]| < ∞.          (2.34)

A sequence is said to be square-summable if

Σ_{n=−∞}^{∞} |x[n]|² < ∞.          (2.35)

A square-summable sequence therefore has finite energy and is an energy signal if it also has zero power.

2.2 Typical Sequences and Sequence Representation


We now consider several special sequences that play important roles in the analysis and design of discrete-time systems. For example, an arbitrary sequence can be expressed in terms of some of these basic

Figure 2.13: (a) The unit sample sequence {δ[n]}, and (b) the shifted unit sample sequence {δ[n − 2]}.

Figure 2.14: (a) The unit step sequence {μ[n]}, and (b) the shifted unit step sequence {μ[n + 2]}.

sequences. Another fundamental application that is the key behind discrete-time signal processing is the representation of a class of discrete-time systems in terms of the response of the system to certain basic sequences. This representation permits the computation of the response of the discrete-time system to arbitrary discrete-time signals if the latter can be expressed in terms of these basic sequences.

2.2.1 Some Basic Sequences

The most common basic sequences are the unit sample sequence, the unit step sequence, the sinusoidal sequence, and the exponential sequence. These sequences are defined next.

Unit Sample Sequence


The simplest and one of the most useful sequences is the unit sample sequence, often called the discrete-time impulse or the unit impulse, as shown in Figure 2.13(a). It is denoted by δ[n] and defined by

δ[n] = { 1,   n = 0,
       { 0,   n ≠ 0.          (2.36)

The unit sample sequence shifted by k samples is thus given by

δ[n − k] = { 1,   n = k,
           { 0,   n ≠ k.

Figure 2.13(b) shows δ[n − 2]. We shall show later in this section that any arbitrary sequence can be represented as a sum of weighted, time-shifted unit sample sequences. In Section 2.5.1 we demonstrate that a certain class of discrete-time systems is completely characterized in the time-domain by its output response to a unit impulse input. Furthermore, knowing this particular response of the system, we can compute its response to any arbitrary input sequence.

Unit Step Sequence

A second basic sequence is the unit step sequence shown in Figure 2.14(a). It is denoted by μ[n] and is defined by

μ[n] = { 1,   n ≥ 0,
       { 0,   n < 0.          (2.37)

The unit step sequence shifted by k samples is thus given by

μ[n − k] = { 1,   n ≥ k,
           { 0,   n < k.

Figure 2.14(b) shows μ[n + 2].
The unit sample and the unit step sequences are related as follows (Problem 2.19):

μ[n] = Σ_{k=−∞}^{n} δ[k],     δ[n] = μ[n] − μ[n − 1].          (2.38)

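The two basic sequences and the second relation in Eq. (2.38) can be checked over a finite index range with the short MATLAB sketch below; the range −10 ≤ n ≤ 10 is an arbitrary choice, and the fragment is not one of the text's numbered programs.

% Unit sample and unit step sequences over -10 <= n <= 10
n = -10:10;
delta = double(n == 0);          % unit sample sequence, Eq. (2.36)
mu = double(n >= 0);             % unit step sequence, Eq. (2.37)
d2 = mu - [0 mu(1:end-1)];       % mu[n] - mu[n-1], with mu taken as 0 for n < -10
disp(isequal(delta,d2));         % displays 1, consistent with Eq. (2.38)
stem(n,delta);
xlabel('Time index n'); ylabel('Amplitude');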
Sinusoidal and Exponential Sequences


A commonly encountered sequence is the real sinusoidal sequence with constant amplitude of the form

x[n] = A cos(ω₀n + φ),   −∞ < n < ∞,          (2.39)

where A, ω₀, and φ are real numbers. The parameters A, ω₀, and φ are called, respectively, the amplitude, the angular frequency, and the phase of the sinusoidal sequence x[n].
Figure 2.15 shows different types of sinusoidal sequences. The real sinusoidal sequence of Eq. (2.39) can be written alternatively as

x[n] = xi[n] + xq[n],          (2.40)

where xi[n] and xq[n] are, respectively, the in-phase and the quadrature components of x[n], and are given by

xi[n] = A cos φ cos(ω₀n),     xq[n] = −A sin φ sin(ω₀n).          (2.41)
Another set of basic sequences is formed by taking the nth sample value to be the nth power of a real or complex constant. Such sequences are termed exponential sequences and their most general form is given by

x[n] = A α^n,   −∞ < n < ∞,          (2.42)

where A and α are real or complex numbers. By expressing

α = e^(σ₀ + jω₀),     A = |A| e^(jφ),

we can rewrite Eq. (2.42) as

x[n] = A e^((σ₀ + jω₀)n) = |A| e^(σ₀n) e^(j(ω₀n + φ))          (2.43a)
     = |A| e^(σ₀n) cos(ω₀n + φ) + j |A| e^(σ₀n) sin(ω₀n + φ),          (2.43b)

to arrive at an alternative general form of a complex exponential sequence, where σ₀, φ, and ω₀ are now real numbers. If we write x[n] = xre[n] + j xim[n], then from Eq. (2.43b),

xre[n] = |A| e^(σ₀n) cos(ω₀n + φ),     xim[n] = |A| e^(σ₀n) sin(ω₀n + φ).

Thus the real and imaginary parts of a complex exponential sequence are real sinusoidal sequences with constant (σ₀ = 0), growing (σ₀ > 0), or decaying (σ₀ < 0) amplitudes for n > 0. Figure 2.16 depicts a

Figure 2.15: A family of sinusoidal sequences given by x[n] = 1.5 cos(ω₀n): (a) ω₀ = 0, (b) ω₀ = 0.1π, (c) ω₀ = 0.2π, (d) ω₀ = 0.8π, (e) ω₀ = 0.9π, (f) ω₀ = π, (g) ω₀ = 1.1π, and (h) ω₀ = 1.2π.

Figure 2.16: A complex exponential sequence x[n] = e^((−1/12 + jπ/6)n): (a) real part and (b) imaginary part.

Figure 2.17: Examples of real exponential sequences: (a) x[n] = 0.2(1.2)^n, (b) x[n] = 20(0.9)^n.

complex exponential sequence with a decaying amplitude. Note that in the display of a complex exponential sequence, its real and imaginary parts are shown separately.
With both A and α real, the sequence of Eq. (2.42) reduces to a real exponential sequence. For n ≥ 0, such a sequence with |α| < 1 decays exponentially as n increases, and with |α| > 1 it grows exponentially as n increases. Examples of real exponential sequences obtained for two values of α are shown in Figure 2.17.
We shall show later in Section 3.1 that a large class of sequences can be expressed in terms of complex exponential sequences of the form e^(jωn).
Note that the sinusoidal sequence of Eq. (2.39) and the complex exponential sequence of Eq. (2.43a) with σ₀ = 0 are periodic sequences of period N as long as ω₀N is an integer multiple of 2π, i.e., ω₀N = 2πr, where N and r are positive integers. The smallest possible N satisfying this condition is the fundamental period of the sequence. To verify this, consider the two sinusoidal sequences x1[n] = cos(ω₀n + φ) and x2[n] = cos(ω₀(n + N) + φ). Now

x2[n] = cos(ω₀(n + N) + φ) = cos(ω₀n + φ) cos(ω₀N) − sin(ω₀n + φ) sin(ω₀N),

which will be equal to cos(ω₀n + φ) = x1[n] only if sin(ω₀N) = 0 and cos(ω₀N) = 1. These two conditions are satisfied if and only if ω₀N is an integer multiple of 2π, i.e.,

ω₀N = 2πr,          (2.44a)

or

N = (2π/ω₀) r.          (2.44b)

If 2π/ω₀ is a noninteger rational number, then the period will be a multiple of 2π/ω₀. If 2π/ω₀ is not a rational number, then the sequence is aperiodic even though it has a sinusoidal envelope. For example, x[n] = cos(√3 n + φ) is an aperiodic sequence.
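As a numerical check of Eq. (2.44b), the following minimal MATLAB sketch computes the fundamental period for the arbitrary choice ω₀ = 0.1π, for which 2π/ω₀ = 20 and hence N = 20 with r = 1.

% Fundamental period of cos(w0*n) when w0/(2*pi) is rational
w0 = 0.1*pi;
[r,N] = rat(w0/(2*pi));      % expresses w0/(2*pi) as r/N in lowest terms
x = cos(w0*(0:N));
disp(N);                     % displays 20
disp(abs(x(1) - x(N+1)));    % x[0] equals x[N] up to roundoff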

The number ω₀ in the two sequences of Eqs. (2.39) and (2.43a) is called the angular frequency. Since the time instant n is dimensionless, the unit of angular frequency ω₀ and phase φ is simply radians. If the unit of n is designated as samples, then the unit of ω₀ and φ is radians per sample. Often in practice, the angular frequency ω is expressed as

ω = 2πf,          (2.45)

where f is the frequency in cycles per sample.
Two interesting properties of these sequences are discussed next. Consider two complex exponential sequences x1[n] = e^(jω₁n) and x2[n] = e^(jω₂n) with 0 ≤ ω₁ < 2π and 2πk ≤ ω₂ < 2π(k + 1), where k is any positive or negative integer. If

ω₂ = ω₁ + 2πk,          (2.46)

then

x2[n] = e^(jω₂n) = e^(j(ω₁ + 2πk)n) = e^(jω₁n) = x1[n].

Thus these two sequences are indistinguishable. Likewise, two sinusoidal sequences x1[n] = cos(ω₁n + φ) and x2[n] = cos(ω₂n + φ) with 0 ≤ ω₁ < 2π and 2πk ≤ ω₂ < 2π(k + 1), where k is any positive or negative integer, are indistinguishable from one another if ω₂ = ω₁ + 2πk.
The second interesting feature of discrete-time sinusoidal signals can be seen from Figure 2.15. The frequency of oscillation of the discrete-time sinusoidal sequence x[n] = A cos(ω₀n) increases as ω₀ increases from 0 to π, and then the frequency of oscillation decreases as ω₀ increases from π to 2π.
As a result of the first property, a frequency ω₀ in the neighborhood of ω = 2πk is indistinguishable from a frequency ω₀ − 2πk in the neighborhood of ω = 0, and a frequency ω₀ in the neighborhood of ω = π(2k + 1) is indistinguishable from a frequency ω₀ − π(2k + 1) in the neighborhood of ω = π, for any integer value of k. Therefore, frequencies in the neighborhood of ω = 2πk are usually called low frequencies, and frequencies in the neighborhood of ω = π(2k + 1) are called high frequencies. For example, v1[n] = cos(0.1πn) = cos(1.9πn) shown in Figure 2.15(b) is a low-frequency signal, whereas v2[n] = cos(0.8πn) = cos(1.2πn) shown in Figure 2.15(d) and (h) is a high-frequency signal.
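The indistinguishability of frequencies separated by 2πk is easily confirmed numerically; the short MATLAB sketch below compares the two sequences cited above over an arbitrary index range.

% cos(0.1*pi*n) and cos(1.9*pi*n) produce identical sample values
n = 0:40;
v1 = cos(0.1*pi*n);          % low-frequency sequence of Figure 2.15(b)
v2 = cos(1.9*pi*n);          % 1.9*pi = 2*pi - 0.1*pi
disp(max(abs(v1 - v2)));     % of the order of machine precision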
Another application of the modulation operation discussed earlier in Section 2.1.2 is to transform a sequence with low-frequency sinusoidal components into a sequence with high-frequency components by modulating the former with a sinusoidal sequence of very high frequency, as illustrated in the following example.

2.2.2 Sequence Generation Using MATLAB


MATLAB includes a number of functions that can be used for signal generation. Some of these functions of interest are

exp, sin, cos, square, sawtooth

For example, Program 2_1 given below can be employed to generate a complex exponential sequence of the form shown in Figure 2.16.

% Program 2_1
% Generation of complex exponential sequence
%
a = input('Type in real exponent = ');
b = input('Type in imaginary exponent = ');
c = a + b*i;
K = input('Type in the gain constant = ');
N = input('Type in length of sequence = ');
n = 1:N;
x = K*exp(c*n);
stem(n,real(x));
xlabel('Time index n'); ylabel('Amplitude');
title('Real part');
disp('PRESS RETURN for imaginary part');
pause
stem(n,imag(x));
xlabel('Time index n'); ylabel('Amplitude');
title('Imaginary part');

Likewise, Program 2_2 listed below can be employed to generate a real exponential sequence of the form shown in Figure 2.17.

% Program 2_2
% Generation of real exponential sequence
%
a = input('Type in argument = ');
K = input('Type in the gain constant = ');
N = input('Type in length of sequence = ');
n = 0:N;
x = K*a.^n;
stem(n,x);
xlabel('Time index n'); ylabel('Amplitude');
title(['\alpha = ',num2str(a)]);

⁴The appearance of a high-frequency signal cos((ω₁ + ω₂)n) as a low-frequency signal cos((2π − ω₁ − ω₂)n) is called aliasing (see Section 2.3).

Figure 2.18: An arbitrary sequence x[n].

Another type of sequence generation using MATLAB is given later in Example 2.14.

2.2.3 Representation of an Arbitrary Sequence

An arbitrary sequence can be represented in the time-domain as a weighted sum of some basic sequence and its delayed versions. A commonly used basic sequence in the representation is the unit sample sequence. For example, the sequence x[n] of Figure 2.18 can be expressed as

x[n] = 0.5δ[n + 2] + 1.5δ[n − 1] − δ[n − 2] + δ[n − 4] + 0.75δ[n − 6].          (2.47)

An implication of this type of representation is considered later in Section 2.5.1, where we develop the general expression for calculating the output sequence of certain types of discrete-time systems for an arbitrary input sequence.
Since the unit step sequence and the unit sample sequence are simply related through Eq. (2.38), it is also possible to represent an arbitrary sequence as a weighted combination of delayed unit step sequences (Problem 2.24).
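A minimal MATLAB sketch that builds the sequence of Eq. (2.47) as a weighted sum of shifted unit sample sequences over the finite range −5 ≤ n ≤ 10 is given below; the range is an arbitrary choice.

% x[n] of Eq. (2.47) as a weighted sum of shifted unit sample sequences
n = -5:10;
imp = @(k) double(n == k);     % delta[n - k] restricted to the chosen range
x = 0.5*imp(-2) + 1.5*imp(1) - imp(2) + imp(4) + 0.75*imp(6);
stem(n,x);
xlabel('Time index n'); ylabel('Amplitude');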

2.3 The Sampling Process


We indicated earlier that often the discrete-time sequence is developed by uniformly sampling a continuous-time signal xa(t), as illustrated in Figure 2.2. The relation between the two signals is given by Eq. (2.2), where the time variable t of the continuous-time signal is related to the time variable n of the discrete-time signal only at discrete-time instants tn given by

tn = nT = n/FT = 2πn/ΩT,          (2.48)

with FT = 1/T denoting the sampling frequency and ΩT = 2πFT denoting the sampling angular frequency. For example, if the continuous-time signal is

xa(t) = A cos(2πf₀t + φ) = A cos(Ω₀t + φ),          (2.49)

the corresponding discrete-time signal is given by

x[n] = A cos(Ω₀nT + φ) = A cos((2πΩ₀/ΩT) n + φ) = A cos(ω₀n + φ),          (2.50)

Figure 2.19: Ambiguity in the discrete-time representation of continuous-time signals. g1(t) is shown with the solid line, g2(t) is shown with the dashed line, g3(t) is shown with the dashed-dot line, and the sequence obtained by sampling is shown with circles.

where

ω₀ = 2πΩ₀/ΩT = Ω₀T          (2.51)

is the normalized digital angular frequency of the discrete-time signal x[n]. The unit of the normalized digital angular frequency ω₀ is radians per sample, while the unit of the analog angular frequency Ω₀ is radians per second, and the unit of the analog frequency f₀ is hertz if the unit of the sampling period T is in seconds.

In the general case, the family of continuous-time sinusoids

xa,k(t) = A cos((Ω₀ + kΩT)t + φ),   k = 0, ±1, ±2, ...,          (2.52)

leads to identical sampled signals:

xa,k(nT) = A cos((Ω₀ + kΩT)nT + φ) = A cos((2π(Ω₀ + kΩT)/ΩT) n + φ)
         = A cos((2πΩ₀/ΩT) n + φ) = A cos(ω₀n + φ) = x[n].          (2.53)

The above phenomenon of a continuous-time sinusoidal signal of higher frequency acquiring the identity of a sinusoidal sequence of lower frequency after sampling is called aliasing. Since there are an infinite number of continuous-time functions that can lead to a given sequence when sampled periodically, additional conditions need to be imposed so that the sequence {x[n]} = {xa(nT)} can uniquely represent the parent continuous-time function xa(t). In that case, xa(t) can be fully recovered from a knowledge of {x[n]}.


It follows from Eq. (2.51) that if ΩT > 2Ω₀, then the corresponding normalized digital frequency ω₀ of the discrete-time signal x[n] obtained by sampling the parent continuous-time signal xa(t) will be in the range −π < ω < π, implying no aliasing. On the other hand, if ΩT < 2Ω₀, the normalized digital frequency will fold into a lower digital frequency ω₀ = 2πΩ₀/ΩT modulo 2π in the range −π < ω < π because of aliasing. Hence, to prevent aliasing, the sampling frequency ΩT should be greater than 2 times the frequency Ω₀ of the sinusoidal signal being sampled. Generalizing the above result, we observe that if we have an arbitrary continuous-time signal xa(t) that can be represented as a weighted sum of a number of sinusoidal signals, then xa(t) can also be represented uniquely by its sampled version {x[n]} if the sampling frequency ΩT is chosen to be greater than 2 times the highest frequency contained in xa(t). The condition to be satisfied by the sampling frequency to prevent aliasing is called the sampling theorem, which is formally derived later in Section 5.2.
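The aliasing effect described above can be demonstrated numerically. In the minimal MATLAB sketch below, sinusoids at 3 Hz and 13 Hz, sampled at FT = 10 Hz, yield exactly the same sample values; all numerical choices are arbitrary.

% Aliasing: sinusoids at f0 and f0 + FT are indistinguishable after sampling at FT
FT = 10;  T = 1/FT;              % sampling frequency (Hz) and sampling period (sec)
f0 = 3;  n = 0:49;
x1 = cos(2*pi*f0*n*T);           % samples of cos(2*pi*3*t)
x2 = cos(2*pi*(f0+FT)*n*T);      % samples of cos(2*pi*13*t)
disp(max(abs(x1 - x2)));         % of the order of machine precision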

Figure 2.20: Schematic representation of a discrete-time system.

The discrete-time signal obtained by sampling a continuous-time signal xa(t) may be represented as a sequence {xa(nT)}. However, we shall use the more common notation {x[n]} for simplicity (with T assumed to be normalized to 1 sec). It should be noted that when dealing with the sampled version of a continuous-time function, it is essential to know the numerical value of the sampling period T.

2.4 Discrete-Time Systems


The function of a discrete-time system is to process a given input sequence to generate an output sequence. In most applications, the discrete-time system used is a single-input, single-output system, as shown schematically in Figure 2.20. The output sequence is generated sequentially, beginning with a certain value of the time index n, and thereafter progressively increasing the value of n. If the beginning time index is n₀, the output y[n₀] is first computed, then y[n₀ + 1] is computed, and so on. We restrict our attention in this text to this class of discrete-time systems with certain specific properties as described later in this section.
In a practical discrete-time system, all signals are digital signals, and operations on such signals also lead to digital signals. Such a discrete-time system is usually called a digital filter. However, if there is no ambiguity, we shall refer to a discrete-time system also as a digital filter whether or not it has been implemented using finite precision arithmetic.

Simple Discrete-Time Systems

The devices implementing the basic operations shown in Figures 2.5 and 2.8 can be considered as elementary discrete-time systems. The modulator and the adder are examples of two-input, single-output discrete-time systems. The remaining devices are examples of single-input, single-output discrete-time systems. More complex discrete-time systems are obtained by combining two or more of these elementary discrete-time systems, as illustrated in Figures 2.6 and 2.7, respectively. Some additional examples of discrete-time systems are given below.

Figure 2.21: (a) The original uncorrupted sequence s[n], and (b) the noise sequence d[n].


We next illustrate the operation of a discrete-time system by two examples.


Figure 2.22: Pertinent signals of Example 2.14: s[n] is the original uncorrupted sequence, d[n] is the noise sequence, x[n] = s[n] + d[n], and y[n] is the output of the moving-average filter.

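The following is a minimal MATLAB sketch along the lines of the moving-average smoothing described in the surrounding text; the signal model s[n] = 2n(0.9)^n, the noise generation, and the program structure are illustrative assumptions rather than the original program listing of Example 2.14.

% Sketch: 3-point moving-average smoothing of a noise-corrupted signal
R = 51;  m = 0:R-1;
s = 2*m.*(0.9).^m;               % assumed uncorrupted signal s[n]
d = 0.8*(rand(1,R) - 0.5);       % additive random noise d[n]
x = s + d;                       % noise-corrupted input x[n]
M = 3;  y = zeros(1,R);
for k = M:R
    y(k) = mean(x(k-M+1:k));     % y[n] = (x[n] + x[n-1] + x[n-2])/3
end
plot(m,s,'r-',m,x,'g:',m,y,'b--');
xlabel('Time index n'); ylabel('Amplitude');
legend('s[n]','x[n]','y[n]');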

Note that in Figure 2.22(b), the output y[n] of the 3-point moving-average filter is nearly equal to the desired uncorrupted input s[n], except that it is delayed by one sample. We shall show later in Section 4.2.6 that a delay of (M − 1)/2 samples is inherent in an M-point moving-average filter.

2.4.1 Classification of Discrete-Time Systems

There are various types of classification of discrete-time systems that are described next. These classifications are based on the input-output relation of the system.
Linear System
The most widely used discrete-time system, and the one that we shall be exclusively concerned with in this text, is a linear system for which the superposition principle always holds. More precisely, for a linear discrete-time system, if y1[n] and y2[n] are the responses to the input sequences x1[n] and x2[n], respectively, then for an input

x[n] = αx1[n] + βx2[n],

the response is given by

y[n] = αy1[n] + βy2[n].

The superposition property must hold for any arbitrary constants, α and β, and for all possible inputs, x1[n] and x2[n]. The above property makes it very easy to compute the response to a complicated sequence that can be decomposed as a weighted combination of some simple sequences, such as the unit sample sequences or the complex exponential sequences. In this case, the desired output is given by a similarly weighted combination of the outputs to the simple sequences.

It can be easily verified that the discrete-time systems of Eqs. (2.14), (2.15), (2.17), (2.18), (2.56), and (2.58) are linear systems (Problem 2.25). However, the linearity of the discrete-time system of Eq. (2.15) depends on the type of input being applied.

Shift-Invariant System
The shift-invariance property is the second condition imposed on most digital filters in practice. For a shift-invariant discrete-time system, if y1[n] is the response to an input x1[n], then the response to an input

x[n] = x1[n − n₀]

is simply

y[n] = y1[n − n₀],

where n₀ is any positive or negative integer. This relation between the input and output must hold for any arbitrary input sequence and its corresponding output. In the case of sequences and systems with indices n related to discrete instants of time, the above restriction is more commonly called the time-invariance property. The time-invariance property ensures that for a specified input, the output of the system is independent of the time the input is being applied.
A linear time-invariant (LTI) discrete-time system satisfies both the linearity and the time-invariance properties. Such systems are mathematically easy to analyze and characterize, and as a consequence, easy to design. In addition, highly useful signal processing algorithms have been developed utilizing this class of systems over the last several decades. In this text, we consider almost entirely this type of discrete-time system.
Likewise, it can be shown that the down-sampler of Eq. (2.18) is a time-varying system.

Causal System
In addition to the above two properties, we impose, for practicality, additional restrictions of causality and stability on the class of discrete-time systems we deal with in this text. In a causal discrete-time system, the n₀th output sample y[n₀] depends only on input samples x[n] for n ≤ n₀ and does not depend on input samples for n > n₀. Thus, if y1[n] and y2[n] are the responses of a causal discrete-time system to the inputs u1[n] and u2[n], respectively, then

u1[n] = u2[n]   for n < N

implies also that

y1[n] = y2[n]   for n < N.

Simply speaking, for a causal system, changes in output samples do not precede changes in the input samples. It should be pointed out here that the definition of causality given above can be applied only to discrete-time systems with the same sampling rate for the input and the output.⁵
It can be easily shown that the discrete-time systems of Eqs. (2.14), (2.15), (2.54), (2.55), and (2.56) are causal systems. However, the discrete-time systems defined by Eqs. (2.58) and (2.59) are noncausal systems. It should be noted that these two noncausal systems can be implemented as causal systems by simply delaying the output by one and two samples, respectively.

⁵If the input and output sampling rates are not the same, the definition of causality has to be modified.

Stable System
There are various definitions of stability. We define a discrete-time system to be stable if and only if, for every bounded input, the output is also bounded. This implies that, if the response to x[n] is the sequence y[n] and if

|x[n]| < Bx

for all values of n, then

|y[n]| < By

for all values of n, where Bx and By are finite constants. This type of stability is usually referred to as bounded-input, bounded-output (BIBO) stability.

Passive and Lossless Systems

A discrete-time system is said to be passive if, for every finite energy input sequence x[n], the output sequence y[n] has, at most, the same energy, i.e.,

Σ_{n=−∞}^{∞} |y[n]|² ≤ Σ_{n=−∞}^{∞} |x[n]|² < ∞.          (2.62)

If the above inequality is satisfied with an equal sign for every input sequence, the discrete-time system is said to be lossless.
As we shall see later, in Section 9.9, the passivity and the losslessness properties are crucial to the design of discrete-time systems with very low sensitivity to changes in the filter coefficients.

2.4.2 Impulse and Step Responses

The response of a digital filter to a unit sample sequence {δ[n]} is called the unit sample response, or simply, the impulse response, and is denoted as {h[n]}. Correspondingly, the response of a discrete-time system to a unit step sequence {μ[n]}, denoted as {s[n]}, is its unit step response or, simply, the step response. As we show next, a linear time-invariant digital filter is completely characterized in the time-domain by its impulse response or its step response.

2.5 Time-Domain Characterization of LTI Discrete-Time Systems

In most cases, an LTI discrete-time system is designed as an interconnection of simple subsystems. Each subsystem in turn is implemented with the aid of the basic building blocks discussed earlier in Section 2.1.2. In order to be able to analyze such systems in the time-domain, we need to develop the pertinent relationships between the input and the output of an LTI discrete-time system, and the characterization of the interconnected system.

2.5.1 Input-Output Relationship

A consequence of the linear, time-invariance property is that an LTI discrete-time system is completely specified by its impulse response; i.e., knowing the impulse response, we can compute the output of the system to any arbitrary input. We develop this relationship now.
Let h[n] denote the impulse response of the LTI discrete-time system of interest, i.e., the response to an input δ[n]. We first compute the response of this filter to the input x[n] of Eq. (2.47). Since the discrete-time system is time-invariant, its response to δ[n − 1] will be h[n − 1]. Likewise, the responses to δ[n + 2], δ[n − 4], and δ[n − 6] will be, respectively, h[n + 2], h[n − 4], and h[n − 6]. Because of linearity, the response of the LTI discrete-time system to the input

x[n] = 0.5δ[n + 2] + 1.5δ[n − 1] − δ[n − 2] + δ[n − 4] + 0.75δ[n − 6]

will be simply

y[n] = 0.5h[n + 2] + 1.5h[n − 1] − h[n − 2] + h[n − 4] + 0.75h[n − 6].

It follows from the above result that an arbitrary input sequence x[n] can be expressed as a weighted linear combination of delayed and advanced unit sample sequences in the form

x[n] = Σ_{k=−∞}^{∞} x[k] δ[n − k],          (2.63)

where the weight x[k] on the right-hand side denotes specifically the kth sample value of the sequence {x[n]}. The response of the LTI discrete-time system to the sequence x[k]δ[n − k] will be x[k]h[n − k]. As a result, the response y[n] of the discrete-time system to x[n] will be given by

y[n] = Σ_{k=−∞}^{∞} x[k] h[n − k],          (2.64a)

which can be alternately written as

y[n] = Σ_{k=−∞}^{∞} x[n − k] h[k]          (2.64b)

by a simple change of variables. The above sum in Eqs. (2.64a) and (2.64b) is called the convolution sum of the sequences x[n] and h[n], and is represented compactly as

y[n] = x[n] ⊛ h[n],          (2.65)

where the notation ⊛ denotes the convolution sum.⁶


The convolution sum operation satisfies several useful properties. First, the operation is commutative, i.e.,

x1[n] ⊛ x2[n] = x2[n] ⊛ x1[n].          (2.66)

Second, the convolution operation, for stable and single-sided sequences, is associative, i.e.,

(x1[n] ⊛ x2[n]) ⊛ x3[n] = x1[n] ⊛ (x2[n] ⊛ x3[n]),          (2.67)

and last, the operation is distributive, i.e.,

x1[n] ⊛ (x2[n] + x3[n]) = x1[n] ⊛ x2[n] + x1[n] ⊛ x3[n].          (2.68)

Proof of these properties is left as an exercise (Problems 2.37 to 2.39).
Proof ::>f these propertie;_ is !eft as an exercise (Probl.ems 2.37 to 2.39).


The convolution sum operation of Eq. (2.64a) can be interpreted as follows. We first time-reverse the
sequence h[k], arriving at h[−k]. We then shift h[−k] to the right by n sampling periods if n > 0, or to the
left by n sampling periods if n < 0, to form the sequence h[n − k]. Next we form the product sequence
v[k] = x[k]h[n − k]. Summing all samples of v[k] then yields the nth sample of y[n] of the convolution
sum. The process of generating v[k] is illustrated in Figure 2.24. This process is implemented for each
value of n in the range −∞ < n < ∞. The representation of the alternate form of the convolution sum
operation given by Eq. (2.64b) is obtained by interchanging the sequences x[k] and h[k] in Figure 2.24.

In the literature, the symbol commonly used for the convolution sum is * without the circle. However, as the superscript * is always used for
denoting the complex conjugation operation, in this text we have adopted the symbol ⊛ to denote the convolution sum operation.
Figure 2.24: Schematic representation of the convolution sum operation.

It is clear from the above discussion that the impulse response {h[n]} completely characterizes an
LTI discrete-time system in the time-domain because, knowing the impulse response, we can compute,
in principle, the output sequence y[n] for any given input sequence x[n] using the convolution sum of
Eq. (2.64a) or (2.64b). The computation of an output sample is simply a sum of products involving fairly
simple arithmetic operations such as additions, multiplications, and delays. However, in practice, the
convolution sum can be employed to compute the output sample at any instant only if either the impulse
response sequence and/or the input sequence is of finite length, resulting in a finite sum of products. Note
that if both the input and the impulse response sequences are of finite length, the output sequence is also of
finite length. In the case of a discrete-time system with an infinite-length impulse response, it is obviously
not possible to compute the output using the convolution sum if the input is also of infinite length. We
shall therefore consider alternative time-domain descriptions of such systems that involve only finite sums
of products.
Figure 2.25: Illustration of the convolution process.

It should be noted that the sum of the indices of each sample product inside the summation of either
Eq. (2.64a) or Eq. (2.64b) is equal to the index of the sample being generated by the convolution sum
operation. For example, the sum in the computation of y[3] in the above example involves the products
Figure 2.26: Sequence generated by the convolution.

x[0]h[3], x[1]h[2], x[2]h[1], and x[3]h[0]. The sum of the indices in each of these four products is equal
to 3, which is the index of the sample y[3].
As can be seen from Example 2.24, the convolution of two finite-length sequences results in a finite-length
sequence. In this example, the convolution of a sequence {x[n]} of length 5 with a sequence {h[n]}
of length 4 resulted in a sequence {y[n]} of length 8. In general, if the lengths of the two sequences
being convolved are M and N, then the resulting sequence after convolution is of length M + N − 1
(Problem 2.40).
In MATLAB, the statement c = conv(a,b) implements the convolution of two finite-length sequences
a and b, generating the finite-length sequence c. The process is illustrated below.
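As a minimal sketch of how conv may be used (the two sequences below are illustrative choices, not the data of the text's worked example):

    % Convolution of two finite-length sequences with conv
    x = [-2 0 1 -1 3];           % length-5 input sequence
    h = [1 2 0 -1];              % length-4 impulse response
    y = conv(x, h);              % length of y is 5 + 4 - 1 = 8
    stem(0:length(y)-1, y);      % plot y[n] against the time index n
    xlabel('Time index n'); ylabel('Amplitude');

The first sample of y corresponds to the time index at which both sequences start, here n = 0.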

Figure 2.27: Sequence generated by convolution using MATLAB.

Figure 2.28: The cascade connection.

2.5.2 Simple Interconnection Schemes

Two widely used schemes for developing complex LTI discrete-time systems from simple LTI discrete-time
system sections are described next.

Cascade Connection
In Figure 2.28, the output of one filter is connected to the input of a second filter, and the two filters are
said to be connected in cascade. The overall impulse response h[n] of the cascade of two filters of impulse
responses h1[n] and h2[n] is given by

$$h[n] = h_1[n] \circledast h_2[n]. \qquad (2.69)$$

Note that, in general, the ordering of the filters in the cascade has no effect on the overall impulse response
because of the commutative property of convolution.
It can be shown that the cascade connection of two stable systems is stable. Likewise, the cascade
connection of two passive (lossless) systems is passive (lossless).
The cascade connection scheme is employed in the development of an inverse system. If the two LTI
systems in the cascade connection of Figure 2.28 are such that

$$h_1[n] \circledast h_2[n] = \delta[n], \qquad (2.70)$$

then the LTI system h2[n] is said to be the inverse of the LTI system h1[n], and vice versa. As a result of
the above relation, if the input to the cascaded system is x[n], its output is also x[n]. An application of this
concept is in the recovery of a signal from its distorted version appearing at the output of a transmission
channel; this is accomplished by designing an inverse system if the impulse response of the channel is
known.
The development of an inverse system is illustrated below.
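As a quick numerical check of Eq. (2.70), consider one standard pair of mutually inverse systems, the accumulator, with impulse response μ[n], and the backward-difference system, with impulse response δ[n] − δ[n − 1]; this particular pair is an assumption chosen for illustration and is not necessarily the pair treated in the text's example:

    % Verify h1[n] (*) h2[n] = delta[n] on truncated segments
    h1 = ones(1, 10);            % truncated impulse response of the accumulator, mu[n]
    h2 = [1 -1];                 % impulse response of the backward difference
    g  = conv(h1, h2);           % ideally delta[n]
    disp(g)                      % displays 1, then nine zeros, then -1

The trailing −1 is purely a truncation artifact; it moves further out as more samples of μ[n] are retained.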

Figure 2.29: The parallel connection.

Parallel Connection
The connection scheme of Figure 2.29 is called the parallel connection, and here the outputs of the two
filters are added to form the new output while the same input is fed to both filters. The impulse response
of the overall filter is given here by

$$h[n] = h_1[n] + h_2[n]. \qquad (2.72)$$

It is a simple exercise to show that the parallel connection of two stable systems is stable. However,
the parallel connection of two passive (lossless) systems may or may not be passive (lossless).
Figure 2.30: The discrete-time system of Example 2.21.

2.5.3 Stability Condition in Terms of the Impulse Response


Recall from Section 2.4.1 that a discrete-time system is defined to be stable or, more precisely, bounded-input,
bounded-output (BIBO) stable, if the output sequence y[n] of the system remains bounded for all bounded
input sequences x[n]. We now develop the stability condition for an LTI discrete-time system. We show
that an LTI digital filter is BIBO stable if and only if its impulse response sequence {h[n]} is absolutely
summable, i.e.,

$$S = \sum_{n=-\infty}^{\infty} |h[n]| < \infty. \qquad (2.73)$$

We prove the above statement for a real impulse response h[n]. The extension of the proof to a complex
impulse response sequence is left as an exercise (Problem 2.59). Now, if the input sequence x[n] is
bounded, i.e., |x[n]| ≤ B_x < ∞, then the output amplitude, from Eq. (2.64b), is

$$|y[n]| = \left|\sum_{k=-\infty}^{\infty} h[k]\,x[n-k]\right| \le \sum_{k=-\infty}^{\infty} |h[k]|\,|x[n-k]| \le B_x \sum_{k=-\infty}^{\infty} |h[k]| = B_x S < \infty.$$

Thus, S < ∞ implies |y[n]| ≤ B_y < ∞, indicating that y[n] is also bounded. To prove the converse,
assume y[n] is bounded, i.e., |y[n]| ≤ B_y. Now, consider the input given by

$$x[n] = \begin{cases} \mathrm{sgn}(h[-n]), & \text{if } h[-n] \ne 0,\\ K, & \text{if } h[-n] = 0, \end{cases} \qquad (2.75)$$

where sgn(c) = +1 if c > 0 and sgn(c) = −1 if c < 0, and |K| ≤ 1. Note that since |x[n]| ≤ 1, {x[n]}
is obviously bounded. For this input, y[n] at n = 0 is

$$y[0] = \sum_{k=-\infty}^{\infty} \mathrm{sgn}(h[k])\,h[k] = S \le B_y < \infty. \qquad (2.76)$$

Therefore, |y[n]| ≤ B_y implies S < ∞.
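A small numerical illustration of the condition of Eq. (2.73), assuming a first-order causal impulse response h[n] = (0.8)^n μ[n] (an assumed example, chosen only because its absolute sum is known in closed form):

    % Partial sums of |h[n]| for h[n] = (0.8)^n mu[n]
    n = 0:99;
    h = 0.8.^n;                  % truncated impulse response
    S = sum(abs(h));             % partial sum approximating Eq. (2.73)
    disp(S)                      % approaches 1/(1 - 0.8) = 5 as more terms are kept

Since the partial sums converge to a finite value, this system is BIBO stable.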


2.5.4 Causality Condition in Terms of the Impulse Response


We now develop the condition for an LTI discrete-time system to be causal. Let x1[n] and x2[n] be two
input sequences with

$$x_1[n] = x_2[n] \quad \text{for } n \le n_0. \qquad (2.78)$$

From Eq. (2.64b) the corresponding output samples at n = n_0 of an LTI discrete-time system with an
impulse response {h[n]} are then given by

$$y_1[n_0] = \sum_{k=-\infty}^{\infty} h[k]\,x_1[n_0-k] = \sum_{k=0}^{\infty} h[k]\,x_1[n_0-k] + \sum_{k=-\infty}^{-1} h[k]\,x_1[n_0-k], \qquad (2.79a)$$

$$y_2[n_0] = \sum_{k=-\infty}^{\infty} h[k]\,x_2[n_0-k] = \sum_{k=0}^{\infty} h[k]\,x_2[n_0-k] + \sum_{k=-\infty}^{-1} h[k]\,x_2[n_0-k]. \qquad (2.79b)$$

If the LTI discrete-time system is also causal, then y1[n_0] must be equal to y2[n_0]. Now, because of
Eq. (2.78), the first sum on the right-hand side of Eq. (2.79a) is equal to the first sum on the right-hand
side of Eq. (2.79b). This implies that the second sums on the right-hand side of the above two equations
must be equal. As x1[n] may not be equal to x2[n] for n > n_0, the only way these two sums will be equal
is if they are each equal to zero, which is satisfied if

$$h[k] = 0 \quad \text{for } k < 0. \qquad (2.80)$$


As a result, an LTI discrete-time system is causal if and only if its impulse response sequence {h[n]} is a
causal sequence satisfying the condition of Eq. (2.80).
It follows from Example 2.21 that the discrete-time system of Eq. (2.14) is a causal system since
its impulse response satisfies the causality condition of Eq. (2.80). Likewise, from Example 2.22 we
observe that the discrete-time accumulator of Eq. (2.54) is also a causal system. On the other hand, from
Example 2.23 it can be seen that the factor-of-2 linear interpolator defined by Eq. (2.58) is a noncausal
system because its impulse response does not satisfy the causality condition of Eq. (2.80). However, a
noncausal discrete-time system with a finite-length impulse response can often be realized as a causal
system by inserting a delay of an appropriate amount. For example, a causal version of the discrete-time
factor-of-2 linear interpolator is obtained by delaying the output by one sample period, with an input-output
relation given by

$$y[n] = x_u[n-1] + \tfrac{1}{2}\left(x_u[n-2] + x_u[n]\right).$$

2.6 Finite-Dimensional LTI Discrete-Time Systems

An important subclass of LTI discrete-time systems is characterized by a linear constant coefficient difference
equation of the form

$$\sum_{k=0}^{N} d_k\, y[n-k] = \sum_{k=0}^{M} p_k\, x[n-k], \qquad (2.81)$$

where x[n] and y[n] are, respectively, the input and the output of the system, and {d_k} and {p_k} are constants.
The order of the discrete-time system is given by max(N, M), which is the order of the difference equation
characterizing the system. It is possible to implement an LTI system characterized by Eq. (2.81), since
the computation here involves two finite sums of products, even though such a system, in general, has an
impulse response of infinite length.
The output y[n] can then be computed recursively from Eq. (2.81). If we assume the system to be
causal, then we can rewrite Eq. (2.81) to express y[n] explicitly as a function of x[n]:

$$y[n] = -\sum_{k=1}^{N} \frac{d_k}{d_0}\, y[n-k] + \sum_{k=0}^{M} \frac{p_k}{d_0}\, x[n-k], \qquad (2.82)$$

provided d_0 ≠ 0. The output y[n] can be computed for all n ≥ n_0, knowing x[n] and the initial conditions
y[n_0 − 1], y[n_0 − 2], ..., y[n_0 − N].
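A minimal sketch of this recursion (the coefficients, the input samples, and the initial conditions below are illustrative assumptions, not taken from the text):

    % Recursive computation of y[n] from Eq. (2.82) for a first-order system
    % with difference equation y[n] - 0.5 y[n-1] = x[n] + x[n-1]
    d = [1 -0.5];  p = [1 1];
    x = [1 2 3 4 5];             % input samples for n = 0, 1, ..., 4
    yprev = 0;                   % initial condition y[-1]
    xprev = 0;                   % x[-1], assumed zero
    y = zeros(size(x));
    for n = 1:length(x)
        y(n) = (-d(2)*yprev + p(1)*x(n) + p(2)*xprev)/d(1);
        yprev = y(n);  xprev = x(n);
    end
    disp(y)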

2.6.1 Total Solution Calculation

The procedure for computing the solution of the constant coefficient difference equation of Eq. (2.81) is
very similar to that employed in solving the constant coefficient differential equation in the case of an LTI
continuous-time system. In the case of the discrete-time system of Eq. (2.81), the output response y[n] also
consists of two components which are computed independently and then added to yield the total solution:

$$y[n] = y_c[n] + y_p[n]. \qquad (2.83)$$

In the above equation the component y_c[n] is the solution of Eq. (2.81) with the input x[n] = 0; i.e., it is
the solution of the homogeneous difference equation:

$$\sum_{k=0}^{N} d_k\, y[n-k] = 0. \qquad (2.84)$$

and the component y_p[n] is a solution of Eq. (2.81) with x[n] ≠ 0. y_c[n] is called the complementary
solution, while y_p[n] is called the particular solution resulting from the specified input x[n], often called
the forcing function. The sum of the complementary and the particular solutions as given by Eq. (2.83) is
called the total solution.
We first describe the method of computing the complementary solution y_c[n]. To this end we assume
that it is of the form

$$y_c[n] = \lambda^n. \qquad (2.85)$$

Substituting the above in Eq. (2.84) we arrive at

$$\sum_{k=0}^{N} d_k\, y[n-k] = \sum_{k=0}^{N} d_k\, \lambda^{n-k} = \lambda^{n-N}\left(d_0\lambda^N + d_1\lambda^{N-1} + \cdots + d_{N-1}\lambda + d_N\right) = 0. \qquad (2.86)$$

The polynomial $\sum_{k=0}^{N} d_k \lambda^{N-k}$ is called the characteristic polynomial of the discrete-time system of
Eq. (2.81). Let λ1, λ2, ..., λN denote its N roots. If these roots are all distinct, then the general form of
the complementary solution is given by

$$y_c[n] = \alpha_1\lambda_1^n + \alpha_2\lambda_2^n + \cdots + \alpha_N\lambda_N^n, \qquad (2.87)$$

where α1, α2, ..., αN are constants determined from the specified initial conditions of the discrete-time
system. The complementary solution takes a different form in the case of multiple roots. For example,
if λ1 is of multiplicity L and the remaining N − L roots, λ2, λ3, ..., λ_{N−L+1}, are distinct, then Eq. (2.87)
takes the form

$$y_c[n] = \alpha_1\lambda_1^n + \alpha_2\,n\,\lambda_1^n + \cdots + \alpha_L\,n^{L-1}\lambda_1^n + \alpha_{L+1}\lambda_2^n + \cdots + \alpha_N\lambda_{N-L+1}^n. \qquad (2.88)$$

Next, we consider the determination of the particular solution y_p[n] of the difference equation of
Eq. (2.81). Here the procedure is to assume that the particular solution is also of the same form as the
specified input x[n] if x[n] has the form λ_0^n (λ_0 ≠ λ_i, i = 1, 2, ..., N) for all n. Thus, if x[n] is a constant,
then y_p[n] is also assumed to be constant. Likewise, if x[n] is a sinusoidal sequence, then y_p[n] is also
assumed to be a sinusoidal sequence, and so on.
We illustrate below the determination of the total solution by means of an example.

'
If the input excitation is of the same form as one of the terms in the complementary solution, then it is
necessary to modify the form of the particular solution, as illustrated in the following example.


2.6.2 Zero-Input Response and Zero-State Response

An alternate approach to determining the total solution y[n] of the difference equation of Eq. (2.81) is by
computing its zero-input response y_zi[n] and zero-state response y_zs[n]. The component y_zi[n] is obtained
by solving Eq. (2.81) by setting the input x[n] = 0, and the component y_zs[n] is obtained by solving
Eq. (2.81) by applying the specified input with all initial conditions set to zero. The total solution is then
given by y_zi[n] + y_zs[n].
This approach is illustrated by the following example.


2.6.3 Impulse Response Calculation


The impulse response h[n] of a causal LTI discrete-time system is the output observed with input x[n] =
δ[n]. Thus, it is simply the zero-state response with x[n] = δ[n]. Now for such an input, x[n] = 0
for n > 0, and thus, the particular solution is zero, i.e., y_p[n] = 0. Hence the impulse response can be
computed from the complementary solution of Eq. (2.87) in the case of simple roots of the characteristic
equation by determining the constants α_i to satisfy the zero initial conditions. A similar procedure can
be followed in the case of multiple roots of the characteristic equation. A system with all zero initial
conditions is often called a relaxed system.
84 Chapter 2: Discrete~ Time Signals and Systems in the Time-Domain

=JVt *' 1lf;v.


= ~ 1"

It follows from the form of the complementary solution given by Eq. (2.88) that the impulse response
of a finite-dimensional LTI system characterized by a difference equation of the form of Eq. (2.81) is of
infinite length. However, as illustrated by the following example, there exist infinite impulse response LTI
discrete-time systems that cannot be characterized by the difference equation form of Eq. (2.81).

Since the impulse response h[n] of a causal discrete-time system is a causal sequence, Eq. (2.82) can
also be used to calculate recursively the impulse response for n ≥ 0 by setting initial conditions to zero
values, i.e., by setting y[−1] = y[−2] = ··· = y[−N] = 0, and using a unit sample sequence δ[n] as
the input x[n]. The step response of a causal LTI system can similarly be computed recursively by setting
zero initial conditions and applying a unit step sequence as the input. It should be noted that the causal
discrete-time system of Eq. (2.82) is linear only for zero initial conditions (Problem 2.45).

2.6.4 Output Computation Using MATLAB


The causal LTI system of the form of Eq. (2.82) can be simulated in MATLAB using the function filter
already made use of in Program 2_4. In one of its forms, the function

y = filter(p,d,x)

processes the input data vector x using the system characterized by the coefficient vectors p and d to
generate the output vector y assuming zero initial conditions. The length of y is the same as the length of
x. Since the function implements Eq. (2.82), the coefficient d_0 must be nonzero.
The computation of the impulse and step responses of an LTI system described by Eq. (2.82) is illustrated below.
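A hedged sketch of the basic usage (the coefficient vectors here are arbitrary illustrative values, not those of the system of Eq. (2.93) whose responses are plotted in Figure 2.31):

    % Impulse and step responses via filter with zero initial conditions
    p = [0.3 0.3];  d = [1 -0.5];          % assumed numerator and denominator coefficients
    N = 41;                                % number of output samples to compute
    h = filter(p, d, [1 zeros(1, N-1)]);   % impulse response samples h[0], ..., h[N-1]
    s = filter(p, d, ones(1, N));          % step response samples s[0], ..., s[N-1]
    subplot(2,1,1); stem(0:N-1, h); title('Impulse response'); xlabel('Time index n');
    subplot(2,1,2); stem(0:N-1, s); title('Step response'); xlabel('Time index n');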
Figure 2.31: (a) Impulse response and (b) step response of the system of Eq. (2.93).


2.6.5 Location of Roots of Characteristic Equation for BIBO Stability

It should be noted that the impulse response samples of a stable LTI system decay to zero values as the
time index n becomes very large. Likewise, the step response samples of a stable LTI system approach

a constant value as n becomes very large. From the plots of Figure 2.31(a) and (b) we can conclude that
most likely the LTI system of Eq. (2.93) is BIBO stable. However, it is impossible to check the stability
of a system just by examining only a finite segment of its impulse or step response as in these figures.
The BIBO stability of a causal LTI system characterized by a constant coefficient difference equation
of the form of Eq. (2.81) can be inferred from the values of the roots λ_i of its characteristic polynomial.
To establish the stability conditions, recall that the form of the impulse response is the same as that of the
complementary solution. From Eq. (2.87), assuming all the roots to be distinct, we have

$$h[n] = \sum_{i=1}^{N} \alpha_i \lambda_i^n\, \mu[n]. \qquad (2.94)$$

The constants α_i in the above expression are determined to satisfy zero initial conditions. From Eq. (2.94)
we get

$$\sum_{n=0}^{\infty} |h[n]| = \sum_{n=0}^{\infty}\left|\sum_{i=1}^{N} \alpha_i (\lambda_i)^n\right| \le \sum_{i=1}^{N} |\alpha_i| \sum_{n=0}^{\infty} |\lambda_i|^n. \qquad (2.95)$$

It follows from the above equation that if |λ_i| < 1 for all values of i, then Σ_{n=0}^∞ |λ_i|^n < ∞ and as a
result Σ_{n=0}^∞ |h[n]| < ∞, i.e., the impulse response is absolutely summable, implying BIBO stability of
the causal LTI discrete-time system. However, the impulse response sequence is not absolutely summable
if one or more of the roots λ_i has a magnitude greater than or equal to one. It should be noted that the
discrete-time system of Example 2.30 described in Eq. (2.89) is clearly an unstable system, as both roots
of the characteristic equation have magnitudes greater than one.
In the case of multiple roots of the characteristic equation, the impulse response will contain terms of
the form n^K λ_i^n. As a result, the expression for Σ_{n=0}^∞ |h[n]| will contain the term

$$\sum_{n=0}^{\infty} \left|n^K (\lambda_i)^n\right|,$$

which converges if |λ_i| < 1 (Problem 2.73), and as a result, here also the impulse response is absolutely
summable.
Summarizing, a causal LTI system characterized by a linear constant coefficient difference equation of
the form of Eq. (2.81) is BIBO stable if the magnitude of each of the roots of its characteristic equation
is less than one. This condition is both necessary and sufficient.
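A small sketch of this test (the coefficient vector below is an assumed example, not a system from the text):

    % Check BIBO stability from the roots of the characteristic polynomial
    d = [1 -1.1 0.3];                 % denominator coefficients {d_k} of Eq. (2.81)
    lambda = roots(d);                % roots of the characteristic polynomial
    disp(abs(lambda).')               % displays 0.6 and 0.5
    if max(abs(lambda)) < 1
        disp('The causal system is BIBO stable');
    else
        disp('The causal system is not BIBO stable');
    end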

2.6.6 Classification of LTI Discrete-Time Systems

Linear time-invariant (LTI) discrete-time systems are usually classified either according to the length of
their impulse response sequences or according to the method of calculation employed to determine the
output samples.

Classification Based on Impulse Response Length


If h[n] is of finite length, i.e.,

$$h[n] = 0 \quad \text{for } n < N_1 \text{ and } n > N_2, \qquad (2.96)$$

then it is known as a finite impulse response (FIR) discrete-time system, for which the convolution sum
reduces to

$$y[n] = \sum_{k=N_1}^{N_2} h[k]\,x[n-k]. \qquad (2.97)$$

Note that the above convolution sum, being a finite sum, can be used to calculate y[n] directly. The
basic operations involved are simply multiplication and addition. Note that the calculation of the present
value of the output sequence involves the input sample at n − N_1 and the N_2 − N_1 previous
input samples, along with the N_2 − N_1 + 1 impulse response samples describing the FIR
discrete-time system.
Examples of FIR discrete-time systems are the moving-average system of Eq. (2.56) and the linear
interpolators of Eqs. (2.58) and (2.59).
If h[n] is of infinite length, then it is known as an infinite impulse response (IIR) discrete-time system.
For a causal IIR discrete-time system with a causal input x[n], the convolution sum can be expressed in
the form

$$y[n] = \sum_{k=0}^{n} x[k]\,h[n-k],$$

which can be used to compute the output samples. However, for increasing n, the computational complexity
increases, caused by the growing number of terms in the sum.
The class of IIR filters we are concerned with in this text is the causal system characterized by the linear
constant coefficient difference equation of Eq. (2.82). Note that here also the basic operations needed in
the output calculations are multiplication and addition, and involve a finite sum of terms for all values of
n. An example of such an IIR system is the accumulator of Eqs. (2.54) and (2.55). Another example is
described next.


Classification Based on the Output Calculation Process


If the output sample can be calculated sequentially, knowing only the present and past input samples, the
filter is said to be a nonrecursive discrete-time system. If, on the other hand, the computation of the output
involves past output samples in addition to the present and past input samples, it is known as a recursive
discrete-time system. An example of a nonrecursive system is the FIR discrete-time system implemented
using Eq. (2.97). The IIR discrete-time system implemented using the difference equation of Eq. (2.82) is
an example of a recursive system. This equation permits the recursive computation of the output response
beginning at some instant n = n_0 and for progressively higher values of n, provided the initial conditions
y[n_0 − 1] through y[n_0 − N] are known. However, it is possible to implement an FIR system using a
recursive computational scheme and an IIR system using a nonrecursive computational scheme [Gol68].
The former case is illustrated by the sketch below.
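As a hedged sketch of the former case, assuming the M-point moving average of Eq. (2.56) has the form y[n] = (1/M) Σ_{k=0}^{M-1} x[n-k], this FIR system can be computed recursively as y[n] = y[n-1] + (x[n] - x[n-M])/M:

    % Recursive (running-sum) implementation of an M-point moving average
    M = 4;
    x = [1 2 3 4 5 6 7 8];
    xpad = [zeros(1, M) x];           % prepend zeros so that x[n-M] is available
    y = zeros(1, length(x));
    yprev = 0;                        % y[-1], assumed zero
    for n = 1:length(x)
        y(n) = yprev + (xpad(n+M) - xpad(n))/M;
        yprev = y(n);
    end
    disp(y)                           % matches the first length(x) samples of conv(x, ones(1,M)/M)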

Classification Based on the Coefficients


A third classification scheme is based on the real or complex nature of the impulse response sequence.
Thus, a discrete-time system with a real-valued impulse response is defined as a real discrete-time system.
Likewise, for a complex discrete-time system, the impulse response is a complex-valued sequence.

2.7 Correlation of Signals

There are applications where it is necessary to compare one reference signal with one or more signals to
determine the similarity between the pair and to determine additional information based on the similarity.
For example, in digital communications, a set of data symbols is represented by a set of unique discrete-time
sequences. If one of these sequences is transmitted, the receiver has to determine which particular
sequence has been received by comparing the received signal with every member of the set of possible
sequences. Similarly, in radar and sonar applications, the received signal reflected from the target is the
delayed version of the transmitted signal and, by measuring the delay, one can determine the location of the
target. The detection problem gets more complicated in practice, as often the received signal is corrupted
by additive random noise.

2.7.1 Definitions
A measure of similarity between a pair of energy signals, x[n] and y[n], is given by the cross-correlation
sequence r_xy[ℓ] defined by

$$r_{xy}[\ell] = \sum_{n=-\infty}^{\infty} x[n]\,y[n-\ell], \qquad \ell = 0, \pm 1, \pm 2, \ldots. \qquad (2.103)$$

The parameter ℓ, called lag, indicates the time-shift between the pair. The time sequence y[n] is said to be
shifted by ℓ samples with respect to the reference sequence x[n] to the right for positive values of ℓ, and
shifted by ℓ samples to the left for negative values of ℓ.
The ordering of the subscripts xy in Eq. (2.103) specifies that x[n] is the reference sequence, which
remains fixed in time, whereas the sequence y[n] is being shifted with respect to x[n]. If we wish to
make y[n] the reference sequence and shift the sequence x[n] with respect to y[n], then the corresponding
cross-correlation sequence is given by

$$r_{yx}[\ell] = \sum_{n=-\infty}^{\infty} y[n]\,x[n-\ell] = \sum_{m=-\infty}^{\infty} y[m+\ell]\,x[m] = r_{xy}[-\ell]. \qquad (2.104)$$

Thus, r_yx[ℓ] is obtained by time-reversing the sequence r_xy[ℓ].
The autocorrelation sequence of x[n] is given by

$$r_{xx}[\ell] = \sum_{n=-\infty}^{\infty} x[n]\,x[n-\ell], \qquad (2.105)$$

obtained by setting y[n] = x[n] in Eq. (2.103). Note from Eq. (2.105) that r_xx[0] = Σ_{n=−∞}^{∞} x²[n] = E_x,
the energy of the signal x[n]. From Eq. (2.104) it follows that r_xx[ℓ] = r_xx[−ℓ], implying that r_xx[ℓ] is an
even function for real x[n].
An examination of Eq. (2.103) reveals that the expression for the cross-correlation looks quite similar
to that of the convolution given by Eq. (2.64a). This similarity is much clearer if we rewrite Eq. (2.103) as

$$r_{xy}[\ell] = \sum_{n=-\infty}^{\infty} x[n]\,y[-(\ell-n)] = x[\ell] \circledast y[-\ell]. \qquad (2.106)$$

The above result implies that the cross-correlation of the sequence y[n] with the reference sequence x[n] can
be computed by processing x[n] with an LTI discrete-time system of impulse response y[−n]. Likewise,
the autocorrelation of x[n] can be determined by passing it through an LTI discrete-time system of impulse
response x[−n].
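A minimal sketch of this computation for finite-length sequences, based on Eq. (2.106) (the sequences below are illustrative, not taken from the text's examples):

    % Cross-correlation computed as a convolution with a time-reversed sequence
    x = [1 3 -2 1 2];                     % reference sequence, defined for n = 0, ..., 4
    y = [3 2 1 -1 2];                     % second sequence, defined for n = 0, ..., 4
    rxy  = conv(x, fliplr(y));            % r_xy[l] = x[l] (*) y[-l]
    lags = -(length(y)-1):(length(x)-1);  % lag values l associated with the samples of rxy
    stem(lags, rxy); xlabel('Lag index l'); ylabel('Amplitude');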
2.7.2 Properties of Autocorrelation and Cross-correlation Sequences

We next derive some basic properties of the autocorrelation and cross-correlation sequences [Pro92].
Consider two finite-energy sequences x[n] and y[n]. Now, the energy of the combined sequence a x[n] +
y[n − ℓ] is also finite and nonnegative. That is,

$$\sum_{n=-\infty}^{\infty}\left(a\,x[n] + y[n-\ell]\right)^2 = a^2\sum_{n=-\infty}^{\infty} x^2[n] + 2a\sum_{n=-\infty}^{\infty} x[n]\,y[n-\ell] + \sum_{n=-\infty}^{\infty} y^2[n-\ell]$$
$$= a^2\,r_{xx}[0] + 2a\,r_{xy}[\ell] + r_{yy}[0] \ge 0, \qquad (2.107)$$

where r_xx[0] = E_x > 0 and r_yy[0] = E_y > 0 are the energies of the sequences x[n] and y[n], respectively.
We can rewrite Eq. (2.107) as

$$\begin{bmatrix} a & 1 \end{bmatrix}\begin{bmatrix} r_{xx}[0] & r_{xy}[\ell] \\ r_{xy}[\ell] & r_{yy}[0] \end{bmatrix}\begin{bmatrix} a \\ 1 \end{bmatrix} \ge 0$$

for any finite value of a. In other words, the matrix

$$\begin{bmatrix} r_{xx}[0] & r_{xy}[\ell] \\ r_{xy}[\ell] & r_{yy}[0] \end{bmatrix}$$

is positive semidefinite. This implies

$$r_{xx}[0]\,r_{yy}[0] - r_{xy}^2[\ell] \ge 0,$$

or, equivalently,

$$|r_{xy}[\ell]| \le \sqrt{r_{xx}[0]\,r_{yy}[0]} = \sqrt{E_x E_y}. \qquad (2.108)$$
The above inequality provides an upper bound for the cross-correlation sequence samples. If we set
y[n] = x[n], the above reduces to

$$|r_{xx}[\ell]| \le r_{xx}[0] = E_x. \qquad (2.109)$$

This is a significant result, as it states that at zero lag (ℓ = 0) the sample value of the autocorrelation
sequence has its maximum value.
To derive an additional property of the cross-correlation sequence, consider the case

$$y[n] = \pm b\,x[n-N],$$

where N is an integer and b > 0 is an arbitrary number. In this case E_y = b²E_x, and therefore

$$\sqrt{E_x E_y} = \sqrt{b^2 E_x^2} = b\,E_x.$$

Using the above result in Eq. (2.108) we get

$$|r_{xy}[\ell]| \le b\,r_{xx}[0].$$
2.7.3 Correlation Computation Using MATLAB

The cross-correlation and the autocorrelation sequences can be easily computed using MATLAB, as
illustrated by the results shown in Figures 2.32 and 2.33.
Figure 2.32: (a) Cross-correlation sequence and (b) autocorrelation sequence.

Figure 2.33: (a) Delay estimation from cross-correlation sequence and (b) autocorrelation sequence of a noise-corrupted aperiodic sequence.

It should be noted that the autocorrelation and cross-correlation sequences can also be computed
using the MATLAB function xcorr. However, the correlation sequences generated using this function
are the time-reversed versions of those generated using Programs 2_7 and 2_8. The cross-correlation r_xy[ℓ]
of two sequences x[n] and y[n] can be computed using the statement r = xcorr(x,y), while the
autocorrelation r_xx[ℓ] of the sequence x[n] is determined using the statement r = xcorr(x).
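A hedged sketch of a typical use of xcorr for the delay-estimation problem mentioned at the beginning of this section (the reference signal, the noise-free channel, and the delay value below are assumptions made for illustration):

    % Estimate an unknown delay from the peak of the cross-correlation
    x = randn(1, 64);                 % reference sequence
    D = 7;                            % delay to be estimated
    y = [zeros(1, D) x];              % received sequence: x[n] delayed by D samples
    [r, lags] = xcorr(y, x);          % cross-correlation and the associated lag values
    [rmax, imax] = max(r);
    disp(lags(imax))                  % the location of the peak recovers D = 7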

2.7.4 Normalized Forms of Correlation


For convenience in comparing and displaying, normalized forms of autocorrelation and cross-correlation
given by

$$\rho_{xx}[\ell] = \frac{r_{xx}[\ell]}{r_{xx}[0]}, \qquad (2.110)$$

$$\rho_{xy}[\ell] = \frac{r_{xy}[\ell]}{\sqrt{r_{xx}[0]\,r_{yy}[0]}} \qquad (2.111)$$

are often used. It follows from Eqs. (2.108) and (2.109) that |ρ_xx[ℓ]| ≤ 1 and |ρ_xy[ℓ]| ≤ 1 independent of
the range of values of x[n] and y[n].

2.7.5 Correlation Computation for Power and Periodic Signals


In the case of power and periodic signals, the autocorrelation and cross-correlation sequences are defined
slightly differently.
For a pair of power signals, x[n] and y[n], the cross-correlation sequence is defined as

$$r_{xy}[\ell] = \lim_{K\to\infty} \frac{1}{2K+1} \sum_{n=-K}^{K} x[n]\,y[n-\ell], \qquad (2.112)$$

and the autocorrelation sequence of x[n] is given by

$$r_{xx}[\ell] = \lim_{K\to\infty} \frac{1}{2K+1} \sum_{n=-K}^{K} x[n]\,x[n-\ell]. \qquad (2.113)$$
Likewise, if x̃[n] and ỹ[n] are two periodic signals with period N, then their cross-correlation sequence
is given by

$$r_{\tilde{x}\tilde{y}}[\ell] = \frac{1}{N}\sum_{n=0}^{N-1} \tilde{x}[n]\,\tilde{y}[n-\ell], \qquad (2.114)$$

and the autocorrelation sequence of x̃[n] is given by

$$r_{\tilde{x}\tilde{x}}[\ell] = \frac{1}{N}\sum_{n=0}^{N-1} \tilde{x}[n]\,\tilde{x}[n-\ell]. \qquad (2.115)$$

It follows from the above definitions that both r_x̃ỹ[ℓ] and r_x̃x̃[ℓ] are also periodic sequences with a period
N.
The periodicity properties of the autocorrelation sequence can be exploited to determine the period
N of a periodic signal that may have been corrupted by an additive random disturbance. Let x̃[n] be a
periodic signal corrupted by the random noise d[n], resulting in the signal

$$w[n] = \tilde{x}[n] + d[n],$$

which is observed for 0 ≤ n ≤ M − 1, where M ≫ N. The autocorrelation of w[n] is given by

$$r_{ww}[\ell] = \frac{1}{M}\sum_{n=0}^{M-1} w[n]\,w[n-\ell]
= \frac{1}{M}\sum_{n=0}^{M-1} \left(\tilde{x}[n]+d[n]\right)\left(\tilde{x}[n-\ell]+d[n-\ell]\right)$$
$$= r_{\tilde{x}\tilde{x}}[\ell] + r_{dd}[\ell] + r_{\tilde{x}d}[\ell] + r_{d\tilde{x}}[\ell]. \qquad (2.116)$$

Now, in the above equation, r_x̃x̃[ℓ] is a periodic sequence with a period N and hence it will have peaks at
ℓ = 0, N, 2N, ..., with the same amplitudes, as ℓ approaches M. As x̃[n] and d[n] are not correlated,
samples of the cross-correlation sequences r_x̃d[ℓ] and r_dx̃[ℓ] are likely to be very small relative to the amplitudes
of r_x̃x̃[ℓ]. The autocorrelation of the disturbance signal d[n] shows a peak at ℓ = 0, with other samples
having rapidly decreasing amplitudes with increasing values of |ℓ|. Hence the peaks of r_ww[ℓ] for ℓ > 0
are essentially due to the peaks of r_x̃x̃[ℓ] and can be used to determine whether x̃[n] is a periodic sequence
and its period N, if the peaks occur at periodic intervals.

2.7.6 Correlation Computation of a Periodic Sequence Using MATLAB

We illustrate now the determination of the period of a noise-corrupted periodic sequence using MATLAB.
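A hedged sketch of the idea (the period, the noise level, and the record length below are assumed values, not those of the text's example):

    % Period determination from the autocorrelation of a noise-corrupted sinusoid
    N = 10;  M = 200;                 % assumed period and record length, with M >> N
    n = 0:M-1;
    x = cos(2*pi*n/N);                % periodic signal with period N = 10
    d = 0.6*randn(1, M);              % additive random noise
    w = x + d;                        % observed noise-corrupted signal
    [r, lags] = xcorr(w, 40);         % autocorrelation estimate for |l| <= 40
    stem(lags, r/M); xlabel('Lag index l');   % peaks recur every N = 10 lags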
Figure 2.34: (a) Autocorrelation sequence of the noise-corrupted sinusoid, and (b) autocorrelation sequence of the noise.


2.8 Random Signals


The underlying assumption on the discrete-time signals we have considered so far is that they can be
uniquely determined by well-defined processes such as a mathematical expression, a rule, or a lookup
table. Such a signal is usually called a deterministic signal, since all sample values of the sequence are
well defined for all values of the time index. For example, the sinusoidal sequence of Eq. (2.39) and the
exponential sequence of Eq. (2.42) are deterministic sequences.
Signals for which each sample value is generated in a random fashion and cannot be predicted ahead
of time comprise another class of signals. Such a signal, called a random signal or a stochastic signal,
cannot be reproduced at will, not even using the process generating the signal, and therefore needs to be
modeled using statistical information about the signal. Some common examples of random signals are
speech, music, and seismic signals. The error signal generated by forming the difference between the ideal
sampled version of a continuous-time signal and its quantized version generated by a practical analog-to-digital
converter is usually modeled as a random signal for analysis purposes.⁸ The noise sequence d[n]
of Figure 2.21(b) generated using the rand function of MATLAB is also an example of a random signal.
The discrete-time random signal or process consists of a typically infinite collection or ensemble of
discrete-time sequences {X[n]}. One particular sequence in this collection {x[n]} is called a realization
of the random process. At a given time index n, the observed sample value x[n] is the value taken by the
random variable X[n]. Thus, a random process is a family of random variables {X[n]}. In general, the
range of sample values is a continuum. We review in this section the important statistical properties of the
random variable and the random process.

2.8.1 Statistical Properties of a Random Variable

The statistical properties of a random variable depend on its probability distribution function or, equivalently,
on its probability density function, which are defined next. The probability that the random variable
X takes a value in a specified range from −∞ to α is given by its probability distribution function

$$P_X(\alpha) = \text{Probability}[X \le \alpha]. \qquad (2.117)$$

The probability density function of X is defined by

$$p_X(\alpha) = \frac{dP_X(\alpha)}{d\alpha} \qquad (2.118)$$

if X can assume a continuous range of values. From Eq. (2.118) the probability distribution function is
therefore given by

$$P_X(\alpha) = \int_{-\infty}^{\alpha} p_X(u)\,du. \qquad (2.119)$$

The probability density function satisfies the following two properties:

$$p_X(\alpha) \ge 0, \qquad (2.120a)$$
$$\int_{-\infty}^{\infty} p_X(\alpha)\,d\alpha = 1. \qquad (2.120b)$$

Likewise, the probability distribution function satisfies the following properties, which follow from Eqs. (2.119),
(2.120a), and (2.120b):

$$0 \le P_X(\alpha) \le 1, \qquad (2.121a)$$
$$P_X(\alpha_1) \le P_X(\alpha_2), \quad \text{for all } \alpha_2 \ge \alpha_1, \qquad (2.121b)$$
$$P_X(-\infty) = 0, \quad P_X(+\infty) = 1, \qquad (2.121c)$$
$$\text{Probability}[\alpha_1 < X \le \alpha_2] = P_X(\alpha_2) - P_X(\alpha_1). \qquad (2.121d)$$

⁸See Section 9.5.1.

Figure 2.35: Probability density function of Eq. (2.125).

A random variable is characterized by a number of statistical properties. For example, the rth moments
are defined by

$$\mu_r = E(X^r) = \int_{-\infty}^{\infty} \alpha^r\, p_X(\alpha)\,d\alpha,$$

where r is any nonnegative integer and E(·) denotes the expectation operator. A random variable is
completely characterized by all its moments. In most cases, all such moments are not known a priori or are
difficult to evaluate. Three more commonly used statistical properties characterizing a random variable
are the mean or expected value m_X, the mean-square value E(X²), and the variance σ_X², as defined below:

$$m_X = E(X) = \int_{-\infty}^{\infty} \alpha\, p_X(\alpha)\,d\alpha, \qquad (2.123a)$$
$$E(X^2) = \int_{-\infty}^{\infty} \alpha^2\, p_X(\alpha)\,d\alpha, \qquad (2.123b)$$
$$\sigma_X^2 = E\left([X - m_X]^2\right) = \int_{-\infty}^{\infty} (\alpha - m_X)^2\, p_X(\alpha)\,d\alpha. \qquad (2.123c)$$

These three properties provide adequate information about a random variable in most practical cases. It
can be easily shown that

$$\sigma_X^2 = E(X^2) - m_X^2. \qquad (2.124)$$

The square root of the variance, σ_X, is called the standard deviation of the random variable X. It follows
from Eq. (2.124) that the variance and the mean-square value are equal for a random variable with zero
mean.
It can be shown that the mean value m_X is the best constant representing a random variable X in
a minimum mean-squared error sense, i.e., E([X − K]²) is a minimum for K = m_X, and the minimum
mean-square error is given by its variance σ_X² (Problem 2.78). This implies that if the variance is small,
then the value assumed by X is likely to be close to m_X, and if the variance is large, the value assumed by
X is likely to be far from m_X.
We illustrate the concepts introduced so far by means of an example.
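As a small numerical sketch of these definitions (assuming a random variable uniformly distributed on [0, 1], which need not be the distribution used in the text's example):

    % Estimate the mean, mean-square value, and variance from random samples
    x = rand(1, 100000);              % 100,000 independent samples of X
    mx  = mean(x);                    % estimate of the mean, close to 1/2
    msq = mean(x.^2);                 % estimate of the mean-square value, close to 1/3
    s2  = mean((x - mx).^2);          % estimate of the variance, close to 1/12
    disp([mx msq s2])                 % note that s2 is approximately msq - mx^2, as in Eq. (2.124)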
Two probability density functions commonly encountered in digital signal processing applications
are the uniform density function, defined by

$$p_X(\alpha) = \begin{cases} \dfrac{1}{b-a}, & \text{for } a \le \alpha \le b,\\[4pt] 0, & \text{otherwise}, \end{cases} \qquad (2.127)$$

and the Gaussian density function, also called the normal density function, defined by

$$p_X(\alpha) = \frac{1}{\sigma_X\sqrt{2\pi}}\, e^{-(\alpha - m_X)^2/2\sigma_X^2}, \qquad (2.128)$$

where the parameters m_X and σ_X are, respectively, the mean value and the standard deviation of X and lie
in the range −∞ < m_X < ∞ and σ_X > 0. These density functions are plotted in Figure 2.36. Various
other density functions are defined in the literature (Problem 2.79).

In the case of two random variables X and Y, their joint statistical properties as well as their individual
statistical properties are of practical interest.

Figure 2.36: (a) Uniform and (b) Gaussian probability density functions.

The probability that X takes a value in a specified range from −∞ to α and that Y takes a value in a
specified range from −∞ to β is given by their joint probability distribution function

$$P_{XY}(\alpha, \beta) = \text{Probability}[X \le \alpha,\, Y \le \beta], \qquad (2.131)$$

or, equivalently, by their joint probability density function

$$p_{XY}(\alpha, \beta) = \frac{\partial^2 P_{XY}(\alpha, \beta)}{\partial\alpha\,\partial\beta}. \qquad (2.132)$$

The joint probability distribution function is thus given by

$$P_{XY}(\alpha, \beta) = \int_{-\infty}^{\beta}\int_{-\infty}^{\alpha} p_{XY}(u, v)\,du\,dv. \qquad (2.133)$$

The joint probability density function satisfies the following two properties:

$$p_{XY}(\alpha, \beta) \ge 0, \qquad (2.134a)$$
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} p_{XY}(\alpha, \beta)\,d\alpha\,d\beta = 1. \qquad (2.134b)$$

The joint probability distribution function satisfies the following properties, which are a direct consequence
of Eqs. (2.133), (2.134a), and (2.134b):

$$0 \le P_{XY}(\alpha, \beta) \le 1, \qquad (2.135a)$$
$$P_{XY}(\alpha_1, \beta_1) \le P_{XY}(\alpha_2, \beta_2) \quad \text{for } \alpha_2 \ge \alpha_1 \text{ and } \beta_2 \ge \beta_1, \qquad (2.135b)$$
$$P_{XY}(-\infty, -\infty) = 0, \quad P_{XY}(+\infty, +\infty) = 1. \qquad (2.135c)$$
The joint statistical properties of two random variables X and Y are described by their cross-correlation
and cross-covariance, as defined by

$$\phi_{XY} = E(XY) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \alpha\beta\, p_{XY}(\alpha, \beta)\,d\alpha\,d\beta,$$

$$\gamma_{XY} = E\left((X-m_X)(Y-m_Y)\right) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (\alpha - m_X)(\beta - m_Y)\, p_{XY}(\alpha, \beta)\,d\alpha\,d\beta = \phi_{XY} - m_X m_Y, \qquad (2.137)$$

Figure 2.37: Range of the random variables with the joint probability density function of Eq. (2.139).

where m_X and m_Y are, respectively, the means of the random variables X and Y. The two random variables
X and Y are said to be linearly independent or uncorrelated if

$$E(XY) = E(X)\,E(Y), \qquad (2.138a)$$

and statistically independent if

$$p_{XY}(\alpha, \beta) = p_X(\alpha)\,p_Y(\beta). \qquad (2.138b)$$

It can be shown that if the random variables X and Y are statistically independent, then they are also linearly
independent (Problem 2.80). However, if X and Y are linearly independent, they may not be statistically
independent.
The statistical independence property makes it easier to compute the statistical properties of a random
variable that is a function of several independent random variables. For example, if X and Y are statistically
independent random variables with means m_X and m_Y, respectively, then it can be shown that the mean of
the random variable V = aX + bY, where a and b are constants, is given by m_V = a m_X + b m_Y. Likewise,
if the variances of X and Y are σ_X² and σ_Y², respectively, the variance of V is given by σ_V² = a²σ_X² + b²σ_Y²
(Problem 2.82).
Figure 2.38: Sample realizations of the random sinusoidal signal of Eq. (2.140) for ω_o = 0.06π.

2.8.2 Statistical Properties of a Random Signal


As indicated earlier, the random discrete-time signal is a sequence of random variables and consists of
a typically infinite collection or ensemble of discrete-time sequences. Figure 2.38 shows four possible
realizations of a random sinusoidal signal

$$\{X[n]\} = \{A\cos(\omega_o n + \Phi)\}, \qquad (2.140)$$

with ω_o = 0.06π, where the amplitude A and the phase Φ are statistically independent random variables
with uniform probability distribution in the range 0 ≤ α ≤ 4 for the amplitude and in the range 0 ≤ φ ≤ 2π
for the phase.
The statistical properties of the random signal {X[n]} at time index n are given by the statistical
properties of the random variable X[n]. Thus, the mean or expected value of {X[n]} at time index n is

$$m_X[n] = E(X[n]) = \int_{-\infty}^{\infty} \alpha\, p_{X[n]}(\alpha; n)\,d\alpha. \qquad (2.141)$$

The mean-square value of {X[n]} at time index n is given by

$$E(X^2[n]) = \int_{-\infty}^{\infty} \alpha^2\, p_{X[n]}(\alpha; n)\,d\alpha. \qquad (2.142)$$

The variance σ_X²[n] of {X[n]} at time index n is defined by

$$\sigma_X^2[n] = E\left((X[n] - m_X[n])^2\right). \qquad (2.143)$$

In general, the mean, mean-square value, and variance of a random discrete-time signal are functions of
the time index n, and can be considered as sequences.
So far we have assumed the random variables and the random signals to be real-valued. It is straightforward
to generalize the treatment to complex-valued random variables and random signals. For example,
the nth sample of a complex-valued random signal {X[n]} is of the form

$$X[n] = X_{\mathrm{re}}[n] + jX_{\mathrm{im}}[n], \qquad (2.144)$$
where {X_re[n]} and {X_im[n]} are real-valued sequences called the real and imaginary parts of {X[n]},
respectively. The mean value of a complex sequence at time index n is thus given by

$$m_X[n] = E(X[n]) = m_{X_{\mathrm{re}}}[n] + j\,m_{X_{\mathrm{im}}}[n]. \qquad (2.145)$$

Likewise, the variance σ_X²[n] of {X[n]} at time index n is given by

$$\sigma_X^2[n] = E\left(|X[n] - m_X[n]|^2\right). \qquad (2.146)$$

Often, the statistical relation of the samples of a random discrete-time signal at two different time
indices m and n is of interest. One such relation is the autocorrelation, which for a complex random
discrete-time signal {X[n]} is defined by

$$\phi_{XX}[m, n] = E\left(X[m]\,X^*[n]\right), \qquad (2.147)$$

where * denotes complex conjugation. Substituting Eq. (2.144) in Eq. (2.147), we obtain the expression
for the autocorrelation of X[n]:

$$\phi_{XX}[m, n] = \phi_{X_{\mathrm{re}}X_{\mathrm{re}}}[m, n] + \phi_{X_{\mathrm{im}}X_{\mathrm{im}}}[m, n] - j\,\phi_{X_{\mathrm{re}}X_{\mathrm{im}}}[m, n] + j\,\phi_{X_{\mathrm{im}}X_{\mathrm{re}}}[m, n], \qquad (2.148)$$

where

$$\phi_{X_{\mathrm{re}}X_{\mathrm{re}}}[m, n] = E\left(X_{\mathrm{re}}[m]\,X_{\mathrm{re}}[n]\right), \qquad (2.149a)$$
$$\phi_{X_{\mathrm{im}}X_{\mathrm{im}}}[m, n] = E\left(X_{\mathrm{im}}[m]\,X_{\mathrm{im}}[n]\right), \qquad (2.149b)$$
$$\phi_{X_{\mathrm{re}}X_{\mathrm{im}}}[m, n] = E\left(X_{\mathrm{re}}[m]\,X_{\mathrm{im}}[n]\right), \qquad (2.149c)$$
$$\phi_{X_{\mathrm{im}}X_{\mathrm{re}}}[m, n] = E\left(X_{\mathrm{im}}[m]\,X_{\mathrm{re}}[n]\right). \qquad (2.149d)$$

Another relation is the autocovariance of {X[n]}, defined by

$$\gamma_{XX}[m, n] = E\left((X[m] - m_X[m])(X[n] - m_X[n])^*\right) = \phi_{XX}[m, n] - m_X[m]\,(m_X[n])^*. \qquad (2.150)$$

As can be seen from the above, both the autocorrelation and the autocovariance are functions of two time
indices m and n and can be considered as two-dimensional sequences.
The correlation between two different random discrete-time signals {X[n]} and {Y[n]} is described by
the cross-correlation function

$$\phi_{XY}[m, n] = E\left(X[m]\,Y^*[n]\right) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \alpha\beta^*\, p_{X[m],Y[n]}(\alpha, m, \beta, n)\,d\alpha\,d\beta,$$

and the cross-covariance function

$$\gamma_{XY}[m, n] = E\left((X[m] - m_X[m])(Y[n] - m_Y[n])^*\right) = \phi_{XY}[m, n] - m_X[m]\,(m_Y[n])^*,$$

where p_{X[m],Y[n]}(α, m, β, n) is the joint probability density function of X[m] and Y[n]. Both the cross-correlation
and the cross-covariance functions can also be considered as two-dimensional sequences. The
two random discrete-time signals {X[n]} and {Y[n]} are uncorrelated if γ_XY[m, n] = 0 for all values of
the time indices m and n.

2.8.3 Wide-Sense Stationary Random Signal


In general, the statistical properties of a random discrete-time signal {X[n]}, such as the mean and variance
of the random variable X[n], and the autocorrelation and the autocovariance functions, are time-varying
functions. The class of random signals often encountered in digital signal processing applications is the
so-called wide-sense stationary (WSS) random processes for which some of the key statistical properties
are either independent of time or of the time origin. More specifically, for a wide-sense stationary random
process {X[n]}, the mean E(X[n]) has the same constant value m_X for all values of the time index n, and
the autocorrelation and the autocovariance functions depend only on the difference of the time indices m
and n, i.e.,

$$m_X = E(X[n]), \quad \text{for all } n, \qquad (2.159)$$
$$\phi_{XX}[\ell] = \phi_{XX}[n+\ell, n] = E\left(X[n+\ell]\,X^*[n]\right), \quad \text{for all } n \text{ and } \ell, \qquad (2.160)$$
$$\gamma_{XX}[\ell] = \gamma_{XX}[n+\ell, n] = E\left((X[n+\ell]-m_X)(X[n]-m_X)^*\right) = \phi_{XX}[\ell] - |m_X|^2, \quad \text{for all } n \text{ and } \ell. \qquad (2.161)$$

Note that in the case of a WSS random process, the autocorrelation and the autocovariance functions are
one-dimensional sequences.
The mean-square value of a WSS random process {X[n]} is given by

$$E\left(|X[n]|^2\right) = \phi_{XX}[0], \qquad (2.162)$$

and the variance is given by

$$\sigma_X^2 = \gamma_{XX}[0] = \phi_{XX}[0] - |m_X|^2. \qquad (2.163)$$

It follows from Eqs. (2.154) and (2.156) that the random process of Eq. (2.140) is a wide-sense stationary
signal.
The cross-correlation and cross-covariance functions between two WSS random processes {X[n]} and
{Y[n]} are given by

$$\phi_{XY}[\ell] = E\left(X[n+\ell]\,Y^*[n]\right), \qquad (2.164)$$
$$\gamma_{XY}[\ell] = E\left((X[n+\ell]-m_X)(Y[n]-m_Y)^*\right) = \phi_{XY}[\ell] - m_X(m_Y)^*. \qquad (2.165)$$

The symmetry properties satisfied by the autocorrelation, autocovariance, cross-correlation, and cross-covariance
functions are:

$$\phi_{XX}[-\ell] = \phi_{XX}^*[\ell], \qquad (2.166a)$$
$$\gamma_{XX}[-\ell] = \gamma_{XX}^*[\ell], \qquad (2.166b)$$
$$\phi_{XY}[-\ell] = \phi_{YX}^*[\ell], \qquad (2.166c)$$
$$\gamma_{XY}[-\ell] = \gamma_{YX}^*[\ell]. \qquad (2.166d)$$

From the above symmetry properties it can be seen that the sequences φ_XX[ℓ], γ_XX[ℓ], φ_XY[ℓ], and γ_XY[ℓ]
are always two-sided sequences.
Some additional useful properties concerning these functions are:

$$\phi_{XX}[0]\,\phi_{YY}[0] \ge |\phi_{XY}[\ell]|^2, \qquad (2.167a)$$
$$\gamma_{XX}[0]\,\gamma_{YY}[0] \ge |\gamma_{XY}[\ell]|^2, \qquad (2.167b)$$
$$\phi_{XX}[0] \ge |\phi_{XX}[\ell]|, \qquad (2.167c)$$
$$\gamma_{XX}[0] \ge |\gamma_{XX}[\ell]|. \qquad (2.167d)$$

A consequence of the above properties is that the autocorrelation and autocovariance functions of a WSS
random process assume their maximum values at ℓ = 0. In addition, it can be shown that, for a WSS
signal with nonzero mean, i.e., m_X ≠ 0, and with no periodic components,

$$\lim_{|\ell|\to\infty} \phi_{XX}[\ell] = |m_X|^2. \qquad (2.168)$$
If X[n] has a periodic component, then φ_XX[ℓ] will contain the same periodic component, as illustrated in
Example 2.40.

2.8.4 Concept of Power in a Random Signal

The average power of a deterministic sequence x[n] was defined earlier and is given by Eq. (2.29). To
compute the power associated with a random signal {X[n]} we use instead the following definition:

$$P_X = E\left(\lim_{N\to\infty} \frac{1}{2N+1} \sum_{n=-N}^{N} |X[n]|^2\right). \qquad (2.170)$$

In most practical cases, the expectation and summation operators in Eq. (2.170) can be interchanged,
resulting in the simpler expression

$$P_X = \lim_{N\to\infty} \frac{1}{2N+1} \sum_{n=-N}^{N} E\left(|X[n]|^2\right). \qquad (2.171)$$

In addition, if the random signal has a constant mean-square value for all values of n, as in the case of a
WSS signal, then Eq. (2.171) reduces to

$$P_X = E\left(|X[n]|^2\right). \qquad (2.172)$$

From Eqs. (2.162) and (2.163) it follows that for a WSS signal, the average power is given by

$$P_X = \phi_{XX}[0] = \sigma_X^2 + |m_X|^2. \qquad (2.173)$$

2.8.5 Ergodic Signal


In many practical situations, the random signal of interest cannot be described in terms of a simple analytical
expression, as in Eq. (2.140), to permit computation of its statistical properties, which invariably involves
the evaluation of definite integrals or summations. Often a finite portion of a single realization of the
random signal is available, from which some estimate of the statistical properties of the ensemble must
be made. Such an approach can lead to meaningful results if the ergodicity condition is satisfied. More
precisely, a stationary random signal is defined to be an ergodic signal if all its statistical properties can
be estimated from a single realization of sufficiently large finite length.
For an ergodic signal, time averages equal ensemble averages derived via the expectation operator in
the limit as the length of the realization goes to infinity. For example, for a real ergodic signal we can
compute the mean value, variance, and autocovariance as:

$$m_X = \lim_{M\to\infty} \frac{1}{2M+1} \sum_{n=-M}^{M} x[n], \qquad (2.174a)$$
$$\sigma_X^2 = \lim_{M\to\infty} \frac{1}{2M+1} \sum_{n=-M}^{M} \left(x[n] - m_X\right)^2, \qquad (2.174b)$$
$$\gamma_{XX}[\ell] = \lim_{M\to\infty} \frac{1}{2M+1} \sum_{n=-M}^{M} \left(x[n] - m_X\right)\left(x[n+\ell] - m_X\right). \qquad (2.174c)$$

The limiting operation required to compute the ensemble averages by means of time averages is still not
practical in most situations and is therefore replaced with a finite sum to provide an estimate of the desired
statistical properties. For example, approximations to Eqs. (2.174a)–(2.174c) that are often used are:

$$m_X = \frac{1}{M+1} \sum_{n=0}^{M} x[n], \qquad (2.175a)$$
$$\sigma_X^2 = \frac{1}{M+1} \sum_{n=0}^{M} \left(x[n] - m_X\right)^2, \qquad (2.175b)$$
$$\gamma_{XX}[\ell] = \frac{1}{M+1} \sum_{n=0}^{M} \left(x[n] - m_X\right)\left(x[n+\ell] - m_X\right). \qquad (2.175c)$$
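A minimal sketch of these finite-sum estimates (the realization below is an assumed zero-mean white noise sequence generated with randn):

    % Time-average estimates of the mean, variance, and autocovariance
    M = 999;                          % M + 1 = 1000 samples of one realization
    x = randn(1, M+1);
    mx  = sum(x)/(M+1);                                        % Eq. (2.175a)
    s2  = sum((x - mx).^2)/(M+1);                              % Eq. (2.175b)
    ell = 3;                                                   % a sample lag value
    g   = sum((x(1:end-ell) - mx).*(x(1+ell:end) - mx))/(M+1); % Eq. (2.175c), truncated at the record end
    disp([mx s2 g])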

2.9 Summary
In this chapter we introduced some important and fundamental concepts regarding the characterization of
discrete-time signals and systems in the time-domain. Certain basic discrete-time signals that play important
roles in discrete-time signal processing have been defined, along with basic mathematical operations
used for generating more complex signals and systems. The relation between a continuous-time signal and
the discrete-time signal generated by sampling the former at uniform time intervals has been examined.
This text deals almost exclusively with linear, time-invariant (LTI) discrete-time systems that find
numerous applications in practice. These systems are defined and their convolution sum representation in
the time-domain is derived. The concepts of causality and stability of LTI systems are introduced. Also
discussed is an important class of LTI systems described by an input-output relation composed of a linear
constant coefficient difference equation, and the procedure for computing its output for a given input and
initial conditions. The LTI discrete-time system is usually classified in terms of the length of its impulse
response. The concepts of the autocorrelation of a sequence and the cross-correlation between a pair of
sequences are introduced. Finally, the chapter concludes with a review of the time-domain characterization
of a discrete-time random signal in terms of some of its statistical properties.
For further details on discrete-time signals and systems, we refer the reader to the texts by Cadzow
[Cad73], Gabel and Roberts [Gab87], Haykin and Van Veen [Hay99], Jackson [Jac91], Lathi [Lat98],
Oppenheim and Willsky [Opp83], Strum and Kirk [Stu88], and Ziemer et al. [Zie83]. Additional materials
on probability theory and statistical properties of random discrete-time signals can be found in Cadzow
[Cad87], Papoulis [Pap65], Peebles [Pee87], Stark and Woods [Sta94], and Therrien [The92].
Further insights can often be obtained by considering the frequency-domain representations of discrete-time
signals and LTI discrete-time systems. These are discussed in the following two chapters.

2.10 Problems
2.1 Consider the following length-7 sequences defined for −3 ≤ n ≤ 3:

x[n] = {3  −2  0  1  4  5  2},
y[n] = {0  7  1  −3  4  9  −2},
w[n] = {−5  4  3  6  −5  0  1}.

Generate the following sequences: (a) u[n] = x[n] + y[n], (b) v[n] = x[n] · w[n], (c) s[n] = y[n] − w[n], and (d)
r[n] = 4.5 y[n].

2.2 Analyze the block diagrams of Figure P2.1 and develop the relation between y[n] and x[n].

Figure P2.1

2.3 Determine the even and odd parts of the sequences x[n], y[n], and w[n] of Problem 2.1.

2.4 Let g[n] and h[n] be even and odd real sequences, respectively. For each of the following sequences, determine
if it is even or odd:
(a) x[n] = g[n]g[n], (b) u[n] = g[n]h[n], (c) v[n] = h[n]h[n].

2.5 Let x̃1[n], x̃2[n], and x̃3[n] be three periodic sequences with fundamental periods N1, N2, and N3, respectively.
Is a linear combination of these three periodic sequences a periodic sequence? If it is, what is its fundamental period?

2.6 Determine the periodic conjugate symmetric and periodic conjugate antisymmetric parts of the following sequences:
(a) {x[n]} = {Aα^n}, −N ≤ n ≤ N, where A and α are complex numbers,
(b) {x[n]} = {−2 + j5, 4 − j3, 5 + j6, 3 + j, −7 + j2}.
2.7 Which ones of the following sequences are bounded sequences?
(a) {x[n]} = {Aα^n}, where A and α are complex numbers, and |α| < 1,
(b) {y[n]} = {Aα^n μ[n]}, where A and α are complex numbers, and |α| < 1,
(c) {h[n]} = {Cβ^n μ[n]}, where C and β are complex numbers, and |β| > 1,
(d) {g[n]} = {4 sin(ω_a n)},
(e) {v[n]} = {3 cos(ω_b n²)}.

2.8 (a) Show that a causal real sequence x[n] can be fully recovered from its even part x_ev[n] for all n ≥ 0,
whereas it can be recovered from its odd part x_od[n] only for all n > 0.
(b) Is it possible to fully recover a causal complex sequence y[n] from its conjugate antisymmetric part y_ca[n]?
Can y[n] be fully recovered from its conjugate symmetric part y_cs[n]? Justify your answers.

2.9 Show that the even and odd parts of a real sequence are, respectively, even and odd sequences.

2.10 Show that the periodic conjugate symmetric part x_pcs[n] and the periodic conjugate antisymmetric part x_pca[n]
of a length-N sequence x[n], 0 ≤ n ≤ N − 1, as defined in Eqs. (2.24a) and (2.24b), can be alternately expressed as

$$x_{\mathrm{pcs}}[n] = x_{\mathrm{cs}}[n] + x_{\mathrm{cs}}[n-N], \quad 0 \le n \le N-1, \qquad (2.176a)$$
$$x_{\mathrm{pca}}[n] = x_{\mathrm{ca}}[n] + x_{\mathrm{ca}}[n-N], \quad 0 \le n \le N-1, \qquad (2.176b)$$

where x_cs[n] and x_ca[n] are, respectively, the conjugate symmetric and conjugate antisymmetric parts of x[n].

2.11 Show that the periodic conjugate symmetric part x_pcs[n] and the periodic conjugate antisymmetric part x_pca[n]
of a length-N sequence x[n], 1 ≤ n ≤ N − 1, as defined in Eqs. (2.24a) and (2.24b), can also be expressed as

$$x_{\mathrm{pcs}}[n] = \tfrac{1}{2}\left(x[n] + x^*[N-n]\right), \qquad (2.177a)$$
$$x_{\mathrm{pcs}}[0] = \mathrm{Re}\{x[0]\}, \qquad (2.177b)$$
$$x_{\mathrm{pca}}[n] = \tfrac{1}{2}\left(x[n] - x^*[N-n]\right), \qquad (2.177c)$$
$$x_{\mathrm{pca}}[0] = j\,\mathrm{Im}\{x[0]\}. \qquad (2.177d)$$

2.12 Show that an absolutely summable sequence has finite energy, but a finite energy sequence may not be absolutely
summable.

2.13 Show that the square-summable sequence x1[n] of Eq. (2.27) is not absolutely summable.

2.14 Show that the sequence x2[n] = (sin ω_c n)/(πn), 1 ≤ n < ∞, is square-summable but not absolutely summable.

2.15 Let x_ev[n] and x_od[n] denote, respectively, the even and odd parts of a square-summable sequence x[n]. Prove
the following result:

$$\sum_{n=-\infty}^{\infty} x^2[n] = \sum_{n=-\infty}^{\infty} x_{\mathrm{ev}}^2[n] + \sum_{n=-\infty}^{\infty} x_{\mathrm{od}}^2[n].$$

2.16 Compute the energy of the length-N sequence

2.17 Determine the average power and the energy of the following sequences:
108 Chapter 2: Dtscrete-Time Stgnals and Systems in the Time-Domain

(a) -•rfnl = idn!.


{b) t=lnl = n!lfnl.
(;;} .\:;[nj = ti(,f'_iw,,l1,
•d) q[nj=Asio((2n-n/Ml+¢).

2.18 Express the :.cquence x(n I= I. -::x:; < n <. X'. in term~ ot the unit step sequence ~!nl-

2.19 Verify the relation ~et\>;een the unit sample ~equence i'ifn 1 anti the unit ~"tep sequence ~tfn 1 gi·,ren in Eq. (2.38!-

2.20 The :OJ!ov.ing sequt>'!Ce'> represe-nt nne period of a sinu,oidal ',elj_!ICflef of the form .x:[n] = A cos{ won + <{;;):
(a) {0 -../2 - 2 - ft 0 ,/2 2 .j2i,
{b) t./2 ../i - ./2 - .Jlj,
(C) /3 - }}.
(d) {0 l.5 {I - l5j.
Detenrune the v<J.!ue;. or the parameter~ A. w 0 • and 4> fm each case.

2.21 Dettorrnine the fundanler,tal period 0f the fo11o•Ning periodiC oequences:


(a} i1 {n} = e- ;OA_,.-,,
(b) bln-J = sm(0.6;rn + 06-,<t),
(CJ ;-_:dnl = 2co~(l IY<n- 0.5rr) + 2sm(0.7Jn1),
(d) i;4[nj = 3:-.in(1.3nn)- 4cm(0.3nn + OA5,"T),
(d .<-s[n j = 5 sin( 1.2;r n + 0.65Jr) + 4 sin({i$;rn) - co.s(O.~n 11 ).

(f} .in [n! = n moJulo 6.

2..22 Determine the funda'lJental penud of the sinusoidal£equencc ;: !nl = A co:.(M{'n) for the followmg value.,_ of the
angular frequen..·y w,,:
(a) 0.14Jr. {b~ 0.24JT, (cl 0.34n, (d) 0.6&:n-, (e) 0. 7~!L

2.23 A continuous-time sinusotdal signal xu(!) = cos.n,t j,; sampled at t = nT. -oo ~ n _::: ?o. generating the
diSCJete-lime ~que nee x[n} = x., (n T! = cos(Q0 n T). r-ur ._,.-hat value~ of T i~ x(n I a periodic sequence? What j_~ the
fumL:uw~ntaJ peritod of .tj_n] if n, = I~ radians anci T = n j6 .seconds?

2.14 (a) Express the sequence>: x!nL y(nl, and w[nlufProbleml.l as a llnear combinatiOn ofddayed unit.>ample
.S~UCnceli.

(b• Express [he ,;t;-qurnces x[nj, vtnj, and w!nJ of Probkm 2.1 as a l-inear wmbination of delayed unit step
~eyuences.

2.25 Slmw that the d~screte--dme sys1emo;. descri':Jed by the foilowing equations are linear sys~m.s:
(a) hj_ (2.14{" {b) Eq. (2.15), {cl Eq. (2.17), (d: Eq. (2.18). (e) Eq. (2.56). (f) Eq. (258), and (g)
Eq. {2.59)

2.26 Fvr e--.;.dt of !he folknvirtg dis-..;ret::-time systems. where vln J and xln l are. n::spe<.:tively, the Olltput and 1he input
;.equence~. -determine whether or not <he ~y;;tem is {!) linear, ',2) -c-aus;;\. (3j stable, and (4) shift-invariant:
(a! y(ni = n 2 xin],
{h) y\nl = x 4lnl.
2. ·to. Problems 109

-::::; ,.In] = fi + Li=o x[n - I'L P "'a nun.rero constan:.


1d) y[n] = fj + L~=-< 1·jn- r j. Pi~ a nonzero con~t.anL
c c .l ,·[n J = ur: -nj, rt :s a nonn~·''-' con !>tam.
il) .•·In I= x~•1 ·· 5j.

2.27 T!'"le >enmd derivative y[n] ul a ':equem:~ x!nl at time in~tant 11 is usua!;y appm~<imalcd by

\'[nj =.tin+ ll- 21"1nt- .tin- !].

lf Jlnl and x[nj denote the output and i:~put of a dis<::rete-ttme sy~tcm, is the system linear') Is it timc-lnvurialll 1 I~ it
:::au :<a!?

2.~8 fl>e medmn filter is often USt..'d for the ~mocthing ohignal;; corrupted by impuh-.enni;;e I Reg93]. It i~ implemente-d
~y ~hC.:mg a window of odd kngL'l uvd the input ~equem:e x[nl nne ~ample at a time At the nth ;m;tam, the input
~mples :nstde the wmdow are rank ordered from the largest to ;he smallesl m values, and the sample .at the middle is
:he med1ar. value. The output .\ ln J o:'the med.an filter is then given

vlnl = me~ lxf n - Kl- ... x [n - l;. x [n j. t[n + 1], .... xln ..,... K l}.

For e\ample, meJ{2. -3, 10. 5. -I! = 2. Is the medi~n filte;- a linear or nonlinear disc.Tet<!-time '>Y><em'? Js it
timc-mvariam? Ju~lify your answeL

2.29 Consider the dJscrete-time S)''>Lew :.:haractem:e.d by the input-output relation {Cad&?J

yjnJ=2
l ( y[n-:l+v[n-1]
';nl ) · \2.1781

wt,ere t [n] and :rlnJ are, tespectivdy. the mpU and ()Utp.:t sequenc:es. Show that the output y[nj uf the ,;hove ~ys.tem
f..:H an input xlnl = a~[n] WJth yj-lj = l converge'> tn ,/0! a~ n __,. oc when a is a pO:<iitive ~umher. I> the above
~y:>:C!":l linear o: nonlinear? Is 1t t!mc-im-aria.nt? Justify your answer-

22~ 4n algorithm for the calcttlat.iun of the square root of a numbe:- a- is given by [Mik92j

r{nl =.rfnJ ·· r 2[n - l l - Y{n -1;. (2. I 791

wl-ere .rfnl = O'ti[nJ with 0 < u. < l. If .~[nJ and :rln] are con~1t!ered as the mput 11nd outpc!t of a dis:::ret•>time
"')'~.tern, is the ~ystem linear Qf nonlinear? I;; it time-inv-.::ria.""lt? As ?J -+ oo, show that y[n] __,. ..;ri. Note that .v!-!]
is <l'>t:Jtable init:al appwx.;mat1on to ,_/;;.

1.2•1 Develop a ge:tcral ex;:nes<ion for :he oulpm y[nl of an LTJ c!i~;;:rcte-time system in tenm; of its input yfnJ and
th<: unil ,.tep re'uoOJse s[n.l of the ~y~~em.

2.32 A periodic se.quence .i"jn} with a period N "applied a;; an input to an LTI discrefe-tJme sy~tem characterized by
:tn impuhe response h[n j generatmg an ompul ::ln j. h ; [n] a periodic sequence? 1f it is. what 1" its penod?

1.~,3 Consider L':;e following sequences: (i) .q[n; = U!n- l]- O.SB[n- n
{;i) x2!n! = -3illn- 1] + S[n + 2].
(iii) hJin] = 2!i~n]4- Gin- I ] - J.1[n- 3j, and (iv) h2;n1 = -8in- 2j -0.5Jfn- ll + 38(n- 3]. Detennine the
followmg seque:x:e& obtained by a Linear convolution of a pair nf the above sequence~: (aJ ."1-'J {n: = x; fn l€h 1{n ], (b)
nln} = .Q!n!~:hz{n}, (c) ninJ = 1"!fn)@h2inJ, and {d) v.;.[nj = <;>Lnl@ht!nJ.

2..34 i...et gfnl be a fimtc-!ength sequence defincti fo{ N1 :::=: n :::=: N;>. with N2 > l•l]. LikewL«e. let h[nJ be a Jinite-
ler.gth ~equencedetiru;d fur M1 :'0 n ::: M2, wilh M2 > M1. D"'fme y[n] = gfnl@hinl. (a) What is the length of
vlnl? (b l \V::1at is the rang~ nf the mdex n for whicl: ylnl i~ defined'l
1"1 0 Chapter 2: Discrete-Time Signals and Systems in the Time-Domain

2.]6 LC"t gfnl = x 1[n](~)-Q(!r j@xlln] ;md h[n 1 = .q fn- .Y!]G-hJ:[Ii- Nzl@XJ[n- N-, }. Express h:nJ in terms of
g{•1l-

2.37 Prove tha: the convolution sum opcrat!on i~ comrnut<Jtive and distribtmve.

2.J8 Censi(Ie-rthe followmg three sequen..:-es-


tOrn =0.
x; (n J = A {a constam). ~3[n] =I ~I. forn=!.
otherwise.
0.

2.:19 Prove tha: the convolution ~ration 1s as<>ocidlive for srnble ;and single-sided sequence;;.

2.40 Show that the cnnvdutiun of a length-M 1>equence with a ler.gth-N sequence leads to ;; sequeace of length
tM+N-1).

2.4l Le-; xfnl be a length-;\' sequence given by

{~:
O::::nsN-1,
xjn] = ocherwise.
Determine yin 1 = x[n KVxfn 1and ~h{)W that it JS a triang-J!ar sequenc·c with a maximum sample 'l•alue of N. Dt:-tennine
L"J<, locatiom of the samples with the follo-.ving values: !.'/4. N /2. and N.

2.-42 Let xfnJ and lllnl be two >ength-N sequ~nccs givrn by

x[nl = j 0.l' 0:::: n .:;=: N- I,


otherwLS<:,
h[nJ=[n+l. O~n?-N-l.
0, otherv.-·tse.

Dttermine t..'le location and the ~alue of the largest pos::tJ= sample uf yfnJ xfn ]@h[nj without perfl•rmlrlg the
;:;o·wolution operation.

2.~13 Cunsider :wn real sequences. h[nl and ~[nl expressed as n s:;.m of their respn:live even and odd part-.;, t.e.,
h(11] = hL~_. fn) + hool n:l, and R[Nl = geA n 1 + goo[n l F:x ea;h of !he following -:requeoces, determine if it is even or
od-!.

2.44 Let y[n J be the S<:lJuenc-e obtained by a linear etmvo:urion ol tw•3 causal finite-length M!quenccs h[n] and x[n J.
1--'01·each pair of y[nj and k[nj listed btlow, determine xlnJ. The firo.t sample in eacll sequence lS its value at n = 0.
·:a) {y[nJJ=l-1. -1. 11, -3,30. 28.48J,jh[nl}={-l, 2. 3. 4],
(h) {y[njj ={I, 3, 6. 10, 15. 14. !2. 9, 5}, {h[n]J::: !I. 2, 3. 4. 5},
,·..:) LvrnH = i-14- jS. -3- j17, -2 + jS, -9.73-r J 12.5. 5.8+ j5.67}, !h[n!} = !3+ jl. -l + J4. 2 + j).
2.45 Consider a;:;auo;a[ disaete-time _,ys!em characterized by~ fin;t-order lint"ar, constant-coefficient difference eq\13-
tion gh·en by
yfnl = ay!n- lj-,--- bxlnJ. 11?:: 0,
wh!re v!nj and .r{nj are, respectively, the output and input sequences. Compute the expression l-Or the output sample
y!>< 1 in tt:ffiL<; of the initial condition y[- I] and the input samples.
.:::~.1 0. Problems 111

(rrJ is the syo;;tem time-invariant if y[ -!] = i? Is the ~y~t~m linear if y[ -1] = t"!
fb; Repeilt part (aJ if :rf --1] =fl.
(CJ Generalize the re.>ult;; of part;; (a) and ~b) to the l:ase of au Nth-order causal discrete-time system given by
Eq. {2.93,1.

2.46 ..\ c·ausal LTi discn-""tc-lime system h satd to have an ovenhool in its step response if the response exhibits an
oscilb.tory beha-..·ior with decayin_g amplitude~ a>ound a final corL~tant vafue. Sh.ow that the :system has IDJ overshcot
in it;; step re~pon'>e if th.e impulse re~ponsc h!nl of the system is nonnegative for all n :::_ D.

2.47 The sequence of Fibonacci number.-; fin )1s a cauxtl hl'l.jm:f'xe defined by

ffr.]= fin- i]+ffn -2). n :=::: 2

with flOJ = Oand f[ij = l.


(:ll lkvelop an exact forrr.ula Lo cakul.ate fl .., I dire:.:tly fo~ ar.y n.
(hJ Show that ff.r.J is the impulse respome of a causal LTI sy!.!em described by rbe differel'.ce equation [Joh89]

y(n! = y 1,n- 1\ + y!n -· 71 +>.In - l].

2.48 Con~1dcr a tirst-orcer <.:omplex digitnl filte~ c:haraclerized by a difference equation

y[nj =ay[n- 11-.:t"fnJ.

where x[n) is the :eal input sequeuce. ,-. [n J = Yrefn] + j Yim In 1 is lhe complex output sequence wirh Yre[n: and .hrn [11 J
deoot:ng its real and imagmary part~. and a = a + jb is a comple.'l con;;tant. Deve!op an equivalent two-output.
single--input real difference equation representatron of the above complex digital: filler. Show that the s!ngle-in(Wt.
smgle-ouqmt digital filter relating )'re [n J to x[r.] is described by a ~c-ond-arder difference equaticn.

2A9 De1errnine t.'Je expression for th<..o 1mpuJse response of the factor-of-3 linear interpolator of Eq. {2.59).

2.50 Der.crmine LI-te expn::asion for the impulse response oi the fuctor-of-L linear interpolator.

251 Let hl!Jl. hll ]. and h r2J denote the first three impulse respollse samples of rht: first-order causal LTI system
of Problem 2.56. Show !hat the coefficie!lts of the difference equation chamcterizing this syt.tem can be uniquely
detenninffi from these: impulse respon5e sample5.

2..52 Let.i causal IIR dlgJtai filter be described by the difference equation

N M
L d;;v{n- kj = 2:: Pkxfn -k]. (2.180)
k=O k=O
w!tere v[ I"! I and xjn 1 denote the output and the input sequences. :especlivcly. If h[n J denotes its impulse response,
show th<1t

pf( = L' Mnldk-1, k=O.l. ... ,M.


n~O

From the <~bo-.·e re-;ult, show that p,., = hfn]t_!::dn·

253 Consider <: cascade of two causal stable LTI systems charw...'teri.t:ed by impulse responses a" p,fn J a!'d fl" pt[n j,
where 0 <. u ·~ I and 0 < /) < I. Determine !he expres~ion for the im?ulse re1!.1XJOse h(n i of the cascade.

2.54 Uetermine the impulse re1>1Xmse. ;?(n] of the inverse system of tire LTI discrete-time sy>rtem of Example 2.28.
112 Chapter 2: Discrete·Time Signa's and Systems in the Time« Domain

2.55 Detennlne the impuh.e resptmse glnJ characterizing the mverse S)SI:em of the LTI discrete-4iJIK: system of
Problem 2.45.

2.56 Consider 1he causal LTI: sy;;tem -denTibed by the difference cq;.:ation

yinj = 1•1:t-rl_n)+ PJX\11- If- d;y{n- 1J.


where x[n 1 and _Y{Il] denote. respe.:t1vely, its input and output. Deter:nine the difference equation representation of its
inverse system.

2.57 Determ.im- the ex:pre.;~jm; fiu th"' impulse re-~ponse nf each of dle LTI syuems shown in Figure P2.2.

h 1{nj + +

.Jh5fnJ\
L.j h,ln] L__-..jgi---
(a) (b)
Figure P2.2

2.58 Determine the overaU impulse response of the sys\em of Figure P2.3. where

hlfnl = 2/:ijn- 2] -- JBjr. + i). h:?fn! =&In- l] + 25[n + 2].


h:~1r.1 = 5Jfn- 5j + 7&tn- 31 + 28[n- lj- Ofnl + 38[n + 1].

JI,ln]
FigureP2.3

2.59 Prove that !he BiBO stabihty t.:oodition ofEq. (2.73) also hoJ:ls foe an LTl digital filter with a compl~ impulse
response.

2.60- h the cascade .:onm:clion of twa stable LTJ systems also stable? Justify your answer.

2.fi.1 h the parallel coDJlet:tion of tw<> ~tahle LTI systems aiM> :.table'! Justify your answer.

2.62 Prove that L'Je ca.•Kade n.:mnet:tion of two passive (loss]ess) LTI.~ystems is alw passive (lossless).

2.63 Is the parallel COTmectKln of two passive (los~le.%) LT1 systems. also passive {lossless)? Justify your answer.

2..64 Consider acausaf FIR filter of length L + 1 with an impulse rc~pome given by {gfnH, n = 0. I, . _.• L. Develop
the difference equativn repr~cut~on of the form of Eq. (2..81) where M + N =
L of a caUsal finite-dimensional IlR
digital filter with an impulse response fh!n1l such that h{r.) = g(nj forr. R I, .... L.=
2. ·Jo" Problems
113

2.65 Compute !.he output of the accumulator of Eq. (2.55) for an input x[n] = n,'L[n] and the folk!wing initial
conditions: (a} y[ -I]= C, and (b) y[-lJ = -2.

2.1116 In !he rectangular merhod of munerical integration, the integral on the right-hand side of Eq. (2.98) is e;qn-essed

fnl x(r)d--r=T-.x{(n-l)T). (2.181)


J(n-l!T
Devebp the difference equation representation of the rectangular method of numerical inlegration.

2.t•7 Develop a recurs.iveimpiementation of the IJ.me-varyi."Ig linear discrete-time system cllanu;terized by

y[n: =
·
l ! L£=
0,
1 .r[C], n > 0,
n ::<:: 0.

Z.t.S Detennine the total solution fur 11 ~ 0 of the difference equation

y{n] + 0.5y[n- 1] = 2.u[n].

w:i·h the initial condition yr-1] = 2.


2.(>9 Determine the total solution for n ;:: 0 of the differeoce equation

y[n] + O.ly[n - 1J- 0.06y[n- 2] = 2" ,ufnJ,


wi.th the initial<:olldition y[ -1] = l, and y[ -21 = 0.

2.70 DetemUne the total solution for n 2::: 0 of rhe differeoce equation

y[n} + O.ly[n - 1}- 0.06y[n - 2] = x[n]- 2x[n- I],


wi-th the initial-oomfition y[ -1] = I, and y[ -21 = 0, when the forcing function is x[n] = 2" J.t{!l ].
2.-;'1 Detennine the impulse response hfn 1 of the LTI system described by the difference equation
y[nJ + 0.5y[n- I}= x[n}.

2.i'2 Detennine the impul..e response h!n J of the LTI system described by the difference equation

yfnl + O.ly[n- lJ- 0.06y{n - 2} = x[n]- 2x[n -1].


1.73 Show that the sum L~ InK (Ad' j converges if j).;l < l.

2.'i"4 (a) Evaluate the autocorrelation sequence of each of the sequencesofProbiem 2.1.
·:b} Evaluate the c~....-:orrelation sequence- r;;y[i] between the sequence:s x(n] and y[n], and the cross-correlation
sequeocerxwllJ between the sequences x{n] and wfn} of Problem 2.1.

2.15 Determine the autocorrelation sequence of each of the foJIO'.Io·ing sequences and sllm\-· that it is an even sequence
io 1~h .case. What is the location of the maximum value of the autocorrelation sequence in each case?
·:a) xlfnj =a" ;L[nj,
<b) X2{n} =II, 0:5: n :::' N- 1,
0, otherv11se.

2..76 Detennioe the autocorrelation sequence and its period of each of the following periodic sequen~.
114 Chapter 2: Discrete-Time Signals and Sys1ems in the TiiT'.e-Domain

(a) i1 fr.] = ros(n: n/ M), where M is a po~itive integer,


{b} i:[nl = ll modulo 6,
(c) i3[nl = (-l)".

z:n Lel X andy be two random variables. Show that E{X + n = E{XJ + E(Y) and E(cX) = cE(X), wberec b.
a constant.

2,';'8 Determine the valtM' of the constant K- that minimizes the mean-square error E([X - Kl 2 ), and then find the
minimum value of the mean-square error.

2.7' Compute the mean value and the varian;,:e of the random variables with the probability density functions listed
be.,o., {Pru-60].
{_a) Cauchy distribution: Px(x) = J:{"':,:z,
(b) Laplacian distribution: px(x) = '!e-"'1-~:,

(c) Bmomia,' distribution: px(x} = L~=O (i)pi (1 - p)''-eh(x -l).

(d) Poisson diatribution: Px(x) = L~o e-;_"t li{x- .f).


2 Za)
(e) Rayleighdistributkm: px(x) = ~c--" f tt(x).

In the above equations, J(x) is the Dirac delta function and tt\x) is the unit step function.

2.80 Shov..· that if the two random variable~ X and Y are statistically independent, then they are alro linearly indepen·
de11t .

.z.m Prove Eq. {2.124).

2.£:2 ld :r{n} and _v(n] be two statistically independent stationary random sig~ with means m~ and m,.. and
va~iances a} and af. retipectively. Consider the random signal c·in] obtained by a linear combination of xtn] and
y[n]. i.e., <-'[Jtj = ax[n] + b_v[n J, where a and b are cons:ants. Show that the mean m, imd the variance a; of vfn] are
7
gh·en by m, =am_. + bmy and a 1 = a 2JJ:---:- b 2 aJ,
respectively.

2.t::3 Let .rfnl andyi_n] be two independentzex.-mean WSS random signals with aumcorrelm.iomt/)u{nJ and ¢'yy[n].
respe<'tively. Conside£ the random signal v[nJ obtained by a linear combination of x[n] and y[nj, i.e., v[n] =
ax[n] + by[n ], where a .and b are c-onstants. Ex.press the autocorrelation and cross-correlations, <(J,.,[n ]. ¢,~ [nj, and
¢'vy[nl :n terms of .p..,.{nj and ¢yy[n]. What would be the results if either x[n} or y[n] was zero-mean?

2.1]:4 Prove the symmetry properties of Eqs. (2.166a) throogh (2.166d).

2.~5 Verify the inequalities of Eqs, (2.167a) tMough (2.167d).

2.JI6 ProveEq.(2.168).

2.87 Determine the mean aud variance of a WSS real signal with an autocQrrelatiou function ghen by

*xx£n-- 9+1Ie
+
2
+t.u4
+ I 3£2 2f4 .
2. 11. MATLAB Exercises 115

2.,11 MATLAB Exercises


M 2.1 Write a MArLAR program to generate the follow·ing sequences and plot them using the function stem: (a)
unit sample sequence 8[n}. (b) unit step sequence ,u[n J. and {c) ramp sequence np.[n]. The input parameters specified
by the user are the desired length L cfthe sequence and the sampling frequency Fr inEz. Using this program generate
tlw first I 00 samp-les of each of the above seqm:nce~ with a ~plirrg rate of 20kHz.

M 2.2 The :;quare wave and the sawtoorb wave are t·.vo periodic sequences as sketched m Figure P2.4. Using the
fw"ICtions sawtoot.h and square write a MATLAB program to generate the above two sequences and plol them
us,ng the function stem. The input data specified by rhe u~er are: desired length L of the sequence. peak value A,
and the period N. For the square wave sequence an additional user-sp«;ified parameter is the duty cycle, which is the
percent cf the period for which the signal is positive. Using this program generate the first 100 samples of each of the
abnve sequem..-es with a &amp!ing rate of 20kHz. a peak value of 7, a period Of 13, and a duty cycle of -60% for the
sqllaf-e ""-ave.

(a)

A
.
9
1' 1

"'
'

I N
'
'
I
'
lI l ,__, "

-A l1 ''
1
{b)
Figure P2.4

M23 (a} L"sing Pwgram 2_1 generate dw sequence> shown in Figures 2.16 and 2.17 .
(b) C~nerate and plot the .;omplex ex:ponellCal sequence 2 . 5ei -DA+j1r fSjn for 0 :-;=:: 11 :c:; 100 using Program 2_ f.

M2.4 :a) Wr.te a MATLAB program to genenue a sinusoidal sequence x[n] = Acos(Won +¢)and plot me
~equcnce using the s-:em functwn. The input daU specified by the user are the desired length L, amplitude A.
the angular frequency w.,. and the pha~e rp where 0 < w 6 < :r and 0 :::; ¢J ::5: 2n- . Using this program genecate
the sinusoidal sequences shown in Figure 2.15.
·:b) Generate sia"'.us.oid.al sequen<ee~ with the angular frequencies giv--en in Problem 2.22. Determine the period of
each sequer.ct" frum the plot ar.d verify the result theoretically.

M L5 Generate the sequences of Problem 2.2l(b) to 2.2l(e) usjng MATLA3 .

1\.<1 2.6 Write a MA TLAB pwgram h' plot a con:muous--time sinusoidal signal and its sampled vez-:.;lou and verify Fi,ure
2.19. You need to use fhe hold functmn to keep both plots. '<'
116 Chapter 2: Discrete-Time SignaJs and Systems in the Time-Domain

M 2.7 Using lhe program developed in the previous problem, venfy experimentally tlmt the family of continuou~time
sinusoi6 given by Eq. {2 53) kad lo i-dentical sampled signals.

M 2.8 Using Program 2_4 investigate the effect of signal smoolhing by a moving average filter of lengths 5, 7, <md
9. Docs the signal smoothing improve with an increase in the length? \\'hat is the effect of the length Gn the dd.ay
between the «moothed output and ti:te noisy input?

M 2.9 Write a MA Tt.AB pr-ogram implementing the discrete-time system of Eq. (2.178) in Problem 2.29 and £00...-
thut the output _vjn] of thh. syslem fw an input x[n] =
0"/1 In l with yf -1] = l converges to ../ii as. n - oo.

M 2.10 Write a MATLAB program to compute the square root using the algorithm ofEq. (2.179) in Problem 2.30 and
showthattheo..~tput y[n]ofthissystemforanmputx[n] = !l'J.t(II]with y[-1] = l cenvergesto ..[a asn-+ oo. Plot
th~ errnr a!. a function !J'f" for seVer<l1 different values of u. How would you -compute the squ!L""e-root of a number a
""ith a value greater than one?

M 2.Il Using the fuJKfion impz write a MATLAB pmgram ro compute and plot the impulse response of .a causal
finite-dimensional discrete-lime sy:o;tem characterized by a differer..ce equation of the form of Eq. (2.8 1). The input
datu lo the prop-illll are the desired length of the impulse response, and the constants !Pt.} and {dt j of the differenc~
equation. Generare and p~ot the first 41 samples of the impulse re>pon,-;e of the system of Eq. (2.93).

M 2.12 Using Program 2_7, .determine the autocorrelation and the l.TOSs-correlation sequences of Pr-oblem 234. Are
your results the ~arne as those delermined in Problem 2.74"'

M 2.13 Modify Program 2_1 \o determine the autocorrelatiOfl sequence of a sequeoce <."'OI"rup!ed with a uniformly
di~tributedrandom signal generated using theM-function randr.. Usmg tlle modified progmm demonstrate that the
autocorrelation sequence of a noise-corrupted signal exhibits a peak at zero lag.

M 2.14 (a) WriteaMA Tt-AB program to generate the random sinusoidal signal ofEq. (2.140) and plot four possible
realizations of the ;andom signal. Comment on your results.
'-b) Compute the mean and vllliam:e of :a~Ingle realization of the above random signal using Eqs. (2,174a) and
(2.!74b). How dose are your answen: to those given in Example 2.44'!

M !.IS Using the M-fuoction rand genenrte a uniformly dh1.ribul:ed !ength-1000 random sequence .in the range
(-I, I}. Using Eqs, (2.174a) and (2,174b), compute tliemean and variance of the random signal.
Discrete-Time Signals
3 in the Transform Domain
-·----------------------------------------------------------------
In Section 22.3 we pointed out that any arbitrary sequence can be represented in the lime-domain as a
weighted linear combination of delayed uni.t sample sequences {lifn - k H. An important consequence of
this representation, derived in Section 2,5.1. is the input-output chaTacterization of an LTI digital filter in
the time-domain by means of the convolution sum describing the output sequence in terms of a weighted
linear Combination of its delayed impulse responses. We consider in this chapter an itlternate description
of a sequence in terms of complex exponential sequence-; of the form {e-i<=J and {z- 11 \ where z is a
complex variable. This leads to three particularly useful representatio11s of discrete-time sequences and
LTI discrete-time systems in a lransform domain. 1 These transform-domain representations are revie\ved
hen:: along with the conditions for their exif;tence and their properties. MATLAB bas been used extensively
to illustrate various concepts and implement a number of useful algorithms. Applications of these concepts
are discussed in the folJowing chapters.
The first transfonn-domain representation of a discrete-time sequence we discuss is the discrete-time
Fourier transform by which a time-domain sequence is mapped into a continuous function of a frequency
variable. Because of the periodicity of the discrete-time Fourier transform. the parent discrele-time se-
quence can be simply obtained by computing it~ Fourier series representation. We then show that for
a length-N sequence, N equally spaced samples of its discrete-time Fourier tTansform are sufficient to
describe the freque-ncy-domain representation of the sequence and from these N frequency samples. the
originai N samples of the discrete-time sequence ean be obtained by a simple inverse operation. These N
fr;~cency samples constitute the discrete Fourier transform of a length-.N sequence, .a second iransform-
domain representation. We next consider a genern.liza.tion of the discrete-time Fourier tran~fonn, caned
the z-transform, the third type of transform-domain representation of a sequence. Finally, the transform-
domain representation of a random signa! is discussed. Each of these representations is an important toot
in signal processing and is used often in practice. A thorough understanding of these three transform.<; is
therefore very important to make best use of the signal processing algorithms discussed in this book.

3 ..1 The Discrete-Time Fourier Transform


The discrete-time Fourier transform (DTFf) or. simply. the Fourier transform of a discrete-time sequence
x(n J is a representation of the sequence in terms -of the cDmplex exponential sequence {e-Jwn} where w is
the real freque.'lcy variable. The DTFT representation nf a sequence, if it exists, is unique and tbe original
sequence can be computed fcom its DTFT by an inverse transform operation. We fust define the forward
transform and derive its inverse transfonn. We rhen describe the condition for its existence. and summarize
its important properties.
1Periodic sequences can be represente<l in the fnx;uency domain by mean~ of adi.sereu~ Fourier series (see Problem 3.34)

117
11 B Chapter 3: Discrete-Time Signals in the Tra~sform Domain

3:1.1 Definition
Th•~ discrete-time Fourier transform X(eJ"") of a sequence xfn] is defined by

=
X(ej"') = L x[n]e-1""'. (3.1)
n=-=

In ;~eneral X{elw) is a complex function of the real variable wand can be written in rectangular form as

X(eiw) = X,.,(t'l<->) + jX;m(el"'), (3.2)

where Xre(ei"') and Xnn(ei"') are, respectively, the real and imaginary parts of X(ef"'), and are real
furcetions of w. X (eim) can alternately be expressed in the polar form as

where
B(w) = arg{X(ei"')]. (H)

Th! quantity IX(ejw)l is called the magniludefunction and the quantity O{w) 11> called the phase function
wi1h both functions .again being real functions of w. ln many applications, the Fourier transfonn is called
the Fourier spectrum and, likewise, !X (ei"')l and tJ(~) are referred to as the magnitude spectrum and phase
spectrum, respectively. The complex conjugate of X(ei"') is. denoted as X*(ei<£J}. The relations between
the rectangular and polar forms of X(eim) follow from Eqs. (3.2) and (3.3), and ace given by

Xn:(el""J = IX(el"')!cru; 6l(w),


X;m(ei"') = IX(ei"'-')lsin O(w),
jX(ei"')j 2 = X~(ei"') + x;.,(ei"').
Xim(ej"')
tan8(w}= ..
· X~"<'{eJ"')

It can be easily shown that for a real sequence x[n], IX (ej"')l and Xre(ei"') are even functions of w.
wb.~ B(w) and X;m(ej"') are odd functions of w(Prublem 3.1).
Note from Eq. (3.3) that if we repl.ace &(w) with 8(w) + 2:rk, where k is any integer, X (el"") remains
un;;:hanged implying that the phase function cannot te uniquely specified for any Fouriertransfonn. U:1Iess
oth.~ise stated, we win assume that the phase function f>l(w) is restricted to the following range of values,

-Jr ~ B(w) < n.

called the principal value. As illustrated in Example 3.8, the discrete-time Fourier transfonns of some
sequences. exhibit discontinuities of2.Jr in their phase res-ponses. In such cases, it is. often useful to consider
an alternate type ofphase function that is a continuous function of w derived from the original phase function
by removing the discontinuities of2n. The process of removing the discontinuities is called "'un .....rapping
the phase'" and the new phase function win be denoted as Oc(Go) with the subscript ''c~ indicating that it is
a ccnlinuous function of w. 2
We illustrate the DTFT computation in the following two examples.

2 In some cases. discontinuities of~ m"-Y still be pre,em aft« phase unv..npping {see Table 3.1 foc an example),
3. 1. The Dfscrete-Time Fourier Trans:orm 11 ~

,---:- -- --~--- ----,


, /'\ i'\ f\
~ 1/ ', I \ ; \
J·r' \\./ \.~~
0
i
! \ ! ' !
/
-{)5: J
\_/
~
\JI
-2" "" a 1t
NormalizM ~uenc~

(a) (b)

Figure3.1: Magnitude and phase of X (efw) = 1/( 1 - 0.5e- jw)_

It should be noted here that for most practical discrete-time sequences, their D'IFrs can be expressed
in tenns of a sum of a convergent geometric series wh:id: can be summed in a simple dosed form as
iilustrated by the above example. We take up the issue of the convergence of a general DTFT later in this
sel:tion.
As can be ~n from the definition and also from Figure 3. I, the Fourier transform X (ej"') of a sequence
x[n] is a continuous function of m. It is. ruso a periodic function inw with a period 2.Jr. To verify this latter
property observe that

n=-oo n=-:oo
=
= L .x[n]e-.i"'t" = X(ei'"'t).
n=-oc.
120 Chapter 3: Discrete-Time Signals in the Transform Domain

Jt therefore follows that Eq. (3. J) represents the Fourier s<!ric--. rep:e~nration o( the periodlc function
X (eju>J. As a f('•ml£, the Fourier coefficients x!n I can be computed from X (eJa.") using li'le Fourier integral
given by

x[n! = -lrr1- /:r


_,.
X(el"')e-""'" dw, (3.7)

cal':ed the in;wrse discrete-time Fourier transfonn. Equations (3.1} and (3.7) cc>nstitute a discrete-time
f'Qurier transfonn paic for the set:uence x[nJ.
To verify that Eq. (3.7) is indeed the inverse of Eq. {3.1 J w<! suh:.titute the expres..<~.ion for X (el"'} from
Eq. {3.1) in Eq. (3.7) arriving at

x[nj = ~ '!"(=2::
2n: -:r
- i=-=
··)
xil]e-J<'-f

The order of integration and the summation on the right-hand ~ide of the above equation can be interchanged
if the summation inside the brackets converges uniformly, i.e., if X (e'w) exists. Under this condition we
get from the above

~ow,

sin:rr(n-f)=!L n=l,
K(n-e) 0, n#.t,
=8fn- fl.

Hence,
~= =
L x[t]
sinrdn- f)
' = L xLE}O[n- t] = x!n],
f=-= rr(n - f ) f=-a..

using the sampling property of the impulse function.

3.1.2 Convergence Condition


Nc·w, an infinite series of the fonn of Eq. (3.1) may or may not converge. The Fourier transfonn X (ei"')
of :r[nJ is said to exist if the series in Eq. [3.1) converges in some ser.se. If we denote
K
XK(ej"') = L xlnk-J"m, (3.8)
n=-K

lhen for uniform cmwergence of X (e..'"'), the absolute value of the error ( X(ejcv) - XK(eiw)} must ap-
proach zero fo: each va!ue of was K approaches oo, Le., -
3. 1 . The Dlscrete-Time Fourier Transform 121

Now. if x[n J is an absolutt>lv summable sequence, i.e .• if


~

L lx[n]l < oo, (3.9)


n=-=

IX(ejm)j = l..tx x[n]e-fwnl.:S "~~ Jx(nJI < oo

for all values of w. Thus. Eq. (3.9) is a sufficient l."Qndition for the existence of the DTFr X(elw) of the
sequence .dn]. Note the sequence x[n-] =a" Jl.[n] of Example 3.2 is absolutely summable as

= 00 l
L !a"lttlnJ = n=O
n=-=
L ia"l = I !al
< oo,

and its Fourier transform X (el"') therefore .converges to I ((I - ae- i«') uniformly.
Since

an absolutely summable sequence has always a finite energy. However, a finite-energy sequence is not
necessarily absolutely summable. The sequence x 1 [n J of Example 2. 7 is such a sequence. To represent
such sequences. by a discrete-time Fourier transform. it is necessary to consider a mean-square COIW£rgence
of X (ei«>), in which case the total energy of the error (X(ej"') - Xx(eiw)) must approach zero at each
value of was K goes to oo, i.e ..

(3.10)

I
In such a case, the error j X(efro) - X K(efw) may not go to zero as K goes to oo, and the DTFT is no
longer bounded, i.e.• the absoll.lte sumrnabilitycondition ofEq. (3.9)doesnothold. The following example
considers soch a sequence.

K!l'
122 Chapter 3: Discrete-Time Signals in the Transform Domain

HtJ>(f'.}:n)

• '
'I I I
m, IDe •'
ffi

Figure 3.2: Frequency response plol ofEq. (3.11).

-~-

The mean-square convergence property of the sequence hLp[n] discussed in the previous example can
be further illustrated by examining the plot of the function

(3.13)

for various values of K as shown in Figure 3.3. It can be seen from this figure that, independent of the
number of terms in the above sum. there are ripples in the plot of HLp(ei 41 } around both sides of the point
w = w,. The number of ripples increases as K increases with the height of the largest ripple remaining the
same for aU values of K. AsK goes to infinity, the condition ofEq. (3.10) holds indicatingtheqmvergence
of HLP,K(ei"') to HLJ•(ei 10 ). The oscillatory behavior in the plot of HLP,K(ejm) approximating a DTFf
HLp(ei"') in the mean-square sense at a point of discontinuity, as indlcated in Figure 3.3, ll;. comrnunly
known as the Gibbs phenomenon. We shall return to this phenomenon in the design of FIR filters based
on the windowed Fourier series discussed in Section 7 .6.3.
The DTFT can be defined for a certain class of sequences which are neither absolutely sunnnable nor
square-summable. Examples of such sequences are the unit step sequence of Eq. (2.37), the sinusoidal
sequence of Eq. (2.39), and the complex exponential sequence of Eq. (2.42) wltich .are neither absolutely
summable nor square-summable. For this type of sequence a ®crete-time Fouriertransform reprt>J;ent:ation
is possible by using Dirac delta functions. A Dirac delta function S(w) is a :fu.nct:km of t.U with infinite
height. zero width, and unit area. lt is the limiting form of a unit area pulse function PA(w) shown in
Figure 3.4 as tl goes to 0 satisfying

lim {oo pc._(w)dw = {'x S(w)dw. (3.14)


c.. ..... oJ_co J_oo
The discrete-time Fourier transfonns resulting from the use of Dirac delta functions are not continuous
functions of w.
3.1. The Discrete-Time Fourier Transform 123

,r·'--.__/ \
,------ ~-----·
:.~~u

f~~~.--
·•·: \
·--l .
~._,-,;
\ §-,
',
'
\
'\
. _ _ _ jJ__~ "-.___./------....;
uj -----·- <)2 {'4 Ob {).£
-·~~cm>,h...-0
··~ ""'~""~"
"' N<:m->i>~ hq~oocy

:..~ ,,, N~N

'
1> . . ----/ ,;
"',,
-g [_,{, ~ ~·-!
' I
i
ic\4 =·-~:
I
0 '

< ;
'
'I
l)_~~ --~:
''
td ,, i -~·._r...,--~
'- '
"
0
"'~'-'""''h'"'" "' "'
'" ·~'><'~
J<1gure 3-.3: Frequency rc'{lonse plots of Lq. (3 L3; for variom value~ of N = 2K.

______ L L . - - - - w
""
Figure 3.4: L'nil. an:a pulS-<;: function.

i;f;
124 Chapter 3: Discrete-Ttme Signals in the Transform Domain

Table 3.1: Commonly used discrete-time Fourier lran.'ifortn pairs.

Dlserete-Time Fourier Tnmstonn

L; 2It0(w + 2.d)
k=-'""-

~[n]
'
~-'-=:c.;:; + L= rr;)(w + 2rrk)
e-Jtn k=-oc
=
L- brJ(w- Wo + lrrk}
l=-=

ae i«>

Table 3.1 Jists the discrete-time Fourier transforms of some commonly encountered sequences.

3.1.3 Bandlimited Signals


A fuli-band discrete-time signal has a. spectrum occupying the whole frequency range 0 :5 lwl :5 1r. If the
spectrum is limited to a portion of the frequency range 0 :':. lwt :5 rr, it is called a bandlimited signa). A
lowpass discrete-time signal has a spectrum occupylng the frequency range 0 :5 l.wl ~ wp < 1r, where
Wp is called the bandwidth of the signal. A btmdpass discrete-time signal has a spectrum occupying the
frequency range0 < WL :S !wj :5 WH < Jf, where WH- WL is its bandwidth.

3.1.4 Discrete-Time Fourier Transform Properties


There are a number of important properties of the discrete-time Fourier transform which are useful in
digital signal processing app}ications. These are listed here without proof. However, their proofs are
quite straightfrnward and have been left as exercises. We list the general properties in Table 3.2, and the
symmetry properties in Thbles 3.3 and 3.4.
The following examples illustrate some applications of a few of the properties of the DTFf.
3.1. The Discrete-Time Fourier Transform 125

The expression for the DTFI' given above is a rational function in ei"', i.e., a ratio of polynomials in
ei"'. Tbe two polynomials are each of first order. In the general case, the DTFI's we shall encounter in
this book are ratios of polynomials of higher order and are of the form
P<l + P1e-jw + · · · + PMe-it.<IM
(3.17)
d.:J + dlt' jw + ... + dNe j~»N

3.1.5 Energy Density Spectrum


One important application of ParsevaJ's relation given in Table 3.2 is in the computation of the energy of
a finite-energy sequence. Recall from Eq. (2.26) that the total energy of a finite-energy sequence g(n] is
given by

n=-=
If h[n] = g(n], then from Parseval's rel.at.ion we observe

(3.18)

Thus the energy of the sequence g[n J can be computed by evaluating the integral on the right. The quantity
(3.19)
is called the energy density spectrum of the sequence g[n ]. Tbe area under this curve in the range -1f :5
w ~ 1r divided by 2rr is the energy of the sequence.
12£ Chapter 3: Discrete~ Time Signals in the Transform Domain

Type uf Property Sequence Hiscref.e.. Time Fourier lhmsfbrm

gfnl G(t:i"')
h!n! Htei'"-')

I
ag[n! + .Bhin I
e- ;,,,, G{ej"'!

l-'req~ncy-shifting .,;w .;; g{nj G (eJ(<V-wo))


D:ffcrcntia(i"n . dG{eiw)
ng[nl 1
in frequency dw
Convolution !:lnl®h\nl
Mmlulalinn g(n!hJni

----- ---------- ----------------------------

L gfnW•[nJ =

Table 3 ..\: Symt:Jeuy rda!ion~ uf the di~crele-ttme ro-arier lran.'i-fnnn of a c-omple" sequence.

Sequence Uiscrdc· Time Fourier Transform


-------- ------·---- ---------

--- -------------------- --

Rcf.dnll X,re'{LJ) = ~lXk-''~J + X*(e-1"')1


jlmt:r!n!} Xca(t·.i"") = i-{Xkju.;- .\'*(<'- jw)j

Xrek 1 u'!

Note: X.c.,(ei"'J and X.;...kj'0 ) :ore the conjugale-sym~ctric and cn:lju_g.ate-anti:;.ymrn:etric


parts of X re l'"), respectively. Likewi~e. Xc~!n j <lf'.d .1'.;:a\1l 1 are the conjugate-s.vmmctrk and
conjugalc-.antisymmetric pans of x[n t respectively .
3. i. The Discrete-Time Fourier Transform 127

Table 3.4: Symmctr;.• relulions of the C::~rete-Ln,e Fourier tran~fonn of a real seo;;:uence.

Sequence DiM:rctc-Time Fourier Transform

.r.-,[n~ X:d<"jw!
x,.,u[ .., l iX;,(eim;

Symmetry relation,; Xim\<'>'1 ) = Xmlk-j"')

IXIe'"<Jli = IX(c-J"'JI

Nmc· x.,_, fn l and ·'cd [r.] denote the even and odd paris nf xln j, respecll,,.dy.

Recall from Eq. (2.l05) that tbe aulocorrelation sequence rgg(ll of g{n] can be expressed as

rg!f] = L g[r.Jg!-{t- n)] = xi.CJ@g[-f]_ (UG)


~=-=

Now from Table 3.3, the DTIT of g[-£1 is G(e- 1"'). Therefore, using the convolution property of the
DTFf given in Table 3.2, we observe that the DTFT of g[t:']@g[-£] is given by G(ej'"'}G(e- 1"') =
IG(ef"-')1 2 , where we have used the fact that for a real sequence g[n], G(e-i"') = G"'(el"'). As a re!'ult,
the energy density ;;pectrun; Sx 5 (eJ"-') cf a n:al sequence g[n] can be computed by taking the DTFf of its
autocorrelation sequence rR.<: Ill, i.e.,

i=-=
'
" lf]ef<d' . (3.21)
128 Chapter 3: Discrete-Time Signals in the Transform Domain

Analogously, the DTFT Sgh.~ei"'-') of the cross-correlauon sequencergh (t] oftwo sequences. g[n] and
h[nl is called :he cross-energy density spectrum:
00

Sgh\e1 '") = L Tgi;[fjejwi'_ (3.22)


f=-00

3.1.6 OTFT Computation Using MATLAB


The Signal Processing Toolbox inMAn A.Si:ldudes anumberofM-files to aid in the D1Ff-based analysis of
discrete-time signals. Spec:ifit:a.lly, the functions thai can be used are f:::-eqz, abs, angle, and :a: wrap.
In addition, the built-in MATLAB fuoctions real and imag are also meful in some applications.
The function fre:::;;:z can be used to compute the values of the DTFT of a sequence, described as a
rational function in efw in the fonn of Eq_ (3.17) at a prescribed set of discrete frequency points w = Wf.
For a reasonably accurate plot, a fairly large number of frequency points should he ~lecled. There are
various forms of this function:

H ~reqz (n-Jro,der.,v..-), H = lreqz(num,den,f,FT},


[ E, Vi] Ereqz (n'-lrr.,den, k), freqz (num, den, k, ::CT) ,
IH.w] freqz (n;Jrn, d2n, k, 'whole') ,
i H, E] freqz(r:.--.lm,den,k., 'whole' ,F'I), freqz(nu_m,den) _

The function freqz returns the frequency response values as a vector H of a DTFf defined in terms
of the vectors mL'n and der: t:ontaining the coefficients {p;} and {d; }, respectively, at a prescribed set of
frequency point~. In H ~- freqz: r::um, den, \d). the prescribed set of frequencies between 0 and 2x are
given by the vector w. In H = f reqz I nu:n, den, f, FT' ~ the vector f is used to provide the prescribed
frequency points whose values mmt be in the range 0 toFT ;2 with FT being the sampling fiequency. The
lotal number offrequenc:· points can be specified by kin the argument of freqz. ln this case. the DTFT
values H are computed at k equally sp-aced points between 0 and n, and returned as the output data vector
w or computed at k equally spaced points between 0 and :CT /2. and returned as the output data vector f.
For faster computation, it is recommended that the number k be chosen as a power of 2, such as 256 or
512. By including 'whole' in the argument of freqz. the range of frequencies becomes 0 to 2n orO to
FT, as the case may be. After the DTFT values have been determined. they can be plotted eilher showing
their real and imaginary parts using the functions real and imag or in terms of their magnitude and
phase componer.ts using the functions abs and ang 1 e. The function angle computes the phase angle in
radi1ms. If desired, the phase can be unwrapped using the function unwrap. freqz ( nur.,, der. l with no
output arguments compules and plots the magnitude and phase response values as a function of frequency
in the current fi.15ure window.
We illuslrate the DTFT computation using MATIAB in the following example.
1
'

% Yitt&tf X?'· '\JL&. dinG \ rtLtf "'hFIH\iiiT )\ ; if ;f\\t'tp


z "' 1 f'<tf<~ t r · 'Mttrut:!ID r: o ¥ ;; rwt;;tHM:!ICV t:m+ (1 t..'fl j {
9 :!V¢Hrd lr; t;{)i!fr 10\\11-J\H w"~iA
A!J 101 t fiiitti{ t:. f "'!ilf1V0t.q :i/IH f{n; \\10ft i; 4A ;;; +ilil!T\'t.B "
3.1. The Discrete-Time Fourier Transform 129

' ''
.'l<n ·-~·" \ \ ..n (
,;;,,, 1. --"'}!'!

H '' "' {.
···;" :_;; "":.r ..::. ;1 ,;;

• 1: " ; '"- •

3.1.7 linear Convoiution Using DTFT


An important property of the DTFTis given by the convolution theorem in Table 3.2, which states that the
DTI-T Y(ei"') of a sequence y!nl generated by the linear convolution of two sequences, gl_nl and hfnl, is
simply given ~y the product of their respective DTFTs, G(ei'"'} and H(ei""). This implies. t:bat the linear
1:30 Chaoter 3: Discrete-Time Signals in the Transform Domain

//""

---~~
\ !
\1
-) ·--
"

u'·- -
c
"'
(c) (d)

··igure 3.5: Plots of the real and nnaginary parts. and the magnitude and phase spectrums of the D1FT of Example- 3.8.

_,
0 o.o

.Figone 3.6: Unwrapped pha.s<: spectrum of the DTFr of Example 3-.8.

;::onvolutwn }ln; of two ;,equem:es, .J?fn J and h[nl. can be implemented by computing first their DTFTs,
(J(e}""J and H(e 1 ""), fcnning the product Y (e 1"') = G(ej'"-')H (ej""), and then computing the inverse DTFr
of the product. in ;,nme appli~.:atioos, particularly in theca~ of infinite-length sequences, this DTFI-based
appmach may be more com·emcnt to carry out than t.lte direct convolution.
3-.2. The Discrete Fourier Transform 131

3.2 The Discrete Fourier Transform


In the case cf a finite-length sequence xjn]. 0 :S n s l'i - I, there is a simpler relation between the
sequen..:e and its discrete-time Fourier transform X {eJ«>;. Jn tact, for a. length- N sequence. only N values
of ·x (e.i 0 ) . caHed the fr~quem:) .1ample.s, at N di:;.tinct frequency points, u..> = wk. 0 ::::_ k _:::: N - 1. are
sufficient to Cetermine x[ n ]. and hence, X (eJ"'), uniquely. This leads to the .concept of tlk di.-:crete Fourier
transfo-rm, a second tiansform-domain representation that is applicable only to a finile-!englh sequence. Jn
this section we define the Jiscre!e Fourier trar.sform, usuall~· known as the DFT. and develop the inverse
transformation, often abbreviated as IDIT. We then summa:ri.ze its major properti:m; <~nd study especially
!woof its unique properties. Several important applica:Jons of the. OFT, such as the numerical computation
of the DTFf and implementation of linear convolution, llre also discussed he.re.

3.2.1 Definitjon
T'le s.impleM relation between a finite.- length sequence x!n j, defined tor 0 .::S n :5: N - I, anc lls DTFf
X (el''') is obtained by uniformly sampling X (ei"') on the cu-axis between 0 :S w ::= 271" at mk = 2nk/JY.
0-:::; k ::::_ N"- I. FromEq. 0.1),
N-!
X[k] = X(e'""·l:- w=2"'kiN = ~ xtnle-J~ ..,.kn/N.
L - · O.:::;:k.:::;:N-1. (3.23)
n=~

Note that XLKJ is also a fin-ite-length sequence in the frequency domain and is of length N. The sequence
X!kl is called the discrete Fourier transform (DIT) of the sequence x!nJ. 3 Using the commonly used
notation
WN = e- j2l</N' (3.24)
we can rewrite Eq. (3.23) as

N-l
X[k] = L x[nJW1",
,.,.{)

The inverse discrete Fourier transfonn (IDFI) is given by

] N-l
x[nJ = N L Xfk]W1~kn, (3.26}
k=fl

To verify the .above relation \'ie multiply both sides of Eq. (3.26) by Wf/' .and sum the result from n = 0
ton = N - l, resulting in

.'\'-l N-1 I j N-J '

~ x[n]W,~" ~ ~ ( N ~ XfkJW,~'")
l N-lN-1 .
= - ~ "'""'XikJH'-{r.-tm (327)
N ~~ - N .
k =0
n.,...J)

~ ". g.-nerahzatioo oj' the


Dvr concept ~~ '.he n<>nuniform duscl'i!U /'"nurh•t i>"<llbform (XDFT) obtained by -.a:mpling the DTI"·T at
ll<muntfnrmly '>\)lKed trequetH:y p<:>rnh [lhg%1. Tb" NDIT i~ illvc;;ugarcd m Problem 3.109.
132 Chapter 3: Discrete-Time Signals i11 the Transform Domain

An interchange of the order of summation on the right-hand s.1de of Eq. (3.27) yields

The right-hand side of the above equation reduces to X[ t J by virtue of the following identity (Problem 3.30):
N-i
~w-\k-fln_IN, fork-t=rN,rani.nteger,
L... N - 0, mherwiM:. {3.28}
od>

thus venfying 5q. (3.26) is indeed rhe IDFT of XlkJ.


The DFf computation is illustrated in the following two examples.

w III.
"Z"
J\ w.
:I'M +·
Mtleslkt\0
!¥ -· 1''
3.2. The Discrete Fourier -;-ransform 133

A>, t'llll be <.;een from F..qs. (3.25) ancl {3.26), (he compu!ation of d:.c DFT and th~.; IDFT require;., respec-
nC:y. appnnimately :V 2 complex mu~tiplicwions and N1 :V - l l complex additions.. HowcH~r. elegant
met!tods bavc been deveioped to reduce the compufatJorwl ;:omp!exity te about :'V{Iog 1 N) operations.
Thcc.c lt~dmique:- Jrc usually calkd fa1<>t Fourier transfnrm (fFf) aigorirhms :mC arc di<.;eus~d in Sec! ion
R_ \.2. As a rc,u!; ~·f !he '-1'1.-':ti!al:>itit) of lh.:-"Se- faSI -a!go1 iLhm:.. the DFT .Jnti the iDFr. w:d their variations..
an: of1cn :.;~d in dig ita! :.ig_na; proce~smg applLcation,; fm variou,.: purpose,;_

3.2.2 Matrix Relations


The DFr .~ampks dd;ned in Eq. (3.25) can he expn:>,-,c,.:J in matnx form as.

(J.::\'7)

,,·here X is the vcdor compnscd of lhe IV DFT samples.

X~ [ X!OJ X[IJ X!N- llJT,

x i:, !he vedor of N inpJt samples,

x=lxiOj xill
<!nd Ds i>. the N x ,y DfT u;a!n\- given by

D_~,,
r:
I I
w,v'
w;,.
lh}
W-'
.\
W
N-l
N
l

w2<:V--I)
A (3.40)

I A'--l 2'.N-1J iN-1\x(N-i;


l! WN WN WN
LJkewise, the JDFT relations can be expre-.~sed in matrix form a<;

r... xtDj
xllJ
X[O]

J
X [II
i (3.41}

L xiN-!J X(N-11

w_,--·
~r;;-~
. {3.42)

, 11 -:N-i!
n_\'

lr fol;nw;. from F....;:;;. 0.40) and 0.42) thdt

(3.43)
134 Chapter 3: Discrete-Time Signals in the Transform Domain

3.2.3 OFT Computation Using MATLAB


There are four built-in functions in MAru.B for the computation of the DFT and the IDFT;

f f t {x}, fft(x,NJ, ifftiX), i.Et(X,N)

All of these functions make usc of FFT algorithms which .are computa.tionally highly efficient compared
to the Jirect computation of DFT and the i1werse DFT.
The function f f t { x) compute.~ the R -point DFf of a vector x, with R being the length of x. For
computing rhe DFT of a specific length N, the function Eft t x, N) is used. Here. if R > N, it is truncated
to the first N samples, whereas, if R < N, the vectorx is zero-padded at the end to make it into a length-N
seq.rence. Likewise. the function iff t ~X) compute;;. the R-point IDFT of a vector X, where R is the
length of X, while iff t {X, )l} computes the TDFI of X, with the size N of the lDFf being specified by
the user. As before. if R > N, it is automatically truncated to the first N samples, whereas, if R < N, the
DFf vector Xi~ zero-padded at the end by the program to make It into a lengrh-N DFT sequence.
In addition, the function :5.f LIJ.tx (N) in the Signal Processing Tonlbo.x of MATLAB can be used tc
compute the N x N DfT matrix DN defined in Eq (3.40), To compute the mverse of the N x N DFr
matrix, one CUll use the function co:-.j l.dftmt {N\) /N.
We illustrate the application of the abo\"e M-files i:n the following three examples.

¥:- 1!'¥'\fY\87\"F>
'8 :LLLill&:LkUX AS'.t; .f J:sf'! ::;; ;;11;! tLm-\ t<>ti
t
llli 1m\:tttl Lrt Li:tit ,; tQt}Yj \ h t0 :::::; 0 010/'$;: Lf>~k' ILL<; ; .>"" '"'X \ ¥ +nf
'
It s HJ,t;t;t t 1 t L t l\D \ +!Gt)d; {\ /l t LL* "'"· · r. Hf 'v ~& i 0

4& " 1: f\0\J!. f r iT r f.pr ; A:l\."{R !'. Jt \.1\K J•"f""-


4 ·yq:.Ltl:t4i'!i:"' t.tr¥ f<-.·-..1i>·H + ;1ili0 -.ks;;:nt: h'"";Ht.(L'1i:
', '· 1 <"'Vi& 1: 1H! ~ 0

{\ ,•n':'}<'l:P }\.::1 !'!>!.< .. { \'.f!';"


• tY p;.fft:

d'S k0\ j 't


':.::: t iii ! .{; t ;n P & ,. ':. : wx· ··' t· 11''0:1 1:r :sr:p;::;'f:;snt::v. ·
q";;A'&FF} "'f.::,J!US :[t,'[,qy rf 0{=}. { ,,•1:" \

HHCI<<'' '" t ::: ' L lj


J : : , t ; ; ) l •.
"t 4'111\ r Y. N{"-" ;<r
3.2. The Discrete Fourier Transform 135

Origi;;al time-dornmn s.:qcence

Time indet n
(a)
Magnitude of the DFf $ilmples
10 2,-----~--~-----

~
• '
6
·o "
~ 4

" 2

0
9
I r Q Q . f f -'
-2
1
'-------,:------:c:-----___j
0 s :o 15 0
Frequency index k
(b)

Figure 3.7: (a) Originallength-N sequence of Eq. (3.44) and (b) its M-poinl DFI' for N = 8 and M = 16.

t!Hi:i'Ap lt/t ';:;_, {, ;vr


/(tAN0{\t j;j} {
t': jT'.r%it;;li;+'1i'P n$
+10\klii:f\ { ilii 'I v


0 1 %:. hfl L Ltd' 3:- Yth 7 U:H"'T \\'JIIQ!fhfif· 1\jf'
•'0 T'!l'»ti in tJt1!1' XtLrv;;i ::.::: 1:: ert x.tf:,s ~ o~u•a t.tb0fr
% r s-tt:;n:<>. 1# +r- ru:v·w
1C •• ; f<\1'4 C { :~:;:
/4 117ZHJ t.f lc: 3
'If <;e.1hm Y d\;; ¥ \ fHP ,f"0t:rt:1:;~¥,
if Y;;«,
t) ~ i!s<t ; J }(;.
136 C hapter 3 OISctrete-Time S.g"'tals 1n tfle Tr.fU'\5form Domain

?J Coq;.u t~ 11.:.9 N-poin t I rn·""T


u • if.ft:W. I ;
Plot. the DF'f .o.nd i ':".~ 1 OF'!'
: 1~···
!lt:em 4A-1,U.
x1abe1 4 • f~e<JU ncy iadex '}J Jlabel ( • hJl"C,lj t.;uoc:~· J
~itle£'Origin~1 OFT ~~lee·~
p~u!Se

sUbpl ot c?.. 1 . ~ 1
n • G: l ~ N-1 :
::~ ~:: £n . !rea 1 ( u J )
ti r.:.l <'Real Pilr of the time rlon~in ~~rr~l~~·)
l<1bel r "T-~ in- x n• ) ! ylabe!(•~iit~de')
ttul:Jplotl1.1,2•
atem~n . .1 g4u~ l
t:.itlet 'l -gJ.n.,r..,r p.'lrt of ':"toi'J" ti.:n<• -c.ci'Ttl!lln !l.sr~l~!S •J
xl~1C·~i~ 1ncir-x n'~r y ~be1(•A=pli~ude•)

A,o; ~ iPJDIT1liD u fur t.bl! mpul daD L:Ui.tiDIJfig CJf ttJt l tgth ur ~ DFf td lhc: l~glh of Lhi:- JDfT. h
II'Uil. ll crtlho
~he!! tOOmpurt;~ tilt
IDfT of the mnp DFi ~e of Eq. ll.4;5l 1nd pJnto~ c: oritiJl I :OFT ~eru:~: i~ IDFT
~ ill FiJift :!.&. Note l1uu e'\'Cn U!ou~h dx OPT &eqUt'rv;;l{: i1 rul. · s IDPT i"' .. I;Q(]'Ip'J~ uroe-rJnm:~.in
scquc:OQC u C:Jpcgcd.

3.:1 l...et .r :::11 ') 1111d N = I C1 ftll' tb~ fi11 ~ -1-!:!~th 'eqliCilQ: c In I nf F.q.. I ~.}..l.. mm &j. t :k'\6) 1L.,
16-PQinL OPT i tfle.refure gi'R!fl hy
i, fnrl\" • J •
.rt~ J = { 1. Cud: = 13,
V, lldle.r.o.ne.
We tklem\1~ me DTFT X (eJ•) ul the k~l.b-16 .... , ~· t.:O.Il'IIJ'IW i.l 512-put..'l[ OJ-"! ll~l"'l!; lh£ MATLA.B plO.!:;.I'Ilm
inditwed below'.

' Hoq1am 3_4


t; ~ • ri.;~l (;o.~~,;t..- t. ;.o n of DTI-'T U~ir.g 1)':"'J'

't
1c; •
Genero!it<e tile lenqth·l6
(1:15;
sinusoidal aE'Q'Uet!'('e

~a cost2*pi~k•l /16) ;
I to~put.P it.~ 512•po&nt DFT
)( • f f t Uti;
XE rtc.Cx,'512•r
Plot th ·reqaency r~sponse
L .. o~sl :
plot(L1512,a~e(}.~J I
hold
p!o'l: ( k I · 6, abo!: .i x > , • n • 1
xlabell. UO.t.l:!ai i2:ed l!l:'tQUlAr r~ :]UC11c:''f' f
yl~l I' Ma t'l t:ude•'

F~ 3.9 :sbo lht pJue uf Dl'T'T X(, "'') "" ~ill:! lh¢ UfT ~:t c~o .%'~ 1 A\ t:fldlc ed Jn 11tlS I&&Uft' IllY
OtT '11lut1 A"{.k]. 1nm~d by ci:rl;leG, are I'C s;dy the ftatuclll.")' \:truples u1 l.la! rrrrr )L ,,... ) nil Q; . . IT tIS.
0 :5o* .:S 1:5.
3.3. Rela1ion between the DTFT and the OFT, and Their lnverses 137

,,_ Origmd DFT :;a:nple~

0.31 ? j
~20fi '? I J'

~~:lJ-, __f_I· -r·


0 I 2 3
lJ l
4
Freqtrency inde'< A
5 fi
-
(a)

lmagirurry pan of the tune-domain sample~


·-- 0.2-

0 Q .
T ------o-~
6 I
"'
4
-~.-~,-~10~-
t
Time index n
(b) (c)
FiguTe 3.8- (a) Or.ginal OFT sequence of Jengtl: K = 8, and (b) Its :3-point IDFT.

Figure 3.9-. The nugmtudes of !he DTFT XkJ"-') and the DJ-<T K[k] of the sequenci.'! x[n] of Eq. {3.33) with r = 3
anJ N = 16. The DTFT i~ plotted a:- a solid lint: and the DFT samples. a-re :>hown by circle-s.

3.3 Relation between the OTFT and the OFT, and Their
Inverses
\Ve naw cx.amine the explicit relation between the DTf<T <:.nd the N -point DFT of a length-N sequence,
and the r-elation between the DTFr of a length-M sequence and theN-point DFT ob:ained by sampling
theDTFT.
138 Chapter 3: Discrete-Time Signals 1n the Transform Domain

3.3.1 DTFT from OFT by Interpolation


As ind~cated by Eq. (3.23), the ,".'-point DFf X[k] of a length-N sequence x[n] is s;imply the frequency
sample~ oflts DTFf X {el"') evaluated at N uniformly -;paced frequency poinls,w = Wk = 2::< kl N :cadians,
H _::: k _:::: N - 1. Given the N-point DFT X[k] of a Iengtb-N sequence, it is also pos~able to determine
its DTFT X (ej'~) uniquely. To thts er.d, we fust det!:rrnine xln I using tbe lDFT relation ofEq. (3.26) and
then compute its DTFr uslng Eq. (3.1}, resulting in

N-1 X-J
= ~i ~ X!k] L e;hkniN e- jmn, (3.46)
k=O n=ll

wlwre we have used Eq. (3.24). Kow the Iight-hand summation in the above expression can he rewritten
as

. sin ( 0-'N ;_ 2 "")


e Jtlcw\' :!r.lc)/2/\'] sin ( "'N2Jirlc)

(J.47)

Substituting Eq. (3 47} in Eq. (3A6} we obtain the desired relation expressing X (el"') in terms of X[k]:

. I N-t sin("'N 2,.-k)·


X(el'"J- _ """'X[i] 2 . e J!w-(2-"kf,V)If-(N-l)jlj (3.41!)
- - •\' k=O
L .- (wN2X2rrk)
:-.Jn

3.3.2 Sampling the DTFT


CoEsiCer a sequerice [x[n l} with a discrete-time Fourier transform (DTFf) X (ei"'). We sample X (e!"') at
N equally spaced points W.t = 2.nkj N, 0 __:: : k :S N - I, developing theN frequency samples {X (eJ'-'' )]_
The>e N frequency samples can be considered as an N -po:int DFT Y[kJ whose N-point inverse DFT is a
len~.th-N sequence {y(n J}, 0 .::;: n :S: N - L
:~ow, X (ejw) is a periodic function of w with a Fourier c,eOe.s re-presentatiJD given by Eq. (J.I ). Jt:.
Fourier coefficients x[n} are given by Eq. (3.7). It is instruct1ve tu develop the relation between .x [n] and
y!n;.
::crom Eq. (3.1).

Y[kl = X(ei"'*) = X{ei\2-'fkf_•\t)) = L .x[£]W~,!, (3.49)


~=-=

whe·:-e W,v = 2
e-J( "/Nl_ An inven.e DFf nf Y [kJ yield<:.

N--J
y[rq = -'2: Y[kJW.-:-"-".
. (3 50)
N ·'
k=O
3.3. Rela~ion between the OTFT and the OFT, and Their Inverses 139

Substituting Eq. {3.49) in Eq. (3.50), we get

1 N-J ::c
rl " " X[ 0) wNkl W',~,tn
yn~NL..L..' '
k=O .f.=-oo

= ['.\'-]
~~= x{£J N ~
J
W_,Vk(n.-£; . (3.51)

Recall from Eq. (328) that

N L
N-1
"'""'w-k(n-r)-
_.!:._
N -
I 1, forr =n +mN,
0, otherwise.
(3.52)
k=O

Making use of the above identity in Eq. {3.51 ), we finally arrive at the desired relation

=
y(nJ = 2.:.: xfn -t-mNJ, 0.::;:: n :S N- I. (3.53)
m=-=
The above relalion indicates thaty[n] isobtainedfromx[n} by addi:ng an infinite number of shifted replicas
of x[nJ to x[n], with each replica shifted by an integer multiple of N sampling instants, and observing the
sum only for the interval 0 s n :S N- 1. To apply Eq. (3.53) to finite-length sequences we assume the
samples outside the specified range are zeros. Thus. if x[n] is a finite-length sequence of length M less
than or equal to N, then y [n J = x in J :Or 0 :S n ::::; N - l, otherwise there is a time-domain aliasing of
samples of xlni in generating y[n], andx[nl cannot be recovered from y[n], as illustrated in the following
example.

3.3.3 Numerjcal Computation of the DTFT Using the OFT


The OFT provides a practical approach to the numerical computation of the DTFf of a finite-length
sequence, particularly if fast .algorithms are available for the computation of the DFT. Let X (ei"') be
the DTFT of a length-N sequence x[nJ_ We wish to evaluate X(ej«>) at a dense grid of frequencies
Wk.= 2:rkjM,O:::; k.::;:: M- l,whereM >> N:
14C Chapter 3- O;screte-Time SPgna!s in the Transform Domain

,,. -I A-
L-•:[nk-i'V' = I.:
''='-'
D.::iln<.:: J. n~~v, SCcJUl:!KC r, l•tl ,_lbtained hon· _\jnl hy «ugmcmmg With,'¥/- N zero-v:.;lued o.:;mpk>:

O:::_n.:::_.V-1.
.v .:::_If ,.'yf-1_
n ss~

MaJ. Jng u,;e of .t,.: 11] in Eq. I _{.54) wo.: arri\-T .at
JH I
X\ei'''') = L t,.J~:Ie r>dn.M
"=0

which L~ seen to he an M-point DI--T X,.(kl o-!' :he length-A! '>Cqucncc x,_-[nj. The DFT X,.jkJ c.;w be
cumputcJ very efliciently u:-;ing the fi-T algorithm if M is an i.:-ll::g'i:r power of 2.
·nlC r..-t,,.TJ.AH function f ~ eq:.:, described in Se\:IJ<m 3.1.6. employs !he above ;;:ppwach to evaluate
the ~'requency :t'>pon:-.e of :1 ra.t'~<mal DTFT exprc>.s<:tl as a rmi,mal function in e- ;,,, .at a pre:-cnbed '"'' of
dJscretc frL""ql!C!K·ie'-. ]t compute~ thl..' DFI\ of the rmm<Orator ;md the dcnqr:ninatorseparate!y by consiJenng
e.::.d: a~ Imite length setp.:en.;:e;,, .and lhen c:»prcsscs the ratio of the DFT samples at each fn:quem:y puu:t
to C'ia]uatc the DTFT

3.4 Discrete Fourier Transform Properties


Like· the DTFL the DFT abu ;.ali~fie;, a numher of nropertie;; !hat are useful i11 signal procnsmg appli-
cations. Some of t!Je,c pwpcrticc. are es~e:llially idcnt;cal to those of the DT}--<T, while ,ome otho:n are
-.;unewh:.H different. A summary of the DFf propertic~ <1rc induded in Table,; 3.5, 3,6, a:1d 3.7. Tb:ir
p;·oob ;1ro.: again gmte stra:ghtforward and have been !efl <IS ncn:ises. Most of the-.c pmpertie" can abo
he v~riiied U'-:.ing M -\1 LAR. We dHcuss next those pmpcrtie~ that are different from their countcrpar.:s for
The DTFL

3.4.1 Circular Shift of a Sequence


Thi,; pwpe-rty :;.; ana!ugou'> In the wne-shiftiog prope:ty' of <he DTf<T as given in Table 3_:?:, but wid: a
~ubtie differenc-e. ~~us con;,ider kngth-N sequences detincxl for U ~ n :::::; N- !. Such sequences have
.;,am;Jle values e4ualto zero for n < 0 anc n :::::_ N. If xlrl] j_;, such a sequence. then. for anj arbitrary
integer n,. the shifted sequence _t 1! n I = x!n ~ n" I is no longer defined for the range 0 _::: n _:: :-: N - l.
We therefore need to define another type of a _;,hift th:tt will ahvr,ys keep the shifted '-«Jtle~cc in the ra11ge
0:::; n :5: N -I_ This '1:.. ltehieved hy defining a new typ>: of shdt. called the circular shifi. using a mndulo
nperatir)ll acco~ding to 4
(357)
hJf n,, -_. \l (right circular si1ift). the <Jbove equation implies
x[n -· n,J. for n,_. ::-:: 11 S N ~ I,
A, In I =
I x!N- n,---;--- n]. for 0 ::=: n < n,_

Tnc cont:<Opl of a circular .,hift at a finite-length ..equcncc is Illustrated in Figure 3.10 Figure :uora)
show;, a kngth-6 sequence x :n ]. f'igure 1.1 0{h) shows ib circu~arly shifted versmn shifte-d ro the -right by
3.4. Discrete Fourier Transform Properties 141

Tab!£ 3.5: Gener;.l p.rupt·-r!ie,. •Jf \"he Dl--"T.


---- -----------
Type of Propert)' Length-N Sequence N-pointDFT

gl.nl G[k]
h[n j H[kJ
---~~~~-

Lmcarity ugjn I , f,hjnl aGjk~ + f!H[kl


Circular hme-~hiftir.g J?[(n- n,.,iNI wk""G[k1
N
Cin;;ular
W N·k~n R [ n!- Gf{k~k,)s!
frequency -shifli>Ig
Duality G[n] NL~(~k),..·J
.-.·-1
N -point cin:ular
convolutio;l
L g!mlh!(r.- m)x! Glk]H[kj
m=O
N-1
Mt>dula~inn g{njh[n] -t L
m=O
G(mJHf\k -m)vJ

Par~eval"s rela110n

Table 3.6. Symmetry properties of the DPT of a complex sequence.


---- --~~-

N-pohlt DFf

.rfrJ Xfkl
-----~~-

, .. )n.l X*j(-k)NJ
>:""f{-n)si X""fkl
Rc~.f!nH Xr<-;c{kj = 1_ [X[{kiNI + x•I\-kfNIJ
jlm{;[nl! Xp.:a[k] = 1£X[{k)NJ ~ X*[\-kiN]J
-';x~ it~ l Re{X{kJ.l
.tpn.!nJ j lm{X[kjJ

---- -- ~----- ----~-

Note: .tr<=~in] ~mi Xpc~ln] arc tl:.e pen-odic conjugate-symmetric and


per'.odic conjugatc-antisymmetric part.<> of xln\, respecttvely. Lik!wi.se,
Xpo.:,[k] and X pca[k] an; the periodic conjugate-symmetric and periodic
c<.wjuga.te-a.uti~ymmetric pa!U of X{k \. rcspectwcly,
142 Chaoter 3: Oiscrete Time Signals in the Transform Domaln
4

Table 3.7: Symmetry properties of thf' DFf of a Feal seque.lh-'1:0-

N-point DFT

Xfk] = Re(X[k]/ + j Im!X[k!J


---------------
Re{X!kl)
j !m\X[k]J

Xfkl = X.-({-k:,.,d
Re X[kJ = Re X{{-k}Nl

Symmctrj relationS- lmX[k/= -Irn.X[i-kLvl


IX(kl: = !X[{-kl,v]l

argXt.\::j = - argX((-k)N]
-------
Now X;;elnl and ..tpo[nj are the periodlce<-en and periodic odd parts
of;,_ ~nl. respectivel:r.

0
'
y
f r rI
j
n
4 5
n +--'--f--f--!--l
012345
d n
"' 2
' ' ' D
' 2 3

(:.) (b) {r)

Figure 3.10: lllustr.:ltion ()fa drcvhu shift of a finite--lengrh seqJXnce. (a) x!>~l. (b) x[ (n - 1/6} = xf\ti + 5)6]. and
(c)x[(n- 4}t.J = .ti.(n +2Jcl.

I sampte period or. equivalently, shifted to the left by 5 sample periods. LikeWL'>e, Figure 3.10(c) depicts
its: circularly shifted version sh1fted to the right by 4 sample periods or-. equivalently, shifted to the left by
2 sample periods.
As can be seen from Flg1.1re 3_ lO(b) and {c). a right circular shift by no i5. equivalent to a left cir.culaJ
shif! by N - n, sample period.>. It should be noted that a ~ircular shift by an integer number n 0 greater
than N is equivalent to a circular shift by (nn) .v·
If we view the length-N sequence displayed on lhe circ:m1ference of a cylinder at N equally spac-ed
p:)ints. then the ..:ircular shlft operation Lan be considered as a clockwise or anticlockwise rotalion of the
scqtJence by 11(, sample spacing~ on !he cylinder.
3.4. Discrete Fourier Transform Properties 143

'J g!nl
__'_,}'-,-:t--'~,.-,!!-'-h_[_n:_• n
--'p?!--!'--<io~'i'
l :! 3
__ "
Figure 3.11: l'<\·n length·4 '~l.JUt'r.ces.

Tnis property is analogom; to the iinear convolution of Eq. t2.64). but with a subtle difference. Consider
lv..-ulcngth-N -.equence-;, gfn J and hfn 1. respectively. Thetr linear convolutio;-. result:-; in a length-(2N- 1)
o.tquence ydn] gi\"en by
I
A-"
,:_In!= L xlmihln- mj. {} :S n .:5. 2N- 2, (3.59)
l'l=IJ

where we have assumed !hat hoth N -length sequences have been z:ero-padded to extend theJr lengt~s <o
2N- 1. 5 The longer length of '>'Lin! result,s from the time-rev·ersa.l of the sequence hlnl and its linear
shifting to the right. The first nonzero value of vdn] is .vdO) = g[O]h[O]. and the la,;i nonzero value of
yl.[nj is :rd2N - 2j = glN- ljh[N- l].
To develop a corwnlutlon-like operation resulting in .a Iength-N sequence :rein], we need w define
a circular time-reversal and tlten apply a circular time-shift. The resulting opemticn. callet:l a circular
convolution, 1s defined below: 6
.V-1
yclnJ = L g!mlh [\n- mJN]. (3.60)
m=O

Since the above operation involves two length-N sequences, it is often referred to as anN-point circular
convolution, denoted as
yc[n] = g[n1@Mn1. (3.61)
Like the linear c-onvolution, the cin::uhu cunvolulion is commutative (Problem 3.65), i.e.,

g[n](~}h[nj = hfn]@glnJ. (3.62)

We illustrate the concept of circular convolution through several examples.

5 A~ ind.tcale;;. in Section 2 5. I, the sum of tbe indices nf e<~<;;h sa. :nple prodU<:t in9de the summation is ~q!U!l to the index of th~
>.afil1Jle being generated hy the linea; <:c":wot .. tmn :opcranon.
f.Nrne tfutt here the sum of the indices of each sample product inside t~ summawm modulo N il;equa.l to tlre ii'K.iex of the sampl:e
being generated by the circular ccnvoiulion operation.
144 Chapter 3: Discrete-Time Signals in 'he Transform Domain

I ]' Yr
0
1
m I r\
0 I
]' I
2 3
m ?
0
1Y'! '?
2
!
m I? ?
0 I
II'
2 3
m
2 3
(a) (b) (c)
' (d)
Figure 3.12: 1be circularly wne re\<ersed sequence and m; circularly shifted venions: (a) hi { -m}4j, {b) h( (I - m)41,
(c) h{il- m/4], and (d) nr(J- m:'.d-

Yclnl
YL [n]
69 7r

r
' 5
'' 4
'

' n
'f ,
? ?I
n
0 2 3 0 I 3 4 ' 6
(a) (b)
Figure 3.13: Results of convolution of the two sequences of Figure 3-.l L (a) Ci....:ular convulution. and (b) linear
convolution.

nmc
14-- ii4Vll'*¢11©<'
3.4. Discrete Fourier Transform Properties 145

The circular convolution of the two Iength-N sequences can also be computed by fanning the product
of their N-point DFTs ami then applying anN-point lDFf, as indicated in Table 3.5. The process is
illustrated m the following example.

ftrhG\f M}")illf!\10 {\
:'ftjl\lt JJ•r A 0PIPt !Jf/7 td&; "' !M lww.W"k 'if'IJ~" Sl'mtf f!7:
t;\kf x;tt; ., slt!t ;;s;.<t + k ¥i Jft" fihtriJ£
"'j b /.t •*1 1 "'#'•:'11!

I r I I

[ z;;n:r '''' =
t

~
''
'' L
''
I'
'
'
1' [
I=
j'
II
'
i
mJ
I
I
"" I

'
I
I
I
%<Md0 hdM 4 "' 4 tJf'i''f MMt HL !~>\!wrw}MIHJibf 1Lflpml l,:W.:: f df ~

]['] [?:·}~ }.
w
y I I
l' .. / "" I I
;:
''" 1Li5t
]=r:
c I I
I I
"!
"l
: £
I"' 1

1114 tmrr t·"" t~tt =


!"

l
't
Wt: MV"T i10J tf111 tliiin~h0tGt0dLi 1>11JVoi'40#Yti ft%1itf
J=f
c
1' '
w
!i+
r I y l£
Y'\(1!}
h t:j
{{ l ;'i1 J
I ''
'''
'"
''
"'
,, J •t
I I I

I
I
"I
I
I
I ][ ]=[!] {11 70!

#110clt {t n:}E.+Dtyq( ill; tM ~\ji&;t, y;:H11)} 111 (l!?i'ii. t1J"M+1D 1.77{)1.

i{1[«1J1J,
4 11 Dr.
146 Chapter 3: Discre1e-Time Signals in the Transform Domain

fr{tty = '"'"''''"' "' {:. j 1


0 / rf 116,;[·11 ' ¥«11];''';71

¥eJ!A{:; ~ sl i \Yfl v fry(dt 'W't'Zfw§Jn:t\,


tf+! d\ , , , , , , , , , ' ,ytt +
H" "'"''"
vft1l """&{HA:<"\;' :fi
::! '> ! ""'" ri"ZArL 1

t; h Jttttb thv Gi?ri/\if ;!;Y~¥~ 'tu t; pi!X:JXXI.I 1/M:Ltlf'a!t Yf pr'" Aitdml/lill


Jt>Wrtv1hti¥+u ,,r v!« L0G11 I
TheN -point circular convolution operation of Eq. (3.61) can be written in matrix form as
h[O] h!N- 11 h[N- 2] hf I] g[O]

=
[
h[J'
h~2i

h'N-l]
h[O]
hf I]

hjN- 2]
h[N- l]
h[O]

h[N- 3]
The element>, in ea.;:h diagonal of the N x N matrix. of Eq. (3.82) are equal Such a matrix is called a
h[2]
h[3]

h[O]
g[l]
g[2]

g[N-
l
IJJ
'

circulant matrix.

3.5 Computation ot the DFT of Real Sequences


In l1}(».t practical applications, sequences of inter-est are reaL Jn such cases, the symmetry properties of the
DFI' given in Table 3.7 can be exploited to make the DFT computations more efficient.
3.5. Comp:.J1a1ion of the OFT of Real Seque:1ces 147

N-Point OFTs of Two Real Sequences Using a Singte N~Point OFT


Let g~nl and hlnl be 1wo real sequence~ of length N each with G[kj and H[k] denoting their rcspec[ive
N-poim DFfs. These two N -point DFfs can be computed efficiently U'>ing a single N-point DFT X[k]
of a comple:>i length-,V sequence x~nJ defined by

:r[n] = g(n] + jh!nJ. (3.83)

From the above, g[n] = Re{x[nH and h[nj = lm{x[nJi,


From Table 3.6 we arrive at

G[k[ ~ l!Xck] + X"[(-k)N i), (3.84)


H[k[ ~ ij I <'[k)- X'[(-k)N J). (3.85)

:f1S l:Yf\1' it ~

lu J"""rr·~;2
') (

[ J
_,?' '± '
F F
d
~· 7
!' I'
,; L
'~.
'
'

j
"
F
"' J l
l
J

3.5.2 2N-Point DFT of a Real Sequence Using a Single N-Point OFT


Let v[n] be areal sequence of length 2N with V[kl denoting its 2N -point DFT. Define two real sequences
g[n] and h[n] of length N ~has

g[n] = t:[2n], h(n] = v[2n + lj, O:;:n < N, (3.89)

wit!: G{k 1;md H fk] denoting their N -point DFrs. Now define a complex lengtt-N sequencex [n 1according
to Eq. {3.83). The DFfs G[kl and H[kJ can be computed frorri the N-poir:t OFT X(kJ of the sequence
x[nj by means of Eqs. (3.84) and (3.85}.
148 Chapter 3: 01screte-Time Signals in the Transform Domain

Now,
2.V-l N-1 N-l
V[k] = L v[nJWl't = L v[ln]W],~k + 2:: v{2n + lJWiJ;+l)k
N-1 N-1

= L g(n]W.Vk +L h[n;wftwtv
~=0 n=O
.V-1 N-1
= L g[nJW# + w~N L h[n]W.~;".
n=O

Note that the first sum on the last expressioo is simply anN-point DFT GlkJ of the length-N sequence
g[n], whereas the second sum is anN-point DFr H[kJ of the length-N sequence h[n]. Therefore. we can
express the 2N-point OFT V fkl as

0 ::S k :::: 2N- l, (3.90)

where we have used the identity Wj~ = wt


and used the modulo sign for the argument of the two N -point
DFTs on the right-hand side since k here is in the nmge 0 S k _::.: 2N - L

d·---
\l0ili"i!U%WJ} . . . .
w .,,.. ,,-ff :4. £r

--
m"
'
=¥*·" lf --r··
'Mi\ ' " .{>+ {ir • "' ··~

rn: ' iii. '* ' .v·rv r+ = 1 /f "'·


m I' } fb

,m
'•
4 ·•}&
- "}

-
"}

"'"
- II "'·.d'itrp4 =

--
17fli t{ =if "" !H'·\1

= ":";; '7' \}'


=!4 +· ,JJ ·+ x+ ·+ I
- ,,:;,
{!:;r,'¥ =·i.tl
"W ~ r'
'
3 6. linear Convolution Using the OFT 149

g[n] Zero-padding c'~·~["~l~ (N+M-I )-


Wlth 1
Length-N (M-1) zeros point DFT
(N+M-lJ-
point IDFT
Y!.fnl
Length-(N+M-ll
Zero-~ding he[n} ' (N+M-1)-
hlnl with p<oint DFT
Length-M (N-1) zeros

Figure 3-14: DFT-based implementation of the linear convolution of two finite-length sequences.

3.6 Linear Convolution Using the OFT


Linear comulution is a key operation ;n most signal processing applications. Since anN -point OFf c:m
be implemented very efficiently using approximately N (log 2 N) arithmetic operations, it is of interest to
mvestigate methodsforthe implementationofthelinearconvolution using theDFT. Earlier in Example 3.17,
we have already illustrated for a very specific case how to implement a linear convolution using a circular
convolution. We first generalize this example for the linear convolution of two finite-length sequences of
unequallenglhs. Later we consider the implementation of :he linear convolution of a finite-length sequence
with an infinite-length sequence_

3.6.1 Linear Convolution of Two Finite-Length Sequences


Let g(n] and h[ n] be finite-length sequences of lenglhs N and M, respectively. Denote L =M+N - 1.
Define two length-L sequences,

0 _:s n ::::; N - I,
8e[n] = { ~~n], N ~ n::::; L- I,
(3.91)

he[n] = { ~~nl, O::;n::;M-i, (1_92)


M:Sn:SL-l,
obtained by appending g[n] and h[n} with zero-valued samples. Then

n[n] = g[n]@h[nJ = ydn] = 8e[n]~ .. [n}. {3.93t

To implement Eq. (3.93) u~ing the OfT, we first zero-pad g[n] with (M- l) zeros to obtain ge[n],
and zero-pad h[n] with (N- 1) zeros to obtain h..[n].1ben we compute the (N + M- 1)-point DFfs of
8e[n] and h,.[n], respective1y, resulting in G,.[k} and H,.{k]. An (N + M- 1)-pointlDFr of the product
G., lk]H,.[kj results in yL[n]. The process involved is sketched in Figure 3.14.

_, -
The following example I!Ses MATLAB to illustrate the above approach.

II tDlu;; fi1101m 4~Wf?!i!l


w '
IIi V±
1f £;1 jt&fflk
Chapter 3: Discrete-Time Signals in the Transform Domain
150

,,>. m-"
-- -----~- ---
~I
T l
l' 1 Jo\-6~~
QL__l__ j _ _
0 1 }
__j__
4
__L__ t
~
$
_,I(-"~------<~__j
ll2-l456
Time ,,-,;~e, ~ Tim~ 1nde>. ft

(b)

Flgure 3.15: Plots of the output and the errm sequences.

'0 :tn t:knv TUF :&Y"4ii\ltlC¥\'Irs


:v * I ::';j,'fi4L ' if( tbzi 7 £ :ru:: 0 tiit;::: 8¥\t: r f;
f; ; <::f'\i\c l { ' 'fy'jilll\ LDVt J!>:<>J!T\,;;] iGR"'QIIf:P?"t ":""
1 Zftt LJSX'S!:t'tP! '::t:PV f r::r:7v 0; " \11t ft:hlhrZ;; i / .·.{}"?,}<{ t;
" 1vnx,;tA; {ill!"' fuji: <i::
4i \:\rF}h! :::+ t; }H :r!#'f'f \P'V "ifrtl '6''66606L;J,ci
'§\)f x tfL't.; A',];,
:1<11;; ' tY•L rt> r
11 !:iht•\ ·riL1k• hF v >+ { >,;p 1
;"{ " {f 'f\, \}t)E Yfi!:},j f
4 \Flt:i:t Lft0 YJJF{"- 7/4: H ;:'~
1 CL"r'HL"J;,z Li,:Qi\ f!Yt:¢ \ \@! VCT\/1 $'( \/jt\ -,%J I'"'" f llALtkH
$ ,f'\;1; tit"/ ( \, t L/H:
« >1tLl

A't &:Tid :ilLy] '


L!diJS:t:d t " X J liiiGi!Y Hi!>J ?Pii :t " t
z• +'" • [t&rs fd '' n r:
{l'F'*"· t\df++,> 1 • v -+Hlf :r 't:"::r ., \ !miL

"' CHDV

}1\)kq:<;,<<' ':',X,
< ttm::l f: mhc:i Lwrrne'
,!1\ :::rtkq;L L; ::Atv: : iltf:DV nt' ; y J<nMri \ r A::Pif;:i H 'Jth'' '
1 ; { jim< ; ' fr t tTh? tr"'"',f\u;:o;:,'\1;, 10 '

3.6.2 Unear Convolution of a Ftnite-Length Sequence with an Infinite-


Length Sequence
We consider now the DFT-ba.>5ed implementation of
M-'
_y(n] = L h[t:]x(n- £1 = hlnJ@x[n], (3.94)

·~
3.6. Linear Co'1volution Using the DFT 151

where hLnl is a finite-length scqt.:encc of length M and xE"] is of infinite length (or a finite-length sequ~-nce
of kngth much greater than ,+f). There arc two diffcren< approache~ to ~lving this problem, a,j d~scnbcd
beiow rsto66j.

Overlap-Add Method
ln rhis method, we first segment xfnJ, ~L~sumed to be a causal sequence here without any lO&-: of generality.
into a. set nf contiguous finite-\englh subsequences x,,, [n 1 of l.cng!:h N each:

=
x[nJ= Lx,[n-mNJ. (3.95)
,-_!)

where
xrn +mNj, 0'.'2nsN-L
,-,.,!nl = [ 0. otherwise.
Subs.tituing Eq. (1.95) in Eq_. (3.94J w.c get
oc
)'[nj = L Ymln- mNJ. (3 97)
m=O

whe-re Ym[n:J = hfn ]@xm!nj. Since h[nl is oflength M aad Xm(nJ is. oflength N, the linear convolution
il{n]EJx,.,tnl j~ of length (N ..:,-- M- 1). As a result, the desired linear eom"Dlution ofEq. (3.9-4) has been
hrok.:n up into a ;;urn of an infinite number of short-length l:nearconvolutionsofiength (N -t- M- 1) e;1ch.
Each of these short convolution.<, can be implemented using tbe method outlined in Figure 3.14, where now
the Df<Ts (and the IDt-T_) are computed on the basis of (N + M - l) points. There js one more subtlety
to uke care ofbcfurc we can inplcment Eq. (3.97} using the OFf-based method.
Now the !Jr-;! short convoktion in Eq. (3.97) given by h[nj@ xofn ], which is of length (N + M- 1 ). is
Jefined fOrO S n ~ N + M- 2. The second short convolutinn in Eq. (3 97), given by h[r.JGxr{n J, i~ als\l
<.f kngth (N +AI - l) but is defined for N S n S 2N + M- 2. This implies that there .is an overlap of
U- I samples between tl:.cse twe< short linear corcvolutiom. in the range N S n :::; N + M- 2. Likewise,
the third convolution in Eq. (3.97). given by hrnJ®x2fnJ, i.~ defined for 2N _::: n ~ 3N + M - 2, causing
an uverbp between the samples of hfn lG xt[nJ and h[n]§x::fn l for 2N :S n _::::: 2N + M - 2. 1n general,
rhere will he an m:eriap of M -- I samples between the ~nples of the short <:onvolutions hln JQ)x,. __ ,lnl
;md h!n ;<±LtAN l for r N ~ n -<_!--- r ;\' ---:- M - 2.
This prcx:e\<.> is illust:'ated i<J Figure 3.16. Figure :1.16(b) shows the fiN three length-7 {N = 7)
segments x,,,:[n: of the sequem:e .r[nj of Figure 3.l6(a). Each of these segments is convolved with a
Je.ngth-S (M = 5) sequence h[n 1. resulting in 1engtl:-ll (N + M - l = 11) short linear convolutions
_>',_,.fn J shown in Figure 3.l6(c). A" t:an be seen from Figure 3.16(c), the last M - I = 4 samples of yoln]
>Weriap with the ilr:-;t 4 samples nf Yl in 1- Likewise, the last M ~ I = 4 samples of y 1fn] overlap wilh the
first 4 samples of Y2fn], and :;o o:1. Therefore, the desired sequence y[n] obtained by a linear convolution
Hf xln) and h:nJ IS given by

_\ (nj = Yo{nl, 0:S:ftS6,


_\ in I = :;o[ n] +- y·, [ n ·- 7!. 7:S:n'.'2l0,
vln] = ydn- n. II .:::_ n S 13,
v!nl=-'·l[n-71+y:dn-14f, l4:::_n:;::17,
_\[nl = -'·z!n- 141. US :::;n S 20,
1S2 Chapter 3~ Discrete-Time Signais in the Transform Domain

(a)

x :In]

---~Iui~"~T~~~. ~~ ___ _
-
0 0
lll~ "
(b)

Y][.l'lj

n
:n
'--r----'
M-1- 4
OV<'rlap
: 0 ! r r [I
7
l p !l n

u
10

y [n]
2
·~
M-1~4
cw:dap

(')
Ftgure 3.16: (a) Original xfn l. (h_) segments X min J of xlnJ. and (c) linear convolution of Xm [I!) with hln J.
3.6. Linear Convolution UsJng the OFT 153

I stnl ' :
:~

T:.me lode>. n

FigUre 3.17: Uncorrupted input ~ignal si_nl (l!hown with solid line} and the filtered noisy signal y[n:] (shown Wlth
dashed Ime).

The above procedure is called the overlap-add metlwd since the results of the short linear convoiutions
overlap and the overlapped portions are added to get the corre-et final result.
The M-file : f t f ~ l t can be used to implement the above method. There are rwo different forms of
this fuoction:

y ~ fEtfilt\h,x,~)

where h is the impulse response vector of the FIR filter, x is the input vector segmented into successive
sections, andy is the filtered output. In the first form the input data xis segmented into successive sectlons
of length 512 each, whereas i:n the secor.d form the input is segmented into sections oflength r specified
by the user. We iHustrate its use in the following example.

1r +· tAJi:rs·Ag& 1 _ 1:
¥:- I 1 •;v+;t·wt ·>tnn 0t

jt ., 1611:
-.1 1 di;rdi

±:«iJ: ;ru A Z+E:tJiiy


1+'\'JiX '* 4."' t& <f t"' ( H,L 1 \I'V fnt,z j t 1
:t.'fillf M ¢i t:i!;p ' ,, 4.«1; f

]Jif'f!D;f: h: <>f C!%0 ltJ./0(.SUj. fP:tP.C:II%10¥ #). T:ttbf


{xW'l{?Xb <rf ttttM rJ'w% JH!%1£1$\10: X t ,(XAim( « r +
1f ·::ur;;'1G:Ptlf#r t ,(XY" HHI dVWT&Lft· :fj {!;Jtr <t:·PfhXX'W0{W
\pp.y j 2 'J»:f! f\{
o/ 1/ry;,t £ ti#Wir t.!%0 \:tY'%1'7; WJJ' fthi 'if t,. t.q;r{fit;l ·::tM:tb¥7 );<;f\
"'
k > Lttt Ltt0 w'i&AHti to
(-·f>t. {ft"A, q- , 4:, jt. ·!>--"
1Ei4 Chapter 2: Discrete-Time Signals in the Transform Domain

O.rerlap-save Method
ln .implementing the previous method usi:-tg the DFT, we need to compute two (N + M- I)-point Df-<ls
anJ one (N + M- 1)-point IDFT since the uveralllir_ear ;:onvolutmr. of Eq. {3.94) -was expressed a~ a
sum of short-length linear convolutions of length (N + M -· I) each. His possible to implement the linear
convolution of Eq. (3.94) by performing ins:ead circular coTlvolutions of length shorter than (N + M - 1 ).
To thi:. end, it is necessary to sc:gment xin~ into overlappir~g blocks x,_fn I, keep the terms of the cJrcular
convolution of h!n] with xm!n] that corre.~ponds to the terms obtained by a linear convolution of h{n J and
x, Ln J, and throw away the other parts of the circular convoh.:tion.
To understand the correspondence betwee-n the li.near ar.d circular convolutions, o_'Un~der o ler.gth-4
sequence x[nj and a lcngth-3 sequence hlnJ. Let ydn] denote the- re~ult of a linear convolution of x[ni
wi'lh hfn]. The six samples of )'r_[n] are given by
niOJ ~ h[Olx[O].
yL[l] = h[Olxll] + h[l_lxfOl.
yL[2] = h[O]xf21 + hfl_lx[l] + hf2]x[O},
YL l3] = h~OJxf3l + h[l ;xl2J + h[2Jx[ !].
YL [41 = h~ 1 ]x!3J + hf2jx{2:,
yL[S] = hf21x~3j.

[f we append h[n] with a single zero sample and convert it mto a length-4 sequenc:e h,[n], the 4-point
circular convolution ycfn] of h,.[n] and xfn] is given by
yc[Ol = h[O]x[O] +hi :]x{3] +h[2]x[2].
yc[l] ~ h[O]x[l] + h[IJ-<101 + h[2t>[3].
yc{2] = hfO)x[2J + hllJxfl J + h(2Jx!OJ.
yc[3J = h[O]x{3] + h~1]xf2] + hf2Jx[l]- (3.99)

Comparing Eqs. (3.98) and {3.99), we ob:>erve [hllt the fir:->t two tenus of the circular convolution do not
correspond to the first two tenns of the linear convolution. whereas the last two tems of the circular
convolution are precisely the same as the third and 'fourth tenn;;. of the Iinear convolution, i.e.,
y!_[O] =F- yc[Ol. c
yL[ #- yc[lJ,
_n[2J = yc[2]. _vrJ3J = yc{3].
In the general case of anN -point circular convolution of a length-M sequenceh[n] with a length-N sequence
x[nJ with N > M, the first M- l samples of the circular convolution are incorrect and are rejected, while
the remaining N - M +I samples correspond to the correct .>amples of the linear convolution of h[n] and
xrnJ.
Now consider an infinitely long or a very long sequence x[n ]. We break it up a<; a collection of smaller
length (length-4) sequences xmfn] as indicated below:
Xm[n] = x[n + 2mj, 0 ~ n ::=: 3. 0 5 m 5 oc. (3.100)
Next. we form

or etJulvalentJy.
w.,!O! = h!O]x,.,!O]--+- h[l]x,[3] + h[21x,.,[21,
w.,{l] = h[OJxm~ II+ lrlJ Jxm !OJ + h[2]_x..,[3],
i).JOJ)
Wm[2l = hi0Jx ... f2J + h[llxm[ll + !J:[21x.,[O],
Wm IJ] = h{OJxm [31 + hiJ Jx,,l2J + hi2~.,JlJ.
~;.7 The z-Transform
155

Cor1putm.-: tile <ibove form = 0. l. 2. 3. _ . , amJ substitunng the values of x,., lnJ from Eq. (3. IOO), we

w 0 f0l = h!OixiO.I + h l.lxf3l L l:if:1!x[2j. ~ Reject


u';\[1; = ir!OJ_\!: I;+ h l1xf0j-+- h\l]x\3!. ~ Reject
!i';o[21=hl0]1\2] +-h l!x/ll--'-h/2Jx!O!=y/2!. ~ Save
u·: 1[J-: = h!O/.•. [3; +h l]xf2] ~- h[21x[l] = y!Jl. ~ S"aue

u';[O; = hf0!.\l21 +hl_ljxf5\ +- h[?:]xl4!. ~ Rejat


w;ll; = hjO]_\ PI + hl l].t [21 + h[2.1x[5]. ~ Reject
wlf2j- h[O!x[41 + h[ljxf31 + h/2jx:1/ = y[41. ~ Save
w;l3-i = hi01x!5] + h! !jx[-+J +h/'2.~x~::q = yf5], ~ Sat.e

w2!0l = hi0)_._-J4] + h[ i}x/5] + hl2ixi_6j. ~ Rejccl


w 2 !1 J = h /H]x j5] + h!l Jx\41 + lt"[1jxf7i. ~ Rejecl
w2121 = hi<J]x[6l + h{ l !x\5} + 1![2Jxf:t; = yt6i, ~ Save
u·z[Jj = h[O]x 171 + hlllx{6l + 11[2 fx[5! = y/71. ~ Sa1.1e

h should be n01.ed 1\-mt tn determine _vjOJ und _vin, we ne-d lo form X-J fn]:

X- ,[0} = 0, x _10l = 0. J. -112/ = r[OJ, x _ _~{3j = xfll.


:md compulc ti.'-11n l = hjn j@_\ --I [n l for 0 :S n _:::: .\ reje..·t u· -J LOJ and !!L d !j, and :-.ave W- d2] = y[Oi
and te iJJl = ·d lj.
Generahzi1~g i!hove:. !el h[n] he a sequ<:P.ce of length .H, and x,., In j, the mrh sectior; of an infinitely
long M.:qucnc~ _\jnj defined !Jy

x,!nl = xjn +mUV -- M + l)]. (3.102}

be 02. length N \vith M -:5 N. If Wm[nJ denotes. theN-point circular convolution of hlnl and x,.[nL
<.e., ~-·m [n 1 = h'~n !@.\, ~n ], lhcn we rejed the first M - I sample'> of j and "abut'' the remaining w,.rn
N - M + l saved sample,; of Jt:,. [nj to form ydn], the linear convolution of h[nj and x[n j. If we denote
tho: S4\'~d pnn:ion of ll'mln1 as v,~ln!, i.e.,

J
J 0, o:::::n:~M-2,
(3.l03j
I
-\ ' m l ' l = [ IL'm [n' ] , M -· I :S n _:: : : N- l,

then,
M-l:S:n~N-1, (3.104)

The above process is 1\lustr~ted 'in rigure 3.1 8. The approach i.s called the overlap-save method since
th<"- input JS ,.,cgmentcd into overlapping :-:cctions and part of tte results of the circular convolution are saved
an:l abutted 10 determine the hnear c-onvolution result.

:t7 The z-Transtorm


Th·,; div:rete-timc Fourin transfmm provtdes a frequency-domain repre,;entation of discrete-time signals
and LTJ :-;y~lems. Because of the cnm ergc-uo:e condition, in many cases, the discrete-time Fourier transfonn
of a seq_uenco::: may not exist anci ll'-' a result, it is not pos!;ible tn make ui>e of such frequency-domain
chLracteri7ation in these cases. A gcncralizatmn (•fth.: discrete-time FounertF<>.IJsfonn defined in Eq. (3.1)
iea-:ls 10 !he :-trans.form. which may exist for many sequences for which the discrete-time:Fouriertransfonn
Hi6 Chapter 3: Discrete-Time Signals in the Transform Domain

x 0 fnJ

r
~0~~~_,-,-L-L-L-L~~---n

0\Tilap

rI
rr ? y w1 tnl
rcje<:t
rI 1
: ;t
'
£ I
! I 1!
1 10
6 n

.. _______,

l r 1r r r
JJ-1~4

""Je<:t co !
! p
n

I m

(b)

. . . .. + .....
n

{C)

Figure 3.18: lllusu-ation oft!:e o'>·erlap-~;;ve mctbxL (a) Overlapped >egmcnt,;. of the sequen~-e xl_n} of Figure 3. l6(a ),
•- b l SC411Cnc~~ generated by an 11-point ci-rcular c,>nvolutioo. antl (c) ;,cquern:c obtained by rejecting the first f(llJ-r
,oamples uf w, ln l and Mutt1ng the rcmllining: samples.
3.7. The z-Transform 157

does not exist. Moreover. the use of z-transfoml techniques permits simple algebraic manipulations.
Consequently, the :-transform has become an important tool in the analysis and deslgn of digital filters.
We first define the z-transform of a sequence and study its properties by treating it as a generalization
of the discrete-time Fourier transfornL This leads to the concept of the region of convergence of a z-
transform thai is investigated in detail. We then describe the inverse transform operation and point out
two straightforward approaches for the computation of the inverse of a real rational z-transfonn. Next, the
properties of the z-transfonn are reviewed.

3.7.1 Definition
For a given sequence g[n], its z-transfonn G(z) is defined as

=
G(;:) = Z{g[n]} = L g[n]l-"-, (3.l05)
"=-=
where z = Re(z) + jlm(z) is a complex: variable. If we let z = rej"', then the right-hand side of the above-
expression reduces to

Glre}"') = L= g[n]r-'le-f«>n, (3.106)


11=-=

which can be interpreted as the discrete-time Fourier transform of the modified sequence {g[nJr-n}. For
r = I (i.e .. lzl = 1), lhe z-transfonn of g[n] reduces to i:sdiscrete--time Fourier transfonn, provided the
latter exists. The contour lzi = 1 ~~a circle in the z-plane of unity radius and is called the unit cin:le.
Like the discrete-time Fourier transform, there are conditions on the convergence of the infinite series
of Eq. {3.105). For a given sequence, the set R of values of z for which its z-transfonn converges is called
the n:gion ofcotrVergence {ROC). It follows from our earlier discussion on the uniform convergence of the
discrete-time Fourier transform that the series of Eq. (3. I 06) converges if g[njr-" is absolutely wmmabie,
i.e., if
=
L: is[n]r-"1 < oo. {3.107)
"=-=
tn general. the region of convergence R of a <:-transform of a sequence g[n] is an annular region of the
;:-piane:
(3.108)
where 0 _:::: Rg- < RgT ::: oo. It should be noted that the z-transfonn as defined by Eq. (3.105) is a fonn
of a Laurent series ar:d is an analytic function at every point in the ROC. This in turn implies that _the
::-transform and all its derivatives are continuous functions of the complex variable z in the ROC.
158 Chapter 3: Discrete-Time Signals in the Transform Domain

Table 3.8: Scme comfl'lOflly u-;ed ~-transform pairs.

Sequence ::-Transform ROC

8f nl All values of z
lzl > l

lzl > Ia I
u;:
l - (r co~w,J:: 1
:zl > r
J- (2rcosw,).:: l - ,-2z 2
(rsin&.oo).::- 1
(2rcosw0 ;;: I , r2z 2

The z-tmns.form ;.t{.::) of the unit step sequence _u[nl can be obtained from Eg. (3.! 10) by setting a = L

P,(ZJ = I - Z I' for:z- 1 1<l. \3.111)

The ROC oft.t(z) is thus the annular region J < lzl :=: oc. Note. that the unit step sequence is not absolutely
summable, and, a_s a result. its Fourier transfonn does not converge unifonnly.

7
C1t0iWN:I;;r itm ~21ttM£1 N!iltprllit r tlJIJ l '"" «<J0 ¥ r «f4 Ji ijrrn'' il<l<+ JCi< 1 - :~sny 4f ;;hr-
rx:p;&1101iu& fJW wn ;;dxii&d)JElfd

It should be noted ttat in both of the above examples. the z-transfonns are identical even though their
parent sequences are different. The only way a unique sequence can be associated with a z-transfonn is
by specifying its ROC. We shall discuss further the importance of the ROC in dre following section.
It follows from the above that the Fourier transform G(ej"') of a sequence g[n 1 con•,;erges uniformly if
and only if the ROC of the z-transfonn G(z) of the sequence includes the unitcirde. On the other hand. the
existence of the Fourier transform does not always imply the existence of the z-transform. For example, the
finite-energy sequence hLP[n] of Eq. (3.12) has a Fourier transformhLp(ej""} given by Eq. (3.11) which
converges in the mean-square sense. However, this sequence does not have a .;-transform as hLprn 1r-"
is not absolutely summable for any value of r.
Some commonly us-ed z-transform pairs are listed in Table 3.8.
3.8. Region of Convergence of a Rational z~Transform 159

3.7.2 RaHonal z- Transforms


In the ease of LTI discrete-time s.vstems that we are concerned with in this. text, all pertment ::-transforms
are rational functions of z- 1• i.e.: are ratios. of two polynomials in z- 1 :

P{z} fJO + PlZ-l + · · · + PM-!Z-{Jl-l) + PMZ-M (3.113)


G\:::= D(z) = d 0 +d1z '-:-···+dN-1Z iN li-+-d,vz N-

where the <kgree of :he numemtor polynomial P(z) isM and that of the dertominator polynomial D(zj is
N. An alternate re~sentation of a rational ;:-transform is as a ratio of two polynomials in L

,) <N-M1POZ M +p1Z M-'';--··


· + PM-IZ + PM
G \~ (Hl4)
J
- 7'

~ -~ dozN +dtzN 1 +···+d,\'-IZ 'dN

The .above equation can be alte.rn:ltely written in factored form as

Gr ) _ Po nt-1
(l - ~tz-
1
) _~:N-M!P'J n~l (.2- t.t)
. z - dont,>~O-Ae:. I)-.., don;=
1
(z->..t).

At a ruot z = t;i of the numerator polynomwJ, G (~t) = 0. and as a result, these value.s of z are kno'Vffi as the
zeros of G(z}. Likewi;,e. at a root z =At nf the denominator polynomial, G('J..;) -co. and these points
in the ::-plane are called the poles of G(z). Observe from the expression in Eq. {3.1 15) that lhere are M
finite zeros and N finite potes of G(z). It also follows from the above expression that there are additional
(N - M) zeros. ar z = 0 (the origin in the z-plane) if N > M or additional (M - N) poles at z = 0 if
N < M. For example, the z-tmnsfonn p.(z) of Eq. (3.111) can be rewritten as

p.(zl = ~.. for !zl > l, (3.116)


z- 1
which has a zero at z ~ 0 and a pole at ;; = L
A physical interpretation of the concepts of poles and zeros can be given by plotting the Jog-magnitude
20log 10 jG(z)l. Now 20 log 10 IG(z)l i~ a two-d:imensiona.l function of Re(zJ and lm(z). Hence its plot
wEI describe a surface in the complex z-plane as illustrated in Figure 3.19 for the rational z-transform

G . l -2.4z~ 1 - 2.88z-2
(z)= l-O.Bz 1+0.64:: 2 .

lt can be seen from this figure that the magnitude plot exhibits very large peaks around the points z =
·).4 ± }0.6928 which are the poles of G(z). and very narrow and deep wens around the location of the
.reros at z = 1.2:::::; jl.2.

:3.B Region of Convergence of a Rational z-Transform


The ROC of a z -transform is an important concept for a. variety of reasons. As we shall show later.
•YiL'lout lhe knowledge of the ROC, there is no unique relationship between a sequence and its z-transform.
Hence, the z-transform must always be specified with its ROC. Moreover, if the ROC of a ;:-transform of
~equence includes the unit circle, the Fourier transform of the sequence is obtained simply by evaluating
the .z-transform on the unit c:ircle. In the following chapter, we shall point out the relationship between the
ROC of the .::-transform of the impulse response of a causal LTI system and its BJBO stability. It is thus
of lnrerest to investigate the ROC more thoroughly.
160 Chapter 3: Discrete-Time Signals In the Transform Domain

Figure 3.19: The 3-D plot of201ogl0 jG(.;:)! a'><~ fum,-tion of Re(Z) and Im(;:).

Figun3.20: The pole-zero plor and the region of convergence of Z{.l.t[n]J.

Now, the ROC of a rational z-transfonn is bounded by the locations of its :;mles. To understand this
relationship between the poles and the ROC, it is jnstructive to examine the plot of the poles and the zeros
of a .::-transform. Figure 3.20 shows the pole-zero plot of the ;-transform p.(z) of Eq. (3.116), where the
location of the pole is indicated by a cross ·•x" and the location of the zero is indi-cated by a circfe ''o". In
this figure the ROC, shown as the shaded area, is the region of the z-planejust outside the circle centered
at the origin and going tl>.rough the pole at z = i, and extending ali the v.ray to lz I = oo.

-
3.8. Region of Convergence of a Rational z-Transform 161

i ;

u;,

Figure 3.21: Pole-zero plot of Z{(-0.6)" J-1-[nJ}.

Tn general, the ROC depends on the type oftbe .sequence of interest as defined earlier in Section 2.1.1.
We examine in the next four examples the ROCs of the z-transforms of several differen: rypes of sequences.

-
111 ;: 'Wt t) :u;Ji!M fit {' %:
stw
/l,.l,
162 Chapter 3: Discrete·Time Signals in the Transform Domain

For a sequence with a rational z-transform, the ROC -of the z~transform cannot contain any poles and
is bounded by the poles. This property of such z-transfonns can be seen from the z-transf-onn of a unit
step sequence given by Eq. (3.116) and illustrated by the pole-zero plot of Figure 3.20. Another example
is the sequence defined in Example 3.24 and its z-transfonn given by Eq. {3.117), with the corresponding
pole-zero plot given in Figure 3.21.
To show that it is bounded by the poles, assume that the z-transfonn X(z} has simple poles at a and {f,
with jo:x i < IPI- lf the sequence is also assumed to be a right-s.ided sequence, then it is of the form

(3.!23)

where N 0 is a positive or negative integer. Now the z-transform of a right-sided sequence (y)" t4n - N,]
exists if
=
L ](y)•z-"1 < "'

forsomez.ltcanbeseenthattheaboveholdsfor!zl > jyjbutnotforiz:::::: :YI- The right-sided sequence


of Eq. (3.123) has thus an ROC defined by IPI < lz! -s oc. A similar argument shows that if X (z) is the
z:-tram:form of a left-sided sequence o-f the form ofEq. (3.123). with ,u[n- N 0 ] replaced by ,u.[ -n- N.,J,
then its ROC is defined by 0 :::; lz:l < Ia 1- Finally, for a two-sided sequence, some of the poles contribute
to terms for n < 0 and the others to terms for n :::_ 0. The ROC is thus bounded on the outside by the
pole with the smallest magnitude that contributes for n < 0 and on the inside by the pole with the largest
magnitude that contributes for n ~ 0.
Figure 3.22 shows the three possibleROCs of a rational z-transforrn with poles at z =a and z = fJ and
with each ROC associated with a unique sequence. In general, if the rational z.-transform has N po~ with
R di:otinct magnitudes, then it bas R + I ROCs .and, as a result. R + 1 distinct sequences having the same
3.8. Region of Convergence o1 a Rational z-Transform 163

!' ''"'"'"' d
""!T"V/{;"\, '

F1gure 3.22: Tile pole-zero plot of a rational ;:-transform with three possible ROCs corresponding to three different
sequences. (a) Right-Mded sequence, (b) two-sided sequence. and (c) left-sided sequence.

rational z-transfonn. Consequently. a rational z-trnnsform with a specified ROC has a unique sequence as
.its inverse z-transfonn. A rational z-transform without a specified ROC is thus not meaningful.
MATLAB can be used to determine the ROCs of a rational z-transform. To this end, several functions
need to be used. The statement ~ z, p, kJ = t=2zp (nu::n, den) determi.JeS the zeros, poles, and the
gain constant of a rational ;:-transform expressed as a ratio of polynomials in descending powers of z as
in Eq. (3.113). The input arguments are the row vectors num and den containing the coefficients of the
numerator and the denominator polynomials in descending powers of ;:. 7 The output files are the column
vectors 2 and p containing the zeros and poles of the rational z-transform, and the gain constant k. Tbe
statement f nuT., Ger.] = zp2 t f. \ 2 , ;J, k) is used to implement the reverse process.
From the zero-pole description. the factored form of the transfer function can be obtained using the

given as an L x 6 matrix sos. where

sos =
., .,
function sos "' zp2sos ( z, p, k}. The statementcomputesthecoefficientsofeachsecond-onlerf.actor

[ "" .,,
~.
~m
au
'"" ""
am
"''
an
. ]
'

bm. b~L b,L ace all. an


where the kth mw contains the coefficients of the numerator and the denominator of the kth second-order
7 Or, !'<jlliYa1ently. 11'. a;,cendmg powet~ <Jf! I.
164 Ct:apter 3: Discrete-Time Signals in the Transform Domain

factor of the z-transfonn G{z):

The pole-zero plot cf a rational z-transfonn can also be plotted by using theM-file zplane. The
z-transform can be described either :in terms of its zeros and poles given as vectors zeros and poles or
in terms of the numerator and the denominator polynomials entered as vectors nu..-rn and den containing
coefficients in descending powers of;;::

zplane(zeros,poles), z;Jlane (m..:rE, de:::1)

It should be noted that the argument zeros and pel es must be entered as column v--ecrors, whereas the
arguments num and de:1 need to be entered as row vectors. The function zplane pJots the zeros and
poles in the current figure window, with the zero indicated hy the symbol "o" and the pole indicated by
the symbol "x ." The unit circle is also inc!uded in the plot for reference. Tile automatic scaling included
.in the function can be overwritten if necessary. 8
The fo11ov.ing two examples illusrrate the application of the above functions.

4i 11\J; Chtt344
'% ~: ~H:::iv h;;rkt: ),,;w;v <<* ;;JMS 4'"1\i<iC fHL"rll<i
'0 17'\ 0 f't ,&'t 0/S:111Llkl B:'"'":'T"!ATVG$>lfrRji'ii
¥

0JVw;tv;Ht AU' iii' Act' ; £


'" ·~) 1 ~ ~ f '1R' AIJ:lllt 11'1:: "' J £
'{hA)> \Ntifptl;f{n'}\
:H: J&i?;' tJ:f, h'!ir I 4
*""

8
See !he manual for lhe Siglwl Proce5sing Toolb= fO¥ details [Kra94].
~that mMATLAB, the usual notar.ionfor .;-1 is- "i'' inskadof "J,"
.c. Reg•cn of Convarg 11C ot a Rational k Transform 165

ze ro!$ 811:'0 n t
- . 0000
- . 0000
-1 .0000 ~ l.OOOOi
-l.OCOO - l.DGGOi

P<.tle~ tt.rc
- .":l.Jd)1
1 .': H6l ..
o.sooo + o.e6~c~
0 . .,DOD - D. B(i{;(li

G.sin c:ons~c
0.61567

Pacllu.!l of polt:!3
! . ZHil
1.3361
1. 0(1 00
1.1!l000

~cond-ord~r ~~CELon~
o.66~7 o.~ooo o.~333 l.OODO :•. 0 0 0 0 -4 .toO DC
1.6ooo ~.oooa 2.tooo 1 • (1000 -1.0000 ,.0000

f(J 666? +0.41'! - 1 +O B13~- 2}(1 O+ 2 o.z-• ~ 2.0~ -:z,


Gl~~ = (1 .0 + :t.Oi:;-1 --4.0=-ll{l.O - l Or-l + I.Q;r-2}
-0 6661
(J + fil; - 1 + 8:::-:il ){1 + :![ I + 2-e:-1) •
r H k - • - h ·-ll(l - .t 1 + z-l)
The pole-u:u, pJot .de\teJoptd by 111e pros:mm ~ Jhqt,q i.n f·~ l:ZJ. Frum Bq. ( llJ) !be fCHir ~o.na tJI
rum asence thus-"' In be.
"R1 : tc' t:. k:l > 3.2l61,
1il.z: 3.2.:161 > rd > L,:l6l .
'R 3 = L2J.61 > l.tl > I.
"R., . I > 1.:1 ~ 0.

:I!PLE .JJO W~t: rliJ\Ir cnm:rdcr Llw:-determin.:w!On.orthe nWOJW z•IJiJisform (rem ltJIJX'roand pc~re tncu.ioas...
~ llll!~ a.re ar ~~ 0 "'I, h • 3.1-4, ~J · 0.) + JO.!io, t"4 = -U,:) - JO.:S: llz pt:~l~ .arc a.l. l.1 • -OA.S •
.\.:1! = CJ.67• .),l = 0.8J ;"0.72.. )~ 0 .81 - jrn2;. ~~~ IlK! gain Lit n 2.2.: 'tbe MA.TL.\8. propmn thin is
r mploycd ilD oompuL.; 11m t:r..m~,;porldlng r.ttiC'fl[l) --.nur:-~rrmu h .1:'1 follQWS:

PrQ9 r·am .3_8


1 O•t ~1nollon OI Lh~ Rat i ort l ~ u ~rdnbfn~
\ ttom its Pole~ ~nd 2cro~
1
166 Chapter 3: Discrete-Time Signals in the Transform Domain

c X

0
.
_J'
0
X j

-2L__-c----~---_j
-4 -3 -2 -1 0
Roal Part

Figure 3.23: Pole-zero plot 0: the z-transform of Eq. (3.124).


3.9. Inverse z-Transform 167

3.9 Inverse z- Transform


We now derive the expression for the inverse .:-transform and outline two methods for its computation.

3.9.1 General Expression


RecaU that, for::: = reJ=, the ::-transform G\z) given by Eq. (3.105) is merely the Fourier transform of the
modified sequence g[n]r-". Accordingly, by the inverse Fourier transform relation of Eq. (3.7), we have

(3.126)

By making the change of variable z = reP", the above equation can be converted into a cont-our integrul
given by
l .
g[n] = - . 1 G(z)/'-' dz. (3.127)
brJ 'fc_.,
where C' ls a counte~c2ockwi.se contour of lntegmtion defined by 1::1 = r. But the integrnl remains
unchanged when C' is replaced with any contour C cncirdlng the ?Dinl z = 0 in the ROC of G(;:). The
contour integral m Eq. (3. i 27) \:an OC evaluated using the Cauchy's residue theorem l.Chu90] resulting in

gLnl = L [r~idues of G\dz"- 1 at the poles inside C J. (3.12S)

1'\ote that Eq. (3.128} needs to be evaluated at all values of n and is not pursued here. Two simple methods
for the 'tnverse transfmm computation are revlewed next.

3.9.2 Inverse Transform by Partial-Fraction Expansion


The expres1>ion of Eq~ CU27) can be computed in a number of ways. A rational ;::-tramform Gl_z) with
a causal inverse transfonn gfn] has an ROC that is exterior to a circle. ln this case it is more convenient
to express G(z) in a partial-fraction expansion form and then detennine gfn] by summing the inverse
tramfonns ofthe individual simpler terrr.s in rhe expansion, A rational G(z) c:an be expressed as

G(:z) = P(z). (3.129)


D(z)

where P(z) and D(z) are polyoomials in z- 1 as indicated in Eq. (3.ll3). If the degree M of the numerator
polynomial P(::) is greater than or equal to the degree N of the denominator polynomial D(z}, we can
divide P(z) by D(::;) am.l re-express G(z) as-

M-N -
"' _, . P, (zJ
G(::) = ~ TitZ ;--- D(""), (3.130)
i=O _.._

where the degree of the polynomial Pl(z) is less !han that of D(z). The rational function Pt (z)/ D(Z) is.
called a proper-fraction.
168 Chapter 3: Discrete-Time Signals in the Transform Domain

*!4JI~;iq:,_ Gf1tx ~ 14 ::t


tt~!M&tre
U\lm!f1ttf 1!7

Simple Poles
In most practical cases. G(z) is a proper fradion with simple poles. Let the poles of G(z) be at:;= At;,
0 :::;: k :::0 N, where At are distinct. A partial-frac-tion expansion of G(z) then is of the form

N
" Pr (3.l31)
G{z} = ~ ~,-'c,'-f~=r,
i=l

where the constants Pt in the above expression. ca1Jed the residues, are given by

P< = {1- Ac- 1)G(::l[ .. (3.132)


<.='"<
Each term of the sum on the right-hand sde of Eq. (3.l31) has an ROC given by z > IAe I and, thus, an
inverse transform of the form Pt(Ad" ttlnJ. Therefore, the inverse transform g[n] ofG(z) is given by

N
g{n] = LPe(At)'',u[n}. (3.133)
i=l

Note that the above approach with slight modificarions can also be used to determine the inverse z-transform
of a noncausal ~quence with a rationall-transform.
The inverse transform computation via partial-fraction expansion of a rational .:-transform with simple
poles is considered in the following example.
3.9. Inverse z-Transforrn 169

Multiple Poles
If G(z) has multiple poles, the partial-fraction expansion is slightly of different form. For example. if the
pole at z = v is of multiplicity L and the remaining N - L poles are simple and at z = J.e, 1 :::;: f. .:::;: N - L,
then the general partial-fraction expansion of G{z) takes rhe form
M-J; N-L

G(z) ~ L .,,-z + L I
f=O t=l

where the constants y,· (no longer called the residues fori i= l) are computed using the formula

" -
i- (L t)!(
l dl.-i
v)l- 'd(z l)L '
[
. (1- vz-')'Giz)
ll ;:=v
. Isi:::;L, (3.138)

and the residues Pt are calculated using Eq. (3.132}. Techniques for determining the inverse z-transforrn
oftenns like yJ(l - vc 1}1 are described later in Example 3.4L

3.9.3 Partial-Fraction Expansion Using MATLAB


TheM-file res:;_duez can be used to develop the partial-fraction expansion of a rational z-transform
and to convert a ;:::-transform expressed in a partial-fraction form to its rational form. For the fonner
case, the statement is [ r, p, k) = residuez l num, den), where the input data are the vectors nun
and den containing the coefficients of the numerator and the denominator polynomials, respectiveJy,
expressed in descending powers of;::, and the outpt;.t files are the vector r of the residues and the rnunerntor
constants, lhe vector p of corresponding poles, and the vector k containing the oonstants Tfl· The statement
[num,den] = residuez\r,p,k/ isemployedtocarryootthereverseoperatioo. The applications
of these functions are considered in the following two examples.

f,, fl
*' t ''% t"'t 'i :x ! ""?i:',t_,c:r:; Js, ¥::Wif+lll:t+¥40!f'Ji r't f ft:m £- ; 'sh&i

'
71.) CtJFipler ;.j , 0 .,r..;fo!!LU. T me t-itgn IS Ill I re rtat Silarm Dornarn

-., ~ inpu~l''lflJC tn dnQ:-:~in c-or coe.t.:fieH~JH~ •)•


Lr.p.k l , n:t!lo~dur.~-:dn m,d1;i'n.•~
di!:pl' Re:~nduc!!! •): di!lp I r:-• I
uia ('P~leB'lrais (p'•
o isp ~'CQnB~n~&'):rllsptk)

Duriag ualdio11 I1Jr prog;rollfll aiL'I for the iop• i 11vt1 .u'C the veclnrJ n 8J!KI nul ~ ~i=ru ,lf the
nu~mtor Gill! ttr lkllOIII.i fl PQ'.)'Mlm.uah. '~~pecll~ly. in d~ndm8 PJ1'o't"f'l; nf z. ~ daLt lll"l! cJ:~tcnd in§jdt
$q~Ure ~~ Wlow•~

num • (lEI I
den - £18 ) -4 -11

The lpUI urc lk n:: 1..Juu 1111.:1 Ehc ~·n · LS, Ilk: pole nf d1 e.lp~UU~Un t.•r G(ll in :bt furm nf Eq. 0. 1:-11n..
f'mr Dllr ~1! ~ ampll!, ilic:sc .art" ou. SJ V'l!:ll bel(JW

Ra!Jidu
0 . 3600 0.2400 o. 1a oo
Fol~s
0.5000

constant~

l'l."otc tlw lhlc ~-IJ"anSt'artn of Eq. c1 ll9l dnuhl~ ar 1. - L/J = -0.3JlJ. The tuu 'I."TlltJi)' m 1 tk
ruJdi..to; und PQin ILW!l J:buye ooctn)ILlrvf' ro the .s.unple pc;-le [ lll ( 1 - 0.5:=- 1 ). lht t.ec:tmd crW)' corr~
!~;~llv::Slmpk pc.>hdacrur(l OJ)JJ;e-t). :111 1 ~lbi.Jda.try~OOJ!rothe~fl +O.J.JJJ::- 1)2 lllllle
JWii!ial Fnlc ·ern QJWblan. lbu.~ ~ dc1ii«d n~c-"' IS ih~D b)
0.36 0 24 0.4
Gc z: ) = + +H
...,...--~-~--'"11;
I- G..s~-• I+ UJJ;J,;:-1 +0 3

\\'e now CCJCSi:dl!ll 1~ dc«:arn.tliiLIJlll of the aorul rnrmof 11. "t·I:J'aru;fGnD fiu.n •L• 1-
fnu:t)111! ~ LCili :Je~'"l!tian U _s,J\II:Sl by E.;J (~. J40). 'The " 1 L\lJ ~m &:haL be ~ 11u lind the
, o.aJ.ronn is&~"'~:~~ ~t..,.,..

I Progt"31T1 3_10
Pa~t.i~J •F::act..Lon I:.JCI)olln&!Oll to Rn~Ional z-Tr nsfou1

t" • input ( • 'JYp in th~ ret~ i d'l.I~!S • ' I ;


p '"' .ln:;lu r; L • 'l'yp. 1n L.: h.e pol '!B •l r
k ~ npu~['Typ~ nth& eon~t nts- '};
[llUr.l, ~ro.) .t~-.du~:o:~r.-p,io:r~
diiSpf'N r~tor polynomi coef!lc:i~nL:!:t'): diep(JJu:::::a)
M

disap~ 'De-noo.inatoc p::.lynum1n'l ~~!f1ci nt~:~ 'l ~ ditlprde.nl

Dormg c~X~~Lioo, •tw: pr~ req!A'5r.s [.bt. m 11"' d u \ol!\.'101' r nf idtatS. tt.e 'ltl:Q_Qf pur pole
\le.!l~1r of COIU.I;.rnr~ wi cadi eplen:d wan \ql.l' b~.k.t:u ~ follD'Jo\1•:

~ • 10.4 0.,~ O.J6;


• l-O. ·l.:n -.(l.J.!-33 o.s]
3.9. Inverse z- Transform 171

! ; ;:p4r ;rvn> {]p,r t "~wt/ r,,Lu;; , ,j ;;,,; tJLiikN\11; n anJ :<;t dnA ""'''" ,;,,w;wl;swn'''"'~'"'''w''"''"'" "' '~::~':~::::
&A )\"abd:hrd h;;;Lt;; ti <' '"' A'ltW tHE \!Aii; ,~Mi!bc;wPJ> \; \~hit 0dll!At Zk W: ~ '"; I "J't; ;("'\AD
l"JI\/JI Chf"jfl; rp;;v i" f ' 'It

3.9.4 Inverse z- Transform v~a Long D-iv~s!on

For causal sequences, the .:-transform G(z) can be expanded into a power series in z- 1 In the series
expansion, the coeffiL-ient multiplying the term ::-" i..;; then the nth sample g[n]. For a rational G(z),
a convement way to detennine the power series is to express the numerator and the denominator as
polynomial" in z- 1 , and then obtain the power series expansion by long division.

' '
,, ,,
'
' '
172 Chapter 3: Discrete· Time Signals in the Transform Domain

3.95 inverse z- Transform Using MATLAB


The inverse of a rational z·tran~>fonn can also be readily calculated using MATLAB. The funclion impz
can be utilized for this purpore. Three versi:ons of thls function are !:l.."'l foUows:

[ic, l] irepz (num, ::len), lh, tl i rr:;::;z. ( num, den, L/ ,


[h, t.] impz!num,dcn,L,FT)

where the input data consist of tbc vectors num and den comaining the coefficients of the numerator and the
denominator polynomia1s of the z-transfcrm given in descending powers of z. the output impulse response
vector!:, and the tUne index vector t. In the first fonn, the length i.. of his determined automatically by
the computer with t = 0 : L- L whe~Us in the remaining two forms it is supplied by the user through
the input datum L. In the last fonn, the time interval is scaled so that the sampling interval is equal to the
reciprocal of FT. The default value of FT is 1.
Another v•ay of arriving at this result using MATLAB is by making u..o;:e of theM-file filter as indicated
below:

y = filter(nurE,den,x)

where y is tbe output vector containing the coefficients: of the pov.:er series representation of H(z) in
increasmg powers of z- 1 • The numerator and denominator coefficients of H(z) expressed in ascending
powers of z-l are two of the input data vectors num and d2n. The length of xis the same as that of y,
and its elements are all zeros except for the first one which is a I.
We present next two examples to illustrate the use of bo!h functions.

Blt011 , 1di t
({ f 1: h# ~t' !VHT j 0' P
3.10. z-Transform Properties 173

; !

> ': 1 >1

:fL'lA MJ'IL +:: "fr,jf W+: de iCJ !Au n:L t{IL Att vyt:C "tt1nsstit!11B off £1 t Y { 34 j fM MA 11 ""'if ptzt,ijHl(!t f!'tlll PQ) 1\:w W:M"k]
'' t'ltSt>1iWH::w tkw Yt1VVPIV ir •{YQF\f\APt{ ! if 4 { 1f1:i'fJi( r Uktjktt\t{1& 14 jlfw{(f\ ,f\l:&JI1\:w,

1 >+£4 1) 1( i'Jt
'r YG {"<0 ,"0/:if/ltf f.</

L
{ ; ! :;
9 ;,;:/:ti::L&:J t; f/ ' Lk "r ' >t"• '} t/L:rt4>+
1\ '"tqf ¥;;;;"It'?~ 0
]/ {¥"}\""'\ :"

A",YiJt !
'"< v-jd \"

-
3.10 z- Transform Properties
We stJmmarize in Table 3.9 some specific properties of the z-tnmsform. An understanding of these prop-
erties makes the application of the z-transfonn techniques to the analysis and Ge.Ygn of digital filters often
easier. Tile reade:r is therefore. encourage<: to verify these properties. We consider the applications of some
of these properties in the next several examples.
174 Chap!er 3: Discrete·Ttme Signals In the Transform Domain

'Thble 3.9: Some useful properties Df lhe ...:-transform.

Property s..,....,. z -Transform ROC

gfn] G{z) 'Rg


h[n] H(z)
""
Conjugation g"'[n] G*iz*) R,
Time-revetsal gl-nJ G(l/z} I{R.g
Unearity ag[n] + tlh[nl aG(z} + .BH(z) Includes R.8 n R.n.
Time-shifting g{n -no] z-""G(z) 'R.g. except possibly
the point z = 0 or oo
Multiplication by
an exponential a" g[n] G(z/cx) !ai'Rg
'"'uence
Differentiation dG{z) 'R8 , except possibly
ng[n] -z--
of G(z) dz the point z = 0 or oo
Convolution g[n]@h[nj G(z)H(z) Includes 'R.8 n 'Rh

Modulation g[n]h[n) ~ fc G(t•)H(z/v)v- 1 dv Includes "Rg Rb


00
Parneval'srelation L gin1h'"[nJ = ~ §c G(v)H"*(l/v*)~>- 1 dv
• 00
Note: If 1?.8 denotes the region Rr < lz! < Rg+ and RJ, denotes the region Rh- < !zl <
Rh+, then I(R8 denotes the region I/Rg+ < h:l < 1/Rr ood n 8n,. denotes the region
R8 - Rh- < !zl < R 8 + Rh-1-.
3.10. L·Tra.nsform Proportes 175

f:XA.."\IPLH J.JJ ~rm ne lbe ~.tr.lll,(urm X(d of~ cau n::als;cq,~CPec 1[11] .r'ta.est.L~PI'I)Jol.fnJ
ROC'~ ~ 'lilo"e eqJreli~ ..r(11] a:. a wm urI 0 ~s.d e.x:pootnli.al 5(':()

l']n) = ~,JI~J._,~fn] ~,.tr,.-r..,....,i["']


..." l = Ill' [/f) + t.··(rl •• Mic:ll";
1911111 ~ Q •sJLnl •

.n!Jl ao = nJ . 1'lv! :<·•nmsrorm V , ..::) 1:1 t•l 11] From Tllb :U i,; lri~ lh:.
I J I
V(~l = J · ~- • ,
'I I -or;;;- I 2 I - r,-JOiv.;::-1

I I
l - l ----~.........,.
V'Cl:'") • l2 ·
I - a•:- "' I- ,,.-J '!-•· 1~1 >leal= r .
lbcrdort:, by~ 1~..tnl)' fH'UliEri..Y of ~he :i:-transf~nn. we: oblrnn

.Y: (::;) = 11(:, + v ... -l.,


1.:1 :Jlr r.

AMFr . J .40 r>r:tcJJJl.l Lllc ::- iJOI'Jn ri.~~ uJ lhi: ROC ~;~f ~~~e~ ~["] = ll'l + l}a,.~[nl. Lc:t
~IIIJ =
a"',w.ll'l!] 'Ti1ei:IV."C"4,:'1n 'lilofi!C .>ll'll = ~Ltln t+ do~~J. Fro.u Eq. ().I 10), lbe :::•IJZri!I;F: or ["] . St'W:D b)'
L
.Vld = , 1-d > lfrJ .
J -~~~ - 1
"Na1. in lbt .tiff~utbuun prupcrr:r of ll"h.bU= 9 , v.~ .nrri'\'C nr :the ~-'IIU arm ol 11 .llJn~l as

dX(.z) a.c 1
-.: ~~ ( I - a:c l)i'

I
Y4Z J = 1-l!r:t- 1

lzl • !.
lu. I ~ROC is C~tcrior ta &!Cltt.lt: tit~ 1 n.
lbc iMcne l.fal\:;.foun Ill;. nglll·~ckd $Cqlltl10e 1bG pan.W.:fnlcti'on
CK~ of G I:;:) l:w ~r:l ~tcrrNDICd J fu.~twpl.co J.J) and i~ liYa1 b)' (3.1-'W). J-lrum TiltH~ 3..3. the lll'lmiC:
:t·~llak•rm of 1/t I - 0 ..5::-~' J i~ (0:~" r.t [111, aDd lk m~ne ;::-~o;bifl or 1/ll "t r IIJ].! -I' 111 4- 1,11)... l'll"J
176 Chapter 3: Discrete-Time Signals in the Transform Domain

The time-reversal property and the convolution theorem of Table 3.9 can be employed to develop the
expression for the z-transforrn of the cross-correlation sequence rgh[l] of two sequences g[n] and h[n]
in terms of their z-transforms. Let the .<:-transform of g{nl be denoted by G(z:) with an ROC 'Rg, and
let the z-transfQilll of hg[nj be denoted by H{z) with an ROC Rh- Now, recall fromEq. (2.106) that the
cross-correlation sequence r gk[f] can be expressed in terms of convolution as

rg>[l] = g[f]@h[-l].

Using the time-reversal property, we observe that the z-transform of h[ -l] is simply H (z- 1). Therefore.
using the convolution theorem we obtain from the above equation thai

(3.143)

with the RCX::: given by at least Rg n Rh.


As in the case of the discrete-time Fourier uansform, the Parseval's relation fm the z-transform given
in Table 3.9 can also be used to C<lmpute the energy of a sequence. To establish the required formula, we
let g[n] = h[n), where g{n} is a rea1 sequence, in the expression given in Table 3.9. This leads to

(3.144)

where C is a closed contour in the ROC of G(z)G{z:- 1). Note that if the ROC of G(z) includes the unit
circle, then that of G(:C 1 ) will also include the unit circle. In fact, for an absolutely summable sequence
g[n), the ROC of its z-transfonn G(z) must in<:lude the unit circle. In this case. we can let z = ejw in
Eq. {3.144}, which then reduces to Eq. (3.18}.

3.11 Transform-Domain Representations of Random Signals


The notion -of random discre1p-time signals was introduced in Section 2.8 along wid! their statistical
characterizations in the time-domain. These infinite-length signals have infinite energy aru;1 do not have
transform-domain characterizations like the deterministic signals. However. the autocorrelation and tbe
autocovariance sequences of stationary random signals, defined by Eqs. (2.147) and (2.150), are of finite
energy and, in most practical cases, their transfonn-domain representations do exist. We discuss these
representations in this section.
3.11. Transform-Domain Representations of Random Signals 177

.3.11.1 DiscreteM Time Fourier Transform Representation


The discrete-time Fourier transform of the autocorrelation sequence ¢xx[l] of a WSS sequence X[n],
defined in Eq. (2.147), is given by

:w1 < -:r, {3.145)


i=-x

mtd is usually referred to as the power density spectrum or simply, the powe-r spectrum 10 of X[n]. It is
denoted by Px x (w). The above relation between the autocorrelation sequence and the power spectrum is
more common1y known as the Wiener-Khintchin£ theorem. A sufficient condition for the existence of the
power spectrum Pxx(w) is that the autocorrelation sequence ¢xx(l] be abwlutely summable. Likewise,
the discrete-time Fourier transform of theautocovariance sequence rxx[ C} of a WSS ~en'-"eX[n ], defined
inEq. (2.150). is given by

=
rxx<e 1(t)) = L Yxx(ne- 1 ~. (3.146)
i=-oo

A sufficient condition for the existence of rxx(eiw) is that the aurocovariance sequence Yxx(C] be ab-
~olutely summable. Applying the inverse discrete-time Fourier transform to Eq. (3.145) and using the
notation Pxx(w) = ~xx(ej""), we arrive at

¢xx{f] =- ll'
2Jr - -]f
Pxx(w)el"'~ .. dw. (3.147)

It follows from Eqs. (3.147) and (2.162) that

E (;X!nJ! 2 l
}
~ <lxx[O] ~ ,_1
.t..lt
1'
_,.
Pxx(w)dw. (3.148)

Thus, .¢-xx [OJ represents the average power in the random signal X [n]. Similarly, the inverse transform of
Eq. (3.146) yields
Yxx[l] = -I
2H
!'
-rr
. ""
f'xx(.e 1w)eJ dw. (3.149)

From Eqs. (3.l49) and (2.163) we get

O"i = rxx(OJ = -2JrI j' r xx(ei"') dw


-"!<

I
~-
2.-r
j'
-][
Pxx(w)dw -lmxl·. ' (3.150)

Appjyingthediscrete-time Fouriectransfurm to bothsidesofEq. {2.1600:), we can show that the pow-er


spectrum 'Pxx(w) of a WSS random discrete-time signal iX[n]J is areal-valued function of w. lnaddition,
if {XlnH is a real random signal, Pxx(w) is an even function of w, i.e., 'Pxx(w) = "Pxx( -w). We shall
dc:monstcate later in Section 4.13.2 that for a real-valued WSS random signal, Pxx(w) 2: 0.
178 Chapter 3: Discrete-Time Signals in the Transform Domain

Likewise, the discrete-time Fourier transform of the cross-correlation se-quence t/Jxy[.f] of two jointly
slationary mndom signals IX[n]} and {Y[n]j. given by
oc
<t>xy(el"') = L ¢xy[l]e-i"'t, fwl < H, (3.1Sl)
f=-00

is usually referred to as the cross-power spectral densit)· or cross-power spectrum. It is denoted by


Pxy (w) and, in general, it is a complex function cf w. A sufficient condition for the existence ofPxy(w)
is that the crrn;.<,-correlation sequence 1/Jx y[i] be absolutely summable. Applying the di~te-time Fourier
transform to both sides of Eq_. {2.l66c), we can sbow that Pxy(w) = "PYx(w). Similarly. we can define
the discrete-time Fourier rra.nstOnn of the cross-covariance sequence Yxr(t] as
=
rXY(e}"') = L yxy[iJe-JWt. lwi < :r. (3.152)
i=-0<.\

A sufficient condition for the exisrence of rxy(el.,) is that the cross-covariance sequence Yx.r[t:] be
absolutely summable.
The relation between the discrete-time Fourier transforms of the autocorrelation sequence and the
alltooovariance sequence can be derived from Eq. (2_J 61) and is given by

rxx(.eJ'') = Pxx{w)- 2rr :mxf' 8(m). lwl < ;;r, (3.153)

where we have u~ed the notation Pxx(w) = 4>-xx(ej""')- Likewise, the relation between the discrete-
time Fourier transfonns of the cross-correlation sequence and the cross-covariance sequence follows from
Eq. (2.i65) and is given by

lwl < lt, (3.154)

where we hm:e used the notation Pxy{w} = <Pxr{efw)_

3.11.2 z- Transform Representation


As can be seen from Eqs. (3.153) and (3.154), the Fourier transforms of the sequences rxx[l) and yxy[l]
contain impulse functions. As a result, their z-transforms do not exist in general. However. for zero-
mean stationary random signals, the z-transform of the autocorrelation sequence, 4>xx(z), and that of
tfle cross-conelation sequence, $xy(z). may exist under certain conditions. Since the autocorrelation
and cross-correlation sequences are two-sided sequences, their region of convergence must be an annular
region of the :onn
I
R1 < lzl < - . (3.155)
R,
We can generalize 5ome of the results of the previous section if the ::-transforms exist. For example.
fr,:nn the S}IDI:letry properties of the power spectrum P x x (w) and cross-power specttum Pxy (w). it follows
that 1>xx (z) = 4>"ix0/z•J and cf> xy(z) = 4>Y.x01z"). It also follows fromEqs. (3.150) and (3.153) that

1
oJ = _1 4>xx{z)z- 1 dz.
2rr; fc
(3.!56)

where C is a dosed counterclockwise contour in the ROC of¢ xx(z).


3.12. Summary

2
"x r ~P-xxli_!

t
0
1a)

~-xfW)

"-~

w
1Jt 0 2-n

{b)

Figure 3.24: (a) Autcx-orrelation sequence and (bl power ~J>e('t:-.>.1 density of a white noise.

3.11.3 White Noise


A random _precess {X[nJJ for which any pair of samples, Xlm land X[n I areuncorre1ated with m -=/=- n. i.e..
E(XfmlX[n]) = E(Xfm])E(X[n ]), is called a white ramlmn process. For a WSS while random process.
the autocorrelation sequence is given by

.PxxiO = a{Jlfl + m1. o. !57)


and the -eorre~ponding power -spectrum is. given by

(3,158)

A zero-mean white \VSS random proces..<:. has an autocorrelation sequence f/;_y x[tl that i;; an impulse
sequence of area o} and a power spectrum Pxx(w) that i:-. of constant value a; for all values o-f tJJ, as
indicated in Figure 3.24. Such a random pnx:e:s..-; is more commonly called -...-·hire noiu and plays an
important role in digital signal processing.

3.12 Summary
Th-ree different frequency-domain representations of an aperiodic discrete-time sequence have been in-
trodm:ed and their properties reviewed. Two of these representations, the discrete-time Fourier l.ransform
{DTFT) and the z-transfonn, arc app-licable lo any arbitrary sequence, whereas the third one, the dis-crete
Fourier tmnsform (DFT), can be applied onlj' to tioite-length requences. Each of the>:e representatiOr!S
co:1sists of a pair of expressions: rhe annlysL<> equation and the synthesis e-qualuJn. The analysis equation
is used to convert tl-om the time-domain representafion to the frequency-doma:n representation, while the
synthesis equation is used for the reverse process.
Relations between these lhree transforms have been established. The chapter ends with a discussion
on the transform-domain representation of a random discrete-time sequence.
For future convenience we summarize below these three frequency-domain representatmns.
Chapter 3: Discrete-Time Signals in the Transform Domain
18J

Discrete-Time Fourier Transform {DTFT)


Analysis equation:
X

X(e_i"') = L xfnle-Jwn.. (3.159)


n=->X

Synthesi~ equation:
(3.160)

Discrete Fourier Transform (OFT)


Analysis equation:
N-l
X[k] = L x[n]W!~, (3.!61)
.~

Synlhesis equation:
l N-1
x[nj = ~ LX[k]WN(r., (3.162)
Nf=iJ
where
' -
N- ,-j?::rjN
. (3 !63)
''

z-Transform
A:1alysis equation:
=
X(_z) = L x[1ljz-". (3.164)
n=-=
Synthesis equation:

x[n] = - 1-. 1 X(z)zn-l dz. C in ROC of X {z). (3.165)


2:r} k·
We apply the concept~ developed in jUs chapter to the repre5entation, analysis, and design of the
so-::ai\ed linear. time-invariant (LTI) discrete-time systems in the following chapter.

3.13 Problems
3.1 Let X (ej"') denote the DTFT of a real sequence x(n]. Show that the real part Xre{eJW; and the magnitude function
JX (ei"')f of X (elw) are even functions of w. and tbe imaginary part X im {ei"') and the phase funtion argf X {eiw J} are
odd fu;x-tions of w.

3.3 Derive the DTFTs oft.i.e following sequences given in Table 3.1: (a) ,'.L[n]. and (b).ejw.,n.

3-5' Pro\le the fo[)owing pruperties of the DTFT listed in Table 3.2: (a) Time-shifting. (b) frequency-shifting.('-')
differentiation m frequent-)', (d) convolution, (e) modulation. and (fl Parsevai's relatioo.
3. 13. Problems 181

3.6 Let }{ (rl'-") dewJle the DTFT of a ct:m;dex. >;equer...:e x!r.]. Express ~he DTFTs of lhe follnwing sequences in
term~ of X(,)~'J: (a) ,;[-nl. (h;.r""[ -nl, (c) Re!x[nJJ, (d) jlm[x[nJ}. (e}x,;,[nj. and (t) :t..;a[nj.

3.7 Let X (L'J"'} denote the DTJ..T of a rea! sequence x(n"!. Det~rminc the inverse of the fotlowing DTFfs intenns of
xfnj; (a) Xre(ef""), (b) jX,m(ri·~).

3.8 Let X(ef"') denote the DTFT of a real sequence ..:{nj. J>r,J,.c the fclk>wing symmetry rdatlons: (a) X (ef'u) =
X*(e~jrv), (b) Xrek;"') = X,e(e~J"-'). (c) X;m(e 1"") = -X;mk-j"'), (d) IX(ei"')l = ;x(e-j"'ll. and (ej
argj X (t•jw)) = - argfX (e- j'")J.

3.9 Lei X(ef"'-i dennk the DTFf of a real sequencex[nj.


ia) Show that if x[nJ i!S even, then it can he cowpoted from X {ej"') using: x(n "! = ~ J: X tei""} coswn dw.
!b) Show that if xlnl is odd, then it can be computed fron1 X (ef"'} using: xlnJ = -;!:- f!: X {e~"') sinwn dw.

J.lU Detennine the DTFT of the causal se<zuence xfnl = A a" ~os( w 0 n -7- ¢' )p:-fn }, where A, :t, w 0 , and¢ are real.

3.11 Determme the DTFT of eat..il of the fui:owing sequences..

Ia> xlln]=a";~fn+lJ, :al < l, (h) _{2'n) = na"pc{n], !al <


.'
{c) .q{nl = I
al"l '
0.
:n! :2M,
otherwise,
(d) x4[nj =a" tJ:[n ~ 3]. Ia< < j,

(e) .t5[n] = na" tJ:fn + 2!, I• I <I. {t) _t6fnj=a·'~p[--n~ IJ, !aj > I.

3.ll Determine the DTFf of ea<.:h o; tlk folluv.ing scquem:cs:


:,
-N < n < ,V,
Yt{r.J = I 0. othe;,'isc~

(b) '"'"'lnl
- ""
= II -x.
0,
i'" -N ::= n::::
otherwise,
/\,'

(c) nlnJ
--
=I cos{;;rn/2N),
O.
-N :S _n ~ N,
otherwzs.e.

J.IJ Show that the inverse DTFf of

is given by

3.14 E\'aluate: the inverse DTFf of each of the following DTFTs:

( J.) (bJ

l -1 -..,,~N ·-·"
(<.:)
X', e,•w J = ..::....£=Un:>s~. (d)

J.I5 Determine the inverse DTr"T nf eacb of the fo\lov.ing DTFTY

(a) H; (ei"') = l + 2co~ ( u - 3 co;, 2M, (h) H2(eiw) = (3 + Zcos0 + 4cw 2w) cos(wj2)e- j<4 1 .
(c) H3(eJ"') = j {3 + 4cosw + 2cos 2w} s1n w, (d) H4(eJ'v) = j (4 + 2 cosw + 3 co~ :w} sin(w;'2)efro/2.
182 Chapter 3: Discrete-Time Signals in the Tr2nsform Domain

J.Hi Determme the invers-< DTf•T <:>1 each of !he following DTf'T~

!al '
!/ 1(ei"'J= 1 +2co•,,u+_:\;.;o~-,:,>, (b)
(c) u,,.t'-'-")-= jr:'-+4cos<u+ 2co:;::':w)~inw. (d)

J.l 7 LC'I Xi ei"'} denote t~ DTFI" oi a real sequence xfn]. Exp c~-' the inver-.;e DTFf _r!n] of Y (c.!'" J = X ref-'''") in
tcr~sof tln!.

3.18 Let x (r·jw) denote the DTFT of areal sequence .tlnJ_ Deline i"tei'-"l =! {xeej«~/ 2 ) + X(-("i"'i 2 }f. Deter-
.-nirn! the inverse DTFT _>In] of Y{e 1 '"J.

'
3.19 \\'nhout n'mputing the QTFf. Jetermtne which om:,. of the in: lowing scquem:es have rea!-n!lued DTF-Ts. and
whi•:h one'> have imaginary-villued DTFTs:

,, (.
I
-N _:: n .::: Jli,
n · -N ::;n _::: N,
(o) r
x; n J =
I O, otherwise.
(b) --t2!nl =
()
()
otherwi:>e,
for n even,
(c) <;[nl = (d) x.J-11:] = _1__
:;n· for n odd,

x~lnl =
Io. 'n><,-,
.
n = 0,
!nl "" o_

3.20 With-m1t "omputing the mvenc DTFC dtkrmme which nnt>,~ of the foll{lwing DTFTs have an inverse that :san
<OVel' scqut•ncc, and wh1ch one~ have an inverse that;,; <Ill oCd sequem:e:

0 S 1'•'1 _::: "-'c,


w~ -< ~Ni :::=: rr.
-::;r < w < n,
iU
0-<d-<-'l.

J.2l \l/1thout .;omputins: the inven.e DTFr, detErmine which one c-1' the DTFTs ofFig1.1re P3.l J--.a;; an mverse that is
an C"'ien ~equence and which one ha~ an inver:-:e that is an tltid .;eqiKilC~.

(a> (bj
Figure P3.l

J.22 Ld X{e-'"') den<>te [he DTFT of a real seqw:'"Ik'e x[11]. Deten-:-jne the DTFT Y{e,-"') of"lhe ~e4uence y!ni =
dltHE>-tl-n] m term~ of X(PJw) and .;hew that:! i~ a real-valued fwx:tion of tv_

3.23 A Kquem:e--1lnJ has a 7Cr<J---pha~e DTFT X(ejM) <B sketched in Figure P3.2. Sketd; the DTFT of tire sequence
\"fnk-j:-;:n_i}_
3. 13. Problems 183

X(ej")

'
X .,, /k 0 1ti3 '
X
w

FigureP3.2

:~r· tn I
3.24 Using Parwval's relation evaluate the following integrals: (a} fo 5+4"'easwdw. (b) JO 3.25 Jcuswdw. and
{c} ft t5 4!~w}2dw.
3.15 Let xlnl be a length-9 sequence given by

{x[n/J = {3 0 -2 -3 4 1 0 -1/
t
with a DTPT X(eiw}. Evaluate the following functions of X(~"') without computing the lrallsfonn itself:

(a) X(el 0 ), (b) X(ei'"),

J::_;r IX{ejw)~
2
(c) j!!_n X(ej"') dw. (d) dw.

(e) j!!. .. ldxj:,'w> !2 dw.


3.20 Repeat Problem 325 for the length-9 sequence

{x[ni} =1-2 4 ~1 5 -3 -2 0 4 3).


t

J.27 Let G 1(eJ"') denote the discrete-time Fourier transform of the sequence Kl {n} shown in Figure P3.3{a). Express
tile DTFTs of the remaining sequeBCes in Figure P3.3 in termsuf G1 (efw). De note-.'a.luate GJ(ej"').

g l[n]
I g2[nJ

I
¢ !
'i 2

n
0 I
"(a) (b)

'? 8;{n} If ginl


t

n
I
-9
2 ! I n
"" 3(c) .j. 'i 6 7 01234567
(d)
........ 1'33
Hl4 C:,apter 3: Discrete-Time Signals In the Transform Domain

3.~~ Let y fn-1 denote the linearcoiJVOlutionofthreeseque•x:-e'>, x ;In]. .-z[nJ, andr;>(n], i.e., y[nJ = x;ln }@.l:2fn j@x3fn].
Shttw thal

(a)
0 ~00 y(n]~ (J=x,(nl)l~=X}[n])(J=x;l•l)
n~ (-1)"y[n]~ (~00 (-l)"xl[n]) l~00 (-l)"x2[r.]) l~00 (-l)'x3 [n]).
(b)
00
3.29 Let x-[n] be a causal aru.i absolutely summable real sequence with a D1FT X (ej"'). lf Xre{el"') and Xim (ej"')
de·rwte the real .and lmagjnary parts of X (ef"'), ~how tha: they ace related a;;

{3.J66a)

(3.166b)

The above equations are called the Jiscrete Hilbert lron~form relations.

3.:10 Verify the identity of Eq (3.28).

3.:11 The perimiic convol:4tion of two periodk sequences, itnJ and k[nJ, of period N each, is defined by
N-1
Yin]= 2.::: i[r]h[n- rj. (3.167)
r=U

Show that j[n] is also a periodic sequence of period N.

3.::02 Determine theperiOOic sequence j(n] ohlained by aperiodic convolution of the followiogtwoperiodicsequences

r r
of period 5 each:
forn=0.2, ii:xn =0.
fmn = 1, ' 0 for-n=1,3,
i[>l] = ['
-2, focn = 3,
h[n] =
1: forn=2,
3, forn=4, -2, forn=4.

3.~·3 Detennine the peri odie sequeme }[nj obtained by aperiodk convolution oflhe following two periodic sequences
of period 5 eacll:
2, forn=0,2, [, fmn =0,
-1, fum=!, ' 2 forn=J,
i(n] =
{ 3, for 11
-2, forn =4,
3,= h{nj = _: ,
0,
{
3 forn=2,4,
forn=3.

3.34 Let .i[n] be a periodic M:qUetlee with period N, i.e., .i[nJ = i{n + lNJ, where tis any integer. Tbe sequence
~fnl can be_ r~re~ted by a Fourier serie« given by a weighted sum of periodic complex exponential sequences
'(lg[nJ = e 1 2."' "II\. Sbov.· mat. unlike the Fourier series re~cntation of a periodk continuous-time signal, the
Fourier .sen!'s representation of a periodic discrete-time sequence requires onl)' N of the periodic complex exponential
sequeDCes Wk[nJ,k =0, I, .... N -l,andisofthefunn
N'
itnJ = "~ t ifk]eiZ"kn(N' (3.l68a)

·~
3.13. Pro~ems 185

where the Fourier coefficients i[k! are given by


N-l
X[ki = L i.'[nje-/?.rrl;;nJN- (3.!68h)
k~

Sl-m\-\- that i[kl is also a periodic "Sequence ink with n period N _ These: of equations in Eq. {3.147) l"t"present lhe
discrete Fourier series pair-

3.35 Determine the discrete Fourier series coefficient~. defined in Eq. (3.168b), of the ful!.)wing periodic sequences:
(a) i1tnJ = cus(.-<n/4),
{b] .izlnl = >in(Jt"n/3} + JcoS{Jt"n/4).
3.36 Show u~ing
'
Eqs. (3.J68a) and (3.168b) that the periodic impube train

=
p[n 1 =- L .l[n + tNl
i=-o-.;

can be t:~pres.~ed in the furm

3.37 Let x[nJ be an aperiodic sequence wilh a DTFT X(el"'). Define

-)k) = X( e
X ;m'l
t tt=2xkjN = X( e jhk!N,,. -oc < k <: = .
Sbow that ifkJ is a periodic sequence ink with a period N_ [..e{ Xik] be the discrete Fourier series coeffidenls of the
periodic &equerrce _i[n]. Show using Eqs. (3.; 68a) and (3.168b) that

=
ifn} = L .'t"[n +rN].
r=<-=

3.38 Let ifnl and _"fin] be two pcriodic ~eque:nces with period N. Denote tht:ir diS<.."fete Fourier series coefficien~.
ddined in Eq. O.l68b), as X[k] and f[k ]. respectively.
(a} Let iinJ = iln JY!n] with G[kJ dernxing its discrete Fourier series coefficients. Show using Eqs. (3.168a) and
(3.168b) that G[kj can be expressed in temls of k[k] and i(kj as

N-'
Gtk] = ~ L .."itqfik- tj. {3.169)
£=0

(b) Lel H[kl = X[kJYfkl denote the discrete Fourier series coeffidenrs of a perioilic sequence hrnJ. Show using
Eqs. (3.l68a) and i3.l68b) that hlnJ can be-expressed in renns of i[n] and Hnl as
N-l
il[n] =L ifrLV[n ~ r]. {3.170)
r=O

3.39 Prove the following general properties of the DFT listed in Table 3.5: (a) linearity, (b) circular time-shifting, (c)
circular frequency-shifting, (d) duality, (e) N -;JOint circular convolution, {f) modulation, and {g) Pru"seva!'s relatioo.
186 Chapter 3: Discrete-Time Signals in the Transform Domain

3.40 Let xjn) be a !ength-N complex sequcnce with an N-poirt DIT X[k]. Determine theN-point DFTs: of the
following Jength-N :.equences in terms of X[k ]: (a) x*[n], (b) x "1< -n} N]. (c) Re{x[tl ]), (d) jlmlx!nll. (e) x;x:sfnj,
and ifJ Xpcafn].

3.4l Let _:r(nj be a kngth-N real :-;equence with anN-point DFr Xfk]_ Determine theN-point OFT's of the following
!ength-N sequences in ter.ns of X[kj: (a) .t"pe[nj, and (b) x?"[n].

3.42 Let x[nj be a Jength-N real sequence with anN-point DFT X[kj. Prov-e the following symmetry properties of
X[kj: (a) X[k] =X*[ ( -k) N ], (b_l ReX[k] =-ReX[(-k)N ], (c) lmX[kJ =-
lmXr(-k)_;v]. (d) fX[k]( = [X[(-k)N ]j,
and (e) argX[kj = - argX[ (-k)N ].

3.43 Ct>n<Jider the following length-8 sequences detineC f= 0::::: n ::S 7:


la) {.q!n])=J! f 0 0 0 i IJ,
(bJ[x.zfnlf={l 0 0 0 0 -J -1},
(c) t q[n!J = {0 0 0 0 -l - 1},
(LlJ (.c.fnll = {0 0 0 0 I/
Which sequence~ h.ave a real-valued 8-point DFT? Wh.1ch sequences have an imaginary-valued 8-point OFT?

3.4' Lctxfnl. ()::;: n::: fi- 1. bealength-N~equencewithanN-pointDFT X[k]. (}_s:k:::; N- L


(a) Lf x[nl. i~ a symmetric sequen~ satisfying the L~ndhion x[n! = x[N - 1 - n], ~>how th.at XlN j2} = 0 for N
even.
(b) If x[nl is ~ antbymmetric sequence satisfying the condition x (n] = -x[N - l - nj, show that X{OJ = O_
(c) ff :r[n] is a sequence satisfying the condition xfn] = -xln _..._ M] with N = 2M. show that Xl2£] = 0 for
l=O,! ... -,M-1.

3.45 Let x[nj, ~ :S n ::: N- I. be an even~Jength sequence With an N-poiot DFT X{k], 0 s_ k < N- L lf
Xi2mj = 0 forO :S m ::::0 4-
t, show !hatx[nJ = -x[n+ i'-1·

3.4ti Le1x[nJ. {) :S n::: N -1, beakngth-NsequencewithanN-pointDFT X[kj, 0_::: k::;; N -1. Determine the
N-roint DFTs ofthe following lengtb-N sequences in terms of X{kj:
(a) w[n I = ax[\n - mtl N 1 + tlx!t.n- m1)N j. where m 1 and m2: are positive integers less than N.
(b) gf'l] = /x[nJ. forn even,
0. fO£ n odd,
(<:) _1-'ln] =X[n]@x[n].

3.47 LctxlnJ, 0 :S n ::0 N -1, beaneven-lengthsequencewithat~N-poirn:DFf Xfk}, 0::;; k _::s N -I. Determine
the !V-puinl DFfs of the following lengtb-N sequences in terms of X!k]:
ufnj= x[n!- x[n- -1-1.
dnl = xlnl + xf,.- -j-1.
ylnJ=(-l)"x(nj.

3.48 LetxfnJ. 0 ~ n ::5 N- 1, be.alength-N sequence with an N-:::>0-intDFT X[kJ. 0 ~ k _::s N -1. Determine the
N-point inven;e DFTs of the following lengtb-N DFfs in cerms of _r!nJ:
{a) Wjk I= a X[~.{ - m1L-v] + I3Xf(k - m2iN 1- where m; and mz are po1-itive integers less than N,
(b) G[kj = / X!kJ, fork even.
{}, fork odd,
{c) Y{k] = X[kl\3: X{k~.
3.13. Problems 187

3.49 Let x!n f, 0 ::S n ::S N - I, be a lenglh-N sequence with an N-point DFT X[.t]. 0 ::S k S N - 1.
(a) Shov. that if N is even and if x [nj = -x[n + ~ j for o;)J n, then X{kl = 0 fork even.
(b) Show that if N is an integer multiple of 4 and if xfnl = -x!n + ~ 1 for alln, then X{Aj = 0 fork = 4-L 0 S
t:::~-1.

3.50 Let xj.n]. 0 ~ :n ~ N - I, be a length-N real sequence w1th an N-poinl OFT X[k], 0::: k ~ N - L
(a} Show that X[N - kj = x•{k].
(b} Show that X[O] is reaL
(c} If N is even, iillow that XjN /214 is :eal.

3.51 Let GjkJ and H[t] denote the 7-point DFfs oftwo length-7 sequences, g{n.] and hfn], respectively.
(a) If G[k] = ~I-+ 12 -2 + j3 -1- jl 0 g + )4 -:! + j 2 + j5} and h[n] = g[\n- 3),J, determme
H{l:] without computing the OFT.
(b) Jf ginJ = {-3.1 2.4 4.5 -6 I - 3 71 and Hfkj = GW:- 4}J}. determine h[n] without computing
the OFT.

3.52 Let Y[k] denote theM N-prnnt DFr of a leagth-N sequence x[n] appended with (M - l)N :zeros. Show !hat
theN-point DFT X[kl Qn be simply obtained from Y[k] as foHows:

X[.t] = Y{kMJ, O:s,k::.;N-1.

3.53 Let xfn]. 0 :S n :::; N- I, be a Wngt.'l-N sequenee with an MN-point DFT X[J::}, 0 ::5: k ::;: M N- 1. Define

y_fn] = x[(n)Nl. O~n:s,MN-1.

How would )'QU compute tbe M N-point DFr Yfk] of y[n] knowing only X[k]?

3..54 Com;ider the !eil.gth-12 sequem;e. defined for 0 ::;: n ::::_ 11.

{x[n]J = {J - l 2 4 -3 -1 0 1 -4 6 2 51.

with a 12-poinl DFT g:ven by X{k). {}:: k .:S I L Evaluate the fnllowing functions of X(k] without computing the
DFr:
u
(') L"
H
(a) X[O], (b) X[6}. (c) L Xfk]. (d) Ee-jl4di6Jxtkl. IX[kJI'·
k=O 1=0 k=O

3.55 Let X[k] be a 14-point DFf of a length-14 real sequence A[n}. The first 8 samples of Xlk] are given by

XfO] = 12, X[l]=-l-+j3, X12J = 3 + j4. xm= I-JS,


Xf41 = -2 + j2, X[5J = 6 + j3. Xf6.l = -2- j3, Xt71 = 10.

Determine the remaining samples of X[k]. Evaluate the following fum;:tions of x[tt I without cmnputing the IDIT of
X!k]:
lJ 13 lJ
{a) x[OJ. (b) x[7J, (c) I>·{n], {d) Lef(4nn/1}x[n], (e) L jx[njf.
"=0 n=O
"""'
3..56 Let g{nj and h[nj be two finite-length sequence& of Jeogth 7 each. If yLfnJ and yc[nl deBOte the linear and
7-point circular convolutKms of g[nj and hf<t ], respectively, express yc[nJ in terms of nfn.].
188 Chapter 3: Discrete-Time Signals in the Transform Domain

].57 The even sample>. o£ the ll-point OFf of a lenglh-11 real ~uence are gi~ b)' X[O] ~ 4, X[2] = -1 + j3,
Xl4] = 2 + }5, X[6; = 9- }6, X[8] = -S - j8, and X[IOI = ,J3- j2. Deternune tbe ffilSS.Ifi§ odd samples of the
DFr.

3.58 The following six samples of the 11-point DFT X[kj' of a real length- II sequence are given: X{Ol = 12,
X[21 = -3.2- ]2, X[3] =
:U- j4.l. X[SJ = 6.5+ j9,X[?] =
-4.1+ j0.2,and X[IO] = -3.1+ }5.2. Determine
the remalning five samples.

3.59 A !eJOgth-10 sequence xfnl has a real-valued H)-point DFT X[kj. 1be first six samples of x[n] are gh.--en by:
.-~:[0] = 2.5.-o;[lj = 0.7- jO.OS. x[2] = -325 + j U2.~!:3! = -2.1 + j4.6,x[4j = 2.87 + }2, and x[5] = 5. Find
the remaining four -.amples of xln].

3.60 A 498-point DFf X[k] of a real-va!ued sequence x[nj has the following OFf samples: X[Ol = 2, X{ll] =
7 + j3.1, X[kJl = -2.2- )1.5. X[II2] = 3- }0.7, X[k2l = -4.7 + jl.9. X{249] = 2,9, Xf309j =
-4.7- jL9, X[k3] = 3--t- j0.7. X[4.12] = -2_2 + jl.5, and X[k4] = 7- j3-.l. Remaining DFT sample'> are
ass;umed to be of zem value.
(a) Determine the values of the iodices k1. k2, iq, and k4.
!b) Wh.atis!hedcYalueof!x[n]J?
fc) Determine the expression for lx[n H witlmut oomputing the lDFr.
(d) \\'hatistheenergyof[.1:[n]}?

3.61 A316-pointDFT Xlk]ofareal-wluedsequencex[n]hasthefollowtngDFfsamples: X[Oj = 3+ ja. XtJ7l =


U, Xlktl = j23. X[k2l = 4.2, X{l IOJ = - j L1, X[l58] = 13 + jfJ, X[k;>] = y + j L7, Xl179] =
4.2 + )J, X/2101 = tf - /2.3. and X[k4] = 1.5. Remaining DFT samples are assumed to be of zero value.
(a) Detennine the values of the indices kt. k2. k3. and~

{b) Determine the val.m.-..; of a, fj, J, and E.

(c) What is the de value of {x[nj}"


(d) Determine the expr=ion fur {x[n]} without-computing the lDFf.
(-~) What is the energy cf {x[n]}?

3.62 A length-& sequence is given by {x[nU = ~-4, 5, 2, -3, 0, -1. 3, 4}. 0::;: n 57, with an S-point
DFT given _by X[k]. Without computing the IDFI', determine the sequence y[nJ whose 8-point DFT is given by
Y[kj = wJk X[k].
3.63 Let X[k] deJWte the 6-point DFT of the length-6 real sequence x[nJ shown in Figure P3.4. Without computing
the lDFf. detenni.ne the length-6 sequence gln J -w-hose 6-point OFf is given by G[kl = Xfk]. w;t
'
'
'Y' n
' -' 2
' ' j

Figuh! P3.4

3.64 Let g[n] and h[n] be tv.o finite-length sequences as given bciow:
(g[nl! 1-3 2 4). ~2 --4 0 1}.
t t
3.13" Probiems 189

(a) Detenninc yLI_nj = gFn:J@h!n].


(b) Extend g[n J to a Jeng:th-4 sequence g,..[n] by u:ro-pad.:lmg and compute -"C tnJ = ge(n ]€)h[n].
(c) Detemliru: yc[nj using 1be DFT-based approach .

.).65 Show that the circular convolution is rommuta:ivc.

3.66 Let ylnl = .q(n]t:SJx::dn!®xJ{n] where the seqocn:es .t; lui. l ~ i :::; 3, are defined furO :S:: n ~ N ~ l. Prol.e
the f-ollowing equalllles:

N-1 )
~{-l)11 XJ[nl forNeven.
(

3.67 Let Xtk! denrn:ethe N-pointDI--1 of -a length-.\' s~\Jence.x;n;. Determine dre N-point OFT 'l'!k] ofilie N -pomt
sequence y[nj = cos((2rrln}/N)x[nj.

3.68 Let x[n] be a length-N sequence ?.il.h an ,V -point DFI gnrn by X(kj. Assume N :is divis1ble by 4. Define a
sequence
_l'{nl = .r[4nJ,
Express the (N/4)-poinl OFf Y[kl of _rfn; in terms of X[kj.

3.69 1be &-point DFT of a length-S complel S<::.jUen<"e vlnJ = xfnl + jy[nJ is given by

Y!OI = -2 + j3. V"fll = 1 + )5, V[2J = -4+ J7. V[3] = 2 + J6,


Vf4J = --1 - J?>, itf5! = 4- J. V/61 = 3 + 18. vm = J6.
Without eomputmg the-IDFf of l' fl. L determine :he IS-point DI--Ts X[k] and Y[kJ of the rt:al seqt;ence.<; xfn-l and y[n},
res~tively.

3.70 Compule the 4-point DFis of g,.lnl and ltin} of Problem 3.64 using a single4-~XJint DFT.

3.71 Detemtir.e the 4-point DFfs of !he following pair of length-4 >equences by computing a single DFT:

lg{r.;; ~ (-2 -3 4). ~hfnn = l! 2 ~3 2}.


j t

3.72 Let Plk) denote the \M + l )-point DJ-<J of the numerator ..:oeificiet1ts and D{k} denote the (,'"' + l )~p<:linl Df-<T of
the denominat-cr coefficients of a rational discretc-hmc Fou:ric translorm X (e-'"') of the fonn ofEq. (3.17/. Dctem1Lne
the exact expressions of the DTFf X (ef"') forM = N = 3 if the 4-poinl OFT's of its numerator and denominator
coefficients are as given below:
(a) {P[kJl = {4. l + j7. 2, I- j7}. JD[k]J = {4.5. 1.5--'-- J. -5.5. L5- jl,
{b) lP!kll = {7. 7 --t- ;2. -9. 7- ]2/, lDikH = (0, 4 + J6. -4, 4- j6j.

3.73 Conltider a length--N sequence x[nj with a DTFT X(el"'). Define an M-point DFT X!kl = X(-t)"'~ ). where
Wk =2nk/ M. k = 0. l. -.. , M- L Denote the inverse OFT of Xtk:J as i[n]. which is alength-M sequence. Expn:'~
r[n] in ten:ns of .i f<rl and show that r[r.j can be fully recovered fmm i{nl ooly if M 2: N.
190 Chapter 3: Discrete-Time Signals In the Transform Domain

3.74 Let X(e}"-') denote the DTFf of the sequence

{x[n]J = (1 lj.
1

(a) For the OFT sequence Xlfkj obtained by sampling X(ej"'} at uniform intervals. of :Jr/5 s.truting from w = 0,
determine the IDFT xt[nl of X 1[k] without computing Xie 1"") and X 1(kj. Can you recove.. x[nl from .q[nl?
(b) For the OFT sequence X2 [kJ obtained by sampling X(eiw) at uniform intervals of n/3 starting from w = 0_
determine the IDFT x2!nJ of X2!k] without computing X (e.iw) and XzLk]. Can you ruover -t"[n] from t2Inl'~

3.75 Let x{nj be a length-N s.eqtJence with X[kJ denoting ils lV-pointDIT. We represent the DFT operation as
X(kl =
T!xln}i. Determine the sequence y[nJ obtained by applying the OFT operation 61imes to xfnj, i.e,

y\n] = FI.Fi.:F{Ft.FI.:Flx[nl}}}}H.
3,.76 Let.~[nJ and h(n] be twc length-40sequences defined forO~ n :'5 39. It is k:nown th~ h(n] = 0 forO_::: n s ll
and 28 :S n :5 39. Denote the 40-point circular convolution of these two sequences as uf liJ and their linear con'l'ulution
ID' y[n]. Determine the range ofn for which y\nJ = u{nl.

3. 77 The linear convolution of a. length-55 sequellCe with a length- J 100 sequence is to be computed using 64-point
DFrs and IDFrs.
{a) Determine the smallest nlliWJer ofDFrs. and ID"FI's neederl to compute the above linear convolution using the
overlap-add approach.
{b) Determine the smallest number of DFT.s and fDFfs needed to compute the .above linear convolution using the
ove:rlap-save .approach.

3.78 (a} Coosider a lengtb-JV sequence x[nJ. 0 ~ n .::: N- l, with anN-point DFT X[kl. 0 .::: k .::: N- L
Define a sequence y[n] of length LN, 0.::: n .::: N L - 1, given by

ylnJ=I.t[n/L], n=O,_L,2L, ... ,{N-l)L,


0, otherwise.
where Lis a positive integer. Ho;;press lhe N L~ut DFT Y[k] of y{ni in tenns of X{k).
(b) The 7-po.int OFT Xfkl of a length-7 sequence x[.11j is shown in Figure P3.5. Skercl1 the 21-polnt DFf Y(k) of
a leogth-21 sequence y[nj gcnernted. using Eq. (3. 17 1}.

FigureP3.5

J.~ Cor.sider two real, symmetric length-N sequclleel!, x[nJ and y[n], 0 :::= n ::S N - 1 with N even.. Define the
length-(N /2) sequences.:

xofn] = x[2n + 1] +xf2n], .XJ[n] = r[2n +I] -x[2n].


YO!nl = y{2n + l] + y[2n], Yl [nJ =y(ln. + lj - y[2nj,

where 0 .::: n :::;: ~ - L It can. be easily shown that xo[n} and ,m[R) arc real, symmetric sequences of len,gtb-(N j2)
each. Likewise, the sequenc.esxt[n] and Yl [n} are real and antisymmetric sequences. .Denole the (N /2)-point DFfs
3.13. Problems 191

n!· Join J, x1ln ]. Yofn ], and J 1fn I by X o[kj, X rJkj, Yo[k ], and Y dkJ, respectively. Define a length-{N /2} sequence
!t Itj 1:
ufnl = .t{l[nJ + y:lnl + jix;lni + YolnJ).
Oetermine X o! kJ, X 1fk ], Y()]k ]. and Y l [k 1 in term.<> ofthe (N /2J-poillt DFf U [k] of uinJ.

3.84} Le~ X[k l denote the .V -point DFT of a iength-N sequence x{n 1 with N even. Define two length-(N /2) sequences
g:ven hy
g[n} = 1(x(2nj + --d2n +I;}, h[nj = ~(x(2nl- x(2n + l]), 0 :S: n _:s:Jf - L
tf G{kj arul Hlki denote (N /2)-polm DFh of g[nj and hlnl, respectively, determine theN-point DFf Xfk} from
these tv,"\) { N/2)-point DPrs.

3.81 Let X~i:l denote the N-p'->intDFf of aJength-N sequencexin}with !'.'even. Define two length--(.¥ /2) sequenc-es
given by
g[nJ=a,x[2nj+u:2x[2tl+lL hfnJ=a~t{2nJ+a.Ff2n+l}, O:::::n:s:i-1,
whcreall4 # a:;ra3. If G[k} and H!kl denote (N /2)-point DFfsof gin1 andhfn]. re~-pectively,determinE theN -point
DFT Xfl:.] from these tv.o (N /2)-point OFT;;.

3.82 The genna!i.zed dis.creu- Fourier transform (GDFf) is a generalization of the conventional DFT to allow shifts
in e;ther or both indices of the transform kernel [Bon76]. The N-point genernlized discrete Fourier transform
XGoFrlk. a. hj of a length-N sequence x!nl is defined by

XonFr[k,a,bl= L
:V-l
x[njexp
('
-j
br(n+a}(k+b})
N . (3.172)
n=O .

Sho"'· :hat the inver~ GDFT is given by

N-l ( 2rr(n+a){k+h))
N
1
x[n] = ---;- L XGDFT[k, a, blexp j
N
. (3.173}
- ko70

3.83 Show that fm a causal <;equen.:e _-r fn.l defined fur n :::: 0, anO with a z-transfonn X (z),

.tiOl = lim X(rJ.


~-cc

The ahove result is known as the imticd value rheor·em.

3.<i4 Consider the ;:-tran:;fonn

(z + 0.4)(;::- 0.9l)(z1 +0.3z +0.4)


G<zJ = - - , - - . (3.174)
(-r- - O.fu + 0.6)(;:2 + 3z + 5)
There a>e three pvss1ble nonoveriapping regions of co.rrvergence (ROCs) of this z-transform. Discuss the type of
imerse z-lrnnsform (left-sided, right-sided. onv.u-sided ~quences) associated with each of the three ROC's. It is not
neces$ai)' lo compute the exact im"erse transform.

3.85 Consider the following sequence<;:

(i) x:(nj = WA)"p:~n), (ii) x2ln] = (-Q.6)"~tln}.


(iti) X3(n1 = (0.3i'\•(n- 4], {iv) -4fnl = (---0.3)" ,u~-»- 2).
(a) Determine the ROC:> of the z-transfunn of each of the :above sequeuoes.
H.l2 Chapter 3: Discrete-Time Signals in the Transform Domain

(b) From the ROC's determined in part (a).determine the ROC~ of the following sequences:

(i) YJfn]=.qfnJ+xz[n}. (il) Y2[n] = Xt[n] + x3{n},


tiii) _r:;lnJ =Xtfnl + x.o.[n]. (!v) Y4[r.j =X2[n] + XJ[r.J.
(v) yo;ttt! = x2fnl + x:,.lnJ, (vi) Y61n~ = -'3[n] + .q[n]

3.1:16 Derive the ;:-transform~ and the ROCs given in Table 3.8 of the following sequences: (a) 5[n], (b} a ,uin }, (c)
11

(r'' cn<,<n0 nJ.tJ.In j, and (d) (r" sin w.,n),u [nl

3.fi:7 Show that the following three sequences have the f'arne <:-tran'>fonn: fa) :q [n] = 6 {(0.5/' - (0.31') p.[n], (b)
x2::nl = -6(0.3) 11 ,u{u]-6{0.5J'1 Ji[-n -- 1]. and(c)x5[11] =6((0 3)"- (O.S)"),u[-n- 1].

3.E:8 De!ennine the .;:-trnnsfonn of each of the following sequences and their reo;pective ROCs. Assume t8l > ;aj.
Stl'JW their pole-zero plots and indiGJte clearly the RCX:: in these plots.
(a) XJ(nl=ta"+#"J,u{n} .
.:bJ t:2fnJ=ct 11 p.r-n-ll-tl 1tp:fn],
(c) XJ[!II = a"p:jnl + f;Y';,~!-n- ~].

3Ji9 Prove the following general properties of the ::-transform hsted in Table 3.9: (a) linearity. (b) time-reversal, (c)
time-shifting. {d) multiplication by an exponential seque:~ce. (e) differentiation of the z-tnmsfonn, {f) convolution, {g)
mcdu!atmn, and (h} P.Msevar., relation.

3.'!10 Let the .;:-transform of a sequence x[n] be X(z) with 'Rx denoting its ROC. Express the .z-t:ansfonns of the real
and imagmacy parts of x[nJ in terms of X(z) Show also their respective ROCs.

3.91 Tile .;:-transform X(4) ofthe length-9 sequem:e of Problem 3.25lli samp[ed at six pointiH!Ik = rrkj3. 0 ~ k 55,
on the unit circle yielding the frequency samples

O::<:;k::;:5.

De·:ennine, without evaluating X[kj, the periodic sequence i[n] whose discrete Fourier series coefficients are given
byit_kj. Whali$theperiodof.i-{nj?

3.9~ Lel Xi.;:) denote the z-nansfonn of the !ength-12 sequeucc x[nJ of Prob!em 3.54. Let Xolkl represent the
samples of X(.;:) evaJuated on the unit circle at 9 equally spaced points given by z = ej(brt,/9"), 0 5 k ::5 8, i.e..

O::<:;k::;:8.

Detennine th.e 9-point lDFr xoln 1 of Xol:k] without computing the latt« function.

3.93 Let X(<:) denote the :-transform of xfn.l = (0.4)" ,u[nj.


(a) Determine lhe inverse z-transform of X(::1 } ""ithout .:omputing X (z).
(ll_l Determine the inve!'.>e .;:-tmm,fnrm of (1 + ;:-t )Xf::: 2 ) withou! computing X(<:).

3.94 De!ermine the ~-transfonns of the sequeLces ofPrcblem 3.11 and their ROCs. Show that the ROC includes the
unit circle for each :--transform. Evaluate the 4-transti:mne-.'aluated on the unit circle for each sequence and show that
ll ~~ preci,~ely the DTFf of!he respective sequence computed in Problem 3.11.

3.9!i Determine the ;:-tranrlonns of the sequences of Problem 3.12 and their ROes. Show that the ROC includes the
unitdrcle for each z-tramfomt. Ev-aluate the z-tramformevaluated on the unit circle for each sequence and show that
it is predsely the DTFf of the respe.;:twe sequence computed in Problem 3.12.
3.13. Problems 193

3.96 Dctermine the z·transfonns ofL':Ie followlngsequer.ces and !heir respective ROC:.: (a) xi(n] = -a"' pl-n- lj.
(h) .Qfn] =
«"' ;L{n + 11. and (d XJ fnl = a·" tt (-n 1-
;:;,_97 Determine the z-!ranstOnn of the two-Side.J sequence vjn j = a!"l. 'WOat is its ROC?

3.98 Evalu:<te the mve:r~>e ::::-tran5fur::ns of !he following z-transfurms:


- ;::(z-1)
(a_) ¥ 1(;) = 1 . izl > l.
i::+l){;::+:;-1
;:_(.;: - ll I
\b) r2c~• = ------ 1~1 "" ]'
\:::+lH::: ...... j-1'
I
< \z.~ < l.
3

3.9~ Determine the inverse z-tramfurm of the foliowiag :-transforms:


4- 3z- 1 + 3:::- 2
(aJ X,.t<:} = . izl > 3.
(::+2;(<- 3}2
4-3z-l +Jz-2
{b) Xi;.(Z) = -:--::-'>-· I::: I< 2.
(_ + 2)(z 3)-
4 -1--l +3--2
{c;X,.f;:)= ---. '- . 2 <: 1-Ci < 3.
- (z +2)(.-: .n 2
3.100 Cons.idrr a rational z-transfonr. G(<:) =
P{z),'Di_z) where P{z) :md D(z) are pofynvmials in :::- 1 _ Let {Jf
denote !he :residue of G(;) at a simple pole al z = Ae. Show that

= -1.- P(z) i
N - t D'(z) 1__ • •
1,--~t

L
w;·J<Ore D'l ZJ. = dDi~)
d;[:_ •

3.101 Consider !he .:-transform G(z) of Eq. OJ 13) with M < N. [f G(.:) has only simple pole». show that Pofdo is
equal to the sum of the residues in the partial-fraction expansion of G(z) [Mit98b].

3.102 Show that the inverse .::-transform hfn; of the following calional z-tnmsfo:rm

!-z<>r>O

is ;~iven b-y
r" sinfn + l)tl
h(nf= .- ·,'.L[nj.
smO

3.103 Prove the following properties of the .:-transform listed in Table 3:9: {a) Conjugation, (b) time-reversal, {c}
time-shifting, (d) mu!tiplicnliun by an exponentiaf sequence, (e_l differentiation. (f) convolution, (g) modulation, and
(h) Pa.~val's relation.

3.104 Let the z·t:rans:form of sequence x[n] be X(;:) with an ROC R..-. Show that
I
Z{Re(x[ni)J = :zP((::) + X'"(z*)},
Z{lm(xlnHi = ~{X(z)- X"'-iz"'}j.
194 Chapter 3: Discrete--Time Signals in the·

3.105 Determine the inverse .:-transforms, .q fn l and ..tz[n}, of the following rational z-trarnllo
l
XJ(Z) = I-' 3' lzf > r,
I
(b) Xz(z) = l - z 2' lz: > 1,
by expanding each in a power series and computing the inve-rse ;;-transform of the individual ter
Compare the results with that obtaJned using a partial-fraction approach.

3.106 Detennine the inverse z-tr.ansfonn:s of the following z-transf-orms:


{a)X:(z)=log{l-aC 1). \zl>!o:L
1
(b) Xz(z)=log(<>'-[ ), !zl > ljjal.

' 1
(c) X3(Z}=log(l-rr.:·· ) I.:~ > Ia!.
,

(d) X4(<:) =log C.-:-1), !:I> ljjal.

3.100 The :-transform of a causal sequencex[.nj is given by .:n:- 1 /(l -az-1)2 . Using Tables
x(n}.

3.108 The z-transfonn of a right-sided sequence h[nj is given by

z-2
H(z) = .
(.;:: +0.4){::; 0.2)

Find its inverse z-transfonn hl_n} via the partial.-fraction approach. Verify the partial fraction u

3.109 A genemlization of me DFT concept leads to the nommiform discrete FOurier tronsfo.
defined by tBag98]
N-1
XNDrrlkl = X(zl!) = L.xtnJzk", O.::;:k::;: N- I,
n=O
where zo, z1 . ... , Z.N -1· are N distinct points located arbitrarily in the z-plane. The NDFT t
effident design of digital filters, antenna array design, and dual-tone muhifiequeoey detectim
can be e>~:pressed in a matrix form as

XNDFT[OJ x[OJ l
rL
XNDfT[l] .r[lJ
=D,v . '

XNoFriN- 11
] [
x[N°- lj J
where
-(N-1)
l 'o-(N-l)
1
,,
'1
-(N-l)

r-(N-1)
N-1
3.13. Problems 195

is theN x N NDFTmatrix. The matrix D.v is known as the 'J.tmdennondemarrix. Sbow that it is nonsingularprovided
the N sampling points 1:1: are distinct. In which case, ire inverse NDFT is given by

x[O]
x[J] ] -I [ XNDFT[O]
XNDFI[I] J (3.178}
[ x(N;- lj = DN XNDFTIN- l]

3.110 ln general, for large N. the V.andermonde matrix is usually ill-conditioned (except fill" the case when the NDFT
reduces to Ole conventional DFT). and a direct inverse computation is not advisable. A more efficient way is to direcdy
detennine the z-tnmsfonn X (z),
N-1
X(z) == L .>:[n]z-n, (3.179)
.~

and hence, the sequenoe.xfn J. from the given N -point NDFT X NDn{k l by using some type of polynomial interpolation
method [Bag98]. One popu!M method is the Lagrange i:ntetpOlation formula. whicb expresses X(z) as

N-l fk(Z)
X(z} = E ~)x!\'DFT{k],
k=O k~Zk
(3JS0)

/t(Z) = n
N-1

i=O
( I - liZ- 1 ). (3.181)
i ... k

Consider the z-transfonn X(z) == 4- 2z-1 + 3z- 2 + ;;-3 of a length.-4 sequence xfnj. By evaluatng X(<!:) at
zo = -1/2, z:t = l, Z:2 = l/2. and Z3 = 1/3. detennine J:b(' 4-point NDFT of x[n.] and then use the Lagrange
interpolation method to show that X (z} can be uniquely detenn.iaed from these NDFT samples.

3.111 The discrete cosiliC tronsftmn (DCT) is used frequently in image-coding applicatioDS based on thecompn:ssion
of transfonn coefficients [Lim90]. We develop here the DCT computation algorithm via the DFT. Let x{n] be a
Jengtll-N sequence defined fa.- 0 .::; n .::; N - I. First ;t[n] is extended to a length 2N by zero-padding:
x [nl={x[n]. O:=:onsN-1,
e 0. NSnslN-l.
Then a length-2N sequence yfnl is fonned from .l€[n] according to
y[.n] = x.,[n] + x,.[lN- 1- nj,, 0 S n s 2N- l,
and its 2N-pomt DFT Y{kj is com~XJted as
2N-t
YlkJ = E y[n]W~, O.::;::kS2N-l.
=<J
TheN-point DCT Cxlk} of x[n] is then defined by

Cx[kl = ( W:.fY[k], 0::::: k::::: N- I,


Q, otherwise.
Show that
((2n+ l)krr·
Cxlkl = L
N-l

"=0
2xlnjc-os
2N
) , O;Sk:SN-1. (3.182)

NQte from the above definitiQD that the OCT coefficienn of a real sequence are real, whereas in general the DFT of a
real sequence is a.lwayscrunplex.. The DCT defined by Eq. (3.1&2) is often referred to as the even .symmetrical DCT.
196 Chapter 3: Discrete-Time Signals in the Transform Domain

J.il2 To form the inverse discrete cosine transform (IDCT) of anN-point DCT C.o;[kj, first a 2N-point DFT Y(k] is
formed according to
w2~:12c,JkJ, O:S:kSN-L
Y[kJ = 0, k =
N,
{ -w -~j 2 c... t2N -kJ,
2 N+l:5k:5:2N-I,
and its 2N-point IDFT y[nj is computed as

2N-t
y[nJ = - -
1
L Y[kJWit", 0:;:: rr::;: 2N- L
2N .t=O

Tite /\'-point IDCT of C...-[kJ is then given by

-I
x In J - y[n}, 0 ::: n::;: N- 1.
O, otherwise.
{a} Show that

l ~N-l a(k)C [kjcos (<2n+lbrk)


x[n] =
lN L..t=O
0,
x 2N ' (3.183)

(k)=ll/2, k=O. (3.184)


~ 1, l:5:k:SN-L

(bJ Prove that x[n] given by Eq. (3.183) is indeed the inverse DCT of the OCT coefficients C_,;[kl given by
Eq. (3.182).

3.113 Let g[nJ and h[n) be tW<J-length-1\' sequences with N-point DCTs given by Cg[k] and Ch[k], respecti"."ely.
Show that theN-point DCT C.,[kJ (!(the sequence y(n] =
a:g[n] ....._ _Bh[n] ill gi"¥-en by Cy[k} =
aCg[k) + ,8C.~,{k}.
where a and jj are arbitrary cofistants..

3.114 lf theN-point DCI of a tength-N sequence x[n] is given by C.,.-lk), show that lbe N-point DCT of x•tnJ is
gtvtm by c;[tj.

3.115 if the N-pcint DCT of a lengtb-N Kquence .x{n] is given by C_.-(k], show that

N~l J >~~l

L jx[n]!2 = 2N L a(k)!C_.-[k]!2,
1'1=<1 i=O

wherea(k} is ghen by Eq. (3.184).

3.116 The N-point discrete Hartley transform {DHT) XoHT[k) of a lengtb-N sequence x[n) is defined by [Bra83]

Xmrr[k] = 'I:1x[n] ( cos(2K;k) +sin(2K:.~)).


.~
k= 0, 1, ... , N -1. (3.185)

As can be s.een from the above., the Dt·IT of a real sequence is also a real sequence. Show that the inver8C discrete
Hartley transfonn (DHT) is given by

l N-1 ( ( 2 ~nk)
x[n)= N ExDm[k] {:OS~ +sin~)
(bmk ) , n=O,I, ... ,N-1. (3.186)

"""'
198 Chapter 3: Discrete·Time Signals in the Transtorm Domain

lihfn]

x[nJ -~x h[n] X

h[nl = v-" 2 '~'7


Figure P3.6

3.122 We wiRh co compute the L-pointchirp-z transform (CZT) samples X(ze), i = 0, 1. 2, .... L- I, of a kngth-N
se>.Juence x{nj acl:ording to Eq. {3.190), where ?.f = Av-l with A = A,e_if;l, and V = V"c}<i>o_ What are the
values of A 0 , 00 , V,,, and ¢G if the CZT needs to be calculated at points {zt J on !he real axis in the ;:-plane such that
zt = al ,0, C:: t _.,;: L- l.fma realandQ: # ±1?

3.123 Let yL[n} denote the length-(2N - l) sequence obtained by a linear com.nlution of twD iengrh--N sequeJK~
h[nj and x[n J. i e., yLfn J = x[n l@h[n ]. From the COD'Volution the.orem of z-tTansfonn (see Table 3.9) it is kDQ\o\.'111h~
YL{Z) = X{:.:::)H(z), where Yz.(z). XCz), and H(z) denote tbe z-transforms of the ~uences. .YdttJ, x[nl, and h[n].
respeethtely. This implies that the samples of the sequence YL [n] are simJHy given by the coefficients of !he product
of the two polyllornials X (z) and H(z). Now,let Ycfn i denote theN-point circular convolution uf x[n J and hin ), I.e ..
Yc[n] = x[n]@:hfnl It can be shown that the samples of the sequence yc[nj can be obtained from the coefficients
of thepol.ynomial Yc(z) = r·L(z) modO - l~N). Verify the above result for N = 3, 4, and 5.

Note: (y[N + mlz-N-m) mod(l- z-N) = y{N +m]z-"' where 0 :::::= m < N.

3.124 Consider a sequence x(nl wlth a z-tr.msfonn X(z). Delbe a new <:-transform i(z) given by the complel
natur.~llogarithm of X{;:), i.e., .i(z) = lnX(l).The irrveTse ;::-tran.sfunn of i(z) to be denoted by i(n.l i<;called the
complex cepstrom of x{n] [Tri79}. Assume that the ROCs of both X (z} and i(z) include the unit circle.
(a) Relate the DTFT X(eiw') of xfn} to the DTFr i{ei<») ofot'i comple:li cepstrum X[n].
(b) Show that the comple~ cepstrum of a real ~uence i~ a real-valued sequence.
!e) Let ie~~[nj and .ioo{n] denok, n:.s-pectiveiy. the even H.nd odd pans of a relll-valued complex cepstrnm X{n}.
Express i'ev[nJ and ioo[n] in tenns of X (e 1""), tile DTFf of xfnj.

3.125 Determine tile complex cepstrum .f[n) of a sequence x[n] = a.!i[n]-.;. M[n ~ l], where jhjal < I. Comment
orr your results.

3.12fi- Let x[n l be a ~uence with 11 rational z.-transform X(;:) given by

X(;:)= K [] k'~ I t 1-a,~;z -' ) n·'iy


k 1(1-yg)
Np
[] k=t(l -1hz -l I []N'
1<=1 (1-Jg}

where IX.J: and fJk are the zeros and poles of X (z) lbat are strictly .irn;ide the unit circle, and I iYk and 1/lik are the zeros
and poles of X (z) lha!: lU'e strictly OUl'iide the unit circle [Rab78].
{;l) Detennille the exact expression for the complex cepstrum _; !nl Df x:[n j.
(b) Show that i[nJ is a decaying tounded sequellce as :nf -+ oc,
(c) lf Ufc = fJk. = 0, show that .i[n} jg an antkausal sequence.
(d) if Yk. = bk = 0, show that .i [ n; is a causal sequence.
3.13. Problems 197

3.117 Let Xrnrr[kj deuote theN-point DHT of a length~N seqc~encc xlnl.


(a) Show that the Dl!T of x;{11 - no{ N J :s given by

(b) Determine the N~pojnt DHT of x[(-n}N j.


(c) Prove the Pa.-scval"s relation;

(3.187)

3.118 Develop the rclation between the N -point DHr Xm-rrlkl and the N -point DFT X!k I of a lengih-N sequence
xtn].

3.119 Let theN-point DHTsofthe three length-N &equen'"'esx[n]. g[nJ, and yfnl be denoted by Xrnn[kJ, GDHTl.kJ,
and fDHT!kJ, respectively. If y[n] = x~rrl@g[n]. show that

YnHT!kJ = ~ XnHT{k](Grnrr{kl + GnHTH-k)N 1}


{3.188)

3.UO The Hadamard transfonn Xtrrfkl of a length-N sequencex[nj, n. = 0, L ... , N- I, is given by [Gon87l
N-l
1 "'"' ')l-1 b ' \b·<k!
XHT[kJ=;;; L..-x[n](-i)~•~ll :\n, '··. k=O.L ... ,N-1. (3.189)
.~

wilere 171(r) is the ith bit in the binary representation of r, and N = 2i. In malri.l: form, the Hadamard transfonn can
k represented as

where
Xm = [XHT[O] Xmrtl -· XHT[N- i]j T ,
x = [xi.O! .x[lJ

(a) Determine the fonn of the Hadamard matrix HN fm N = 1. 4. aJ'!d 8.


(b) Show that

(c) Determine the expression for the inverse Hadanurd tram;furm.

3.121 In this problem we consider the computation of a limited number nf NDFf samples X (.z.e) of a lengtb-N
sequence x!.n J ai. Zl = A y-.f, 0 5 t 5 L - I where A = A 0 ej-6', and V = V0 eJtP,. with Ao and V0 being positive
real numbers. a.'!d in generar L < N. The contour spin.Js toward lhe origin as t increases if V, > f and it ,;pirah
uutward if V., < l. Thechirp-z transfimn {CIT) is then defined by [Rab69]
N-1
X£ze}= Lx[nlA-nv{n, 0.:-;:.f::-::L-1. (3.190}
"d)
Convert the ahove expres$ion into a convoluticn :mm using 1h.e identity

bt = 4ze2 + n 2 - tf- n) 2 ),
and show that the discrete-time system of Figure P3.6 can be employed ro compute the CZT.
3.14. MATtA6 Exercises 199

3.127 Let xfr.l be a sequence with a rational z-tnmsforrn X(;:) with pc.fes and zeTos strictly mside the unit cirde.
Show th;n the ~.:omplex cep~trum i'fn] of x[n] can be computed usmgtherecursion relation [Rab78J:

3.128 A :a:m-mean, white noise se;_juence x[nl with a variance a} is fed into an LTisystem w;th an impulse response
h[n] = (0.6')" u[n] generating tire oU!pu! t>[n]. The signal vl_nl is then fed into a second LTI system with..m impulse
re~ponse g[nj = (0.8)"J-i(nJ prodlic:ng the output y(nj. Determine the variance, o;J and o; of the s1gnah v[nl anJ
y[nj.

3.14 MATLAB Exercises


M 3.1 Using Program 3_1 de«=nine and pl~;~t the-real and imaginary parts and the nugnitude and phase speclra of the
foHowing DTFT for various vaiues of r-and 6;

0-<r-<1.

M 3.2 Using Program 3_I determine and plot the real and imaginary parts and the magnitude and phase spectra oft be
DTFTs of the sequences of Problem 3.12 f{Jf N lO. =
M 3.3 Using Progr:am 3 _I determine- .and ~ot the real and imaginary parts. and the magnitude and phase spectra of
!he following DTFTs:

. rw 0.076f(t -0,763le-i2"-'+e-_i4w)
(a)Xk- )= l+:.355ej2w+0.6I96eJ4w'

jw- o.os1s- o.tssx-;w + o. ;ss3e-1:>.w + o.Mi&- i 3"'


{b)X(e }= 1+1.2828e jw--+--1.0388e j2w+0.3418e j3w

M 3.4 Using MATLAB veriiy the- !ollnwing general properties ;rf the DTFT as listed in Table 3.2: (aJ Linearity.
(b) llme-shifting. (c) frequency~shif:ing. (d) differentiatlon-in-frequeocy, (e) convolution. (f) modulation, and (g)
~v.ar:. reLrtion_ Since all d;}ta in MAT LAB have to be finite-lengtl1 vecrors, the sequences lc be used to verify the
propertie.~ are thus restricted to be of finite length.

M 3.5 Using M.-HLAB verify the symmetry relations of the DTFf of a complex :.equence as listed in Table 3.3.

M 3.6 Using MATLAB verify the symmetry relatimLS of the DTIT of a reaf sequence as listed in Table 3.4.

M 3..7 Write a MATLAB program 10 compute theN x N Dl-T matrix D,v of Eq. (3.40) and then fQtiD its inv=e.
Using this program verify the relation given by Eq. (3.43) for N = 3, 4. 5, and 6.

M 3.8 Using MATLABcomputerll;>: N-puintDFfsofthelength-N sequencesofProblern3,12for N = 3. 5, 7, and 10.


Compare your re-st~lls wit.h. that obrnined by evaluating the DTFTs computed in Problem 3.12 a!&-' == 27rk/N,
k=U.l, ... , N - L

M 3.9 W:i~e a ~An.All progmm to :.:ompute 1he-eircularconvolution oftWQ length-N sequences via the DFT-based
;;.ppma<.'h, Using this pmgram detennin..: !he dxular convoh.ltiOfl of the following pairs of .seque'lces:
200 Chapter 3: Discrete-Ttme Signals in the Transform Domain

!a) ~!nl = 1.1., 4, -2, 0. f. -4), h[n] = IL -3. 0, 4, -1, 3},


(b) xlnl = {:2 + j3, 3- j, - J + j2, ]3, 2 + }4},
v[nj = j-3- j2, 1 + }4, I - j2, 5- j3. I+ )2!.
lc) x[nj = sin(?Tn/2), yln] = 2", O~n ~4.

~ 3.10 Using MATLAB pr-ove the following general properties of the DFf listed in Table 3.5: (a) linearity, (b)
dr{;ular Lime-shifting, (c) circular frequency-shifting. (d) duality, (eJ N -pointdn:uhu-convolution, (f) modulation. and
(g) Parseval's relation.

M .}.11 Csing MATLAB verify the symmetry relations of the OFT of a compieJt sequence as Jisted in Table 3.6.

M 3.12 L'sing MATLAB ~"'fify the symmetry relations of the DFT of a real sequence as lined in Table3.7.

M 3.13 Verify the results of Problem 3.54 by computing the DFr X[k) of tbe sequence ..~:(n] given using MATLAB
and then evaluate the functions of X[k] listed.

M 3.14 Verify the results nf Problem 3.55 by computing the IDFf x[n] of the DFf X[kJ given using MATLAB and
then evaluate the functions of x[n j listed.

M 3.15 Write a MATLAB function w implement the overlap-save method. Using this function, demonstrate !he
filt.~ri.cg of the noise-corrupted signal of &ample 2.14 u.siHg a length-3 moving average filter by modifYing Program
3_6.

.anci show their pole-zero plots. Determine all possihle ROCs of each of the above z-transfcnns, .and describe the type
of the-ir inverse z-transforms (left-sJded, right-sided, two-sided sequeru;es) associated with each of the ROCs.

M :U7 Using Program 3_9 determine the partial-fraction expansions of the Nransforms listed in Problem 3.98 and
the:1 determine their inverse z-transfonns.

M J.tB Using Program 3 _9 determine the partial-fractior: expansillns of the z-transfQ£111S listed in Problem 3.99 and
then determine theu inverse z-trnnsfonns..

M :t19 Using Program 3 _l 0 determine the z-transform as a ratio of two polynomials in z -l from each of the pa;tial-
fr.!K:tion expansions listed below:
lO 8
(a) X l (z) = -2 ...- c--=--,
4+z 2+l 1'
3+z- 1
lzl > 0.5,
1 0.25;: 2'
5 • 3
( o) X3(z) =
f3+2z
, 2
) ~,-+~"':-.~, + ,..,+~0~
.•~,,~, . jzi :> 0.9,

lO
(d) Xq(z) =4+ 5+2z I lzl > 0.5.
6+5:: 1 +:: 2.'
3.14. MAnAs Exercises 201

M 3.20 Using Program 3_1 I determine the first 30 samples of the inverse z-tnmsfonns of the rational ;:-transforms
determined in Problem M3.19. Show that these samples are identical to those obtained by explicitly evaluating the
exact inverse ;::-transforms.

M 3.21 Write a MATLAB program to compute the NDFf and the inver.;e NDFT using the I.agnu;ge interpolation
method. Verity your pr-ogram by computing the NDFT of a length-25 sequence and reconstructJng the sequence from
its computed NDFT.
LTI Discrete-Time Systems
4 in the Transform Domain
We showed in Section 2.5 that a linear, time--invariant (LTJ) discrete-time system is completely charac-
t<:rized in the time-domain by its impulse response sequence {h[n]j. As a result, the transform-domain
r:=presentation of a discrete-tUne signal can equally be applied to the transfomH:Iomain representation of
an LTI discrete-time system. Such transform-domain representations provide additional insight into the
tehavior of such systems, and also make it easier to- design and implement them for specific applications.
In this chapter we discuss the use of the D1FT and the :::-transform in transforming the time-domain
n~presentations of an LTI discrete-time system to alternative characterizations. Specific properties of sucb
t~orrn-domain representations are investigated and several simple applications are considered. As in
t:Je earlier chapters, AUTLAB has been used extensively ;o illustrate various concepts and applications.

4.1 Finite-Dimensional LTI Discrete-Time Systems


The LTI discrete-time systems we shall be concen:ed with in this book are characterized by linear constant
coefficient difference equations of the form of Eq. (2.81 ). Applying the discrete-time Fourier transform
(D1FT} to this equation and making use of the linearity and the time-shifting properties of Table 3.2 we
a!Tive at the input-output relation of the LTI system in the transform-domain given by

N M
L d,~;e-i"'kY(e "') = L
1 p.,e-jmkX(ejw}, (4.1)
k=G k=H

where Y(elw) and X (el'"') are the DTFTs of the sequem:-es y[nj and x[n]. respectively. In developing
Eq. (4.1) it has been tacitly assumed that Y(e1 w) and X (ej"') exist. 1be above equation can be alternately
'"1'Titten as

{f~d;e-i"") Y(eiw) ~ (tpke-i"'*)" X(~w). (4.2)


\J=o k=o
The input-output relation oflhe LTI system in the <:-domain is obtained by applying the z-transforrn to
~nth sides ofEq. (2.81) and making use of the linearity and time-shifting properties of Table 3.9 resulting
m
N M
Ldkz-kY(z) = E'P.kZ-kX(z), (43)
k=O k-=0

where Y(z) and X{z) denote the z-transforms of y(n] and xln} wilh associated ROCs. respecti•·ely. A

203
204 Chapter 4: LTl Discrete-Time Systems in the Transform Domain

m:>re convenient form of Eq, (4.3} i:s given by

(4.4)

4.2 The Frequency Response


Most discrete-time signals encountered in practice can be represented as a linear combination of a very
large, maybe infinite, number of sinusoidal discrete-time signals of different angular frequencie~>. Thus,
knowing the response oftbe LTI system to a single sinusoidal signal, we can detennine it:o; response to more
complicated signals by making use of the superposition property of the system. Since a sinusoidal signal
can be expressed in terms of an exponential signal. the response of the LTI system to an exponential input
is ·Llf practical interest. This leads to the concept of frequency :response. a transform-domain representation
of the LTl discrete-time system. We first define the frequency response, investigate its properties. and
describe some of its applications.. The computation of the time-domain representation of the LTl system
from its frequency response is outlined.

4.2.1 Definition
An important property of an LTl system is that for Cf:rtain types of input signals, called eigenfunctions. the
output signal is the input signal multiplied by acompleJit constant. We consider here one such eigenfunction
as the input. Recall from Section 2.5.1, the input-output relationship of an LTI discrete-time system as
shown in Figure 4.1, Wl!h an impulse response h[n], is given by the convolution sum ofEq. (2.64b) and is
ofthefonn
~

y[nj = :L h[k]x[n - kJ (4.5}


k=-oo
wbere y[n} ami x[nl ar-e, respectively, the output and the input sequences. Now if the jnput x[n] is a
complex exponential sequence of the form

-oo<n<oo, (4.6)

then from Eq. (4.5) the output is given by

(4.7)

which can be rewritten as


(4.8)
w.bere we have used the notation
00

H(ejw) = L h[n]e-jw". (4.9)


n=-oo

He-m~ seen from Eq. (4.8) that for a complex exponential input signal ejWA, the output ofanLTI discrete-
time _system is also a complex exponential signal of the same frequency multiplied by a complex constant
H(e 1""). Thus. e 5 wn is an eigenfunction of the system. Another example of an eigenfunction is given in
ProbJ.em4.1.
4.2. The Frequency Response 205

x { r r ] - G - - - _v[n)

Figure 4.1: An LTI discrete-lime system.

The quantity H(ei"') defined above is called the frequency response of the LTI discret~-time system,
and it provides a frequency-domain description of the system. Note from Eq. (4.9)that H(el"-') is precisely
the discrete-time Fourier transform (DTFr} of the impulse response h[nJ of the system.
Equation (4.8) implies that for a complex sinus-Oidal input sequence x[n] of angular frequency <Vas in
Eq. (4.6}. the output y[n] is also a complex .sinusoidal sequence of the same angular frequency but v.eigbted
by a complex amplitude H(eliL') that is a functio11 of the input frequency a;- and the system's impulse
response coefficients h[nJ. We shall show later inSection42.7 that H(ejw) completely characterizes the
LTI discrete-time system in the frequency domain.
Just like any other discrete-time Fowier transform, m genera), H(eJ"') is also a complex function of w
with a period 2rr and can be expressed in terms of its real and imaginary parts or its magnitude and phase.
Thus:,
H(ef"-') = H,.,_(ej"')--+- j H 101 (ej"-') = j H(e1"')1ei6 \w). (4.10)

where H.,.(eiw) and Himteiwj are, respectivdy, the real and imaginary parts of H(ei"'), and

(4.11)

The quantily jH{ej"')l is called the magnitude response and the quantity 9(w) is called the phase resporue
of the LTI discrete-time system. Design specifications. for rhe discrete-rime systems, in many applications.,
are given in terms of the magnirude Iesponse or the pha<oe response or both. In some cases, the magnitude
function is specified in decibels as: defined below:

(4.12)

where Q'(w) is called the gainftmction. The negative of tl'te gain function, a(-w) = -Q(w), is called the
attenuation or lossfun,ction.
It should be noted that the magnitude and phase functioos are real functions of w, whereas the frequency
response is a complex function of w. For a discrete.time system char.acterized by a real impulse response
h{n] it follows from Table 3.4 that the magnitude function is an e-...-en function of w, i.e., JH(el"")l =
!H(e-Jw)l, and the phase function is an odd function of w. i.e., 9(w) = -8{-w). Likewise, Hre(ei"') is.
even, and Him{eiw) is odd.

4.2.2 Frequency Response Computation Using MATLAB


TheM-file function freqz ih, w) in \fATLAB can be used to determine the value~> of the frequency
response of a prescribed impulse respo:~se vector h at a set of given frequency points w. From these
frequency re~nse values, one can then compute the real and imagmary parts using the functions real
and imag. and the magnitude and phase using the functions abs andang 1. e as illustrated in the following
example.
206 Chapter 4: LT, Discrete-Time Systems in the Transform Domain

' (' '

I I
V'htr:r 10 fmd 1\ fl
h{mrx vtr fttt "'

<~i'J>" +P ? )
rLLv,\fT \Ly :( (1 DJ:' 't/81: 0 t HH+"'"»
'Y\d'N ' ' } , } ') ,',,{ \t{;$1k!: {((+!! s
t h, ';}SS\}/t

{ Tt:::7t)} "';: , ;, tH
tr P<t::/:,;2, v} t
'%- :srv ;t ,/,:' Lhw ifu±Jl,il,<«M&
71: j f!':YL 0!1 i 1 TILY d(xt J !t7;
f!LL v ]!1 ,lji{ 't ,1!:£~ 'lf:J 'i,k
rl?!J«; V4¢W< ! 0<111 } r 1t1Aib?s&} f '~ih r 'fit' i 7
£ilib,X:titlS0\ ¥ ~' Nrx\i;' (¢< , 'r ~-"' Y4: } ;
{"!4\'kft\W

The phase responses of discrete-time systems when determined by a computer may also exhibit jumps by an amount of 2π caused by the way the arctangent function is computed, for example, in the function angle in MATLAB. The phase response can be made a continuous function of ω by unwrapping the phase response across the jumps by adding multiples of ±2π. The MATLAB function unwrap can be employed
¹⌊x⌋ denotes the integer part of x.
Figure 4.2: Magnitude and phase responses of the moving-average filters of lengths 5 and 14.

to this end, provided the computed phase is in radians.² The application of unwrap is illustrated later in this chapter in Figure 4.28. These jumps should not be confused with the jumps caused by the zeros of the frequency response as shown, for example, in Figure 4.2(b).
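As a quick illustration of unwrap, consider the following sketch; the fourth-order transfer function below is an arbitrary example, not one from the text.

% Wrapped versus unwrapped phase response of an arbitrary IIR filter
b = [0.1 0.3 0.3 0.1]; a = [1 -0.5 0.2 -0.1];
[H, w] = freqz(b, a, 512);
phi  = angle(H);                 % wrapped phase, jumps of 2*pi possible
phiu = unwrap(phi);              % continuous phase
plot(w/pi, phi, '--', w/pi, phiu, '-'); grid
xlabel('\omega/\pi'); ylabel('Phase, radians');
legend('wrapped', 'unwrapped');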

4.2.3 Steady-State Response


Note that the frequency response function also determines the steady-state response of the LTI discrete-time system to a sinusoidal input, as shown in the following example.

EXAMPLE: For a sinusoidal input applied to a stable LTI discrete-time system, the steady-state output is a sinusoid of the same frequency, with its amplitude scaled by the magnitude response and its phase shifted by the phase response evaluated at the input frequency.

²Care should be taken in using unwrap, as it can sometimes give wrong answers if the computed phase response is sparsely sampled and has rapidly changing values.

4.2.4 Response to a Causal Exponential Sequence


An underlying assumption in developing Eq. (4.18) is that the system is initially relaxed before the application of the input of Eq. (4.17). However, in practice, the excitation to a discrete-time system is usually a causal sequence applied at some sample index n = n_0. Hence, the output for such an input will be different from the one shown in Eq. (4.18), which we investigate here. Without any loss of generality, we develop next the expression for the output response when the input is a causal exponential sequence applied at n = 0.
Now, from Eq. (4.5), the output response y[n] for an input

x[n] = e^{jωn} μ[n]

is given by

y[n] = (Σ_{k=0}^{n} h[k] e^{jω(n−k)}) μ[n] = (Σ_{k=0}^{n} h[k] e^{−jωk}) e^{jωn} μ[n].

Thus, the output y[n] = 0 for n < 0, and for n ≥ 0 it is given by

y[n] = (Σ_{k=0}^{n} h[k] e^{−jωk}) e^{jωn}
     = (Σ_{k=0}^{∞} h[k] e^{−jωk}) e^{jωn} − (Σ_{k=n+1}^{∞} h[k] e^{−jωk}) e^{jωn}
     = H(e^{jω}) e^{jωn} − (Σ_{k=n+1}^{∞} h[k] e^{−jωk}) e^{jωn}.     (4.19)

The first term in the output in Eq. (4.19) is the same as that given by Eq. (4.8) and is called the steady-state response:

y_sr[n] = H(e^{jω}) e^{jωn}.

The second term in Eq. (4.19) is called the transient response:

y_tr[n] = −(Σ_{k=n+1}^{∞} h[k] e^{−jωk}) e^{jωn}.

To determine the effect of the second term on the output response, we observe that

|y_tr[n]| = |Σ_{k=n+1}^{∞} h[k] e^{−jωk} e^{jωn}| ≤ Σ_{k=n+1}^{∞} |h[k]| ≤ Σ_{k=0}^{∞} |h[k]|.     (4.20)

Now, for a causal and stable LTI discrete-time system, the impulse response is absolutely summable, and as a result, the transient response y_tr[n] is a bounded sequence. Moreover, as n → ∞, Σ_{k=n+1}^{∞} |h[k]| → 0, and hence, the transient response decays to zero as n gets very large. On the other hand, for a causal FIR LTI discrete-time system with an impulse response of length N + 1, h[n] = 0 for n > N. Hence, y_tr[n] = 0 for n > N − 1. Thus, here the output y[n] reaches the steady-state value y_sr[n] = H(e^{jω}) e^{jωn} at n = N.
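The last observation is easy to verify numerically; the sketch below uses an arbitrarily chosen length-3 impulse response and input frequency.

% A causal FIR filter of length N+1 reaches steady state at n = N
% when driven by a causal complex exponential input
h = [2 1 3];  N = length(h) - 1;     % arbitrary impulse response
w0 = 0.3;  n = 0:19;
x = exp(1j*w0*n);                    % causal exponential, applied at n = 0
y = filter(h, 1, x);                 % actual output
H0 = sum(h .* exp(-1j*w0*(0:N)));    % H(e^{jw0})
ysr = H0 * exp(1j*w0*n);             % steady-state response
abs(y(N+1:end) - ysr(N+1:end))       % essentially zero for n >= N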

4.2.5 The Concept of Filtering


One application of an LTI discrete-time system is to pass certain frequency components in an input sequence without any distortion (if possible) and to block other frequency components. Such systems are called digital filters and are one of the main subjects of discussion in this text. The key to the filtering process is the inverse discrete-time Fourier transform given in Eq. (3.7), which expresses an arbitrary input sequence as a linear weighted sum of an infinite number of exponential sequences, or equivalently, as a linear weighted sum of sinusoidal sequences. As a result, by appropriately choosing the values of the magnitude function of the LTI digital filter at frequencies corresponding to the frequencies of the sinusoidal components of the input, some of these sinusoidal sequences can be selectively heavily attenuated or filtered with respect to the others.
We now explain the concept of filtering and then define the most commonly desired filter characteristics. To understand the mechanism behind the design of such a system, consider a real-coefficient LTI discrete-time system characterized by a magnitude function

|H(e^{jω})| ≅ { 1,  0 ≤ ω ≤ ω_c,
                0,  ω_c < ω ≤ π.     (4.21)

We apply an input x[n] = A cos ω_1 n + B cos ω_2 n to this system, where 0 < ω_1 < ω_c < ω_2 < π.
Because of linearity, it follows from Eq. (4.18) that the output y[n] of this system is of the form

y[n] = A|H(e^{jω_1})| cos(ω_1 n + θ(ω_1)) + B|H(e^{jω_2})| cos(ω_2 n + θ(ω_2)).     (4.22)

Making use of Eq. (4.21) in Eq. (4.22), we get

y[n] ≅ A|H(e^{jω_1})| cos(ω_1 n + θ(ω_1)),

indicating that the LTI discrete-time system acts like a lowpass filter.
EXAMPLE 4.3: Consider the causal length-3 FIR filter described by the input-output relation

y[n] = α(x[n] + x[n−2]) + β x[n−1],     (4.23)

where the constants α and β are to be chosen so that the filter blocks a sinusoidal component of angular frequency 0.1 rad/sample and passes one of angular frequency 0.4 rad/sample. Now, from Eq. (4.9), the frequency response of the above FIR filter is given by

H(e^{jω}) = h[0] + h[1] e^{−jω} + h[2] e^{−j2ω}
          = α(1 + e^{−j2ω}) + β e^{−jω}
          = 2α ((e^{jω} + e^{−jω})/2) e^{−jω} + β e^{−jω} = (2α cos ω + β) e^{−jω}.     (4.24)

The magnitude and phase functions of this filter are

|H(e^{jω})| = |2α cos ω + β|,     (4.25)
θ(ω) = −ω.     (4.26)

In order to block the low-frequency component completely, the magnitude function at ω = 0.1 should be equal to zero. Likewise, to pass the high-frequency component without any attenuation, we need to ensure that the magnitude function at ω = 0.4 is equal to 1. Thus, the two conditions that must be satisfied are

2α cos(0.1) + β = 0,
2α cos(0.4) + β = 1.

Solving these two equations, we arrive at

α = −6.76195,  β = 13.456335.     (4.27)

Substituting Eq. (4.27) in Eq. (4.23), we obtain the input-output relation of the desired FIR filter

y[n] = −6.76195(x[n] + x[n−2]) + 13.456335 x[n−1],     (4.28)

with

x[n] = {cos(0.1n) + cos(0.4n)} μ[n].     (4.29)

To verify the filtering action, we implement the filter of Eq. (4.28) in MATLAB and calculate the first 100 output samples beginning from n = 0. Note that the input has been assumed to be a causal sequence with the first nonzero sample occurring at n = 0, and for calculating y[0] and y[1] we set x[−1] = x[−2] = 0. The MATLAB program used to calculate the output is given below.
% Program 4_2
% Set up the filter coefficients
b = [-6.76195 13.456335 -6.76195];
% Set initial conditions to zero values
zi = [0 0];
% Generate the two sinusoidal input sequences
n = 0:99;
x1 = cos(0.1*n);
x2 = cos(0.4*n);
% Generate the filter output sequence
y = filter(b, 1, x1+x2, zi);
% Plot the input and the output sequences
plot(n,y,'r-',n,x2,'b--',n,x1,'g-.'); grid
axis([0 100 -1.2 4]);
ylabel('Amplitude'); xlabel('Time index n');
legend('y[n]','x2[n]','x1[n]');
Figure 4.3: Output y[n] (solid line), low-frequency input x1[n] (dash-dotted line), and high-frequency input x2[n] (dashed line) signals of the FIR filter of Eq. (4.28).

Table 4.1: Input and output sequences of the filter of Example 4.3.

n    cos(0.1n)    cos(0.4n)    x[n]        y[n]
0    1.0          1.0          2.0         -13.52390
1    0.9950041    0.9210610    1.9160652    13.956333
2    0.9800665    0.6967067    1.6767733     0.9210616
3    0.9553364    0.3623577    1.3176942     0.6967064
4    0.9210609   -0.0291995    0.8918614     0.3623572
5    0.8775825   -0.4161468    0.4614357    -0.0292002
6    0.8253356   -0.7373937    0.0879419    -0.4161467

Several comments are in order here. First, computation of the present value of the output requires the knowledge of the present and two previous input samples. Hence, the first two output samples are the result of the assumed zero input sample values at n = −1 and n = −2, and are therefore the transient part of the output. Since the impulse response is of length N + 1 = 3, the steady state is reached at n = N = 2. Second, the output is a delayed version of the high-frequency component cos(0.4n) of the input, and the delay is one sample period.

4.2.6 Phase and Group Delays


The output signal y[n] of a frequency-selective LTI discrete-time system with a frequency response H(e^{jω}) exhibits some delay relative to the input signal x[n] caused by the nonzero phase response θ(ω) = arg{H(e^{jω})} of the system. If the input is a sinusoidal signal of frequency ω_0 as given by Eq. (4.17), the output is also a sinusoidal signal of the same frequency ω_0 but lagging in phase by θ(ω_0) radians, as demonstrated in Eq. (4.18). We can rewrite Eq. (4.18) as

y[n] = A|H(e^{jω_0})| cos(ω_0 (n − τ_p(ω_0)) + φ),     (4.30)

indicating a time delay, more commonly known as the phase delay, at ω = ω_0 given by³

τ_p(ω_0) = −θ(ω_0)/ω_0.     (4.31)

When the input signal contains many sinusoidal components with different frequencies that are not harmonically related, each component will go through a different phase delay when processed by a frequency-selective LTI discrete-time system, and the output signal, in general, will not look like the input signal. In such cases, the signal delay is defined using a different parameter. To develop the necessary expression, we consider a discrete-time signal x[n] obtained by a double-sideband suppressed carrier (DSB-SC) modulation with a carrier frequency ω_c of a low-frequency sinusoidal signal of frequency ω_0 [Hay99]:⁴

x[n] = A cos(ω_0 n) cos(ω_c n).     (4.32)

As indicated in Example 2.10, x[n] can be rewritten as

x[n] = (A/2) cos(ω_l n) + (A/2) cos(ω_u n),     (4.33)

where ω_l = ω_c − ω_0 and ω_u = ω_c + ω_0.


If the above signal is processed by an LTI discrete-time system with a frequency response H(e^{jω}), it follows from Eq. (4.22) that the output signal is of the form

y[n] ≅ (A/2) cos(ω_l n + θ(ω_l)) + (A/2) cos(ω_u n + θ(ω_u))
     = A cos(ω_c n + (θ(ω_u) + θ(ω_l))/2) cos(ω_0 n + (θ(ω_u) − θ(ω_l))/2),     (4.34)

assuming |H(e^{jω})| ≅ 1 in the frequency range ω_l ≤ ω ≤ ω_u. Thus the output is also in the form of a modulated carrier signal with the same carrier frequency ω_c and the same modulation frequency ω_0 as the input, but the two components in the output have different phase lags relative to their corresponding components in the input.
Consider the case when the modulated input given by Eq. (4.33) is a narrowband signal with the frequencies ω_l and ω_u very close to the carrier frequency ω_c, i.e., ω_0 is very small. In the neighborhood of ω_c we can express the unwrapped phase response θ_c(ω) of the LTI discrete-time system approximately as

θ_c(ω) ≅ θ_c(ω_c) + (ω − ω_c) dθ_c(ω)/dω |_{ω=ω_c},     (4.35)

by making a Taylor's series expansion and keeping only the first two terms. Using the above formula we evaluate the time delays of the carrier and the modulating components. In the former case it is given by

−(θ_c(ω_u) + θ_c(ω_l))/(2ω_c) ≅ −θ_c(ω_c)/ω_c,     (4.36)

³The minus sign indicates phase lag.
⁴See Section 1.2.4 for a review of the DSB-SC modulation scheme for analog signals.
Figure 4.4: Evaluation of the phase delay and the group delay.

which is seen to be the same as the phase delay if only the carrier signal is passed through the system. On the other hand, in the latter case, it is given by

−(θ_c(ω_u) − θ_c(ω_l))/(ω_u − ω_l) ≅ −dθ_c(ω)/dω |_{ω=ω_c}.     (4.37)

The parameter

τ_g(ω_c) = −dθ_c(ω)/dω |_{ω=ω_c}

is called the group delay or envelope delay caused by the system at ω = ω_c. In the general case, the group delay is defined by

τ_g(ω) = −dθ_c(ω)/dω.     (4.38)
The group delay is a measure of the linearity of the phase function as a function of the frequency and is the time delay between the waveforms of the underlying continuous-time signals whose sampled versions, sampled at t = nT, are precisely the input and the output discrete-time signals. If the phase function is in radians and the angular frequency ω is in radians per second, then the group delay is in seconds. Figure 4.4 illustrates the evaluation of the phase delay and the group delay of a typical phase function. Figure 4.5 shows the waveform of an amplitude-modulated input signal and that of the output generated by an LTI system. As can be seen from this figure, the carrier component at the output is delayed by the phase delay and the envelope of the output signal is delayed by the group delay relative to the waveform of the underlying continuous-time input signal.
The waveform of the underlying continuous-time output shows distortion when the group delay of the LTI system is not constant over the bandwidth of the modulated signal. If the distortion is unacceptable, a delay equalizer is usually cascaded with the LTI system so that the overall group delay of the cascade is approximately linear over the band of interest. However, to keep the magnitude response of the parent LTI system unchanged, the equalizer must have a constant magnitude response at all frequencies.⁵ It should be noted that the group delay is equal to the phase delay up to the first phase discontinuity.
For the filter of Example 4.3, the phase function is θ(ω) = −ω. Hence the group delay is given by τ_g(ω) = 1, which is also evident from Figure 4.3 and was pointed out earlier.

Figure 4.5: Illustration of the concept of the phase delay and the group delay. (Adapted with permission from A. Williams and F. J. Taylor, Electronic Filter Design Handbook, 3rd edition, McGraw-Hill, New York NY, 1995.)

Likewise, for the moving-average filter of Eq. (4.13), the group delay is given by

τ_g(ω) = (M − 1)/2,     (4.39)

or, in other words, the moving-average filter exhibits a constant group delay for all frequencies.
The group delay can be determined using the M-file grpdelay in MATLAB. We shall illustrate its use later in this chapter.
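A brief sketch of its use follows; the moving-average length chosen here is arbitrary.

% Group delay of a length-5 moving-average filter; by Eq. (4.39)
% it should be constant and equal to (M-1)/2 = 2 samples
M = 5;
h = ones(1,M)/M;
[gd, w] = grpdelay(h, 1, 512);    % group delay at 512 frequency points
plot(w/pi, gd); grid
xlabel('\omega/\pi'); ylabel('Group delay, samples');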

4.2.7 Frequency-Domain Characterization of the LTI Discrete-Time System
We now derive the frequency-domain representation of an LTI discrete-time system. If Y(e^{jω}) and X(e^{jω}) denote the DTFTs of the output and input sequences, y[n] and x[n], respectively, then from Eq. (4.5), by taking the DTFT of both sides we obtain

Y(e^{jω}) = Σ_{n=−∞}^{∞} y[n] e^{−jωn} = Σ_{n=−∞}^{∞} (Σ_{k=−∞}^{∞} h[k] x[n−k]) e^{−jωn}.     (4.40)

Interchanging the summation signs on the right-hand side of Eq. (4.40) and rearranging, we arrive at

Y(e^{jω}) = Σ_{k=−∞}^{∞} h[k] (Σ_{n=−∞}^{∞} x[n−k] e^{−jωn})
          = Σ_{k=−∞}^{∞} h[k] (Σ_{ℓ=−∞}^{∞} x[ℓ] e^{−jω(ℓ+k)})
          = Σ_{k=−∞}^{∞} h[k] (Σ_{ℓ=−∞}^{∞} x[ℓ] e^{−jωℓ}) e^{−jωk}.     (4.41)

The quantity inside the parentheses on the right-hand side of Eq. (4.41) is recognized as X(e^{jω}), the DTFT of the input sequence x[n]. Substituting this notation and rearranging, we finally obtain

Y(e^{jω}) = (Σ_{k=−∞}^{∞} h[k] e^{−jωk}) X(e^{jω}) = H(e^{jω}) X(e^{jω}),     (4.42)

where H(e^{jω}) is the frequency response of the LTI system as defined in Eq. (4.9). Equation (4.42) thus relates the input and the output of an LTI system in the frequency domain. It should be noted that the development of Eq. (4.42) provides a proof of the convolution property of the DTFT given in Table 3.2.
From Eq. (4.42) we obtain

H(e^{jω}) = Y(e^{jω}) / X(e^{jω}).     (4.43)

Thus, the frequency response of an LTI discrete-time system is given by the ratio of the DTFT Y(e^{jω}) of the output sequence y[n] to the DTFT X(e^{jω}) of the input sequence x[n]. For example, for an LTI system characterized by a linear constant-coefficient difference equation of the form of Eq. (2.81), the expression for its frequency response H(e^{jω}) can be simply derived from Eq. (4.2), resulting in

H(e^{jω}) = (Σ_{k=0}^{M} p_k e^{−jωk}) / (Σ_{k=0}^{N} d_k e^{−jωk}).     (4.44)
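The ratio in Eq. (4.44) can be evaluated numerically by passing the two coefficient vectors to freqz; the coefficient values below are arbitrary illustrative choices.

% Frequency response of a system described by a difference equation,
% evaluated as in Eq. (4.44)
num = [0.0534 0.0786 0.0786 0.0534];     % p_k (example values)
den = [1 -1.306 0.8818 -0.2113];         % d_k (example values)
[H, w] = freqz(num, den, 1024);
plot(w/pi, 20*log10(abs(H))); grid
xlabel('\omega/\pi'); ylabel('Gain, dB');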

4.3 The Transfer Function

A generalization of the frequency response function H(e^{jω}) leads to the concept of the transfer function, which is defined next. As we have seen, the frequency response function does provide valuable information on the behavior of an LTI digital filter in the frequency domain. However, being a complex function of the frequency variable ω, it is difficult to manipulate for the realization of a digital filter. On the other hand, the z-transform of the impulse response of an LTI system, called the transfer function, is a polynomial in z^{−1}, and for a system with a real impulse response, it is a polynomial with real coefficients. Moreover, in most practical cases, the LTI digital filter of interest is characterized by a linear difference equation with constant and real coefficients. The transfer function of such a filter is a real rational function of the variable z^{−1}, i.e., a ratio of two polynomials in z^{−1} with real coefficients, and is thus more amenable to synthesis.
We first develop the input-output relation of an LTI system in the z-domain from its various time-domain descriptions and arrive at different forms of the transfer function representation of the system. We then study its properties and, in particular, develop the conditions for the BIBO stability of a causal LTI system.

4.3.1 Definition

Consider the LTI discrete-time system of Figure 4.1 with an impulse response h[n]. The input-output relation of this system is given by Eq. (4.5), where y[n] and x[n] are, respectively, the output and the input sequences. If Y(z), X(z), and H(z) denote the z-transforms of y[n], x[n], and h[n], respectively, then by taking the z-transforms of both sides of Eq. (4.5) and following steps similar to those used in the development of Eq. (4.42), we arrive at the input-output relation of the filter in the z-domain given by

Y(z) = H(z) X(z).     (4.45)

From the above we get

H(z) = Y(z) / X(z).     (4.46)

The quantity H(z), which is the z-transform of the impulse response sequence h[n] of the filter, is more commonly called the transfer function or the system function. Thus, the transfer function H(z) of an LTI discrete-time system is given by the ratio of the z-transform Y(z) of the output sequence y[n] to the z-transform X(z) of the input sequence x[n]. It should be noted that Eq. (4.45) also follows from the convolution property of the z-transform given in Table 3.9.
The inverse z-transform of the transfer function H(z) yields the impulse response h[n]. For a causal rational transfer function, the methods outlined in Section 3.9 can be used to compute its impulse response. For example, an analytical form of the impulse response can be determined via a partial-fraction expansion using MATLAB Program 3_9. On the other hand, a fixed number of impulse response samples starting at n = 0 can be computed using MATLAB Program 3_11 or 3_12.
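Where the Signal Processing Toolbox is available, the same samples can also be obtained with impz, or simply by filtering a unit sample sequence; the coefficients below are arbitrary example values.

% First 20 impulse response samples of a causal rational H(z)
num = [1 0.5];                            % numerator coefficients (example)
den = [1 -0.9 0.2];                       % denominator coefficients (example)
h1 = impz(num, den, 20);                  % via impz
h2 = filter(num, den, [1 zeros(1,19)]);   % via a unit sample input
stem(0:19, h1); grid
xlabel('Time index n'); ylabel('h[n]');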

4.3.2 Derivation of the Transfer Function Expression

In the case of an FIR digital filter, the input-output relation in the time domain is given by Eq. (2.97), and by taking the z-transform of both sides of this equation, we arrive at

Y(z) = (Σ_{n=N_1}^{N_2} h[n] z^{−n}) X(z),

from which we obtain

H(z) = Σ_{n=N_1}^{N_2} h[n] z^{−n}.     (4.47)

For a causal FIR filter, 0 ≤ N_1 ≤ N_2. Note that all poles of H(z) of a causal FIR filter are at the origin in the z-plane, and as a result, the ROC of H(z) is the entire z-plane excluding the point z = 0.
In the case of an IIR digital filter, the transfer function expression in general is an infinite series. However, for the finite-dimensional IIR filter characterized by the difference equation of Eq. (2.81), the transfer function expression follows directly from Eq. (4.4) and is given by

H(z) = Y(z)/X(z) = (p_0 + p_1 z^{−1} + p_2 z^{−2} + ··· + p_M z^{−M}) / (d_0 + d_1 z^{−1} + d_2 z^{−2} + ··· + d_N z^{−N}).     (4.48)

This is seen to be a rational function in z^{−1}, i.e., it is a ratio of two polynomials in z^{−1}. By multiplying the numerator and the denominator of the right-hand side by z^M and z^N, respectively, the transfer function can be expressed as a rational function in z:

H(z) = z^{(N−M)} (p_0 z^M + p_1 z^{M−1} + p_2 z^{M−2} + ··· + p_M) / (d_0 z^N + d_1 z^{N−1} + d_2 z^{N−2} + ··· + d_N).     (4.49)

An alternate way to express the transfer function of Eq. (4.48) is to factor the numerator and denominator polynomials, leading to

H(z) = (p_0/d_0) Π_{k=1}^{M} (1 − ξ_k z^{−1}) / Π_{k=1}^{N} (1 − λ_k z^{−1}),     (4.50a)

or to represent the transfer function of Eq. (4.49) in factored form, as

H(z) = (p_0/d_0) z^{(N−M)} Π_{k=1}^{M} (z − ξ_k) / Π_{k=1}^{N} (z − λ_k),     (4.50b)

where ξ_1, ξ_2, ..., ξ_M are the finite zeros, and λ_1, λ_2, ..., λ_N are the finite poles of H(z). If N > M, there are additional (N − M) zeros at z = 0, and if N < M, there are additional (M − N) poles at z = 0. For a causal IIR filter, the impulse response is a causal sequence. The ROC of the causal IIR transfer function H(z) of Eq. (4.50b) is thus exterior to the circle going through the pole furthest from the origin, i.e., the ROC is given by

|z| > max_k |λ_k|.

An example of the pole-zero plot of an IIR transfer function, Eq. (4.54), is shown in Figure 4.6.

Figure 4.6: Pole-zero plot of the IIR transfer function of Eq. (4.54).
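A pole-zero plot of this kind can be generated with the Signal Processing Toolbox function zplane; the transfer function coefficients below are arbitrary and are not those of Eq. (4.54).

% Pole-zero plot and ROC of a causal rational transfer function
num = [2 5 9 5 3];              % example numerator coefficients
den = [5 45 2 1 1];             % example denominator coefficients
zplane(num, den)                % zeros shown as 'o', poles as 'x'
pk = roots(den);
Rroc = max(abs(pk))             % ROC of the causal H(z): |z| > Rroc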

4.3.3 Frequency Response from Transfer Function


If the ROC of H(z) includes the unit circle, then the frequency response H(e^{jω}) of the LTI digital filter can be obtained from its transfer function H(z) by simply evaluating it on the unit circle, i.e.,

H(e^{jω}) = H(z)|_{z=e^{jω}}.     (4.57)

As indicated in Eq. (4.10), the frequency response H(e^{jω}) can be written in terms of its real and imaginary parts, H_re(e^{jω}) and H_im(e^{jω}), or in terms of its magnitude and phase functions, |H(e^{jω})| and arg{H(e^{jω})}, respectively. For a real-coefficient transfer function H(z), it can be shown that

|H(e^{jω})|² = H(e^{jω}) H*(e^{jω}) = H(e^{jω}) H(e^{−jω}) = H(z) H(z^{−1})|_{z=e^{jω}}.     (4.58)

For a stable rational transfer function H(z) in the form of Eq. (4.50b), the factored form of the frequency response H(e^{jω}) is obtained by substituting z = e^{jω}, resulting in

H(e^{jω}) = (p_0/d_0) e^{jω(N−M)} Π_{k=1}^{M} (e^{jω} − ξ_k) / Π_{k=1}^{N} (e^{jω} − λ_k).     (4.59)

The above form is convenient to visualize the contributions of the zero factor (z − ξ_k) and the pole factor (z − λ_k) of the transfer function H(z) to the overall frequency response. From Eq. (4.59), the expression for the magnitude function is thus given by

|H(e^{jω})| = |p_0/d_0| |e^{jω(N−M)}| Π_{k=1}^{M} |e^{jω} − ξ_k| / Π_{k=1}^{N} |e^{jω} − λ_k|
            = |p_0/d_0| Π_{k=1}^{M} |e^{jω} − ξ_k| / Π_{k=1}^{N} |e^{jω} − λ_k|.     (4.60)

Figure 4.7: Geometric interpretation of frequency response computation of a rational transfer function.

Likewise, from Eq. (4.59), the phase response for a rational transfer function is of the form

arg H(e^{jω}) = arg(p_0/d_0) + ω(N − M) + Σ_{k=1}^{M} arg(e^{jω} − ξ_k) − Σ_{k=1}^{N} arg(e^{jω} − λ_k).     (4.61)

The magnitude-squared function for a real-coefficient rational transfer function can be computed using Eq. (4.59), which leads to

|H(e^{jω})|² = (p_0/d_0)² Π_{k=1}^{M} (e^{jω} − ξ_k)(e^{−jω} − ξ_k*) / Π_{k=1}^{N} (e^{jω} − λ_k)(e^{−jω} − λ_k*).     (4.62)
4.3.4 Geometric Interpretation of Frequency Response Computation

For an LTI digital filter with a rational transfer function H(z), the factored form of the frequency response expression given by Eq. (4.59) is convenient for developing a geometric interpretation of the frequency response computation from the pole-zero plot of the transfer function as ω is varied from 0 to 2π on the unit circle in the z-plane. The geometric interpretation can be used to obtain a sketch of the response as a function of the frequency.
If we examine the expression for the frequency response given in Eq. (4.59), we observe that a typical factor is of the form

(e^{jω} − ρe^{jφ}),

where ρe^{jφ} is a zero if the factor is from the numerator, i.e., a zero factor, or is a pole if it is from the denominator, i.e., a pole factor. In the z-plane the factor (e^{jω} − ρe^{jφ}) represents a vector starting from the point z = ρe^{jφ} and ending on the unit circle at z = e^{jω}, as shown in Figure 4.7. As ω is varied from 0 to 2π, the tip of the vector moves counterclockwise from the point z = 1, tracing the unit circle, and back to the point z = 1.
As indicated by Eq. (4.60), the magnitude response |H(e^{jω})| at a specific value of ω is given by the product of the magnitudes of all zero vectors divided by the product of the magnitudes of all pole vectors. Likewise, from Eq. (4.61) we observe that the phase response arg H(e^{jω}) at a specific value of ω is obtained

by adding the phase of the term p_0/d_0 and the linear-phase term ω(N − M) to the sum of the angles of all zero vectors minus the sum of the angles of all pole vectors. Thus, an approximate plot of the magnitude and phase responses of the transfer function of an LTI digital filter can be developed by examining its pole and zero locations.
Now, a zero vector has its smallest magnitude when ω equals the angle φ of the zero; likewise, a pole vector has its smallest magnitude when ω equals the angle of the pole, which makes the magnitude response largest there. Hence, if the digital filter is to be designed to highly attenuate signal components in a specified range of frequencies, we need to place zeros of the transfer function very close to or on the unit circle in this range. Similarly, to highly emphasize signal components in a specified range of frequencies, we need to place poles of the transfer function very close to the unit circle in this range.
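The product-of-distances rule of Eq. (4.60) is easy to check numerically against freqz; the transfer function below is an arbitrary second-order example.

% Magnitude response from pole and zero distances, Eq. (4.60),
% compared with a direct evaluation
num = [1 -0.2 0.5];  den = [1 -0.8 0.64];     % arbitrary coefficients
zk = roots(num);  pk = roots(den);
K = abs(num(1)/den(1));
w = linspace(0, pi, 512);  ej = exp(1j*w);
Hgeo = K*ones(size(w));
for k = 1:length(zk), Hgeo = Hgeo .* abs(ej - zk(k)); end
for k = 1:length(pk), Hgeo = Hgeo ./ abs(ej - pk(k)); end
Hdir = abs(freqz(num, den, w));
max(abs(Hgeo(:) - Hdir(:)))       % negligibly small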

4.3.5 Stability Condition in Terms of Pole Locations


We describe in this text several methods for determining the transfer function of an FIR or IIR filter meeting prescribed frequency-domain specifications. However, before the digital filter is implemented, we need to ensure that the transfer function derived will lead to a stable structure. We now establish the condition to be satisfied by a causal LTI digital filter described by a rational transfer function H(z) in order that it be BIBO stable.
Recall from our earlier discussion in Section 2.5.3 that an LTI digital filter is BIBO stable if and only if its impulse response sequence h[n] is absolutely summable, i.e.,

S = Σ_{n=−∞}^{∞} |h[n]| < ∞.     (4.63)

The above stability condition in terms of the impulse response samples is difficult to test for a system with an impulse response of infinite length. We now develop a stability condition in terms of the pole locations of the transfer function H(z), which is much easier to test if the pole locations are known.
Recall from Section 3.7.1 that the ROC of the z-transform H(z) of h[n] is defined by the values of |z| = r for which h[n] r^{−n} is absolutely summable. Thus if the ROC includes the unit circle |z| = 1, then the digital filter is stable, and vice versa. In addition, for a stable and causal digital filter for which h[n] is a right-sided sequence, the ROC will include the unit circle and the entire z-plane outside the unit circle including the point z = ∞.
As indicated earlier, an FIR digital filter with bounded impulse response coefficients is always stable. On the other hand, an IIR filter may be unstable if not designed properly. In addition, an originally stable IIR filter characterized by infinite-precision coefficients may become unstable after implementation due to the unavoidable quantization of all coefficients, as illustrated by the following example.

Figure 4.8: (a) Impulse response of the original transfer function of Eq. (4.64), and (b) impulse response of the transfer function of Eq. (4.65).

The stability testing of an IIR transfer function is therefore an important problem. However, it is difficult to compute the sum S of Eq. (4.63) analytically in most cases. For a causal IIR transfer function, it can be computed approximately on a computer by replacing the right-hand side of Eq. (4.63) with the following finite sum

S_K = Σ_{n=0}^{K−1} |h[n]|,     (4.66)

and iteratively computing Eq. (4.66) until the difference between a series of consecutive values of S_K is smaller than some arbitrarily chosen small number, which is typically 10^{−6}. For a transfer function of very high order this approach may not be satisfactory. We now develop an alternate stability condition based on the location of the poles of the transfer function. For practical reasons, we restrict our attention to causal transfer functions.
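A direct translation of this numerical test into MATLAB might look as follows; the first-order transfer function used is an arbitrary stable example, and impz is from the Signal Processing Toolbox.

% Approximate stability test via the partial sums S_K of Eq. (4.66)
num = 1;  den = [1 -0.9];            % arbitrary causal IIR example
S = 0;  Sprev = -1;  K = 0;
while abs(S - Sprev) > 1e-6
    K = K + 200;                     % increase the number of samples
    h = impz(num, den, K);           % first K impulse response samples
    Sprev = S;
    S = sum(abs(h));                 % partial sum S_K
end
S                                    % converged value suggests stability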
Consider the IIR digital filter described by the rational transfer function H(z) of Eq. (4.48). If the digital filter is assumed to be causal, i.e., the impulse response sequence {h[n]} is a right-sided sequence, the ROC of H(z) is exterior to the circle going through the pole that is farthest from the origin. But stability requires that {h[n]} be absolutely summable, which in turn implies that the discrete-time Fourier transform H(e^{jω}) of {h[n]} exists. Now, the z-transform H(z) of a sequence {h[n]} reduces to the Fourier transform H(e^{jω}) by letting z = e^{jω} if the unit circle lies within the ROC of the transfer function. Therefore we conclude that all poles of a stable causal transfer function H(z) must be strictly inside the unit circle, as indicated in Figure 4.9.
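In MATLAB this pole-location test reduces to a single comparison; the denominator below is an arbitrary second-order example.

% Stability check of a causal H(z) from its pole radii
den = [1 -0.845 0.850586];           % arbitrary example denominator
polerad = abs(roots(den));           % radii of the poles
if max(polerad) < 1
    disp('Stable: all poles are strictly inside the unit circle')
else
    disp('Not BIBO stable for a causal realization')
end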
Figure 4.9: Stability region (shown shaded) in the z-plane for pole locations of a stable causal transfer function.


4.4 Types of Transfer Functions

The time-domain classification of a digital transfer function based on the length of its impulse response sequence leads to the finite impulse response (FIR) and the infinite impulse response (IIR) transfer functions. We describe here several other types of classifications. In the case of digital transfer functions with frequency-selective frequency responses, one classification is based on the shape of the magnitude function |H(e^{jω})| or the form of the phase function θ(ω). Based on this, four types of ideal filters are usually defined. These ideal filters have doubly infinite impulse responses and are unrealizable. We describe here very simple realizable FIR and IIR digital filter approximations. In a number of applications, these simple filters are quite adequate and provide satisfactory performance. The importance of transfer functions with linear phase is then pointed out, and the possible realizations of these transfer functions with FIR filters are discussed.

4.4.1 Ideal Filters

As pointed out in Example 4.3, a digital filter designed to pass signal components of certain frequencies without any distortion should have a frequency response of value equal to one at these frequencies, and should have a frequency response of value equal to zero at all other frequencies to totally block signal components with those frequencies. The range of frequencies where the frequency response takes the value of one is called the passband, and the range of frequencies where the frequency response is equal to zero is called the stopband of the filter.
The frequency responses of the four popular types of ideal digital filters with real impulse response coefficients are shown in Figure 4.10. For the lowpass filter of Figure 4.10(a), the passband and the stopband are given by 0 ≤ ω ≤ ω_c and ω_c < ω ≤ π, respectively. For the highpass filter of Figure 4.10(b), the stopband is given by 0 ≤ ω < ω_c, while the passband is given by ω_c ≤ ω ≤ π. The passband region of the bandpass filter of Figure 4.10(c) is ω_{c1} ≤ ω ≤ ω_{c2}, and the stopband regions are given by 0 ≤ ω < ω_{c1} and ω_{c2} < ω ≤ π. Finally, for the bandstop filter of Figure 4.10(d), the passband regions are 0 ≤ ω ≤ ω_{c1} and ω_{c2} ≤ ω ≤ π, while the stopband is ω_{c1} < ω < ω_{c2}. The frequencies ω_c, ω_{c1}, and ω_{c2} are called the cutoff frequencies of their respective filters. Note from this figure that an ideal

Figure 4.10: Four types of ideal filters: (a) ideal lowpass filter, (b) ideal highpass filter, (c) ideal bandpass filter, and (d) ideal bandstop filter.

filter thus has a magnitude response equal to unity in the passband and zero in the stopband, and has a zero phase everywhere.
We have already encountered the frequency response H_LP(e^{jω}) of the ideal lowpass filter of Figure 4.10(a) in Example 3.3, where we computed its impulse response given in Eq. (3.12). We repeat it here for convenience:

h_LP[n] = sin(ω_c n)/(πn),   −∞ < n < ∞.     (4.67)

In this example, we have shown that the above impulse response is not absolutely summable, and hence, the corresponding transfer function is not BIBO stable. Note also that the above impulse response is not causal and is of doubly infinite length. The remaining three frequency responses of Figure 4.10 are also characterized by doubly infinite, noncausal impulse responses and are not absolutely summable. As a result, the ideal filters with the ideal "brick wall" characteristics of Figure 4.10 cannot be realized by a finite-dimensional LTI filter.
In order to develop stable and realizable transfer functions, the ideal frequency response specifications of Figure 4.10 are relaxed by including a transition band between the passband and the stopband to permit the magnitude response to decay more gradually from its maximum value in the passband to the zero value in the stopband. Moreover, the magnitude response is allowed to vary by a small amount both in the passband and the stopband. Typical magnitude specifications used for the design of a lowpass filter are shown in Figure 7.1. Chapter 7 is devoted to a discussion of various filter design methods that lead to stable and realizable transfer functions meeting such relaxed specifications. In the following two sections we describe several very simple low-order FIR and IIR digital filters that exhibit selective frequency response characteristics providing a first-order approximation to the ideal characteristics of Figure 4.10. Frequency responses with sharper characteristics can often be obtained by cascading one or more of these simple filters, which in many applications are quite satisfactory.
x[n] → H(z) → v[n],   u[n] = v[−n];   u[n] → H(z) → w[n],   y[n] = w[−n]

Figure 4.11: Implementation of a zero-phase filtering scheme.

4.4.2 Zero-Phase and Linear-Phase Transfer Functions

A second classification of a transfer function is with respect to its phase characteristics. In many applications, it is necessary to ensure that the digital filter designed does not distort the phase of the input signal components with frequencies in the passband. One way to avoid any phase distortion is to make the frequency response of the filter real and nonnegative, i.e., to design the filter with a zero-phase characteristic. However, it is impossible to design a causal digital filter with a zero phase. For non-real-time processing of real-valued input signals of finite length, zero-phase filtering can be very simply implemented if the causality requirement is relaxed. To this end, one of two feasible schemes can be followed. In one scheme, the finite-length input data is processed through a causal real-coefficient filter H(z) whose output is then time-reversed and processed by the same filter once again, as indicated in Figure 4.11.
To verify the above scheme, let u[n] = v[−n]. Also, let X(e^{jω}), V(e^{jω}), U(e^{jω}), W(e^{jω}), and Y(e^{jω}) denote the discrete-time Fourier transforms of x[n], v[n], u[n], w[n], and y[n], respectively. Now, from Figure 4.11 and making use of the symmetry relations given in Tables 3.3 and 3.4, we arrive at the relations between the various Fourier transforms as

V(e^{jω}) = H(e^{jω}) X(e^{jω}),   W(e^{jω}) = H(e^{jω}) U(e^{jω}),
U(e^{jω}) = V*(e^{jω}),   Y(e^{jω}) = W*(e^{jω}).

Combining the above equations we obtain

Y(e^{jω}) = W*(e^{jω}) = H*(e^{jω}) U*(e^{jω}) = H*(e^{jω}) V(e^{jω})
          = H*(e^{jω}) H(e^{jω}) X(e^{jω}) = |H(e^{jω})|² X(e^{jω}).

Therefore the overall arrangement of Figure 4.11 implements a zero-phase filter with a frequency response |H(e^{jω})|².
The function filtfilt in MATLAB implements the above scheme. A second scheme to achieve zero-phase filtering is outlined in Problem 4.67.
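The forward-reverse scheme of Figure 4.11 can also be coded directly, as sketched below; the filter and input are arbitrary, and end effects (which filtfilt treats more carefully) are ignored.

% Zero-phase filtering by filtering, time-reversing, and filtering again
b = [0.2 0.5 0.2];  a = 1;       % arbitrary real-coefficient filter H(z)
x = randn(1,100);                % finite-length real-valued input
v = filter(b, a, x);             % first pass through H(z)
u = fliplr(v);                   % time reversal: u[n] = v[-n]
w = filter(b, a, u);             % second pass through H(z)
y = fliplr(w);                   % y[n] = w[-n]
% y approximates filtering x with the zero-phase response |H(e^{jw})|^2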
In the case of a causal transfer function with a nonzero phase response, the phase distortion can be avoided by ensuring that the transfer function has a unity magnitude and a linear-phase characteristic in the frequency band of interest. The most general type of such a filter has a frequency response given by

H(e^{jω}) = e^{−jωD},     (4.68)

which has a linear-phase response from ω = 0 to ω = 2π. Note that the above filter has a unity magnitude response and a linear phase with a group delay of amount D at all frequencies, i.e.,

τ(ω) = D.     (4.69)

The output of this filter to an input x[n] = Ae^{jωn} is then given by

y[n] = Ae^{−jωD} e^{jωn} = Ae^{jω(n−D)}.

If x_a(t) and y_a(t) represent the continuous-time signals whose sampled versions, sampled at t = nT, are x[n] and y[n] given above, then the delay between x_a(t) and y_a(t) is precisely the group delay of amount
Figure 4.12: Frequency response of an ideal lowpass filter with a linear-phase response in the passband.

D. Note that if D is an integer, then the output sequence y[n] is identical to the input sequence x[n], but delayed by D samples. If D is not an integer, y[n], being delayed by a fractional part, is not identical to x[n]. But in this latter case, the waveform of the underlying continuous-time output is identical to the waveform of the underlying continuous-time input and delayed by D units of time.
If we desire to pass input signal components in a certain frequency range undistorted in both magnitude and phase, then the transfer function should exhibit a unity magnitude response and a linear-phase response in the band of interest. Figure 4.12 shows the frequency response of a lowpass transfer function with a linear-phase characteristic in the passband. Since the signal components in the stopband are blocked, the phase response in the stopband can be of any shape.

Figure 4.13: FIR approximation to the ideal linear-phase lowpass filter.

4.4.3 Types of Linear-Phase FIR Transfer Functions

In the previous section we pointed out why it is important to have a transfer function with a linear-phase property. It turns out it is always possible to design an FIR transfer function with an exact linear-phase response, while it is nearly impossible to design a linear-phase IIR transfer function. Recall that the length-3 FIR transfer function of Example 4.3 has a linear-phase response, as indicated in Eq. (4.26). This filter was characterized by a symmetric impulse response of the form h[0] = h[2]. Note also the symmetry of the impulse responses of the truncated approximations to the ideal linear-phase lowpass filter shown in Figure 4.13. We show now that a causal FIR transfer function of length N + 1,

H(z) = Σ_{n=0}^{N} h[n] z^{−n},

has a linear phase if its impulse response h[n] is either symmetric, i.e.,

h[n] = h[N − n],   0 ≤ n ≤ N,     (4.74)

or is antisymmetric, i.e.,

h[n] = −h[N − n],   0 ≤ n ≤ N.     (4.75)

Since the length of the impulse response can be either even or odd, we can define four types of symmetry for the impulse response, as demonstrated in Figure 4.14. It follows from Eq. (4.75) that for an antisymmetric FIR filter of odd length, i.e., N even, h[N/2] = 0. We next examine each of these four cases [Rab75].

Figure 4.14: Illustration of the four types of impulse response symmetry: (a) Type 1, N = 8; (b) Type 2, N = 7; (c) Type 3, N = 8; (d) Type 4, N = 7.

Type 1: Symmetric Impulse Response with Odd Length

In this case, the degree N is even. Assume N = 8 for simplicity. The transfer function of the corresponding filter is given by

H(z) = h[0] + h[1]z^{−1} + h[2]z^{−2} + h[3]z^{−3} + h[4]z^{−4} + h[5]z^{−5} + h[6]z^{−6} + h[7]z^{−7} + h[8]z^{−8}.     (4.76)

But from Eq. (4.74), for N = 8, h[0] = h[8], h[1] = h[7], h[2] = h[6], and h[3] = h[5]. Then Eq. (4.76) reduces to

H(z) = h[0](1 + z^{−8}) + h[1](z^{−1} + z^{−7}) + h[2](z^{−2} + z^{−6}) + h[3](z^{−3} + z^{−5}) + h[4]z^{−4}
     = z^{−4}{h[0](z^{4} + z^{−4}) + h[1](z^{3} + z^{−3}) + h[2](z^{2} + z^{−2}) + h[3](z + z^{−1}) + h[4]}.     (4.77)

As a result, the corresponding frequency response is given by

H(e^{jω}) = e^{−j4ω}{2h[0] cos(4ω) + 2h[1] cos(3ω) + 2h[2] cos(2ω) + 2h[3] cos(ω) + h[4]},     (4.78)

obtained using the fact that (z^{m} + z^{−m})/2 |_{z=e^{jω}} = cos(mω). Note that the quantity inside the braces in the above expression is a real function of ω and can assume positive or negative values in the range 0 ≤ |ω| ≤ π. Here the phase is given by

θ(ω) = −4ω + β,

where β is either 0 or π, and hence, it is a linear function of ω in the generalized sense. The group delay is given by

τ_g(ω) = −dθ(ω)/dω = 4,     (4.79)

indicating a constant group delay of four samples.
In the general case for Type 1 FIR filters, the frequency response is of the form

H(e^{jω}) = e^{−jNω/2} H̃(ω),     (4.80)

where the amplitude response H̃(ω), also called the zero-phase response, is given by

H̃(ω) = h[N/2] + 2 Σ_{n=1}^{N/2} h[N/2 − n] cos(ωn).     (4.81)
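A quick numerical check of Eq. (4.81), using an arbitrary symmetric odd-length impulse response:

% Amplitude (zero-phase) response of a Type 1 linear-phase FIR filter
h = [1 2 3 4 3 2 1];                 % symmetric, length 7, so N = 6
N = length(h) - 1;
w = linspace(0, pi, 256);
Ht = h(N/2 + 1)*ones(size(w));       % h[N/2] term of Eq. (4.81)
for n = 1:N/2
    Ht = Ht + 2*h(N/2 - n + 1)*cos(w*n);
end
% exp(jNw/2) H(e^{jw}) should be real and equal to Ht
Hd = freqz(h, 1, w) .* exp(1j*N*w(:).'/2);
max(abs(Ht(:) - real(Hd(:))))        % negligibly small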

Type 2: Symmetric Impulse Response with Even Length


Here the degree N is odd. Let N = 7. By making use of the symmetry of the impulse response coefficients given by Eq. (4.74), the transfer function of the FIR filter can be written as

H(z) = h[0](1 + z^{−7}) + h[1](z^{−1} + z^{−6}) + h[2](z^{−2} + z^{−5}) + h[3](z^{−3} + z^{−4})
     = z^{−7/2}{h[0](z^{7/2} + z^{−7/2}) + h[1](z^{5/2} + z^{−5/2}) + h[2](z^{3/2} + z^{−3/2}) + h[3](z^{1/2} + z^{−1/2})}.
Figure 4.15: Magnitude responses of the length-7 moving-average lowpass filter (solid line) and the modified lowpass filter of Eq. (4.82) (dashed line).

The frequency response function is thus given by

H(e^{jω}) = e^{−j7ω/2}{2h[0] cos(7ω/2) + 2h[1] cos(5ω/2) + 2h[2] cos(3ω/2) + 2h[3] cos(ω/2)}.     (4.83)

As before, the quantity inside the braces in the above expression is a real function of ω, and as in the previous case, it can assume positive or negative values in the range 0 ≤ |ω| ≤ π. Here the phase is given by

θ(ω) = −7ω/2 + β,

where β is again either 0 or π. As a result, the phase is also a linear function of ω in the generalized sense. The corresponding group delay is

τ_g(ω) = −dθ(ω)/dω = 7/2,     (4.84)

indicating a group delay of 7/2 samples.
The expression for the frequency response in the general case for Type 2 FIR filters is of the form

H(e^{jω}) = e^{−jNω/2} H̃(ω),     (4.85)

where the amplitude response is given by

H̃(ω) = 2 Σ_{n=1}^{(N+1)/2} h[(N+1)/2 − n] cos(ω(n − 1/2)).     (4.86)

Type 3: Antisymmetric Impulse Response with Odd Length

Here the degree N is even. Consider N = 8. Then applying the symmetry condition of Eq. (4.75) on the expression for the transfer function, we arrive at

H(z) = z^{−4}{h[0](z^{4} − z^{−4}) + h[1](z^{3} − z^{−3}) + h[2](z^{2} − z^{−2}) + h[3](z − z^{−1})},     (4.87)

where we have used the fact that h[4] = 0. As a result, using the identity (z^{m} − z^{−m})|_{z=e^{jω}} = 2j sin(mω) = 2e^{jπ/2} sin(mω), the frequency response is given by

H(e^{jω}) = e^{−j4ω} e^{jπ/2}{2h[0] sin 4ω + 2h[1] sin 3ω + 2h[2] sin 2ω + 2h[3] sin ω}.     (4.88)

It also exhibits a generalized linear-phase response given by

θ(ω) = −4ω + π/2 + β,

where β is either 0 or π. The group delay here is given by

τ_g(ω) = 4,     (4.89)

implying a constant group delay of four samples.
The expression for the frequency response in the general case of Type 3 FIR filters is given by

H(e^{jω}) = j e^{−jNω/2} H̃(ω),     (4.90)

where the amplitude response is of the form

H̃(ω) = 2 Σ_{n=1}^{N/2} h[N/2 − n] sin(ωn).     (4.91)

Type 4: Antisymmetric Impulse Response with Even Length

In this case the degree N is odd. For N = 7, the transfer function can be expressed as

H(z) = z^{−7/2}{h[0](z^{7/2} − z^{−7/2}) + h[1](z^{5/2} − z^{−5/2}) + h[2](z^{3/2} − z^{−3/2}) + h[3](z^{1/2} − z^{−1/2})}.     (4.92)

The corresponding frequency response is thus given by

H(e^{jω}) = e^{−j7ω/2} e^{jπ/2}{2h[0] sin(7ω/2) + 2h[1] sin(5ω/2) + 2h[2] sin(3ω/2) + 2h[3] sin(ω/2)}.     (4.93)

It has a generalized linear-phase response

θ(ω) = −7ω/2 + π/2 + β,

where β is either 0 or π. The group delay is constant and is given by

τ_g(ω) = 7/2.     (4.94)

Here, the frequency response in the general case for Type 4 FIR filters is given by

H(e^{jω}) = j e^{−jNω/2} H̃(ω),     (4.95)

where now the amplitude response is given by

H̃(ω) = 2 Σ_{n=1}^{(N+1)/2} h[(N+1)/2 − n] sin(ω(n − 1/2)).     (4.96)

General Form of Frequency Response


In each of the four types of linear-phase FIR filters, the frequency response H(e^{jω}) is of the form

H(e^{jω}) = e^{−jNω/2} e^{jβ} H̃(ω).

It should be noted that the amplitude response H̃(ω) for each of the four types of linear-phase FIR filters can become negative over certain frequency ranges, typically in the stopband, as indicated in Figure 3.3. The magnitude and phase responses of the linear-phase FIR filter are given by

|H(e^{jω})| = |H̃(ω)|,

θ(ω) = −Nω/2 + β          for H̃(ω) ≥ 0,
θ(ω) = −Nω/2 + β + π      for H̃(ω) < 0.

The group delay in each case is

τ(ω) = N/2.

Note that, even though the group delay is constant, since in general |H(e^{jω})| is not a constant, the output waveform is not a replica of the input waveform.
An FIR filter with a frequency response that is a real function of ω is often called a zero-phase filter. Such a filter must have a noncausal impulse response.

4.4.4 Zero Locations of Linear-Phase FIR Transfer Functions

Let us now study the zero locations of a linear-phase FIR transfer function. Consider first an FIR filter with a symmetric impulse response. Its transfer function H(z) can be written as

H(z) = Σ_{n=0}^{N} h[n] z^{−n} = Σ_{n=0}^{N} h[N − n] z^{−n},     (4.97)

using the symmetry condition of Eq. (4.74). By making a change of variable m = N − n, we can rewrite the rightmost expression in Eq. (4.97) as

H(z) = Σ_{m=0}^{N} h[m] z^{−N+m} = z^{−N} Σ_{m=0}^{N} h[m] z^{m} = z^{−N} H(z^{−1}).     (4.98)

Similarly, the transfer function H(z) of an FIR filter with an antisymmetric impulse response satisfying the condition of Eq. (4.75) can be expressed as

H(z) = Σ_{n=0}^{N} h[n] z^{−n} = −Σ_{n=0}^{N} h[N − n] z^{−n} = −z^{−N} H(z^{−1}).     (4.99)

A real-coefficient polynomial H(z) satisfying the condition of Eq. (4.98) is called a mirror-image polynomial (MIP). Likewise, a real-coefficient polynomial H(z) satisfying the condition of Eq. (4.99) is called an antimirror-image polynomial (AIP).
In either case, it follows from Eqs. (4.98) and (4.99) that if z = ξ_0 is a zero of H(z), so is z = 1/ξ_0. Moreover, for an FIR filter with a real impulse response, the zeros occur in complex conjugate pairs.
Hence, a zero at z = ξ_0 is associated with a zero at z = ξ_0*. Therefore, a complex zero that is not on the unit circle is associated with a set of four zeros given by

z = r e^{±jφ},   z = (1/r) e^{±jφ}.

For a zero on the unit circle, its reciprocal is also its complex conjugate. Hence, in this case the zeros appear as a pair

z = e^{±jφ}.

A real zero is paired with its reciprocal zero appearing at

z = ρ,   z = 1/ρ.

Note that a zero at z = ±1 is its own reciprocal, implying it can appear only singly. However, a Type 2 FIR filter must have a zero at z = −1, since from Eq. (4.98) we note that

H(−1) = (−1)^N H(−1) = −H(−1),

implying H(−1) = 0. In the case of a Type 3 or 4 FIR filter, Eq. (4.99) implies that H(1) = −H(1), indicating that the filter must have a zero at z = 1. On the other hand, only a Type 3 FIR filter is restricted to have a zero at z = −1, since here

H(−1) = −(−1)^N H(−1) = −H(−1),

forcing H(−1) = 0. Figure 4.16 shows some examples of zero locations of all four types of FIR filters.
As can be seen from the above discussion, the principal difference between the four types of linear-phase FIR filters is with respect to the number of zeros at z = 1 and z = −1. Summarizing, we conclude that:

(a) Type 1 FIR Filter: Either an even number or no zeros at z = 1 and z = −1.

(b) Type 2 FIR Filter: Either an even number or no zeros at z = 1, and an odd number of zeros at z = −1.

(c) Type 3 FIR Filter: An odd number of zeros at z = 1 and z = −1.

(d) Type 4 FIR Filter: An odd number of zeros at z = 1, and either an even number or no zeros at z = −1.

The presence of zeros at z = ±1 leads to the following limitations on the use of these linear-phase FIR filters for designing filters. For example, since the Type 2 FIR filter always has a zero at z = −1, it cannot be used to design a highpass filter. Likewise, the Type 3 FIR filter has zeros at both z = 1 and z = −1 and, as a result, cannot be used to design either a lowpass or a highpass or a bandstop filter. Similarly, the Type 4 FIR filter is not appropriate to design a lowpass filter due to the presence of a zero at z = 1. Finally, the Type 1 FIR filter has no such restrictions and can be used to design almost any type of filter. These zero patterns are easy to verify numerically, as sketched below.
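The impulse responses below are arbitrary symmetric and antisymmetric sequences used only to illustrate the mandatory zeros at z = ±1.

% Mandatory zeros of linear-phase FIR filters at z = 1 and z = -1
h2 = [1 -2 3 3 -2 1];     % Type 2: symmetric, even length
h3 = [1 -2 0 2 -1];       % Type 3: antisymmetric, odd length
h4 = [1 -2 2 -1];         % Type 4: antisymmetric, even length
roots(h2)                 % contains a zero at z = -1
roots(h3)                 % contains zeros at z = 1 and z = -1
roots(h4)                 % contains a zero at z = 1
% All remaining zeros occur in reciprocal (mirror-image) pairs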

4.4.5 Bounded Real Transfer Functions

A causal stable real-coefficient transfer function H(z) is defined as a bounded real (BR) transfer function [Vai84] if

|H(e^{jω})| ≤ 1   for all values of ω.     (4.100)

Figure 4.16: Examples of zero locations of linear-phase FIR transfer functions: (a) Type 1, (b) Type 2, (c) Type 3, and (d) Type 4 filters.

If the input and output of a digital filter characterized by a BR transfer function H(z) are given by x[n] and y[n], respectively, with X(e^{jω}) and Y(e^{jω}) denoting their respective discrete-time Fourier transforms, then Eq. (4.100) implies that

|Y(e^{jω})|² ≤ |X(e^{jω})|².     (4.101)

Integrating Eq. (4.101) from −π to π, and applying Parseval's relation (see Table 3.2), we arrive at

Σ_{n=−∞}^{∞} y²[n] ≤ Σ_{n=−∞}^{∞} x²[n].     (4.102)

In other words, for all finite-energy inputs, the output energy is less than or equal to the input energy, implying that a digital filter characterized by a BR transfer function can be viewed as a passive structure. If Eq. (4.100) is satisfied with an equal sign, then from Eq. (4.102), the output energy is equal to the input energy, and such a digital filter is therefore a lossless system. A causal stable real-coefficient transfer function H(z) with a frequency response H(e^{jω}) of unity magnitude is thus called a lossless bounded real (LBR) transfer function [Vai84].
The BR and LBR transfer functions are the keys to the realization of digital filters with low coefficient sensitivity (see Section 9.9).

4.5 Simple Digital Filters


In Chapter 7 we outline various methods of designing frequency-selective filters satisfying prescribed specifications. In this section we describe several low-order FIR and IIR digital filters with reasonably selective frequency responses that often are satisfactory in a number of applications.

4.5.1 Simple FIR Digital Filters

The FIR digital filters considered here have integer-valued impulse response coefficients. These filters are employed in a number of practical applications, primarily because of their simplicity, which makes them amenable to inexpensive hardware implementation.

Lowpass FIR Digital Filters

The simplest lowpass FIR filter is the moving-average filter of Eq. (4.13) with M = 2, which has a transfer function

H_0(z) = (1 + z^{−1})/2 = (z + 1)/(2z).     (4.103)

The above transfer function has a zero at z = −1 and a pole at z = 0. It follows from our discussion in Section 4.3.4 that the pole vector has a magnitude of unity, the radius of the unit circle, for all values of ω. On the other hand, as ω increases from 0 to π, the magnitude of the zero vector decreases from a value of 2, the diameter of the unit circle, to zero. Hence, the magnitude response |H_0(e^{jω})| is a monotonically decreasing function of ω from ω = 0 to ω = π. The maximum value of the magnitude function is unity at ω = 0, and the minimum value is zero at ω = π, i.e.,

|H_0(e^{j0})| = 1,   |H_0(e^{jπ})| = 0.

From Eq. (4.103), it follows that the frequency response of the above filter is given by

H_0(e^{jω}) = e^{−jω/2} cos(ω/2),     (4.104)

whose magnitude response cos(ω/2) is seen to be a monotonically decreasing function of ω (see Figure 4.17(a)). The frequency ω = ω_c at which |H_0(e^{jω_c})| = (1/√2)|H_0(e^{j0})| is of practical interest, since here the gain G(ω_c) in dB is given by

G(ω_c) = 20 log_10 |H_0(e^{jω_c})| = 20 log_10 |H_0(e^{j0})| − 20 log_10 √2 = 0 − 3.0103 ≅ −3.0 dB,

since the dc gain G(0) = 20 log_10 |H_0(e^{j0})| = 0. Thus, the gain G(ω) at ω = ω_c is approximately 3 dB less than that at zero frequency. As a result, ω_c is called the 3-dB cutoff frequency. To determine the expression for ω_c we set |H_0(e^{jω_c})|² = cos²(ω_c/2) = 1/2, which yields ω_c = π/2. This result checks with that given in the plot of Figure 4.17(a). The 3-dB cutoff frequency ω_c can be considered as the passband edge frequency, and as a result, for this filter the passband width is approximately π/2. The stopband here is from π/2 to π. Note from Eq. (4.103) that the transfer function H_0(z) has a zero at z = −1, or ω = π, which is in the stopband of the filter.
A cascade of the simple FIR filters of Eq. (4.103) results in an improved lowpass frequency response, as illustrated in Figure 4.17(b) for a cascade of three sections. The 3-dB cutoff frequency of a cascade of M sections of the lowpass filter of Eq. (4.103) can be shown to be given by (Problem 4.49)

ω_c = 2 cos^{−1}(2^{−1/2M}).     (4.105)
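Both the cascading and the cutoff formula of Eq. (4.105) are easy to check numerically, as in the sketch below (M = 3 is the value used in Figure 4.17(b)).

% Cascade of M first-order lowpass sections H0(z) = (1 + z^-1)/2
M = 3;
wc = 2*acos(2^(-1/(2*M)))        % 3-dB cutoff from Eq. (4.105), about 0.3*pi
b = 1;
for k = 1:M
    b = conv(b, [0.5 0.5]);      % polynomial product of the M sections
end
[H, w] = freqz(b, 1, 1024);
plot(w/pi, abs(H)); grid
xlabel('\omega/\pi'); ylabel('Magnitude');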

Figure 4.17: Magnitude responses of FIR filters: (a) first-order lowpass FIR filter H_0(z) of Eq. (4.103), (b) cascade of first-order lowpass FIR filters, and (c) first-order highpass filter H_1(z) of Eq. (4.106).

For M = 3, the above expression yields ω_c = 0.3002π, which also checks with the plot in Figure 4.17(b). Thus, the cascade of first-order sections yields a sharper magnitude response, but at the expense of a decrease in the passband width.
A better approximation to the ideal lowpass filter is given by the higher-order moving-average filter of Eq. (4.13). Signals with rapid fluctuations in sample values are generally associated with high-frequency components that are essentially eliminated by a moving-average filter of the type of Eq. (4.13), resulting in a much smoother output waveform, as illustrated earlier in Example 2.14. Example 4.9 discusses a simple approach to improving the magnitude response of a moving-average lowpass filter.
The moving-average filter is often employed as the basic building block in the design of lowpass filters used in sampling rate alteration and is considered in Section 11.12.

Highpass FIR Digital Filters

The simplest highpass FIR filter is obtained by replacing z with −z in Eq. (4.103), resulting in a transfer function

H_1(z) = (1 − z^{−1})/2.     (4.106)
The corresponding frequency response is given by

H_1(e^{jω}) = j e^{−jω/2} sin(ω/2),     (4.107)

whose magnitude response is shown in Figure 4.17(c). The monotonically increasing behavior of the magnitude function can again be demonstrated by examining the pole-zero pattern of the transfer function. The 3-dB cutoff frequency of this highpass filter is also at π/2. The transfer function H_1(z) has a zero at z = 1, or ω = 0, which is in the stopband of the filter.
An improved highpass frequency response can be obtained by cascading several sections of the simple highpass filter of Eq. (4.106). Alternatively, a higher-order highpass filter of the form

H_1(z) = (1/M) Σ_{n=0}^{M−1} (−1)^n z^{−n},     (4.108)

obtained by replacing z with −z in the expression for the transfer function of a moving-average lowpass filter, yields a sharper magnitude response.
An application of the FIR highpass filters is in moving-target-indicator (MTI) radars. In these radars, interfering signals, called clutter, are generated from fixed objects in the path of the radar beam [Sko62]. The clutter, generated mainly from ground echoes and weather returns, has frequency components primarily near zero frequency (dc) and can be removed by filtering the radar return signal through a two-pulse canceler, which is the first-order highpass filter of Eq. (4.106). Often, the frequency components of the clutter occupy a small band near dc, and for a more effective removal it is necessary to use a highpass filter with a sharper magnitude response and a slightly broader stopband. To this end, a cascade of two two-pulse cancelers, called a three-pulse canceler, provides an improved performance. Problem 4.56 and Exercise M4.4 describe two simple IIR highpass filters proposed for clutter rejection.

4.5.2 Simple IIR Digital Filters

We now describe several simple IIR digital filters with first-order and second-order transfer functions and sketch their corresponding frequency responses. In many applications, use of such filters provides satisfactory results. Often, more complex frequency responses can be achieved by cascading these simple transfer functions.

Lowpass IIR Digital Filters

A first-order lowpass IIR digital filter has a transfer function given by

H_LP(z) = ((1 − α)/2) (1 + z^{−1})/(1 − αz^{−1}),     (4.109)

where |α| < 1 for stability. The above transfer function has a zero at z = −1, i.e., ω = π, which is in the stopband of the filter. It has a real pole at z = α. As ω increases from 0 to π, the magnitude of the zero vector decreases from a value of 2 to 0, whereas, for a positive value of α, the magnitude of the pole vector increases from a value of 1 − α to 1 + α. The maximum value of the magnitude function is unity at ω = 0, and the minimum value is zero at ω = π, i.e.,

|H_LP(e^{j0})| = 1,   |H_LP(e^{jπ})| = 0.

Therefore, |H_LP(e^{jω})| is a monotonically decreasing function of ω from ω = 0 to ω = π (see Figure 4.18).
4.5. Simple Digital Filters 237

~c--~-----~ -::-;;-=u:;-~

!-- n.=0.71-
' ~~~_j;

Ia) (b)

Figure 4.18: MagnitllCe and gam respomes of the first-ur.fer lcwpass filter of Eq. (4.109) fm three values of a.

From Eq. (4.109). the squared magnitude function can be easil:r deri-ved:
-_ ~ ( l - o:) 2 0 + cos.w}
!Ht.rie-'"'Ji- = --~ , ~ (4.1 to)
20 +a- 2acosw}
The derivati\·e of jHLp(eJ"-')f with respect tow is given by
d!Ht.p(el&)(! -(l-a)"l.! +2a+a2 )sinw
dw 2(1 + o<2 2a cos w) 2
which is always nonpositlvein the range 0 s w ~ n verifying again the monotonically decreasing bebavior
of the magnitude function. To determine the 3-dB cutoff frequency we we set l Ht.p(ej:n.o)l 2 = l/2 itl
Eq. (4.110) and arrive at the equation

(1 - a) 2(l +COS We)


~-

2(1 + rx2 2acosw,) 2


which when solved yields
COS We =- -,
l +a-
~ (4.1lla)

The above quadratic equation cnn be solv-ed for a yielding two solutions. The wlution resulting in a stable
transfer function HLp(z) :is given by
(4.1 1 lb}
COS We

PUots of the magnitude and the gain res-ponses of the above Jowpass transfer function are sketched in Figure
4.18 for several values of a.
It follows from Eq. (4.110) chat the first-order lowpass transfer function Hu-(z) of Eq. {4"109) is a
bounded real {BR) function if !ni < 1.

Hlghpass IIR Digital Filters


A first-order highpass transfer function HH p (z) i.s given by
l+a 1-z- 1
HHp(z)=-2~1 a-. l ' {4.112)
Chapter 4: LT; Discrete-Time Systems in the Transtorm Domain

Uc,-----~ r ----~-~-··-~-~--.

'
0'
··/~----!
~0_:~,: ,.;~--=------ -1. ' ' /
'' /
2 ,!
' /
;:;o.t.~ i--,.~U8' 1
~ : '/
::;: ,.,
0.4• I
-a=ll!- I
' :-- a-os:
; - <l!=ll.7:

.uv,L____ ..~---"'.·.".-5 . J _____ __1


-W'f--- -~----
<X=il5'

I} 0.?- 0.4 0.( l) & j Hi' w·•


d<

(a} (b)
Figure 4.19; Magnitude and gain responses. of the first-order highpass filter ofEq. (4.1 J 2_) fw- three values of a.

where !al < 1 for stability. Its 3-dB cutofffrequem:y We is also given by Eqs. (4.1 f 1a) and (4.1 I Ib). Plots
of the magnitude and the gain responses of the above highpass transfer function for several values of a are
shown in Figure 4.19.
It can be shown that the first-order highpass trar.sfer function ofEq. (4.112)is a BR function 'if jai < l.

Bandpass IIR Digital Alters


A second-order bandpass digital filter is described by the transfer function

(H!3)

Its squared magnitude function is given by


j<dl7 (l~a) 2 (1-cos2w)
]HBp(e ) - = , (4.!!4)
2[1 + pzo + a) 2 + a 2 2,80 +a}'' cos w + 2a cos 2w1
which goes to zero at w = 0 and at w = Jt. It assumes a maximum value of unity at w = w.,, called the
center frequency of the bandpass filter, where

w., = oos- 1 (/J}. (4. 1!5)


The frequencies w, 1 and wcz where the squared magnitude response goes to 1/2 are called the 3-dB cmoff
frequ~<ncies,
and their difference, Bw. assuming <'Llc2 > w.,;, called the 3-dB bandwidth, is given by

Bw = w._z- W<f =cos~\ ( l :-aZ). (4.116)

Plots of the magnitude response Df lhe bandpass filter of Eq. (4.113) are given in Figure 420 fo; several
values of a and {3.
4.5. Simple Digital Filters 239

" •••
[ - U=OS
1 o:=OZ
~ o.s
·§:11.6

i 6.4

"'
0.2 OA .,
(a) (b)

Figure 4.2~ Magnitude response of the second-order bandpass filter or Eq. (4.113}: (a) three specific vfllues of a
with f3 = 0.34. and (b) ttuee specific values of fJ with a = 0.6.

II can be shown that the bandpass transfer function of Eq. (4.1 13) is a BR function if Ia! < I and
I IPI <I.

Bandstop IIR Digital FUters


Finally, a second-order bandstop digital filter has a transfer function of the form
l+cr l-2fjC 1 +z- 2
Hss(z) = - 2 - 1 ,8{1 + ~)z I + £¥Z 2. (4.118)
24J Chapter 4: LTI Discrete-Time Systems in the Transform Domain

rI .
1
' 1

I
I II
' I
J \

(a)
'"
Figure 4.21: Magnitude response and lhe group dt-lay of the bandpass transfer function ofEq. (4.117b).
"
(b) - •• ••

/
' I
'
' \ ''
' ;
\\ '
--(!:0.&
-!!=OS
' I ""'QZ
II
.I '
' \
j
•., "••• t ..
•• ••• I

(a) (b) ""'


F1gllft 4.22: Frequency response of the -second-order bandstop filter ofEq. (4.113). (a) 1"hree specific values of a
with f3 = O.S, and (b) three specific values of p 'With a =0.5.

Its magnitude response is plotted in Figure 4.22 for- various values of a and /3. Here, the magnitude
response takes the maximum value of unity atw = 0 and w = :te, and goes to zero at w = w 0 • where w" is
given by Eq. (4.115). Since the magnitude response goes to zero at w.,.
roa is called the notch frequency,
and the digital filter ofEq. (4. i 18) is more commonly called a notch filter. Tile 3-d.B notch bandwidth Bw
is given by Eq. (4.116).
The bandstop transfer function ofEq. (4.118) is again a BR function if Ia I < 1 and I.Bt < 1.

Hlgher-Ortlet DR Digital Alters


By cascading the simple digital filters described above, we can implement -digital filters with sharper
ma1~itude responses. For example, for a -cascade of K first-order lowpass sections characterized by the
transfer function of Eq. {4.109), the overall structure has a transfer function G LP (z) gi\'eD by

GLP(Z)
1-a l+z-t
= ( - 0- I •
)K (4.119)
- -a._ 1
4.5. Simple Digital Filters 241

Using Eq. (4.1l0) we obtain the corret.ponding squared-magnitude function as

• o [ (I - a)•{l
o + cosw) ]K
IG LP {J' ,w)l"-
- 2 (1 + a2 2a cos w)
(4.120)

To detenn.ine the relation between its 3-dB cutoff frequeocy We and the parameter a, we set

(4.121)

which when solved fur a yields. for a stable GLp(z).

1+(1-C)COSWc-sinwc·\/2C cl
a ~ '--'--"'-"-';c---:;C'-:-::c-c-"c__ __ (4.122)
l c +COS We
where
(4.123)

It should be noted that Eq. (4. 122) reduces to Eq. (4.111 b) for K = I.

Likewise, a cascade of first-order highpass sections results in a bighp~ filter with a sharper roll-off
in the gain response.
Figure 4.24 shows the magnitude responses of a single second-order bandpass filter (plot marked 1),
a cascade of two identical second-order band~ fitters (plot marked 2). and a cascade of three identical
second-order bandpass filten (plot marked 3). All bandpass sections are characterized by a = 0.2 and
fJ = 0.34. Since the parameter fJ of all second-order sections is identical, the center frequency of the
cascade structure is the same as that of the single section. Hov,rever, the 3-d.B bandwidth decreases with an
increase in the number of sections. Similarly, a cascade of identical second-order bandstop filters results
in a higher-order bandstop filter with an identical center frequency but with a narrower 3-d.B bandwidth.

4.5.3 Comb Filters


The simple filten of the previous two sections are characterized either by a single passband and/or a single
stopband. There are a number of applications where filters with multiple passbands and stopbands are
required. Tite comb .filter is an example of this latter type of filter. In its most general form, a comb
filter has a frequency response that is a periodic function of w with a period 2N1L, where L is a positive
integer. H H(z) is a biter with a single passband and/or a single stopband. a comb filter can easily be
generated from it by replacing each delay in lts realization with L delays. resulting in a structure with
a transfer function given by G(z) = H (zL)_ If tl:e magnitude response JH(ei,..)l exhibits a peak at wp,
then the magnitude response of jG{ei<»)l will exhibit L peaks at Wpk/ L, 0 :;::: k :::: L - 1. Likewise, if
the magnitude response jH (eiw); has a notch at w", then the magnitude response of IG{ejw)l will have L
242 Chapter 4: LTI Discrete-Time Systems in the Transform Domain

-IG
liY'
L_ ___ - - -
!:( w·'
I
~orm.ihzed (,.,quen<:y Nom-.alicred fre<jucn<y

(a I (b)

Fig:un 4.13: (a) Gain reo;ponse~ of a singie first-order lowpass filter (K = I) and a casc-ade of four identical first-order
lol.\-p.l$5 filters (K = 4) with a 3-dB wtofffre<juency of w, = 0.4rr. (b) Passhand details.
~--------

1 r

Ff&ure 4.24: Gain re:>ponses of a single second-m'der bandpass filter, a cascade of two identical sa;DI!d-onier ":;.andpiiSS
=
filters, and a cascade of tllree identical Sie!:Oild-order bandpa_o;s filters. All -;;e.;:tions characterized by a 0.2 and
f3 = 0.34.

notches at w 0 k/ L. 0 ::; k :5 L - l _ It should be noted that a comb filter can be generated from ei!.her an
FIR or an UR prototype filter.
To illustrate rhe generation of the comb filter conslder the prototy-pe low-pass FIR filter of Eq. (4.103},
which has a lowpass magnitude response as indicated in Figure 4.17(a). The comb filter generated from
Ho{z) has a transfer function
(4.124)
with a corresponding magnitude response as sketched in Figure 4.25(a). The new filter is essentially a
notch fihec with L notch frequencies in tt.e range 0 ~ w < 2Jr located at w = (2k + l)n j L and bas L
peaks in its magnitude response located atw = lttk/L,O ~ k ~ L -l.
The comb filter G 1{z) generated from the prototype highpass filter ofEq. (4.106) has a transfer function
ghen by
(4.125)
with a magnitude re£ponse as indicated in Figure 4.25(b }. This comb filter again bas L notch frequencies
in the range 0 s uJ < 2rr located at w = 2::rk/L, 0 _::: k ~ L- I, which are exactiy at tbe locations
of the peaks of the comb filter Go(z) of Eq. (4.124). Likewise its magnitude response has L peaks at
w = (2k + I}n/ L. 0 ~ k _:::: L - L which are precisely the locations of the notch frequencies of Go(z).
Depending on the application, comb filters with other types of periodic magnitude responses can be
e-asily generated by appropriately choosing the prototype filte~. For example, theM-point moving-average
filt<:r of Eq. (4.52):
H. , I - z -M
\Z}=M(l
z 'l'
4.&. A!lpass Transfer Function 243

~
'
! :

'' l'
\' ''
'' I
',,
I
I

'
,I
~
\!
_L I
·~-·-'--
'I
"'
I
I
" '
(a} (b)

Fagure 4.25: Magnitude resp:mses of FIR comb filters: (a) generated from a prototype lowpas.;; filter of Eq. (4. l03)
wlth L = 5, and {b) genera led from a protol:ype highpass filter of Eq. (4.!06) wltlt L = 5.

has been used as the prototype. This fiJter ha~ a peak magnitude at w = 0, and M - l notches at
w = ::!JT£/ M. I S f.
.:.<=: M - 1. The comb filter g-enerated from this prototype has a transfer function

1_ ~-LM
G{,.· - H( L ) - - - ' - -
~)- z - M{l - cL)'

wlb.>se magnitude ~ponse has L peaks at w = brk/L, 0 ::S k ::::; L - !, and L\M- 1) notches al
,;v = 2Irk/ LM ,l S k ~ L(M- 1). By choosing Land M appropriately, peaks and notches can be t:reated
at desired locations. In ionospheric measurements of electron concentration. the weak lunar spectral
components are usually masked by the strong solar spectral compo!lents. Tirese two spectral components
have been separated by using two such comb filters [Ber76].
One of the applications uf a comb filter considered in Section 11.5.1 is in creating special audio effects
in musical sound processing, where both FIR and IIR prototypes are employed. Comb filters wilh multiple
notch frequencies find applications in the canceltation of periodic interferences. The comb filter is also
f:mpJoyed in LORAN navigation systems for the suppression of cross-rate interferences {Jac96!.
An interestingapplicationofthe comb filters Go(z) and G 1 (z)of.Eqs. (4.l24)and (4.125), respectively,
is in digital color television receivers for separating the luminance component containing the intensity
information and the chrominance components containing the color information from the compcsite video
signa! {Orl961. The basic structure for this purpose is as shown in Figure 4.26, where the delay chain is
chosen to provide a line delay, i.e., the rime to scan one horiz.ontalline. HO\Iiever, a complete separatio-n
of the two components is not possible with this structure. Moreover, the filtcr Go(z} acts as a loYtpass
fi Iter by averaging two successive horizontal lines of the vjdeo signal and bhu:s the luminance component
On the other hand, improved separation of me two components c.an be achieved by the structure of Figure
4.26 when the deiay chain i:s chosen to provide a frame delay. Unfortunately, the separation fails if there
is some motion from frame to frame.

4.6 Allpass Transfer Function


We :\OW turn our attention to a very special type of UR tmnsfer function that is characterized by unity
magnitude for all frequencies. Such a transfer function, caUed an allpass transfer function, has many
useful applications in digital signal processing {Reg88l We define the- allpass transfer function, examine
244 Chapter 4: LTI Dtscrece· Time Systems in the Transform DGmaii1

+ Gi<J
luminance
Composite .J '2..-+--!\
video -----"'! -'

Figure 4.26: Filter structure for tbe se~arntkm of tbe luminance and chrominance components of a composite video
slgnal

some of its key properti~ and outline one of its common applications. Later in this chapter and elsewhere
in the text we ;Jiscuss various other applications. One important application considereri in this chapter is
the development of an algebraic test for tbe BIBO stability of a causal IIR transfer functioo.

4.6.1 Definition
An IIR transfer function A(z) wi<h unity magnitude response for an frequencies. i.e.,
for all w (4.126)

is called an allpass transfer function.. Now an .'\.fth-order cau>.al real-coefficient aUpass transfer function
is of the fonn
.d -> , d -M-1 + z-M
d MTM-tz·-r-··-...!...lZ ·
AM(Z) ==:; ± 1 M 1 ,,. (4.127)
1 +dtz +··-+dM-Jl + +dMZ"'"
1f we denote the denominator polynomial of the allpass function AM(Z) as DM(Z),

D M ()
Z = 1 + d IZ -'"+··-+ d .\1-IZ -M+I+d MZ -M · (4.128)

the;1 it follows !hat AM{Z) can be written as

(4.129)

Note from above that if z = rej¢ is a pole of areal-coefficient allpass transfer function, then it has a zem at
z = (1/r)e- i¢. The numerator of an alJpass transfer function is said to be the mirror~image polynomial of
the denominator, and vice versa. We shaH use the notation i>u{z) to denote the mirror-image polynomial
of a degree-M polynomial, i.e., iJM{Z) = z-M DM(-Z- 1). Equation (4.129) implies that the poles and
the zeros of a real-coefficient allpas-s function exhibir mirror·image symmetry in the z-pl~ as shown in
Figure 4.27 for the foUowlng third-order allpass funclion:

A z = -0.2+0.1Bc 1 +0.4z- 2 + c 3
{4.130)
J(} I +DAz 1 +0.18z 2 -0.2z 3.

To show that the magnitude of AM(ei"') is indeed equal to one for aU w, it follows from Eq. (4.129)
that
4.6. AUpass Transfer Function 245

[
.,
i" "
0
0

.:: ..()_j

-I
0

-1.5
-I _.,,
Figu~ 4.27: Pole~ zero plot of the real coefficient aUpass transfer function of Eq. (4.130).

1\
I
I'
'
\
'\
-
_, -8

{a)
-
(b)

liigure 4.28: (a) Tbe- principal value of the pr..ase function. and (b) the unwrapped phase function of the <lllpass tnlnsfer
fumtion of Ec;_. (4.130).

Therefore.
-MD ( -1) MD ( )
A ( ,A ( -I)= Z M Z Z M Z = 1.
M ZJ M Z DM(Z) DM(z-1)

Hence.
(4.13!)
As shown earlier in Section 4.3.5, the poles of a causal stable transfer function must lie inside the
unit circle. As a cesuit, aU zeros of a causal stable allpass transfer function lie outside the unit circle in a
mirror-image symmetry with its poles situated inside the unit circle.
It is interesting to examine the behavior of the phase function of an allpw;s transfer function. Fig.
ure 4.28(a) shows the principal value of the phase of the stable third-order allpass transfer function of
Eq. (4.130). Note the discontinuity by the amount of 2;rr in the phase 8(w). If we unwrap the phase by
removing the discontinuity, we arrive at tbe unwrapped phase function Br:(w) indicated in Figure 4.28(b).
As can be seen from this :figure, the unwrapped phase function is a nonpositive continuous function of w
in the range 0 < w < ;;r. This property of the unwrapped phase function holds for any arbitrary causal
stable allpass function.
246 Chapter 4: LTI Discrete-Time Systems i'i the Transform Domain

Figun! 4.19: Use of an allpas~ filter as a delay equallur.

4.H.2 Properties
We now stare three very useful and important properties of a causal s!.able allpass function without proof
!Rt:g8&l.
Property 1. It follows from Section 4.3.5 that a causal stable real coefficient allpass transfer function
is a lmsless bounded real (LBR) transfcr function or, equivalently. a causal stabie allpass filter is a lossless
stnJCture.
Property 2. The second property is concerned with the magnitude of a stable allpass function A (z).
It can be shown very simply that (Problem 4.83)
< l for!zl> 1,
IA~z):
I=; for lzl = l,
> 1 for!zl < l.
(4,132)

Property 3. The last property of inlerest is with regard to the change in phase for a real stable allpass
function over the frequency range(J) = 0 tow = rr. Let r(w) denote the group delay function of the allpass
filter A(z), i.e..
r(w) = - L[ec(ej"')],
where B,(w) Th the unwrapped form of the phase function B(w) = .arg{A(ei"'H in order that the group
delay r (w) be- well behaved. Now the unwrapped phase function ec{w} of a <>table allpass function A(z)
i.s a monotonicaUy decreasing function of w so that r (w) is everywhere positive in the range 0 < w < 1r.
Th.~rcfore, an M(h-orde: stable real allpass transfer function satisfies the property (Problem 4.84)

fo" r{(J)) dw = Mn. {4.133)

Or in oh'ler words. the change in the phase of an Mth-order allpass function as w goes from 0 to :n is M1r
rad:iam.

4.1>.3 A Simple Application


A simple but often used application of an a11pass filter is as a delay equalizer. Let G(z) be the transfer
functi;:.n of a digital filter that has been designed to meet a prescribed magnitude response. The nonlinear
phase response of rhis filter can be corrected by cascading it with an all pass filter section ,4_ (z) so that the
ov"t,rall cascade with a transfer function G{z)A(.:) has a constant group delay over the frequency domain of
iniJ::-rest (see Figure 4.29}. Since the aUpass filter hlli a unity magnitude response, the magnitude response
oflhe cascade is still equal to ~G{ej"-')1, while the overall delay is given by the sum of the group delays of
G ( ::) and A.(z). The all pass is designed so that the overall group delay is approximately a constant in the
frequency region of interest
Various other applications of the alipass filter are described in the latter parts of this book.

4:7 Minimum-Phase and Maximum-Phase Transfer Functions


Another useful dassi.fication of a transfer function is in terms of the behavior of its phase response. Consider
the two first-order transfer functions H1 (z) and H1 (;:);
4.7. Minimum-Phase and Maximum-Phase Transfer Functions 247

Jlm.: jlrn z.

-+-:---+-~rl--- Rez
a -b

Orut circle L"nit circle


(a} (b)

FiguTe 4.30: Polc-zeru p!O£s of the transfer functions of Eq. (4.134): (a} HI (z) and (b) H2(z}.

Figure 4.31: Un.wrdpped pha~ response of Eq. (4.134) for a= 0 8, b = -0.5.

z+b . ) bz +I
H.(z) = --. H 2{Z = - - , Ia! < l, jb) < L (4.134)
.:::+a z+a
As c.an be seen from the pole-zem plots given i.n Figure 4.313, both transfer functions have a pole inside the
tmi: circle at the same location at z = -a and are therefore stable. On the other hand, the zero-of H1 (z) is
inside the unit circle at;: = -b, whereas the zero of H2(z) h outside the uni:: circle at z = -1/b situated
in a mirror-image symmetry with respect to the zero of H1(z). However. the two transfer functions have
an identical magnitude function, since H 1 (z}Ht (C 1) = Hl\Z)/h(z- 1).
Fmm Eq. (4.134).

arg{HJ{eP)] =BJ(w) = tan- 1 sinw _ tan-1 sin a; , {4.135a)


b+cosw a+cosw
bsinw sinw
arg[H2(ei'''Jj = (tz(w) =tan- 1 -tan- 1 (4.U5b)
l + bcD£w a+ cosw
Figae 4..11 shows the unwrapped phase response~ of the two transfer functions. From this figure it can be
seen that H2(.:) has an excess pflase lag with respe-Ct to H 1{z).
Genera.Jizing the above result, a causal stable transfer function with aU zeros outside the unit circle has
an excess phase compared to a causal stable transfer function with identical magnitude but having all zeros
248 Chapter 4: LTI Discrete-Time Systems in the Transform Domain

F:iguft 4.32: Delay-complementary linear-phase bandstop and brndpass FIR filters ofEqs. (4.l39a} and (4.139b}.

inside the unit circle. As a result, a causal stable transfer function with all zeros inside the unit circle is
called a minimwn-phase transfer jimction, whereas a causal stable transfer function with all zeros outside
the unit circle is called a maximum-phase transfer function.
It can be easily shown that any nonminimum-phase transfer function can be expresse-d as the product
of a minimum-phase transfer function and a stable allpass transfer function (Problem 4.&7).

4.8 Complementary Transfer Functions


A set of digital transfer functions with complementary charocteristics often finds. useful apptieations in
practice, such as efficient rea.Ezations of the transfer functions, low sensitivity realizations., and filter bank
design and implementation. We describe next four useful complementary relations and indicate some of
their applicari<lns.

4.8.1 Delay-Complementary Transfer Functions


A set of L transfer functions {Ho{z), H 1 (z), ... , HL-1 (z)} is defined to be delay-complementary of each
other if the sum of their transfer functions is equal to some integer multiple of the unit delay fVai93J, i.e .•
L-l
L Ht(z) = {J.z-
,_
110
, f3 # 0, (4.136)

where no is a nonnegative integer.


A delay-complementary pair fHo(z), H 1 {z)J can be readily designed if one of the pair is a known Type
lliaear-phase- FIR transfer function of odd length. Let Ho(z) be a type 1 linear-phase frequency-selective
FIR transfer function of length M = 2K + 1 with a magnitude response equal to I ± 8p in the passband
and t~s than or equal ro O.s in the stopband where &p and 8s are very small numbers. From Eq. {4.80) its
frequency response is of the form
(4.137}
where Jlo(w) is the amplitude response of the FIR transfer function Ho(z). Its delay-complementary
transfer function Ht (z) defined for /3 = 1 and no = K has a frequency response given by

(4.138)
4.8. Complementary Transfer Functions 249

+ Luminance

Composite
video
-1

Figun- 4.33: Luminance and chrominance components sepru-a6on filter structure with improved vertical details.

Now, in the passband, 1 - Op ::::; ifo(w) ~ I + Op. and in the stopband, -lis ::5 iio(w) ::5 8s. It follows
therefore from Eq. (4.138), in the stopband of Ht (z). -lip ::5 ilo(w) ::5 Op, and in the passband of Ht (z),
I - !J, :5 Ho(w) _:::: l + Js. As a result. H, (z) has a complementary magnitude response characteristic to
that of Ho(z) with a stopband exactly identical to the passband of Ho(z) and a passband exactly equal to
the stopband of Ho(d. For examplet. if Ho(z) is_ a lmvpass filter, Ht (z) will be a highpass filter, and vice
versa. The frequency w 0 at which Ho(Wn) = H1 {w0 ) = 0.5 the gain responses of both filters are 6 dB
below their maximum values. As a result, tt)0 is called the fi...dB crossover frequency.

An interesting application of the delay-complementary FIR transfer function pair is in digital television
receivers ~Orf96]. It has been pointed out earlier that the structure of Figure 4.26 for the separation of
the luminance and chrominance components of the composite video signal tends to bJur the luminance
output. resulting in the I~ of vertical details. The low-frequency vertical details can be recovered from
the output of the comb filter G1 (<:}of Figure 4.26 by a Jowpass filter and added tu the output of the comb
filter Go(z). The vertical details can be removed from the output of Gt {z) by filtering it through a filter
that is delay-complementary to the lmio·'Pass filter. In practice the lowpass filter HLP(Z) employed is a
bandstop filter whose low-frequency passband coincides with the frequency range of the desired vertical
details while its delay-complementary filter H lip(<.) is a bandpass transfer function. Tile overa1! structure
is thus as shown in Figure 4.33, where the delay-chain of length K in the top path is chosen to equalize
the total delay in both the top aiJd bottom paths.
One set of delay-complementary linear-phase bandpass/bandstop filters proposed for use in Figure
4.33 is the one given inEqs. (4.139a) and (4.l39b} for which K = 10. Other linear-phase bandstop FIR
transfer functions suitable for vertical details recovery aTe given in Problem 4.91.
Delay-complementary filter sets can also be used as crossover filters for separating the digital audio
input signals into two or three subsignals occupying different frequency bands which are then used to
drive the appropriate speakers of a loudspeaker system f0rf96J. Design of such delay-complementary
crossover filters is considered in Exercises ~17 .30 and M7 .31. Another important application of the delay-
complementary property is in the realization of !ow-sensitivity FIR digital filters discussed in Section
9.7.3.
2~:;0 Chapter 4: LTi Discrete-Time Systems in the Transform Domain

4.8.2 Allpass-Complementary Transfer Functions


A set of M digitai transfer functions {H;{z)/, 0 :::; i ::::; M - L is defined to be al/pass-complementary of
each other, if the sum of their transfer functions if; equal to an aUpass functian A{z) fGar80], {Neu84aj.
I.e.,
M~l

L H;\z) ~ .4(c). (4.140)


i=O

4.8.3 Power-Complementary Transfer Functions


A set of M digital transfer functions {Ht(zi}. 0::; i ::: M- I, is defined to be power-wmplemenrary of
ea::h other, if the sum of the squares of their magnitude responses is equal to a oonstant K for al1 values of
:u [Neu84a], [Vai93J, i.e.,
M-l
L IH;{eiili)? = K, for all w, (4.141a}
i=O
where K > 0 is a constant. By analytic continuation, the above property is equivalent to
M~l

L H;(z- 1)Hi(Z) = K, for all z, t4.14lb)


i=O

for a real-coefficient H 1(z). Usually, by scaling the transfer functions, the power-complementary property
is ·iefined vtith respect to K = I.
For a pair of power-cQffipJementaty transfer functions., Ho(z) and Ht (z). the frequency % where
!Ho{eJw,.)l 2 = !Ht (ej""')J 2 =!. is called the crossover frequency. At this frequency the gain re.~ponses of
both filters are 3 dB below their maximum values" As a result, Wa is also called the 3-dB cutofffrequency
of both filters..
Consider two transfer functions. Ho(z) and H, (z) de.wribed by

Ho(z) = ![Ao(z) + A1 (.z)}. !4.l42a)


HJ(Z) = ~fAo{z)- At(z)], (4.J42b)

where Ao(z) and A 1(z) are stable allpass rransfer functions. II follows from the above that tbe sum of
the two transfer functions. is an allpass function Ao(d, and hence, Ho(z) and HJ(Z) of Eq_s. (4.142a)
and (4. 142b) are an allpass-complementary pair. It can be easily shown that the two filters described by
Eq. (4.142a) and (4. 142b) also fonn a power-complementary pair (Problem 4.93). It can also be easily
show-n that the transfer functions Ho(z} and Ht (!.) of Eqs. (4.142a) and (4.142b) are also bounded real
transfer functions (Problem 4.94).

4.!lA Doubly-Complementary Transfer Functions


A set of M transfer functions satisfying both the allpa.<>."-complernentary property of Eq. (4.l40) and the
pov~•er-complementary property of Eq. (4.141 a) is known as a doubly-complementary set [Neu84a}.
A pair of doubly-complementary HR transfer functions. Ho(::) and H1 (z), with a sum of altpass
decomposition in the form of Eqs. (4.142a} and (4. 142b) can be simply realized by a parallel connection
of the constituent al!pass filters, as indicated in Figure 4.34. We shall demons:rate in Section 9.9.2 that
such realizations also ensure low sensitivity in the passband wi!h respect to the !llultipJier -~.:oefficiems.
4.8. CompJementary Transfer Functions 251

Aufz)

________
Figure 4.34: Parallel aHpass realization of doubly-complementary IIR transfer functions.
,,,

"'
0.6 0.8
"'
"""
(a) (b)

Figure 4-.35: lllustrat:Wn of the complementary properties of the first-onler lowpass and higbpass rransfer functions
with a= 0-3. (a} Allpass;::;omplementary and (b) power-comple-mentary.
252 Chapter 4: LTI Discrete-Time Systems in the Transform Domain

It can be easily shown that the bandpass transfer function Hsp(l.} of Eq. (4.113) and the bandstop
transfer function Hss(z) ofEq. (4.118) form a doubly-complementary pair (Problem 4.%).

4.8.5 Power-Symmetric Filte-rs and Conjugate Quadrature Filters


A real-coeftkient causal digital filter with a transfer function H(z.) is said to be a power-symmetric fiiJer
if it satisfies the condition [Vai93]:

(4.146)

where K > 0 is a constant. It can be shown that the gain function Q(w) of a power-symmetric transfer
function at w = ::r /2 is given by 10log 10 K - 3 dB (Problem 4.97).
If we define G(z) = H(-z). then it follows from the above equation that H(z) and G(z) are power-
complementary as
H(z)H(z- 1) + G{z)G(z- 1 ) =a constant.
If Hi.z) of Eq. (4.146) is the transfer function of an FlR digital filter of order }\l, then the FIR digital
filter u.rith a transfer function
(4.148)
is called the conjugau quadratic filter of H(z) and vice versa [Vm88a]. Note that by definition, G(z) is
also a power-symmetric causal filter_ It follows from Eqs. (4.146) and (4.148.} that a pair of conjugate
quadratic FIR filters H(z) and G(z) are also power-complementary as they satisfy Eq. {4.147).

4.8.6 Magnitude-Complementary Filters


A set of M digital filters {G;(z)}, i = 0, l, ... , M- l, is defined to be magnitude-complementary of each
other if the sum of their magnitude responses is equal to a constant [Reg87c}, i.e.•
M-1
L !G;(ei")l ~ f3 for all w, (4.149)
i..O

where f3 is a positive nonzero constant.


Consider two real-coefficient rloubl y-complementary transfer functions Ho (z) and Hr (z) that are related
according to Eqs. (4.142a) and (4.142b). Define

Gc(z:) = HJ(z) = ![Ao(z) ...;...A 1(z)] 2 , (4.150a)


GJ{Z) = -Hf(z) = -![Ao(z)- A 1(z)J 2 . {4.150b)
4.9, Inverse Systems 253

y{nj
x{n] v!nJ
Inverse of
h; (nJ

Figure 4.36: Cascade of a discrete-time system h. 1[n j with its inverse discrete-time system h2[n'].

II follows from above lhat Go(z) + Gt (z) = Ao(z)A 1 (z), i.e., Go{z) and G1 (z) are an allpass-complemen-
tary pair. It can be easily shown that (Problem 4.110)

!Go(ej"')l + IG1(ej"')! = !Ho(ejw)(;: + IHJ(ej"')l 2 = 1, (4.151)

i.e., Go(z) and G t (z) are a magnitude-complementary pair of transfer functions.

4.9 Inverse Systems


As indicated in Section 2.5.2, two LTI cat:SaJ discrete-time systems with impulse responses h 1[n] and h2[n)
are inverses of each ot.'i}er if
(4.!52)
An application of the inverse system design, as pointed out earlier, is in the recovery of a signal x(n] that
has been transmitted through an imperfect transmission channeL The received signal y[n], in general, will
be different from x[n] as it will be distorted by the impulse response h 1 [n] of the channel. To recover the
original signal x[n] we need to pMS y[n] through a system with an impulse response h2[n] which is the
inverse of the channel's impulse response (see Figure 4.36). The output v[n] of the inverse system will be
identical to the desired input x!n].

4.9.1 Representation in the z-Domain


It is easy to characterize the inverse system in the z-domain. Taking the z-transform of both sides of
Eq. (4.152) we get
Hl(Z)H:I(Z) = l, (4.153)
\\here H 1 (z) and H2(z.) are the z-transforrns of h1[n] and h2[n], respectively. From Eq. (4.153) it follows
that the transfer ftmction H2(z} of the inverse system is simply the reciprocal of H1 (z). i.e.,

(4.154)

Fm a rational transfer function H1 (z)


P(z)
H1 (z) = D(z). (4.155)

tt.e transfer function 1-h (z.) of the mvuse system is then given by

H ( ) D(z) (4.156)
2 z = P(z:)'
It follows from Eqs. (4.155) and (4.156) that the poles (zeros) of the inverse system H1{z) are the zeros
(poles} of the system Ht (z).
254 Chapter 4: LTI Dlscrele-Time Systems in the Transform Domain

It follows from the above example that to obtain a unique inverse, the ROC needs to be known ~:~ priori.
To this end, it Is a usual practice to look for a causal inverse of a causal system. The causal inverse of the
parent causal system wlth a minimum-phase transfer function is always stable. However, the inverse of a
nonminimum.-phase system Is unstable .if causality is imposed.

4.9.2 Recursive Computation of the Input Signal


If the parent causa1 system has a known impulse response h 1 [nJ and is excited by a causal input signal
x[n]. then knowing the output signal y(nl for n ~ 0, we can determine tbe samples of the input signal
using a recursive relation without determining the inverse system. To develop this relation we recan that
the input-output relation in l:he time-domain is given by

'
yin] =x[n]@htln] = Lx[k]htfn- kJ, n ~ 0. (4.157)
k=O

From Eq. {4.157) for n =0 we have


y[O] = >[O]hdOI.

>[OJ= yjO] . (4.158)


ht{OJ
To determine xfnJ for n 2:: 1, we rewrite Eq. (4.157; as

"-'
y[n] = x[n] ht [OJ+ L x[kj h1[n - kJ,

·~
4.9. Inverse Systems 255

which yields
y[n}- L~:Jx[kj h1 fn- kl (4.159)
x[n] = hJ[O] , n =::. l,

provided h: {Oj F- 0.

The process of determining the sequence x{n} from the -convolution sum given in Eq. (4.157) using
Eqs. (4.158) and (4.159) is called decorwolution. An alternate interpretation of tbe deconvolution algo-
rithm given by these two equations can be established by treating the convolution sum as a polynomial
multiplication in the z-domain. If Y(z), X(z), and H1 (z) denote the z-transfo:nm; of lhe sequences y[n"],
x[n], and l/1 [nl. respectively, then in the z-domain. the convolution sum ofEq. (4.157) can be written as
256 Chapter 4: LTI Discrete-Time Systems in the Transform Domain

Y{z) = X (z}Hr (::.). {4.160)

Hence X (z) can be found by divid:ing the poly!Wmial Y(z) by the polynomial H1 (z). i.e ..

X(c) ~ Y(<) . (4-.{6!)


H1 (z)

Now, the ;o-tranforms Y(z), X(z), and Ht(z) are polynomial~ in z- 1 • Hence, X(z) can be found by a long
division of Y(z} by H 1 (z) as indicated in Example 3.35 for the inverse z-transform computation.

4.1 0 System Identification


There are applications where the objective is to derernrine either the impulse response h[n J or the transfer
function H (z) of an unknown initially relaxed causal LTI system by exciting it with a known input sequence
x[nl and observ-ing the corresponding output y[n]. The system identification is thus dual to the problem
of determining !he inpul xfnJ knowing the impulse response h[n] and the output y[nj described in the
pn·vious section. In the time-domain, the system identification problem can be solved by interchanging
the role,; of the input and the impuise response in the method outlined above.
The recursive relation for computing rhe impulse response samples h[n] of a causal LTI system from
the: specified causal input sequence x[n] and the observed output sequence y[n] is therefore as follows:

h[O] ~ y[O] y[n]- ~:Jh[kJx[n- k]


hfn] = x[O] • n ~ I,
x[Ol'

provided that x[Oj -# 0. The above process can be irnplemer~ted in MATLAB by a simple modification of
Prognnn 4_3.
Alternately, h[n] can be determined in the z-dcmain by dividing the z-tral'.sfonn Y(z) of y[n] by tbe
z-transforrn X(z) uf x{n]. To this end again the function deconvcan be employed
If the causal LTI system has a rational transfer function H(z:) of known order M, the numerator and
the denominator coefficients of the transfer function can be determined from the first 2M + 1 impulse
response coefficients. The method to detennine H {z) Is deS<:ribed in Section 8.1.3.
A second method of system identification is based on computing the energy density sp«trum S= (ei"'}
of the input signa1 x[n], and the cross-energy density spectrum Syx(ej») of the input signal x[nJ and the
output signal y[n]. These spectrums can be evaluated by taking_ the DTFTsofthe autocorrelation sequence
rxx[tl of x[n], and the cross-correlation sequence Ty;o:[t'j of y(nJ and x(n].
Consider a stable LTI discrete-time system with an impulse response h{n). The input-output relation
of this system is given by

y[n] ~ L h[k]x[n- k). (4.162)


k=-ao
We assume that the autoc~l;Hion sequence r.u[CJ of the input is known. The cto£S-correlation sequence
ry..-[t] is defined by
00

ryx[£] = L y[n]x{n- €]. (4.163)


'1=-oo
Substituting Eq_ {4.162) in the above equation we gel
4.10. System ldentffication 257

'" [£1 ~ ,t= (~= h[kjx[n- k]) x[n- I]

~,I-:~ hfk] c~oc x[n- k)x!n- e])


=
~ L hfk]ru[i-k]. (4.164)
~=-=

If the system is assumed to have a causal finite-length impulse response oflengt.'l N, then Eq. (4.164)
reduces to
N-1
r>xl£] = L h[k]r_u[t- k]. (4.165)
k~

ln the z-domain, Eq. (4.164) is equivalent to

where S_._.-(z) and Syx(Z) are the z-transforms of ru [fj and ryx-[£1, respectively, and H(z) is the transfer
function of the LTI system. On the unit circle, the above equation reduces co

(4.166)

using Eq. (3.19). Fr.cmEq. (4.166) it follows rhat the frequency response of the LTI system can be expressed

(4.167)

Ifxl.n] isselectedtohaveaconstantenergyspe-ctrumforal! valuesofw, i.e., S.:Aejw} = 1/ K, 0.::: jwl .::::; Jr,


then the above equation reduces to
H(e - KS y x
j.:u)- ( .jw,
e).

It follows from Eq. ( 4.167) that the frequency response H (ei"") of an LTI discrete-time system can be
determined by taking the ratio of the cross-energy spectrum of the output ami the input sequences. and the
energy spectrum of rhe input. The frequency response is proportional to the cross.-energy spectrum if the
input has a constant energy spectrum.
In some applications, where the input signal x[n] is not known. the system can be identified by
computing the autocorrelation of the output signal y[n j which is defined by

=
ryy[l] = L y[n]y[n- .f]. (4.168j
11=-00

Substituting Eq. (4.162) .in the above equation we arrive at


258 Chapter 4: LTI Discrete-Time Systems in the Transform Domain

= L h[k] L h[m--!Jrnfm-f-k]
k=-00 m=-·"'-'
(4.169!

ln the z.-domain, Eq. (4. i69) becomes

S,.Jz) = H(z:}H(z- 1 )S_u(z.).

where S_v~tz) i:c-- the ;;:-transform uf rn~EJ. On the unit cirdc, the above equation reduces to

For an input signal with a flat energy density spectrum, we then have

or equivalently in the z-domam. with K = :! • this becomes

(4.170)

If ttce ~ystem is characterized by a rational trans.fer function R\.t) = P(z}/ D(z), then

indicating that the numerator and the denominator polynomials of SH (;::) eliliibit mirror-image symrneu-y.
Toderennine HCz) one cande!ermine the mots of the polynomials A Lz) and B(z) and associate appropriate
factors of these polynomials with the nume~tor and the denominator of H(z:).
lt should be noted from the above discuss.ion that the autoc--<JITelation of the output signal can provide
only the magnitude response of !he system but not the phase xsponse. A solution of Eq. (4.170) thus
leads to m.any possible anr,wers. A single solution can, howeve1, be obtained by imposing -;orne additional
cons[rain!;; on d1e phase properties of the system.
4. 11. Digital Two-Pairs 259

(a) (b)

Figul:'e 4-.37: A digita"i two-pair.

··~~~~'•'~·~·· ~
~: "{t
It 4:r:r w ,:',,,,;c:::" 'I
+

4,11 Digital Two-Pairs


The LTI discrete-time system:;. considered so far are single-input, single-output structures characterized
by a transfer function. Often. such a system can be efficiently realized by interconnecting tWD--input,
two-output structures., more commonly called Om-pairs [Mit73bl Figure 4.37 shows two commonly used
block diagram representations of a t\.vo-pair, with Y: and Y2 denoting the tv:o outputs and X 1 and X2
denoting the two inputs, where the dependence on the variable z has been omitted for simplicity. We
consider here the transfonn--domain characterizations of such digital filter structures .and discuss severaJ
two-pair interconnection schemes for the development of more complex structures. Later, we outline
minimum-multiplier realizations of allpass transfer functions based on [be two-pair representation.

4.11. 1 Characterization
The input-output relation of a digital two-pair is given by

[~~]=[;~: trz
f22 ][ x,x, l' (4.171)

ln the above relation the matrix -c given by

·~['"
t;:J:
(4.172)
'" tn J
260 Chapter 4: LTI Discrete-Tlme Systems in the Transform Domain

X
' - [A
8'1
.
''
.-; x·
'
[A 8"]
D'
I'
''
C' D"0 C" '
y
• X' y•
'
Fignre 4.38: r -cascade connection of two-pairs.

is called the transfer matrix of the two-pair. From Eq. (4.l71) it follows that the transfer parameters can
be found as follows:

tl! = ~I , •n = _!i I . tz1 = ~2 I . tn = Y 1


2
. (4.173)
x 1 ix.;-=o X 2 1x!=n }( 1 ix.;-=O X 2 x1=0

An al.ternative characterization of the tv..-o-pair is in tenus of its chain parameters as

[ X,Y; lJ ~ [ CA B ] [ Y1 )
D X2 '
(4.174)

where the matrix T glven by

T= [ ~ ~] (4.175)

is called the chain matrix of the two-pair.


The relations betv.~n the transfer parameters and the chain parameters .can be ealilly derived and are
given as
c 1
tu = '-'A_D_-;-'-BC:c ,,, = -.1. j,-.
B
= -- (4.l76a)
t i l = A' ~~ A'
A
I D = rur2t- tut22.
.4~-. (4.!16b)
'" '"
4.11.2 Two-Pair lnterconnectton Schemes
Two or mo.re two-pairs can be connected .in cascade in rwo different ways. The cascade connection of
twO-pairs shown in Figure 4.33 i:sc-alled a r -cascade. \Ve next determine the characterization of the overaD
two--pair. Let t'le individual two-pairs be characterized by their chain parameters as

[ x;
Y{
]=[ C'
A'
D' xt '
B' ] [ Y' ] [ X"
Y{' l= [ A''
C''
B" ) [ Y" ]
D" X~ - (4.177)

But from Figure 4.38, X~ = Yf, and Y{' = x;.


Substituting these relations in the first equation of
Eq. (4.177} and combining the two eqJ.Ultions, we arrive at

Xj ] _ [ A'
[ 1'{ - c' 8'
D'
J[ c'·'
A" B"
D"
J[ x; l'{ ] (4.178)

Therefore, the chain matrix of the overall cascade is given by the product of the individual chain matrkes,
as indicated in Eq. (4.178).
4.12. Algebraic Stability Test 261

x;-
Y'
'
X"
' ..-; y;
: •'
[' ;,
l:~ ' :~j
1
iz''
'
X' '21
Y, X'
'
1
21 ··;,! 1------ r;
'
Figure 4.39: r-cascade connection of lwo-pairs.

Figure 4.40: A constrained two-pair.

A recond type of cascade connection, called the r -ca-wxuie, is shown in Fig!ll'e 4.39. If the individual
two-pairs are described by their transfer matrices

r;, l' x; ] [ t;; l [ X''


Xj ] (4.179)
r22 J L x; · t''22 J' 2
.

then it follows that the overall cascade is characterized by a transfer matrix that is the product of the transfer
matrix of the constituent two-pairs, i.e.,

tj2 (4.180)
t'22

Another useful interconnection i.s. tbe constrained rwo·pair indicated in Figure 4.40. Jf we denote the
transfer function Y 1fX 1 as H(z), then it can be shown that H(z} can be expressed in terms of the two
parameters and the constrain1ng transfer function G(z) as

R(z) = -
Y1
~-
''---'+_:D:c--·_::G,_(z::cl
C (4.18la}
X1 A+B·G(z)
t1ztz1 G(z)
= tll + l - t22G(;:)
{4.18lb)

4.12 Algebraic Stability Test


We have shown in Section 4.3.5 that the BIBO stability of a causal rational transfer function requires that
all its pole'i be inside the unit circle. For very-high-order transfer functions it is difficult to determine the
pok locations analytically, and the use of some type of root finding computer program is necessary. We
outline here a simple algebraic stability test procedure tha: does not require the determination of the pole
locations. The algorithm is based on the realization of an allpass transfer function with a denominator that
is the same as that of the mmsfer function of interest
262 Chapter 4: LTI Discrete-Time Systems in the Transform Domain

Figure 4.41: Stability triangle fm a second-order digital transfer functiOOc. Slability region is the shadd area

4.12.1 The Stability Triangle


For a second-crder transfer function the stability can be checked easily by examining its denominator
coefficients. Let
D(z) = l +dtz- 1 +d2z- 2 (4.I82)
deno:e the denominator of the transfer function. In terms of its. poles, D(:.;) can be expressed as:

{4.183)

Compar!ng Eqs. (4.182) and (4. I83) we obtain

(4.184)

Now for stability of the £ransfer function, its poles must be inside the unit circle, i.e.,

Since from Eq. (4.182) lhe coefficiem dz is given by the product of the poles, we must have

idzi < L (4.185)

Nnw the roots of the- polynowjal D(z) are given by

{4.186)

It can be l>hown (Problem 4.l20) that a se('Ond coefficient Cillldition is obtained using Eq. (4.1B6) and is

(4.187)
The region in the (db d2)-ptanc where the two coefficient conditions of Eqs.. (4.185) and (4.187} are
sati.>fied is a triangle, as sketched in Figure 4.41. and is known as the stability triangle for a second-order
digital transfer function {Jac96).
4.12. Algebraic Stability Test 263

4. 12.2 A Stability Test Procedure


It is impossible to develop simple conditions on the coefficients of the denominator polynomial of a higher-
order transfer function similar to that developed for a se...--.ond-order function as given above. However, a
number of methods have been proposed to determine the stability of an Mth-order transfer function H (z)
without factoring out the roots of its denominator polynomial DM {z). We outline below one such method
[Va.i87e].
Let
M
DM(z.J = Ldzz-r, (4.188)

·~
where we assume for simplicity do = L We first form an Mth-order allpass transfer f.:mction from DM(Z)
"'
A ( ) = DM(Z)
M z DM(Z)
dM + dM-IZ-I + dM_ 2 z-'2 +- · · + d 1z-{M-l) + z-M
- 1 +d1z l +tf:2z 2 +···+dM-lZ (M l)+dMZ M.
(4.189)

1f we express.
M
DM(Z) = n(l-l,z- 1),
f=l

it tt.en follows that the coefficient dM is the product of aU roots, i.e.,

dM = (-l)M n
M

i=l
Ai.

Now for stability we must have lA; I < I, which implies the condition !dM I < l. ff we define

then a necessary condition for s:ability of .4.M(Z), and hence, the origina1 transfer function H(z), is given
hy
< 1. kL (4.190)
Assume the above condition holds. Next, we form a new function AM-! (z) according to

(4.191)
264 Chapter 4: LTI Discrete-Time Systems in the Trans1orm Domain

Substituting in the above equation the expre._<oSion for AM(Z) from Eq, (4.l89), we arrive at

(dM-!- dMdl) + {dM-2- dMdJ)z-l +,. ·


-'- (dt- d!>!du-dz-{M-l' + 0 - d"ftJz-<M-lJ (4.192)
( l - dl,) + (di - d.wdM-I)Z I+···
+ (dM-2- dMdz)z-(.W-l) +
(du-1 - dudt)Z-(M-l)

which is seen to be an (M - l)th-order allpass function. AM-I (z) can be rewr.tten in the form

d 'M 1 +d'M 2Z - l T"


-

+d'.,.-(M-2.)
1~
+ Z -(M-1')
AM-' ( z) ~ ;-'S-::'::0'-';'=-c--;;------:::~'y,--;-:;,c---:::== (4.!93)
l+d~z '+···+d:U-2z (M 2l+JM-tz {U J)'

wh~re
, d, - dMdM-i
i= :1,2, ... ,M-1. (4.194)
aj = 1- d2
M
Now from Eq. (4J9l) the poles Ao of Au-J(Z) are giver. by the roots of the equatio:~

(4.195)

By assumption Eq. {4.190) holds. Hence the above equation implies that

(4.!%)

If Au(z) is a stable allpas:s function, then according to Eq. (4.l32), !A!.tCdl < I for !z! > l, jA,w-(zH = l
for lz: = 1, and IAM(z)l > 1 for lzl < I. Therefore, if AM(Z) is a stable allpass function, then P·ol < 1,
or in other words, .4M-! (z:) is a stable allpass function. Thu._~ if AM(Z) is a stable allpass function and
k~ < 1, then AM-l(Z) J.s a stable allpass function.
We now prove the converse~ that is, if Atlf-1 (z) is a stable allpass function and kL- < 1, then AM(Z)
is a stable a11pass function. To this end, we .invert the relation af Eq. (4.192) to arrive at

Au(z) = kM + c ' Au-1 (z) . (4.197)


l+kMZ 1Au-t(Z)

If (o is a pole of AM(Z), then


-1
(0 AM-J ({o} = ~-,1-,
<u
By a:sl'.umption Eq. (4.190) holds. Therefore, ](0- 1 AM-I ((u)l > I, :i.e.,

(4.!99)

Assume Aw-I (z) is a sl.abie allpass function; then according to Eq (4.132), lAM-; (z)~ ~ 1 for :zi ~ 1.
Now, if J(ol 2: l, then from Eq. (4.132), IA,\!-1{-to)l ::5: 1. which contradicts Eq. (4.199). On the other
hand, if Ito' < l, then from Eq. (4.132), lAM- 1((o)l > I. satisfying the condition ofEq. (4.199). Thus, if
Eq. (4.190) holds and if AM-t (:::)is a stable allpass function. then AM(Z) is also a stable allpass function.
Summarizing, a necessary and sufficient set of conditions for the allpass fum:tion A_\!(Z) to be stable
is therefore:
4.12. Algebraic Stability Tes1 265

(.a) k~ < 1, and

(b) The altpass function AM~! (z) is stab1e.

Thus, once we have checked the condition k~ < 1. we merely test for the stability of the lower-order
all pass function A M-1 {z). The process can now be repeated, generaling a set of coefficients

and a set of allpass functions of decreasing orders,

AM(Z), A..w-J (z} •... , Az(z). A1 (z), Ao(z) = l.

The allpass function AM(Z) is stable if and only if kf < I for all i.
266 Chap!e:- 4: LTI Discrete-Time Systems in the Transform Domain

TheM-file poly2rc in MATLAB can be utilized to determine the stability test parameters flq} very
efficiently on a computer. We illustrate its application below.
4.13. Discrete-Time Processmg of Random S!gna!s 267

4.13 Discrete-Time Processing of Random Signals


As we sh:.;ll observe later m lh'<"! text, there are oc-ca;.iom; when we need to study the effect of processing
a random discrete-tune signal by ii iinea.r time-invariant di-;crete-time system. More precisely. we need to
dc:ermine the <.;tatiMl-.:al properties of t!1e output signal l_v!_ n)} ger:erated by a stable LTI system wlth an
impulse response ~hfnl} when its :inpm x[n l is a particular rea.lizati:on of a wide-sense stationary (WSS)
random process {X[n]}, For simplicity we assume the input sequence and the impulse response to be
real-valued. Now, the output of an LTI system is given by the linear convolurton of the input and the
impulse response of 1he: '>Y~tem. i.e ..
·~

y[nJ = L· hjk].x;-n- k]. (4.203)


k=-=

Since the function of a random variable is aho a random variable, rt follows from the abo.:we that rbe
output y[n l is abo a sample sequence of an output random process {f[n ]l.

4:13.1 Statistical PToperties of the Outp;.Jt Signal


Sin<:e the input x[n.] is a sample. sequence of a stationary random process.. its mean m~ is a constant
inritpcndent of tbe time lr.dex n. 6 The mean E(y~n}J of the output random process y[n l is then given by

= L h~kl.E(x[n- k]) = m, L h[kl = msH{eifl}, (4.204)

which JS. a constant independent of the time index tr.


The autocorrr:latioo function of the nulput of the LTI di:-;crete-tlme !>ystem for a real-valued input is
<>ivcn
0
bv

of,_,_,lr< + -L n1 = C(v[n ~ f)y[nJ)

~E(U~=~h[i]xfn H-<11 Lt hlk]x(n-k]J)

_
- 'L\ ' h[i] L hfkiE(xln + e- i]xln- kJ)
;~- = li.=-X
x• ex;,

= L hfi; L h[k1.PxAn + e- i,n- k]. (4.205)


· -X k=-00

Since 1ht input is. a sampie sequence of a WSS random proce;s.. its autocorrelation sequence depends. only
on the difference { + k - i of the time indices n +f. - i and n - k. i.e.•

(4.206)
268 Chapter 4: LTI Discrete-Time Systems in the Transform Domain

Substitutmg the ubove in Eq. (4.W5), we arrive at


~ X

.Pnln +i, n] ~ L h[i] L h[k]¢ull + k ~ i] ~ .P,Il] (4.207)


'=-<:X' k=-=

indicating that the output autocorrelation sequence depend.'> on the difference£ of the time indices n + E
and n. As a result ofEqs. (4.204; and {4.207), it follows that the oJtput _v[nl is also a sample sequence of
a WSS random process.
Substituting m = i -kin Eq. (4.2C7) we- arrive a1
X =
¢yy[i} = E -¢xxlf- m] L h[k!h[m + k]
m=-= k=-=
=
= E ¢xx!.f- mJn.~o[m), (4.20&)
m=-oc
where
X

r;,hfm] = L h[k]h[m + k! = hfm]@h!-ml {4.209)


k=-:JC

is called the aperiodic auwcorrelation 'Sequence of tlle impulse response sequence {h[nj}. It should be
noted that r;,_ 11 [mJ is tr.e autocorrelation of a finite-energy deterministic sequence and is not the same as the
autocorrelation of an infinite-energy V.'SS random signal.
The cruss-\:orrelation function between the output and the input seque::ces of the LTI system for a
real-valued input is given by

.P-.·xfn + f, nJ = E (y[n + .f} xfn])


~E (tx h[i]x[n +£ ~ i]x[n])
=
= L .h[i]E (x{n +t - i] x[nj}
<=-00

=
= L h[iJ.PxxU- i] = ¢y_.-[f]. (4.210}
!=-oc-

The above re~ult .indicates !hat the cross-correlation sequence depends on £, the difference of the lime
indices n T £ and n.

4.13.2 Transform~Domain Representation


We now cons.:der the z-transfonn representation of Eq. {4207). As indicated in Sectkm 3.1 L2, the <:-
transform of ¢nl.t'] may e)(ist it the input random signal is of zero mean. From Eq. {4.204), for a
2ero-mean random input. the output of an LTI sys1em is aho a zero-mean random signal. In this case we
obtai.'! f:-om Eq. (4.207) by taking the z-transfonn of both sides

(4.211)
4. 13. Discrete-Time Processing of Random Signals 269

where ¢_,x(z). <Pvv (z). and !V(z) are, respectively, the z-tran~onns of 4>n (£1, q)yy(·t'l. and ;f- [r l But, from
Eq. (4.209} 41(:)-~ H{;JH(z- 1 ), whkh when subs!itmed m Eq. (4.211) yields

'-l>,_y(Z) = H(<:!H(z- 1 J<Px:r(z.). (4.212)

On the unit circle, Eq. (4.212) reduces to

<Pyy(ejw-) = IH(e1 "')!2 ¢'u(e1 "') (4.213)

Csing the notations Pu.(w) and Py_--(w) to denote the input and output power spectral densities, <Pu{ei'~)
and ¢'H(e1"'), respective-ly, we can rewrite Eq. (4.213) as

(4.214)

Now, from B-1- (2 162), for a zet-o--mean WSS proces~ v[n], the total average puweris given by the
meun-square value E(v 2[n D :;: ¢yy(O). But,¢_,_,[£] is given .by the irrverse Fourier transform of !Pyy(ejw):

~
'f')'Y ["]
-<- = l
2.Jr !' - (
_,.. ....,,_Y e jw·_!e i"->! d w.
. (4.215)

Therefore, from Eqs. (4.213) and (4.2t4) the total average power in the output signal y!.n] is given by

E{y 2 [n]) = ¢:vy{Ol

= .!.._
2rr
!"' _,..
<l>yy(e1 "') dw

~ -'1" .,
2-:r -"!f
IH(e-1"'W·P.u(w) dw. (4.216)

For a real-valued WSS random signal. x[n 1 the autocorrelation sequence 4'xx [t:] is an even sequence, and
hence, <~>xx(ej"') is an e\"en function of w. Assume the LTI system hfn] to be an ideal filter with a square
rr.agnitude response
IH(efwJ12 ~I~: Wei ~ jWj :S Wd,
0 :S ltv1 < WcJ, WcZ < !w! < Jl'.
(4.217)

In this case. Eq. (4.2f6) reduces to

(4.218)

Since the total average output pov.-er ¢'yy[01 is always nonnegative independent of the bandwidth of the
lit11ear filter, it follows from the above thal
Pxx(W) ::::-_ 0, {4.219)
proving that the power spe-etral densiry function of areal WSS random signal is also nonnegative .in addition
to being a real and even function.
Likewise, from the ;;~transforms of both sides of Eq. (4.210) we obtain

(4.220)
270 Chapter 4: LTI Discrete-Time Systems in- the Transform Domain

where ¢1y..-(Z) is the z-transform of~(£]. On the unit circle, the above equation reduces to

(4.221)

The function 4>yx{ej"'} is the cross-spectral de~ity or cross-power spectnan, denote-d by Pyx(W). Note
that if x[n 1 is a WSS white noise sequence, its power s~trum is a constant K at aU frequencies. In this
case. the above equation reduces to

An application of Eq. (4.221) is in determining the frequency response of an unknown system by


exciting ~t with a WSS random signal, and then computing the cross-power spectrum and the input-power
spectrum. A ratio of these two power spectrums then yields an estimate of the frequency response of
lhe system. Since both power spectrums are real functions of w, the ratio provides only the magnitude
response. To this end, the function t:.fe in MATLAB can be used. Some of the forms of this function are
Txy = t:fe{x,y)
'I·xy = tfe(x,y,r:fft)
[Txy, fJ = tfe(x,y,nfft-,FT)
Txy = tfe(x,y,nfft,FT,window}

Variance of the Ou1pul Signal


Now we develop the expression for the variance of the output random signal when the inpur to the LTI
system is a real-valued white random process. From Eq. (3.150) Vt"e get

(4.222)

Substituting Eq. (4.214) in the above equation and making use ofEq. (3.158), w-e arrive at

-,,~>re--~--,,-,~--7-.1-0_4 __________________
a;= ;!. L: IH(ej"')l 2 dw, (4223)
4. 13. Discrete-Time Processing of Random Signa-ls 271

O,l ll.4 06 0 ..1>


NormoliU>d ~y

(a) (b)

Figure 4.42: (a) Estimated gaiJJ response. and (b) actual gain response.

Figure 4.43: A typical unifonnly distributed random sequence (solid line) and the output of a 3-point rururing average
filter(dashed J.ne).

which can be alternatively written as

ai = a}_ 1 H(z)H(z- 1)z- 1 dz, (4.224)


2..'"f1 'fc
where C is a counterclockwise dosed contour in the ROC of H(z)H(z.- 1).
Using Parseval"s relation in Eq. (4.223). v.-e can also express the output variance as

ih[nJf. (4.225)
n=-co
272 Chapter 4: LTI Discrete-Time Systems in the Transform Domain

-n.

4.14 Matched Filter


In practice, a deterministic signal x[n] transmitted through a channel is usually corrupted by a random
signal e[n]. The noise-corrupted signal, when processed by an LTI system with an impulse response h[n],
generates an output signal that contains the desired noise-free output y[n] and an interference signal d[n],
where
    y[n] = h[n] ⊛ x[n],
    d[n] = h[n] ⊛ e[n].
The LTI system which maximizes the signal-to-noise ratio at its output is called the matched filter. We
develop next the characterization of such a system.

4.14.1 Characterization of the Matched Filter


Now the output at time instant n_0 can be expressed as

    y[n_0] = \frac{1}{2\pi} \int_{-\pi}^{\pi} H(e^{j\omega})\, X(e^{j\omega})\, e^{j\omega n_0}\, d\omega,    (4.229)

where X(e^{jω}) and H(e^{jω}) are the DTFTs of x[n] and h[n], respectively. If the random signal e[n] is
assumed to be a zero-mean WSS process, from Eq. (4.216) the total average power of d[n] is given by

    E\{d^2[n]\} = \frac{1}{2\pi} \int_{-\pi}^{\pi} |H(e^{j\omega})|^2\, P_{ee}(\omega)\, d\omega,    (4.230)

where P_ee(ω) is the power spectrum of e[n].


Hence. the output signal-to-noise ratio can be expressed as

fir f"",n: H (elw)X (ej"')ei""'~dw!


1
2

(4.230
2
.};; J-::.r IH{ei«>)j Pu:(w)dw
To simplify the above expression, \\-"e make use of the Schwartz inequality which states

u~: A{ejw)B{ejw)dw( ~ L: jA(eJW)t £: jB{ei~)r


dw dw,

where the equality is obtained when A(d"-') = K · B*(ei"') with K areal constant. To this end, we rewrite

Substituting the above in Eq. (4.231) and applying the Schwartz inequality, we arrive at

    \left(\frac{S}{N}\right)_{\!out} \le \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{|X(e^{j\omega})|^2}{P_{ee}(\omega)}\, d\omega.    (4.232)

The maximum signal-to-noise ratio is obtained when the equality is achieved, at which

    \sqrt{P_{ee}(\omega)}\, H(e^{j\omega}) = K\, \frac{X^*(e^{j\omega})\, e^{-j\omega n_0}}{\sqrt{P_{ee}(\omega)}}.    (4.233)

For a white noise process e[n], P_ee(ω) = σ_e^2; then

    H(e^{j\omega}) = \frac{K}{\sigma_e^2}\, X^*(e^{j\omega})\, e^{-j\omega n_0}.    (4.234)
rri
Taking the inverse DTFT of the above we obtain

    h[n] = \frac{K}{2\pi\sigma_e^2} \int_{-\pi}^{\pi} X^*(e^{j\omega})\, e^{-j\omega n_0}\, e^{j\omega n}\, d\omega = \frac{K}{\sigma_e^2}\, x[n_0 - n]    (4.235)

for a real x[n], where n_0 is the time instant of peak signal output. The LTI discrete-time system developing
the maximum output signal-to-noise ratio is said to be "matched" to the input and, hence, is called a
matched filter.
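
The matched filter of Eq. (4.235) is readily illustrated in MATLAB. In the sketch below, the transmitted pulse, its location, and the noise level are arbitrary assumptions chosen only to show the characteristic output peak.

    % Sketch: matched filtering of a known pulse in white noise, cf. Eq. (4.235).
    % The pulse shape, its starting index, and the noise variance are assumed values.
    x  = [1 2 3 4 3 2 1];                  % known deterministic pulse x[n]
    h  = x(end:-1:1);                      % matched filter, proportional to x[n0 - n]
    r  = 0.5*randn(1, 100);                % zero-mean white noise e[n], sigma_e = 0.5
    n1 = 31;                               % assumed starting index of the pulse
    r(n1:n1+length(x)-1) = r(n1:n1+length(x)-1) + x;   % noise-corrupted received signal
    y  = filter(h, 1, r);                  % matched filter output
    [ymax, nmax] = max(y);                 % peak occurs near n1 + length(x) - 1
    stem(y); xlabel('Time index n'); ylabel('Matched filter output');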

4.14.2 Spread Spectrum Communication Systems

Matched filters are used for detection of signals in radar and digital communication systems. We describe
here the basic idea behind the matched filter-based detection in the latter application [Vit95], [Mad98].
Signal processing for digital communication is typically performed on the (in general, complex-valued)
baseband representation of (real-valued) passband signals. Consider the example of multiuser spread
spectrum communication, in which the transmitted sequence is obtained by modulating a low-rate sequence
of data symbols by a high-rate spreading sequence, typically chosen to be pseudo-random and periodic.
The elements of the spreading sequence are termed chips. The number of chips per symbol is called the
processing gain, denoted here by N. While the symbols and the chips can take complex values in general,
we restrict attention here to binary ±1 sequences. In this case, the symbols are called bits.
Let b_0, b_1, b_2, ... denote the sequence of bits to be transmitted. Then the narrowband sequence to be
transmitted consists of groups of bits with the first group containing N bits with each bit being b_0, the
second group containing N bits with each bit being b_1, and so on. The resulting sequence is then multiplied
element by element by the spreading sequence, and the product sequence is transmitted.
For example, a typical data sequence for a single user, its length-5 spreading code, and the resulting
encoded product sequence could be as indicated below:

    Data:              +1 +1 +1 +1 +1   -1 -1 -1 -1 -1   -1 -1 -1 -1 -1
    Spreading Code:    +1 -1 +1 -1 +1   +1 -1 +1 -1 +1   +1 -1 +1 -1 +1
    Product Sequence:  +1 -1 +1 -1 +1   -1 +1 -1 +1 -1   -1 +1 -1 +1 -1
In the above example, the spreading sequence has a period equal to the processing gain, which is
termed a short spreading sequence. In many applications, the spreading sequence may be aperiodic, or
have a period much larger than the processing gain, which is termed a long spreading sequence. For long
spreading sequences, a different matched filter is needed for every bit, because the spreading sequence
changes from bit to bit. In this case, a preferable implementation may be correlation of the received
sequence with the spreading sequence over the N chips corresponding to a given bit.
In order to support several simultaneous users, different low-rate narrowband data streams are mod-
ulated by different high-rate spreading sequences. All product sequences are then summed into a single
composite signal and transmitted over the same channel.
At the receiving end, to decode a particular data sequence, the composite signal is passed through a
matched filter whose impulse response is the time-reversed version of one period of the spreading sequence
used to generate its corresponding product sequence. The output of the matched filter then shows peaks at
every Nth time instant. The signs of the matched filter output at these instants are precisely the bits of the data
being transmitted. However, the outputs of the matched filters whose coefficients are the time-reversed
versions of the periods of the other spreading sequences will not show strong peaks at the Nth time instants.
The above concepts are illustrated in the following example.



Figure 4.44: (a) The low-rate data sequence, (b) the spreading sequence, (c) the product sequence, and (d) the
output of matched filter #1.
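
A compact MATLAB sketch of this detection scheme is given below. The data bits and the length-5 spreading code are arbitrary assumed values; they are not the particular sequences plotted in Figures 4.44 and 4.45.

    % Sketch: spreading a bit sequence and recovering it with a matched filter.
    N = 5;                                  % processing gain (chips per bit)
    b = [1 -1 -1 1];                        % assumed data bits of the desired user
    c = [1 -1 1 -1 1];                      % assumed length-5 spreading code
    d = kron(b, ones(1, N));                % each bit repeated N times
    s = d .* repmat(c, 1, length(b));       % product sequence to be transmitted
    h = c(end:-1:1);                        % matched filter: time-reversed code
    y = filter(h, 1, s);                    % matched filter output
    bhat = sign(y(N:N:end));                % peaks every N samples; their signs give the bits
    stem(y); xlabel('Time index n'); ylabel('Matched filter output');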

4.15 Summary
The frequency-domain representations of discrete-time sequences, considered in the previous chapter, are
applied in this chapter to develop the frequency-domain representations of an LTI discrete-time system.
One such representation is the frequency response, obtained by applying the discrete-time Fourier transform
(DTFT) to the impulse response sequence, which characterizes an LTI discrete-time system uniquely in
the frequency domain. The DTFT of the output sequence of an LTI discrete-time system is simply the
product of its frequency response and the DTFT of the input sequence, which provides the input-output
relation of the system in the frequency domain.


Figure 4.45: (a) The output of matched filter #2, (b) the output of matched filter #3, and (c) the output of matched
filter #4.

A generalization of the frequency response concept is the transfer function defined by the z-transform
of the impulse response sequence. An alternative input-output relation is thus given by the product of the
transfer function and the z-transform of the input sequence that results in the z-transform of the output
sequence. For a stable causal LTI discrete-time system, all poles of its transfer function must be strictly
inside the unit circle. For a stable causal transfer function, the frequency response is given simply by its
values on the unit circle in the z-plane.
The concept of filtering is introduced and several ideal filters are defined. Several simple approximations
to the ideal filters are next introduced. In addition, various special types of transfer functions that are often
encountered in practice are reviewed. The concept of complementary transfer functions relating a set of
transfer functions is discussed, and several types of complementary conditions are introduced.
The inverse system design is encountered in estimating the unknown input of a discrete-time system
from its known output. The determination of the transfer function of the inverse of a causal LTI discrete-
time system with a rational transfer function is outlined. The recursive computation of the unknown causal
input signal from the impulse response of a causal LTI system and its known output is outlined. Next,
two methods are outlined for the system identification problem. In one approach, a recursive algorithm is
described for determining the impulse response of a causal initially relaxed system from its known input
and output sequences. In the second method, the frequency response of the system is determined from the
cross-energy spectrum of the output and the input signal and the energy spectrum of the input. Alternately,
the square magnitude function of the system can be determined from the energy spectrum of the output
and the input signals.
An important building block in the design of a single-input, single-output LTI discrete-time system is
the digital two-pair, which is a two-input, two-output LTI discrete-time system. Characterizations of the

digital two-pairs and their interconnections are discussed. A very simple algebraic procedure for testing
the stability of a causal LTI transfer function is then introduced. Finally, the chapter concludes with a
discussion on the statistical properties and transform-domain representation of the output signal of an LTI
discrete-time system generated by a random input signal.

4.16 Problems
4.1 Show that the function u[n] = z^n, where z is a complex constant, is an eigenfunction of an LTI discrete-time
system. Is v[n] = z^n μ[n] with z a complex constant also an eigenfunction of an LTI discrete-time system?

4.2 Determine a closed-form expression for the frequency response H(e^{jω}) of the LTI discrete-time system charac-
terized by an impulse response
    h[n] = δ[n] − αδ[n − R],    (4.236)
where |α| < 1. What are the maximum and the minimum values of its magnitude response? How many peaks and
dips of the magnitude response occur in the range 0 ≤ ω < 2π? What are the locations of the peaks and the dips?
Sketch the magnitude and the phase responses for R = 5.

4.3 Detennine a closed-form expression for the frequeocy response G(ej"') of the LTI discrete-time system charac·
teriz.ed by an impulse- (~ponse
g[n] = htnJ®h[nJ8h[nl. (4.237)
where h(nj isgJVen by Eq. (4.236},

4.4 Determine a closed-form expression for the frequency response G(e^{jω}) of an LTI discrete-time system with an
impulse response given by
    g[n] = { a^n,  0 ≤ n ≤ M − 1,
             0,    otherwise,
where |a| < 1. What is the relation of G(e^{jω}) to H(e^{jω}) of Eq. (4.14)? Scale the impulse response by multiplying
it with a suitable constant so that the dc value of the magnitude response is unity.

·1.5 Sho~ that the group delay T(t<-'l of an LTI discrete-time sys(elll cb>lracterized by a frequency response H{e1"')
;::an k expressed as

(4.238)

4.6 A r.oncau;.a! LTI FIR disnete-time system;:-; charactenLed hy an impulse resporu,e h[nJ = a 1b[n- 2] +a;z6[n-
l} + a:;b[n] +a4S{1i + 1]- asE[n..,.. 2]. For what values of the impulse response sample:> wilt its frequency response
H(el"'_) !lave a_ zero phase"

4.7 A causal LTI FIR discrete-time system is characterized by an impulse response h[n] = a_1 δ[n] + a_2 δ[n − 1] +
a_3 δ[n − 2] + a_4 δ[n − 3] + a_5 δ[n − 4] + a_6 δ[n − 5] + a_7 δ[n − 6]. For what values of the impulse response samples
will its frequency response H(e^{jω}) have a linear phase?

4.8 An FIR LTI discrete-time system is described by the difference equation

    y[n] = a_1 x[n + k] + a_2 x[n + k − 1] + a_3 x[n + k − 2] + a_4 x[n + k − 3] + a_5 x[n + k − 4],

where y[n] and x[n] are, respectively, the output and the input sequences. Determine the expression for its
frequency response H(e^{jω}). For what values of the constant k will the system have a frequency response H(e^{jω}) that
is a real function of ω?

4.9 ConsJder the ;;aocade af two causal LTI svstemE: h 1[nj = a5{n.l + Oln - I J, and h2 [n! = f/' J-i !nJ, 11':31 < I.
Determme £he frequency response H(eJ"') mix overall system. hr Y..hat values of a and /J will ;Hzeiw)! = 1?

4.10 Th: input-output relation of a nonlinear dlscrere-time system tn (be frequency domain is given by

(4.239)

where 0 -< a ~ ~.and X(elw) and Y(eiw) denote the DTFrs of the input and output sequences. Determine the
expression for its freque!k']' response H(ej"') = Y(eiw)/ X(eiw) and show that it has zero phase_ 1be nonlinear
algorithm described by Eq. (4239) is knO\\"-n as the alpha-rooting method and has been used in image enhancement
[Jai89).

4.11 Determine the expression for t."te frequency responre H (ei"'; of a causal IIR LTI discrete-time system charac-
terized by the input-output relation

_v(nl = x{nl + ay(n- K],


wh:::re y[nj and ..rfnj denote, respectively, the ootput and the input StXJUences. Determine the maximum .and the
minim;;_m value~ of its. magnitude response. How many peaks and dips of lhe magnitude response occur 'in the range
0 !::: w < 2Jr? What are the locations of the p!aks and the dips? Sketch tbe magnitude and the phase responses for
R=5.

4.12 An HR L11 discrete-time system is described by the difference equation

where v[n] and x[nJ denore. respectively, the output and the inpllt sequences. Detennine lhe expression for its
freq~ncy response. For what v.al.ues of the constants ht will the magnitude response be a constant for all vafues ot .w?

4.13 Determine the input-output relat:Wn of a factor-of-2up-samp!er in the frequency domain.

4.14 The maputude response of a digital filter with a real-coefficient transfer function H (z) is shown i:n Figure P4.l.
Pf.:>t the magnitude response of the filter H {z"').

Figure P4.1

4..15 Consider an LTI discrete-time system with an impulse response h[n} = (0.4l"#[nj. Determine the frequency
±n J4. What is the steady-state output y!n 1of the system
respo!l~ H (eJ<•'J of the system and evaluate its value at a·=
for an input x[nl = hln(JTnj4),'L[n]?

4.16 A.n HR filtcr of length 3 is defmed by a symmetric impulse response. i.e., h[Oj = h[2). Let the input to this filter
be a ~um of tv.ucosine sequences of .angular frequencies 0-2 radlsamples and 0.5 mdlsamples, respectively. Determine
the >mpul~e response coefficients so that the filter pas..es only the high-frequency .component of the input.

4.17 (a) Design a length-5 FIR bandpass filter with an antisymmetric impulse response h[n], i.e., h[n] = −h[4 − n],
0 ≤ n ≤ 4, satisfying the following magnitude response values: |H(e^{jπ/4})| = 0.5 and |H(e^{jπ/2})| = 1.
(b) Determine the exact expression for the frequency response of the filter designed, and plot its magnitude and
phase responses.

4.18 (a} Design a length-4FIR bandpass filterwithan anti~mmetrkimpulse responsehln ], i.e,._h{n J = -h[4-'1],
0 ::0 n ::;; 4, sati~fying the following magnitude response values: lH(el"1 4)! = I and jH(efllil)l =
0.5.
(b) De~.e1111ine
the exact expression for the frequency response of the filter designed, and plot its magnitude and
phase responses.

4.19 An FIR filter of length 5 is defined by a symmetric impulse response, i.e., h[n] = h[4 − n], 0 ≤ n ≤ 4. Let the
input to this filter be a sum of three cosine sequences of angular frequencies: 0.2 rad/sample, 0.5 rad/sample, and 0.8
rad/sample, respectively. Determine the impulse response coefficients so that the filter passes only the midfrequency
component of the input.

4.20 The frequency response H(e^{jω}) of a length-4 FIR filter with real impulse response has the following specific
values: H(e^{j0}) = 2, H(e^{jπ/2}) = 7 − j3, and H(e^{jπ}) = 0. Determine H(z).

4.21 The frequency response H(e^{jω}) of a length-4 FIR filter with a real and antisymmetric impulse response has the
following specific values: H(e^{jπ}) = 8, and H(e^{jπ/2}) = −2 + j2. Determine H(z).

4.22 Consider the two LTI causal digital filters with impulse responses given by

    h_A[n] = 0.3δ[n] − δ[n − 1] + 0.3δ[n − 2],
    h_B[n] = 0.3δ[n] + δ[n − 1] + 0.3δ[n − 2].

(a) Sketch the magnitude responses of the two filters and compare their characteristics.
(b) Let h_A[n] be the impulse response of a causal digital filter with a frequency response H_A(e^{jω}). Define another
digital filter whose impulse response h_C[n] is given by

    h_C[n] = (−1)^n h_A[n],   for all n.

What is the relation between the frequency response H_C(e^{jω}) of this new filter and the frequency response
H_A(e^{jω}) of the parent filter?

4.23 As indicated in Example 2.36, the trapezoidal integration formula can be represented by an IIR digital filter
described by a difference equation given by

    y[n] = y[n − 1] + (1/2){x[n] + x[n − 1]},

with y[−1] = 0. Determine the frequency response of the above filter.

4.24 A recursive difference equation representation of Simpson's numerical integration formula is given by
[Ham89]

    y[n] = y[n − 2] + (1/3){x[n] + 4x[n − 1] + x[n − 2]}.

Evaluate the frequency response of the above filter and compare it with that of the trapezoidal method of Problem 4.23.

4.25 This problem develops expressions for the even and odd parts of a real-coefficient transfer function in terms of
the mirror-image and antimirror-image parts of its numerator and denominator polynomials [Ram89].
(a) Show that any real-coefficient polynomial B(z) in z^{-1} of degree N can be expressed as B(z) = B_mi(z) +
B_ai(z), where B_mi(z) = (1/2)[B(z) + z^{-N}B(z^{-1})] and B_ai(z) = (1/2)[B(z) − z^{-N}B(z^{-1})] are, respectively,
mirror-image and antimirror-image polynomials.
(b) Let H(z) = N(z)/D(z) be a real rational function of z^{-1}. Show that the even and odd parts of H(z) can be
expressed as

    H_ev(z) = (1/2)[H(z) + H(z^{-1})] = [D_mi(z)N_mi(z) − D_ai(z)N_ai(z)] / [D_mi^2(z) − D_ai^2(z)],
    H_od(z) = (1/2)[H(z) − H(z^{-1})] = [D_mi(z)N_ai(z) − D_ai(z)N_mi(z)] / [D_mi^2(z) − D_ai^2(z)].

In the above equations, D_mi(z) and D_ai(z) are, respectively, the mirror-image and antimirror-image parts of
D(z). Likewise, N_mi(z) and N_ai(z) are, respectively, the mirror-image and antimirror-image parts of N(z).
(c) Show that H_re(e^{jω}) = H_ev(z)|_{z=e^{jω}}, and H_im(e^{jω}) = (1/j) H_od(z)|_{z=e^{jω}}.
(d) Using the above approach, determine the real and imaginary parts of

    H(z) = (3z^2 + 4z + 6) / (3z^3 + 2z^2 + z + 1).

4.;:6 In this problem we oonsider the determination of a real rational, causal, stable discrete-time transfer (unction

H(?) = P(z) = L~p;z-i


"' D(t) "'?' d·• i
L..,/=0 ' ...

frcm the 5peeitied real pan of its frequency response [Dut83]:

Hre(e
jor.
J = L}'=Oai
N
cos(iwi
L;=Obi cos(iw)
A(ej"'}
= - - .-.
B(el"')
(4.240}

(a) Show that

Hre(eiw) = ~ [H(:) +Htz-t)]rz=ej"'


=.!. [ P(z)D(C!)+ P(z- 1)D(z)]! _ (4.241)
2 D(z)D(z I) J~=f-w
(b) Comparing Eqs. (4.240) and (4.241) we get

B(ei"-') = D{z)D{:-')Il=ei"-'

A(ej""') =
1
- [P(.;;)D{z- 1) + P(z- 1}D(z}JI . (4.242)
2 <.=el"-'

The spectral factor D(d canbedetemtined,~eptforthesca.lefactoc K, fromtherootsof B(:) = B{ei"') lz=eJ"'


inside the unit circle. Show that
N
K = ..jB(l)/ fl (1 - z;).
i=l
{-:) To determine P(;;), Eq. (4.242) can be rewritten through analytic continuation as

A{:)= ~ [ P(:)D(z- 1} + P(z- 1)D(z)].


Substituting the polynomial forms of P(z.) and D(z), and <:q_uating coefficients of (z' + z-; )/2 on both :sl.des
{)[the above equation we arrive at a set of N + l equations which can be solved for the numerator coefficients
~Pi}. Using the above approach, deten:nine H(:) for which

Hre(eiw}= 1 +cosw+cos2w_
17- 8cos2w

4.27 Let H(z) be the transfer function of a causal stable LTI discrete-time system. Let G(z) be the transfer function
obtained by replacing z^{-1} in H(z) with (z^{-1} + α)/(1 + αz^{-1}), i.e., G(z) = H(z)|_{z^{-1} = (z^{-1}+α)/(1+αz^{-1})}. Show that G(1) = H(1) and
G(−1) = H(−1).
4.28 Com.ider the digital filter structure of Figure P4.2, where H1 (;:), Hz(:;:), and H3{z) are FIR digital fi1tets with
transfer fum:tlons given by

Hl (:) = "j + 52 z -l + 7Z
4 -2
, H 2 ( z ) = J4 + 8 -l
~z
+ 7z
3 -2
,

Detennine the transfer function H(z) of the composite filter.

y{n]

FigureP4.2

4.l9 Determine the transfer function of a causal LTI discrete-time system described by the foUowing difference
equation:

yfnl = Sx-[nJ- 5x[n - 1] + 0.4xin - 2j + 0.32x{n- 31


- 0.5y[n - lj + 0.34y[n- 2J + 0.08y[n - 3].

Ex.press the transfer function in a f.u;tored form and sketch its pale-zero plot. Is the system BIBO stable?

4.30 Determine the transfer function of a -cau!ilil LTI discrete-time system described by the following difference
eq.;ation:

y(nj = 5x[nj- 10.5x{.ot- 1} + 1L7x(n - 2] + 0.3x[n- 3]- 4.4x[n- 4j


- 0. 9y[n - l] + 0. 76y[n - 2] + 0.016y[n - 3] + 0.096y{n - 4].

Express the tflm!>fer function in a factured fonn and sketch its pole-zero plot. Is the system BIBO stable/

4.31 Determine the expression fer the impulse response {hinJ} of the fOllowing cauW IIR transfer function:

-O.l;;- 1 + 2.19.;::- 2
H(z.) = cc---:=c:'i=;;-;~~':--oc~:;-;
(1 0.8t 1 +0.4l;: 2)(1 +0.3z I)·

4.32 The transfer function of a causal LTI discrete-time system is given by

6 -z-l 2
H(;;}=1+0.5z t+; 0.4z 1.

(11.) Determine the impulse responsehtnJ of the above system


(b) Determine the output yfn] oftbeabove system for aU valt:es of n for an input

x(n} = 1.2( -0.2)n ~fn] - 0.2(0.3)" Jt{nJ.

4.33 Using ;;-tnmsformmethods., deternrine the explicit expression for the output yfn] of each of the falkl>wing causal
LTI discrete-time systems with impulse responses and inputs as indicated:

(a) hjn} = (0.&)" t-t[n], x(n:J = (0.5)" ,u[nj.


(}) hlnJ = (0.5)'\~i[nJ, x[n] = (0.5)",u[nj.
4.3<1 Using ;:-transform methods, detennine the explicit expression for the impulse response hlnJ of a causal LTf
discrete-time system which develops an output y!nl = 4(0.75)" JL-!n 1 fm an input x[n] = 3(0.25)" J.~.ln }.

4.35 A causal LTI discrete-time system is described by the difference equation

    y[n] − 0.5y[n − 1] + 0.15y[n − 2] = x[n],

where x[n] and y[n] are, respectively, the input and the output sequences of the system.
(a) Determine the transfer function H(z) of the system.
(b) Determine the impulse response h[n] of the system.
(c) Determine the step response s[n] of the system.

4.3<J Consuienhediscrete-time syslem of Figure P43. For Ho(z) = l +ca-l. find a suitable Fo(z) so that the output
y{nl is a delayed and scaled replica of the inpuL

Figure P4.3

4.37 Let the transfer functions of Figure P4.3 be expressed in a polyphase form as

    H_0(z) = E_0(z^2) + z^{-1}E_1(z^2),    F_0(z) = R_0(z^2) + z^{-1}R_1(z^2).

(a) Determine the condition so that the output y[n] is a scaled and delayed replica of the input x[n].
(b) Show that the transfer functions of Problem 4.36 satisfy this condition.
(c) Find at least another set of nontrivial realizable transfer functions H_0(z) and F_0(z) that satisfy this condition.

4.3U We have sbown that a real-coefficient FIR transfer function H(z) with a symmetric impulse response has a
linear-phase response. As a result. the aJJ-poJe fiR transfer function G(z) = 1/H(z) will also have a linear-phase-
response. What are the practical difficulties in implementing G(z)? Justify your answer mathemafu;:ally.

4.39 Consider the following causal IIR transfer function:

    H(z) = (3z^3 + 2z^2 + 5) / [(0.5z + 1)(z^2 + z + 0.6)].

Is H(z) a stable transfer function? If it is not stable, find a stable transfer function G(z) such that |G(e^{jω})| = |H(e^{jω})|.
Is there any other transfer function having the same magnitude response as H(z)?

4.40 Check the stability of the following causal IIR transfer function:

    H(z) = [(z^2 + 2z − 3)(z^2 − 3z + 5)] / [(z^2 + 3.7z + 1.5)(z^2 − 0.4z + 0.35)].

If it is unstable, find a stable transfer function G(z) such that |G(e^{jω})| = |H(e^{jω})|. How many other transfer
functions have the same magnitude response as H(z)?

4.41 The notch filter is used to suppress a particular sinusoidal component of frequency ω_0 of an input signal x[n]
and has a transfer function with zeros at z = e^{±jω_0}. For each filter given below, (i) determine the notch frequency ω_0,
(ii) show the form of the corresponding sinusoidal sequence to be suppressed, and (iii) verify by computing the output
y[n] by convolution that in the steady state, y[n] = 0 when the sinusoidal sequence is applied at the input of the filter.
(a) H_1(z) = 1 + z^{-2},  (b) H_2(z) = 1 − √2 z^{-1} + z^{-2},  (c) H_3(z) = 1 + √2 z^{-1} + z^{-2}.
4.42 Let GL(z) and GH!_7.) repcc~cnt ide-allowpafs and high1uss filters with magnitude resp<mse.s ao;. sketched m
Figure P4.4(a). Del ermine the tnm~fer functions Hk (.::} = Yt ( :__)i X i:.J of the discrele-time systeJT. of Fig-<~re P4.4(bl.
1r = 0. I, 2, 1. and sketch th.otr uJoagnitude respo:-~se~.

(a)    (b)
Figure P4.4

<1.43 Ld: HLp(z) denote- the trao&fer fuoctmn of an hle:ll re~l c.oefficient lowpa'l-S filter with a cutoff frequency of
t')p-Sketch the magnitude respouse of H1. p •: -:_) and show that il is a highpass filter. Detennine the relation bet\H:en
the cutoff fre.quem:y of this highpass filter in renn;<. of w p <Jnd ~~ impulse response in terms of !he impulse response
hLp lnl of the parent klwpass filler.

4.44 Let H_LP(z) denote the transfer function of an ideal real-coefficient lowpass filter having a cutoff frequency of
ω_p with ω_p < π/2. Consider the complex-coefficient transfer function H_LP(e^{jω_0}z), where ω_p < ω_0 < π − ω_p.
Sketch its magnitude response for −π ≤ ω ≤ π. What type of filter does it represent? Now consider the transfer
function G(z) = H_LP(e^{jω_0}z) + H_LP(e^{−jω_0}z). Sketch its magnitude response for −π ≤ ω ≤ π. Show that G(z) is
a real-coefficient bandpass filter with a passband centered at ω_0. Determine the width of its passband in terms of ω_p
and its impulse response g[n] in terms of the impulse response h_LP[n] of the parent lowpass filter.

4.45 Let H_LP(z) denote the transfer function of an ideal real-coefficient lowpass filter with a cutoff frequency of
ω_p with 0 < ω_p < π/3. Show that the transfer function F(z) = H_LP(e^{jω_0}z) + H_LP(e^{−jω_0}z) + H_LP(z), where
ω_0 = π − ω_p, is a real-coefficient bandstop filter with the stopband centered at ω_0/2. Determine the width of its
stopband in terms of ω_p and its impulse response f[n] in terms of the impulse response h_LP[n] of the parent lowpass
filter.

4.4fi Show that the strut"ture shown ::~Figure P4.5 implements the highpass filter of Problem 4.4.1.

Figure P4.S FigureP4.6

4.47 Show that the structure shown in Figure 1'4.6 implements the b.andpao;s fil:ter of Problem 4.44.

4.48 Let H (z) be an ideal real-caeffidem lov.'Pass filter with a cutofl at We, where w<= = 1< / M. Figure P4.7 shows
a single-input. M..outpur filter WUcture-. called an M-band analysi.:> filt-er bank, where Httz.} = H(:u-j21r/cfM),
k = 0. I. ...• M - L Sket.::h the magnitude response of each filter lllld describe the operation of 1te filter bimk.

Figure P4.7

4.49 Consider a cascade of M sections of the first-order FIR lowpass filter of&:j. (4.103). Show that its 3-dB cutoff
freque=y is given by Eq. (4.105).

4.50 Consider a cascade of M sections of the first-order FIR !Yghpass fitter of Eq. {4. I06}. Develop the expression
for its 3-dB cutoff frequency.

4.51 Veri:!)' that the value of a given by Eq. {4.111 b) ensures tlun the transfer function H LP(Z} ofEq. (4.109) is stable.

4.52 Show by trigonometric manipulation that Eq. (4.111a) can be alternately expressed as

    tan(ω_c/2) = (1 − α)/(1 + α).    (4.243)

Next show that the transfer function H_LP(z) of Eq. (4.109) is stable for a value of α given by

    α = [1 − tan(ω_c/2)] / [1 + tan(ω_c/2)].    (4.244)

4.53 Design a first-order lowpass [JR digital filter for each of the follow.ing normalized 3-dB cutoff frequencies: (a)
0.4 rad/umpfe~. (b} 0.3Jr.

•L54 Show rt.ar the 3-dB cutoff frequency We of the first--order highpass IIR digital fih« of Eq. {4.112) is given by
Eq. (4.llla).

•L~ Design a first-order highpass IIR diglta1 fitter fur eoch of tbe following normalized 3--dB cutoff frequencies: (a)
025 rad/samples, (b) 0.4JT.

4.56 The following first-order IIR transfer function has been proposed for clutter removal in MTI radars [Urk58]:

    H(z) = (1 − z^{-1}) / (1 − kz^{-1}).

Determine the magnitude response of the above transfer function and show that it has a highpass response. Scale the
transfer function so that it has a 0-dB gain at ω = π. Sketch the magnitude responses for k = 0.95, 0.9, and −0.5,
respectively.

4.5'7 Show that the center frequency w 0 a..W the 3--dB bandwidth Bw of tile second-order UR bandpass filter of
Eq. (4.1 13) are given by Eqs. (4.115) and (4.116), respectively.

4.58 Design a second-order bandpass IIR digital filter for each of the following specifications: {a) Wa = 0.45n, Bw =
0.2n-. {b) Ult> = 0.61f, Bw = 0.1511'-

4.59 Show that the noKh frequency Wo and the 3-dB notch bandwidth Bf>J of the second-on:ier £IR bandstop filter of
Eq. (4.118} are given by Eqs. (4.115) and (4.116}. respectively.

~L60 Design a second-order bandstop IIR digital filter with for each of the following specificatiornl; (a) Wo
0.41!', Bw =
O.l5;r, (b} Wo = 0.55:rr, Bw =
0.25:n:.

41.61 Consider a cascade of K idenrical first-order lowpas5 digital fiiters with a transfer function given by Eq. (4.109).
Sbo-... that the coefficient n of the first~order section u related to the 3-dB cutofffrequerx:y we of the cascade ac;;:ording
t•> Eq. (4.122) with the parameterC given by Eq. (4.123).

4.62 Consider a cascade of K identical first-order highpassdigital filters widl a transfer function given by Eq. (4.112).
Ex: press the ooefficient a of the first-order se\:tion in terms of the 3-dB cutoff frequency We of the cascade.

4.63 If H(r.) is a kw;pass filter, show thai H(-::) is a highpass filter. If wp and w 4 represent fhe passband and
S·~ptmnd edge frequencies of H(r.), determine the locations of the passband and stopband edge frequencies of H( -z).
Determ.ine the relations between the impulse response coefficients of the two lilters.

4.64 Using the method of Problem 4.63, develop the transfer function GHp(z) of a lirst-onier IIR highpass filter
fiurr. the transfer functiOfl HLP (z) of the first-order IIR lowpass filter given by Eq. (4.109). Is it the same as that of
the bghpass transfer function of Eq. (4.1 12)? If not, de!ennine the location of its 3-dB cutoff frequency as a functioo
of the parameter a.

4.65 Let H(;zJ be an ideal lowpass filter with a cutoff frequency at :rr/2. Sketch the magnitude responses of the
f(lliOIVinpystems.: (a) H ('! 2 ), {b) H (z)H(;_2j, (c) H( -z)H (z-:), and (d} H(z.)H( -z2).

4.66 Let H(z) be a lowpass filter with unity passband magnitude, a passband edge at ω_p, and a stopband edge at ω_s,
as shown in Figure P4.8.
(a) Sketch the magnitude response of the digital filter G_1(z) = H(z^M)F_1(z), where F_1(z) is a lowpass filter with
unity passband magnitude, a passband edge at ω_p/M, and a stopband edge at (2π − ω_s)/M. What are the
bandedges of G_1(z)?
(b) Sketch the magnitude response of the digital filter G_2(z) = H(z^M)F_2(z), where F_2(z) is a bandpass filter with
unity passband magnitude, and with passband edges at (2π − ω_p)/M and (2π + ω_p)/M, and stopband edges
at (2π − ω_s)/M and (2π + ω_s)/M, respectively. What are the bandedges of G_2(z)?

Figure P4.8

4.(i7 Let a causal LTr dis;:rete-time system be characterized by a real impulse response hlnJ with a DTFr H(ejw)_
Ccnsider the sy~tem of Figure P4.9. where x[n] is a finite-length ~u~. Deterntine the frequency re:>JX-'flse of the
ov.~rall sy:;;tem G(ei"'} in terms of H(ei"'), and show that it has a ler-o-phase response_

Figure P4.9

4.f.S Show that the amplitude response H (w) of Type l and T.rpe 3 linear-phase FIR transfer functions is a periodic
fur.ction of w with a period 2n, and the amplimde response H(wj of Type 2 and Type 4 !inear-pha.:;e FJR transfer
fw:1ctkms is a periodic function of w with a periOO 471".

4.t'9 A length--9 Type l real--coefficient FIR filter ila~ the following zeros: .q = -0.5, zz = 0.3 + )0.5, ;:3 =
- !: + j 4-. (a) Determine the locations of the remaining zeros. (b} What is the transfer function H 1{z) of the filt«?
4.70 A length- 10 Type 2 real-coeffident FIR filter has the following zeros: Zl = 3, Z2 = }0.8, Z3 = j. (a) DeterllUile
the locations of che remaining zeros. (b) What is the transfer function H2{z) of the filter?

4.71 A length-13 Type 3 reai-coefficieutFIRfilterhas the following zeros: Zl = -0-J ....... }0.5, zz = j0.8, l1 = -0.3.
(a) Detem1ine the lootUom of the remalning zeros. (b) What 1.,. the transfer function H;,(z) of the filter?

4.12 A length-10 Type4 real-coeffidenl FrR filter has the following zeros: .~: = - L2 + j 1.4. r_z = i- + F/)_ ;: 3 =
! -c- j :-Lj 5 . ta) Determine the locations of the remaining zeros. (b) What is the transfer function H4(z) of~ filter?

4. 73 Show ;:nalytically that an FIR filter with a ::onst;.nl gmop <.k<ay most have either a symmelr.c or an an!Thymmctric
impub;e respome.

4.74 Comitler :he following five I--:R lramfer functions;

(if Hj(z) = -0.5 +0.45::- 1 + L02z-J +O.Iz- 4 - O.OJz- 5 - O.!Rz- 0 .


+ 0.58;:-:2
(iit 11?.{<.'< = -0.3-+ 0.11<:-l ._ 0.3::- 2 + 1.22~- 3 + 0.3.:.- 4 + 0.11:--- 5 - 0.3;-- 6 ,
{li:) H-y,(:') = I +Of,~ -l + 0.49::--- 2 - 0.48~- :>. - 0 14z- · 4 - 0.12;:- 5 + O.ff.)~- 6 ,

(iv) H4(n = 0.25-- 0.6c-! - 0.14::-?. + 0.97:::- 4 + 0.06~ · 5 + 0.36.:.- .


6

iv). /ls(zl = 0.09- 0.12;:-; - 0.14.::-- 2 - 0.48.:. -J + OA:Jz-4 + Q_06z- 5 + ~- 6 .


th1~g the M-file zpl C:Ile de!t:mline the ;o:ero ]:_~tions of each and then amwer the following questions:
(a) I>oes any one of the FIR fihers have a linear-phase r~ponse? If so, which one?
(b) Doc" .ln)" one uf the FJR filters have a nunimurn-pba:re re!>ponse? If so. which one?
(cl Does my one of the FIR filk---rs have a ma.>nmum-phase respon~e? If so, which one?

4.75 A third-order FIR filter has a transfer function G_1(z) given by

(a) Determine the transfer functions of all other FIR filters whose magnitude responses are identical to that of
G_1(z).
(b) Which one of these filters has a minimum-phase transfer function and which one has a maximum-phase transfer
function?
(c) If g_k[n] denotes the impulse response of the kth FIR filter determined in part (a), compute the partial energy of
the impulse response given by

    E_k[n] = Σ_{m=0}^{n} |g_k[m]|^2,    0 ≤ n ≤ 3,

for all values of n, and show that

    Σ_{m=0}^{n} |g_k[m]|^2 ≤ Σ_{m=0}^{n} |g_min[m]|^2,    Σ_{m=0}^{∞} |g_k[m]|^2 = Σ_{m=0}^{∞} |g_min[m]|^2,

for all values of k, where g_min[n] is the impulse response of the minimum-phase FIR filter determined in
part (a).

4.76 The z·tr;msforms of five seylleaces of length 7 ar-e given below:

lft (.:) = 0.008-3653- O.OC1782726z-; - 0.075506(-2 - 0.3"-B956512;:- 3


+0.13529123.:--1- 1.50627293~-s ~ 3.32{)7295;:- 6 ,
H2(z) = 3.3207295 - l 50627293r- 1 + 0.!3529123:- 2 - 0.34139565z- 3
- 0.075506UJz- 4 - 0.001782725z -s + 0.0033652:- 6 ,
H:>(z) = 0.026931 ~ - O.DS40756z-l + 0.02SS0603z - 2 -1- 0.9404Y8549z-'
- 2.250~765.z:- 42.53245711:::- 5 --r 1.03147943z- 6 ,
+
JL.(z} = 1.03147943 + 2.5324571;:-l- 2.2508765:::- 2 +0.9404-985z-3

+ 0.02589602;:- 4 ~ 0.0840756;:- 5 + 0 026931 J t:: -<j.


H 5 (z) = tl\6667 -0.05556z -r - 0.75z -.? + 3 5:::- 3 - 0.75::- 4
-0-05556::- 5 + 0.1666?::-l'i.

The magnitude of the DFT fm each of the above sequeiKOes is the same. Which one of the above :::-transfonm ha:> all
its zeros outside me unit circie? Which one has aH its zeros in;;;de the tmil: circle'! How many other real sequence~ of
length 7 exist that have the same DFT magnitude as tho~ given above'!

4.77 Let the fin;! four impuL<e response s.amp[es of a causal linear-phase FIR transfer function be gn.-en by hj_O) =
a, hi!}= b, h[l} = c, and h[3] =d. Determine the remaining inpul~ response samples of H(z) oflow<-8t order
for each type of linear-pluM:" filter.

4.78 The first six samples of the impulse response of an FIR filter H(z) are given by h[0] = −1, h[1] = 2, h[2] =
−3, h[3] = −4, h[4] = 5, and h[5] = 6. Determine the remaining impulse response samples of H(z) of lowest order
for each type of linear-phase filter. Using zplane determine the zero locations for H(z) for each type of linear-phase
filter. Does H(z) have a zero at z = 1 and/or z = −1? Do the zeros on the unit circle appear in complex conjugate
pairs? Do the zeros not on the unit circle appear in mirror-image symmetry? Justify your answers.

4.79 Let Hl {z), H2{z.). H3(z), and H4(;.) be, respectively, Type l, l}pe 2, Type 3, am! Type 4lmev-phase FIR filters
Are the fol!owing filters composed of a cascade of the above filter5 · :in ear phase? If they are, whm ar-t> theirt}'pes?

(a) Ga(z} = H! (z)Hf(Z}, (h) Gb(Z) = HJ(z)H:;.(z), (c) Gctz} = H!(Z)HJ(Z),


(d) Ga(z) = Ht(z}H4(d, (e) Gr(Z) = ff2(Z)H;.(:), (f) G j(Z) = HJ(Z)HJ(Z),
(g) Gg(z) = H4(z)H4(;;;), (h) Gn(z) = Hz(z)H3(<J. (i) G,(z) = H3(z}J4(z).
4.80 Let h[n], 0 ≤ n ≤ N, denote the impulse response of a Type 1 FIR filter of length N + 1. The frequency response
is of the form H(e^{jω}) = H̃(ω)e^{−jωN/2}, where the amplitude response H̃(ω) is as indicated in Figure P4.10(a).
(a) Construct another FIR filter of length N + 1 with an impulse response defined by

    g[n] = { h[n],  for all n except n = m,
             α,     n = m.

Determine a suitable m and α such that the frequency response of the new filter is of the form G(e^{jω}) =
G̃(ω)e^{−jωN/2}, with its amplitude response G̃(ω) as indicated in Figure P4.10(b) [Her70].
(b) Show that H̃(ω) can be expressed as the square magnitude response of a minimum-phase FIR filter F(z) of
order N/2 and develop a method to construct F(z).
(c) Can G̃(ω) also be expressed as the square magnitude response of a minimum-phase FIR filter? If not, why not?

(a)    (b)
Figure P4.10

4.81 The time constant K of an LTI stable causal discrete-time system with an impulse response h[n] is given by the
value of the total time interval n at which the partial energy of the impulse response is within 95% of the total energy,
i.e.,

    Σ_{n=0}^{K} |h[n]|^2 = 0.95 Σ_{n=0}^{∞} |h[n]|^2.

Determine the time constant K of the first-order causal transfer function H(z) = 1/(1 + αz^{-1}), |α| < 1.

4.82 Let F; (;::I denote one of the factors of a linear-pha<;e FIR tran.sfec function H{z). Detemune at least one other
faclor Fzl:) of HC::} for the foilowingchoices of F1 (z):
(a) Fj(zj = 1 + 2<:- 1 ___,__ 3C 2 , (b) FJ{z) = 3 + 5z- 1 - 4;:-z- 2:- 3 _

4.83 Consider the first-order causal and stable allpass transfer function given by

    A_1(z) = (−d_1* + z^{-1}) / (1 − d_1 z^{-1}).

Determine the expression for (1 − |A_1(z)|^2) and then show that

    (1 − |A_1(z)|^2)  { < 0,  for |z| < 1,
                        = 0,  for |z| = 1,
                        > 0,  for |z| > 1.

Now, using the above approach, show that Property 2 given by Eq. (4.132) holds for any arbitrary causal stable allpass
transfer function.

4.84 Derive Property 3 of a ~able allpass transfer function given by Eq. (4.133).

4.85 (a) Stmw that the phase delay rp(w} = -8(w)/w of the first-order ailpass transfer functi<Jn

dl +_z-!
A1(.-:}=
l : d\f. J '

is given by Tp{w) 3: (l - d1 )/(l + dJ) = 0 [Ste96].


(b) Design a firs.t--orrler .allpa!>s filter with a phase delay of iJ = 0.5 samples and operating at a sampling race of
20kHz. Determim the error in samples at l kHz Jn the pha.<;e delay from its design va1ueu-f0 ..5 samples.

4.86 Consider the second-order allpass transfer function

    A_2(z) = (d_2 + d_1 z^{-1} + z^{-2}) / (1 + d_1 z^{-1} + d_2 z^{-2}).

If δ denotes the desired low-frequency approximate value of the phase delay τ_p(ω) = −θ(ω)/ω, show that [Fet72]

    d_1 = 2(2 − δ)/(1 + δ),    d_2 = (2 − δ)(1 − δ) / [(2 + δ)(1 + δ)].
4.87 Let G{z) be a caus.a! stable Mnmin:imum-phase l---ansfer f!Ifl.ction. .and kt H(z) denote another causal stable
transfer function that is minimum-phase v.ith ':c(ei"'J; = JH(e·"")!- Show that G(z} = H(z)A(z}, where A(z) is a
stable cam;ai all-pass tmnsfer functioe.

4JRi The tran<;h:r function of a typical transmission channel i,; gi~cn by

{;: + I A)ZzL." -t 2: + 4)
H(z} = ~-7-:-c~Cc:c
(z + 0.8_1(;: 0.6)

In on.ler to correct for the magnilw.ie dtstortion introduced by the channel on a signal passi r1g through it, """-' ";;sh w
connect a ->table digital fil!er characterized by a transfer function G(z) at the r.:ceiving end. Determine G(;:).

4.89 Let H(z) be a causal stable minimum-phase transfer function, and let G(z) denote another causal stable transfer
function which is nonminimum-phase with |G(e^{jω})| = |H(e^{jω})|. If h[n] and g[n] denote their respective impulse
responses, show that

(a) |g[0]| ≤ |h[0]|,

(b) Σ_{ℓ=0}^{n} |g[ℓ]|^2 ≤ Σ_{ℓ=0}^{n} |h[ℓ]|^2.

4.'!)0 Is the transfer funumn


\z:+3Hz- 2)
HI c) ~ ;-:--"~'&~~
(z 0.25)(;: + 0.5)
mtmmum-phas<:'? l.f it i;, not minirnu:m-pha..c, then construct a minimum-phase transfer function Gtz) such thal
IG(ei'")l = IHkj'-"):. Determine their corresponding tmit sample responses, g[nJ andh[nl-. forn = 0, ! , 2. 3, +. For
w·:at values of m is L;~~ :g[n}l 2 b1ggerthru: :L;:-'=0 lh[n]l 2 ?

4.91 Til<: iollowing bandstop FIR tr.msfer functions HBs(z) have also been proposed for the recovery of ver1ica1
d<::m.ils in the st.."Ucture of Figure 4.33 employed for the separation of me luminance and me <:hrominance components
[AcaSJ], [Pri80], [Ros75]:
( ao. ~
.-,Bs ( ::.l L,
= :r~· +~ --,
-")"

{b:• Hss(:.:) = -it;:o + :.:-2,2{-1 + 6:Cl- z-4),


{c) Hss(d = i-O + z- 2 ) 2 (-3 + 14z- 2 - 32- 4 ).
D.~velop their delay-complemental)' :ransfer functions Hs p (z ).

4.!n Let Ao(z) ami Adz) be two cau~ stable allpass transfeo timctions. Define two (causa: stable IIR transfer
functions as follows:
Ht (:;:) = Ao{z)- A1 (z).
Shuw that the numeratou of Ho{z) and Ht (zJ are, respectively. a symmetric and an antis-ymmetric polynoiD.lal.

4.!}3 Show !hat the two trans~er functions of Eqs. (4.142a) and (4. 142b) are a poweo-compbmentary pair.

4.94 Show that the two bmsfcr iuw.;tmn;; ufEq~. (4.142a) and (4 l42b} are each a BR function.

4.95 Consider !he lransfer function H (z) given by

l M--1
H(:;;} = M L Ak(.::).
k=iJ

wh~.-c Ak(;:) are stable real-coefficient al.lpa&.<; functions Show 1hal H (z) is a BR funCLion.

4.'!)6 Show tha;: the bandpass tr.rnsier function HR p (:)of Eq (4.113) and the bandstop transfer f-.mclion H BS; l) of
Eq_ (4. Ll S) form a doubly--<::omplemcnt.ary pair.

4.97 Show that 1he value of the gain function Q(w} of a power-symmetric tram fer function defined by Eq. (4.146) at
= ;r :2 is gi~-en hy 10 log] 0 K - 3 dB.
<"-<-·

4.98 Consid~f the real...._ueftiCJent stable IIR tran:-;fer funct1on !1(:; = Ao(z2 ) + z- 1 A1 (zh, where .4.o(z} and A 1(z}
w·<e. stable <~II puss transfer function~ Show that H (z) is a power-symmetric transfer fLnction.

4.99 Show tha! the following FIR transfer functions satisfy the power-symmetric condition:
2"1 5
(a} H"!;;) =
. -1
J -;:
")! -2
+ '2-Z - "jC· 'I ,.
- 5;:-~- 5
2 ::-
(b) Hh{Z) = l + 3:.::- 1 ...... 14:.::-:2 + 22:.:: -:>- 12z- 4 + 4_,.-.5.
4.100 Let H(z) = a(1 + bz^{-1}), where a and b are constants. Then H(z)H(z^{-1}) is of the form cz + d + cz^{-1}.
Determine the condition on c and d so that H(z) is a power-symmetric FIR transfer function with K = 1. Show
that a = 1/2 and b = 1 satisfy the power-symmetric condition. Determine two other possible sets of values for a
and b to ensure the power-symmetric condition. Using MATLAB show that H(z) and G(z) = −z^{-1}H(−z^{-1}) are
power-complementary for the above values of the constants a and b.

4.101 Let H(z) = a(1 + bz^{-1})(1 + d_1 z^{-1} + d_2 z^{-2}), where a, b, d_1, and d_2 are constants. Then H(z)H(z^{-1}) is of
the form (cz + d + cz^{-1})[d_2 z^2 + d_1(1 + d_2)z + (1 + d_1^2 + d_2^2) + d_1(1 + d_2)z^{-1} + d_2 z^{-2}]. Determine the condition on
c and d in terms of d_1 and d_2 so that H(z) is a power-symmetric FIR transfer function with K = 1. For d_1 = d_2 = 1,
evaluate the constraint on c and d, and using it determine one realizable set of values for a and b. Using MATLAB
show that H(z) and G(z) = −z^{-5}H(−z^{-1}) are power-complementary for these values of the constants a and b.

4.1(}2 Show that


0o·c1 _~:_:0c5o'c-_'__.+c0c·c4c'c'-~2 + OA5..cc_c,c+~0c·5o'c-_'__,+cOe·:c'''---'
· =
H(;::.) . ..,
1 +0.9z -~ +0.2z--'~
is a power-symmetric IIR transfer fundmn.

4,]03 Show that t.':.e following FIR transfer functions are BR functions::

(a! H,(z)= l+rx


1 (
!+az- I. J. o:>O.

{bl H2(Z)= l~{3(l-/3C 1 ), 8>0,

l (' ,·. ( 1-tfz-,


(c)HJ(z)=(l+a)(l-r_B).l+az-J ') a> 0, f3 > o.

4.104 Show that :he following IIR tnn;;fer functions: are BR functions:
~ ' 2 -J
(a) H](Z)=>~,
3 + z- 1
1- z-1
(h) H1(7)= 4+2z !'
I ---2
(c) H>VJ=4+2z ~ 2::2'

1+6 ·-I +J.~ -2


[d) H.~(z)=~.z"
6..,._5zl+z2'
, 3-t-2z-; +3::- 2
(e) Hs\Z} = . ,
4+2z ; +2z 2

3 + 9z- 1 + 9z- 2 + Jz- 3


1 2
12+l0l +2z
4,105 [f A 1 (z) and Az(;::) are two LBR fimctions. show that At (1/ A2(:t)} is also an LBR function.

4.106 Let G{.;:_l be an LBR function of ordet N. Define

G(cl+a)
F(z) = z ( 1 + o:G(z) ,

where lo:l < l_ Sb.')W that f'(z) is also LBR. What i<> the order of F{z)? Develop a realization of G(z) in terms of
F (l).

4.107 If G(z) is a BR fum:tion, show that G(l/ A(d) is a BR fuuction. where A(z} is an LBR function.

4.108 Show that each of the following pairs of transfer functions are doubly complementary:
(a) H(z) = (2 + 2z^{-1})/(3 + z^{-1}),    G(z) = (1 − z^{-1})/(3 + z^{-1}),
(b) H(z) = (−1 + z^{-2})/(4 + 2z^{-1} + 2z^{-2}),    G(z) = (3 + 2z^{-1} + 3z^{-2})/(4 + 2z^{-1} + 2z^{-2}).

4.109 Determine the power-complementary transfer function of each of the following BR transfer functions:
(a) H_a(z) = 2(1 + z^{-1} + z^{-2}) / (3 + 2z^{-1} + z^{-2}),
(b) H_b(z) = 3(1.5 + 6.5z^{-1} + 6.5z^{-2} + 1.5z^{-3}) / (18 + 21z^{-1} + 8z^{-2} + z^{-3}).

4.11{1 VerifyEq. (4.151}

4.111 Figures P4.11(a) and P4.11(b) show, respectively, the DPCM (differential pulse-code modulation) coder and
decoder often employed for the compression of digital signals [Jay84]. The linear predictor P(z) in the encoder
develops a prediction x̂[n] of the input signal x[n] and the difference signal d[n] = x[n] − x̂[n] is quantized by the
quantizer Q developing the quantized output u[n] which is represented with fewer bits than that of x[n]. The output
of the encoder is transmitted over a channel to the decoder. In the absence of any errors due to transmission and
quantization, the input v[n] to the decoder is equal to u[n] and the decoder generates the output y[n] which is equal to
the input x[n]. Determine the transfer function H(z) = U(z)/X(z) of the encoder in the absence of any quantization
and the transfer function G(z) = Y(z)/V(z) of the decoder for the case of each of the following predictors, and show
that G(z) is the inverse of H(z) in each case.
(a) P(z) = h_1 z^{-1}, and (b) P(z) = h_1 z^{-1} + h_2 z^{-2}.

(a)    (b)
Figure P4.11

4.112 A causal :;table tTI diS(;rete-timesyst::m is characterized by an impulse response ht [11 J = -lilnJ +~(0.5)'' .ulnl+
-fz (-0.25) 11 Jtinl Detennine the impulse r~ponse h2ln j of -it'\ inverse system which is causal and stable.
4.J I3 Verify the relations betweert the transfer parameters and the chain parameters of a two-pair given in Eqs. (4.176a}
and (4. !76b).

4.114 A two-pair is said to be reciprocal it ft2' = t;n {Mit73aj. Show that for a reciprocal two-pair. AD- BC = I.

4.115 Consider the r -.;a:,~·ade of Figure P4.12(a), where the tv.o two-pairs ace ~cribed by the transfer matrices

(1 - t )--1
.~

-k, -1

Determine the lrans-f.er matrix of the cascade.

Figun: P4.12

4.116 Cons:der the T-cascade of Figure P4.12(b-), where the two two-pairs are described by the chain matrices

Determine the chain matrix of tht" c<IM:ade.

1.117 Detenrrine lhe transferparamet=s ar·d the chain parameters of the digital two-pairs of Figure P4.l3.

X2
(a) (b)
Figure P4.13

4.118 A transfer function H(z) is realized in the form of Figure 4.40, where the constraining transfer function is given
by G(z). If the relation between H(z) and G(z) is of the form

    H(z) = [k_m + z^{-1}G(z)] / [1 + k_m z^{-1}G(z)],

determine the transfer matrix and the chain matrix parameters of the two-pair of Figure 4.40.

4.119 Determine chain parameters of the ca.~ade of three latuce- two-pairs of Figure P4.l4. Using these chain
paramecers detennine the e:xpress:ion for the transfer function A3 (z).

-.~~L-~----~~2-~----~+rw~,~r---,
-k,

Figure- P4.14

4.12(1 Derive the inequality ofEq. (4.187).

4.121 Detemllne by Jnspection which one of the following second-order polynomials ha;o root-s: inside the unit circle:
t.aJ D,.(z) = t + 0.92z-: + 0.1995;:- 2 ,
(b) Db(Z)=2+0.4:C 1 -2.8&- 2 ,
(c} Dc{z) = ) + 1.4562z- 1 +08-h:-2,
(d} Dd(Z) =I +2.1843;:-! +0.81.C2.

4...ll22 Tes! analytically the BIBO smbility uf l!1e following causal IIR transfe-r functions:
z2 -.- 0.3z- 99.l?
(a) Ha{:z)= 3 1 2 l I'
: -ozz -4z+n:
z2 + l 1.6Jz + 0.001
(b) Hb(Z) "--- 'l l] _ -.-,---y--
'" +IJ~ 2 <n;Z-::;
'} 25.: 3 + 13.7z 2 + 4.04z + 0.3
(c) H, {z) = _ •
l8; 4 + JOz'' + 18.5.: 2 + 5z -..- 0.5
(tl) lld(<-' =
"+
~
~-' _
s~-z + 7 _-4 _ z-5
~ ~
~ l+~z l+~z l+~z: 3+-foz: 4
l
(el H"'{z) = z-c-;cccc,-;-cc:cc,~c-::T:-~~OC-:c~
6 + 5z l + 4;: 2 + 3z 3 + ~- 4 + z 5 ·

4.123 Determine analytically whe!hec all roots of the following polynomial:~ ace imide the unit cirde:
(a) Da(z)=5+4z- 1 +3z- 2 +2z-3+;:-4,
(b) Db(<) = z 3 + 0.2: 2 + 0.3.; + 0.4.

4.124 A polynomial A (.•) in the complex variable s IS called a smct(v Hurwitz potynvmial if <til zeros of A (s) are in
tl1e left-half s-plane; I.e., if.~= St is a :teroof A(s), then Re{s.~: l < 0. Ld D(z) be a polynomiaLn :z of degree N with
all roots inside the unit circle. If we replace z in D(z:) by the fundion (I + s)j( J ~ s). we arrive at a rational function
of s given by B\s)/(1- s)N, where B(s) is a polynomial ins of degree N. Show that B{s) IS a strictly HurwiU
polynomial.

4.125 A zero-mean WSS white noise sequence x[n] with a variance σ_x^2 is being processed by a causal LTI discrete-
time system with an impulse response h[n] = δ[n] − αδ[n − 1], generating the WSS sequence y[n] at its output.
Determine the expressions for the power spectrum P_yy(ω), the autocorrelation φ_yy[ℓ], and the average power of the
output y[n]. What is the effect of α on the average output power?

4.126 A z.:n)-rrtean WSS white nOJSe -~equcnce x!n] with a variance cr} is being processed by a causal LTI discrete-
t;me aystem w1th an impulse response h[n] =
(0.5)" M!n] gcneratmg the WSS sequence y[n J at its output. Detennine
the expression~ for the power spectnlm PyylW ). and the auto..:orrdation tP_nl tJ ofthe output y[n).

4.127 Consider the structure of Figure P4.15, where A(z) = (α − z^{-1})/(1 − αz^{-1}) is a causal stable allpass filter
with |α| < 1. Let the input x[n] be a stationary noise with power spectrum given by

    P_xx(ω) = 1 / (1 + d cos ω).

(a) Calculate the power spectrum P_yy(ω) of the output y[n].

(b) Does your answer depend upon α? If not, where did you use the information |α| < 1?

x[n] → A(z) → y[n]

Figure P4.15

4.128 A real, white zero-mean WSS random signal x[n] is processed by an LTI digital filter with a real, causal,
stable impulse response h[n], as indicated in Figure P4.16. Let P_xy(ω) and P_xu(ω) denote the respective cross-power
spectrums. Justify your answers.

(a) Is P_xy(ω) a real function of ω?

(b) Is P_xu(ω) a real function of ω?

x[n] → h[n] → y[n]

y[−n] → h[n] → r[n],    u[n] = r[−n]

Figure P4.16

4.17 MATLAB Exercises


M 4.1 Write a MATLAB program to c•Jmpll!e- the group delay using the expression ofProhlem4 5 at a prescdbed set
of di~rete freq.~:eo..:les_

M- 4.1 Write a MATLAB program w simulate the filter destgned in Problem 4.£6 and n~rify its filtering operation.

M 4.3 Write a MA TLAB program to simulate the filter designed in Problem 4.19 and verify its fll!ering operat:cm_

M 4.4 The following (hird-order liR transfer function has heen proposed for clutter rejection inMTf radar ~Wh!S&}:

z-Jo-z-')2
u {::.) = c(l;-~o~.4~,-:c,~,~~,c"~o~.,~,~,~.~,-+;-;;0.~6~1~,~'"l
Using MATLAB dctennine and plot its gain response and s!ww tho.t it has a highpass response.

M 4.5 Show that foreal'h ca~ listed below, H(z) and H( -~::)are power-complementary.

(a)

1 - Ls::- 1 + 3.75::- 2 - 2. 75z-J + 2.75;:- 4 - 3 75;:-5 + 1.5 2 - 5 - :.- 1


(b) H~)- . .
• 6+6.5z 2+4.5z 4 +z ~
lb verify the pmver-<::omplementaryproperty. write a MATLAB prog:ram to evaluate H(z)H(z- 1)+ H{-z)H(-:.- 1)
and show that tJns expression is equal to unity for each of the transfer functicns given above.

M 4.6 Plot the magnitude and phase responses of the causal IIR digital transfer function

    H(z) = 0.0534(1 + z^{-1})(1 − 1.0166z^{-1} + z^{-2}) / [(1 − 0.683z^{-1})(1 − 1.4461z^{-1} + 0.7957z^{-2})].

What type of filter does this transfer function represent? Determine the difference equation representation of the above
transfer function.

M 4.7 Plot the magnitude and phase responses of the causal IIR digital transfer function

What type of filter does this transfer function represent? Determine the difference equation representation of the above
transfer function.

M 4.8 Design an FIR low pass filte.c with a 3-dB cutoff frequency at 0.24n us.mg a cascade of five first-order low pass
filters of Eq. (4. tDJ). Plot its gain response.

M 4.9 Csing the result of Problem 4.62 desigr. an FIR highpass filter with a 3-dB cutoff frequency at 0.24n using a
cas.:ack of six first-order highp!L<-s filters ofEq. {4.112). Plot its gain response.

M 4.10 Design a first-order IIR lowpass and a first-order IIR highpass filter with a 3-dB cutoff frequency of 0.3π.
Using MATLAB plot their magnitude responses on the same figure. Using MATLAB show that these filters are both
allpass-complementary and power-complementary.

M 4.11 Design a second-order IIR bandpass and a second-order IIR notch filter with a center (notch) frequency
ω_0 = 0.4π and a 3-dB bandwidth B_w (notch width) of 0.15π. Using MATLAB plot their magnitude responses on the
same figure. Using MATLAB show that these filters are both allpass-complementary and power-complementary.

M 4.12 Design a stable second-order IIR bandpass filter with a center frequency at 0.7π and a 3-dB bandwidth of
0.15π. Plot its gain response.

M 4.13 Design a stable second-order IIR notch filter with a center frequency at 0.4π and a 3-dB bandwidth of 0.1π.
Plot its gain response.

M 4.14 Using MATLAB show that the transfer function pairs of Problem 4.108 are both allpass-complementary and
power-complementary.

M 4.15 Develop the pole-zero plots of the transfer functions of Problem 4.109 using the function zplane of MATLAB
and show that they are stable. Next, plot the magnitude response of each transfer function using MATLAB and show
that it satisfies the bounded real property.

M 4.16 Develop the pole-zero plots of the transfer functions of Problem 4.122 using the function zplane of MATLAB
and then test their stability.

M 4.17 Using Program 4_4 test the stability of the transfer functions of Problem 4.122.

M 4.18 Using Program 4_4 determine whether the roots of the polynomials of Problem 4.123 are inside the unit circle
or not.

M 4.19 The FfR digital filter ~ture of Figure P4.l7 is use-d for aperture correction in television to compensate for
high-frequency losses [Dre90}. A cascade of two such circuits is used, with one correcting the vertical aperture and
the mher c.orrecting the horizontal aperture. Tn the fonner case, the delay z:- 1 is a line delay. \\o-hereas in the latter case
it is. 70 ns for the CCIR wmdard and the weighting factor k provides an adjustable amount of correction. Detennlne
the transfer function of this circuit and plot its magnitude response using MATLAll for tv.-o different values "Of k.

Figure P4.17
M 4.20 An improved aperture correction circuit foe digltalte~ision is the FJR digital filter structure of Figure P4.l8,
where the delay .c- 1 is 70 n;; for the CCIR ~tandard, and the two weighting f<K.i:ors k1 and k2 provide an adjustable
amount of ._---urrection wtttl k 1 > 0 and k2 < 0 {Dre90~. Deten:nine the transfer fuuction of this circuit and plot its
magnitude response using MA TLAB for two different values of k: 1 and k2.

Figure P4.18
5 Digital Processing of Continuous-Time Signals

5.1 Introduction
Even though this book is concerned primarily with the processing of discrete-time signals, most signals
we encounter in the real world are continuous in time, such as speech, music, and images. Increasingly,
discrete-time signal processing algorithms are being used to process such signals and are implemented
employing discrete-time analog or digital systems. For processing by digital systems, the discrete-time
signals are represented in digital form with each discrete-time sample as a binary word. Therefore, we
need analog-to-digital and digital-to-analog interface circuits to convert the continuous-time signals
into discrete-time digital form, and vice versa. As a result, it is necessary to develop the relations between
the continuous-time signal and its discrete-time equivalent in the time-domain and also in the frequency-
domain. The latter relations are important in determining conditions under which the discrete-time
processing of continuous-time signals can be done free of error under ideal situations.
The interface circuit performing the conversion of a continuous-time signal into a digital form is called
the analog-to-digital (A/D) converter. Likewise, the reverse operation of converting a digital signal into a
continuous-time signal is implemented by the interface circuit called the digital-to-analog (D/A) converter.
In addition to these two devices, we need several additional circuits. Since the analog-to-digital conversion
usually takes a finite amount of time, it is often necessary to ensure that the analog signal at the input of
the A/D converter remains constant in amplitude until the conversion is complete to minimize the error in
its representation. This is accomplished by a device called the sample-and-hold (S/H) circuit, which has
dual purposes. It not only samples the input continuous-time signal at periodic intervals but also holds
the analog sampled value constant at its output for sufficient time to permit accurate conversion by the
A/D converter. In addition, the output of the D/A converter is a staircase-like waveform. It is therefore
necessary to smooth the D/A converter output by means of an analog reconstruction (smoothing) filter.
Finally, in most applications, the continuous-time signal to be processed usually has a larger bandwidth than
the bandwidth of the available discrete-time processors. To prevent a detrimental effect called aliasing, an
analog anti-aliasing filter is often placed before the S/H circuit. The complete block diagram illustrating
the functional requirements for the discrete-time digital processing of a continuous-time signal is thus as
indicated in Figure 5.1.
To understand the conditions under which the above system can work, we need to examine each of the
interface circuits indicated in Figure 5.1. First, we assume a simpler mathematical equivalent of Figure 5.1,
which enables us to derive the most fundamental condition that permits the discrete-time processing of
continuous-time signals. To this end we assume that the A/D and D/A converters have infinite precision
wordlengths, resulting in the simplified representation of Figure 5.2. In this representation, the S/H circuit
in cascade with an infinite-precision A/D converter has been replaced with the ideal continuous-time-to-discrete-time
(CT-DT) converter (i.e., an ideal sampler), which develops a discrete-time equivalent x[n] of the continuous-time signal xa(t).


Figure 5.1: Block diagram representation of the discrete-time digital processing of a continuous-time signal.

Figure 5.2: A simplified representation of Figure 5.1.

Likewise, the infinite-precision D/A converter in cascade with the ideal reconstruction filter has been
replaced with the ideal discrete-time-to-continuous-time (DT-CT) converter (i.e., an ideal interpolator),
which develops a continuous-time equivalent ya(t) of the processed discrete-time signal y[n].
In this chapter, we first derive the conditions for discrete-time representation of a bandlimited continuous-
time signal under ideal sampling and its exact recovery from the sampled version. If these conditions are
not met, it is not possible to recover the original continuous-time signal from its sampled discrete-time
equivalent, resulting in an undesirable distorted representation caused by an effect called aliasing. Since
both the anti-aliasing filter and the reconstruction filter in Figure 5.1 are analog lowpass filters, we briefly
review next the basic theory behind some commonly used analog filter design methods and illustrate them
using MATLAB. It should be noted also that the most widely used digital filter design methods are based on
the conversion of an analog prototype, and a knowledge of analog filter design is thus useful in digital signal
processing. We then examine the basic properties of the various interface circuits depicted in Figure 5.1.

5.2 Sampling of Continuous-Time Signals


As indicated above, discrete-time signals in many applications are generated by sampling continuous-time
signals. We also observed in Example 2.12 that identical discrete-time sequences may result from the
sampling of more than one distinct continuous-time function. In fact, in general, there exists an infinite
number of continuous-time signals which, when sampled, lead to the same discrete-time signal. However,
under certain conditions, it is possible to relate a unique continuous-time signal to a given discrete-time
sequence, and it is possible to recover the original continuous-time signal from its sampled values. We
develop this correspondence and the associated conditions next by considering the relation between the
spectra of the original continuous-time signal and the discrete-time signal obtained by sampling.

5.2.1 Effect of Sampling in the Frequency-Domain


Let ga(t) be a continuous-time signal that is sampled uniformly at t = nT, generating the sequence g[n],
where

$$g[n] = g_a(nT), \qquad -\infty < n < \infty, \qquad (5.1)$$

with T being the sampling period. The reciprocal of T is called the sampling frequency FT, i.e., FT = 1/T.
Now, the frequency-domain representation of ga(t) is given by its continuous-time Fourier transform
(CTFT) Ga(jΩ),

$$G_a(j\Omega) = \int_{-\infty}^{\infty} g_a(t)\, e^{-j\Omega t}\, dt, \qquad (5.2)$$



Figure 5.3: Mathematical representation of the uniform sampling process: (a) ideal sampling model, (b) impulse train, (c) continuous-time signal, and (d) its sampled version.

whereas the frequency-domain representation of g[n] is given by its discrete-time Fourier transform G(e^{jω}),

$$G(e^{j\omega}) = \sum_{n=-\infty}^{\infty} g[n]\, e^{-j\omega n}. \qquad (5.3)$$

To establish the relations between these two different types of Fourier spectra, Ga(jΩ) and G(e^{jω}),
we treat the sampling operation mathematically as a multiplication of the continuous-time signal ga(t) by
a periodic impulse train p(t):

$$p(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT), \qquad (5.4)$$

consisting of a train of ideal impulses with a period T, as indicated in Figure 5.3. The multiplication
operation yields an impulse train gp(t):

$$g_p(t) = g_a(t)\, p(t) = \sum_{n=-\infty}^{\infty} g_a(nT)\, \delta(t - nT), \qquad (5.5)$$

which is seen to be a continuous-time signal consisting of a train of uniformly spaced impulses with the
impulse at t = nT weighted by the sampled value ga(nT) of ga(t) at that instant.
There are two different forms of the continuous-time Fourier transform Gp(jΩ) of gp(t). One form
is given by the weighted sum of the continuous-time Fourier transforms of δ(t − nT):

$$G_p(j\Omega) = \sum_{n=-\infty}^{\infty} g_a(nT)\, e^{-j\Omega n T}. \qquad (5.6)$$

To derive the second form, we note that the periodic impulse train p(t) can be expressed as a Fourier series
(Problem 5.1):

$$p(t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} e^{jk\Omega_T t}, \qquad (5.7)$$

where Ω_T = 2π/T denotes the angular sampling frequency. The impulse train gp(t) therefore can be
expressed as

$$g_p(t) = \left( \frac{1}{T} \sum_{k=-\infty}^{\infty} e^{jk\Omega_T t} \right) g_a(t). \qquad (5.8)$$

Now, from the frequency-shifting property of the continuous-time Fourier transform, the continuous-
time Fourier transform of e^{jkΩ_T t} ga(t) is given by Ga(j(Ω − kΩ_T)). Hence, an alternative form of
the continuous-time Fourier transform of gp(t) is given by

$$G_p(j\Omega) = \frac{1}{T} \sum_{k=-\infty}^{\infty} G_a\big(j(\Omega - k\Omega_T)\big). \qquad (5.9)$$

Therefore, Gp(jΩ) is a periodic function of frequency Ω consisting of a sum of shifted and scaled replicas
of Ga(jΩ), shifted by integer multiples of Ω_T and scaled by 1/T. The term on the right-hand side of
Eq. (5.9) for k = 0 is the baseband portion of Gp(jΩ), and each of the remaining terms is a frequency-
translated portion of Gp(jΩ). The frequency range −Ω_T/2 ≤ Ω ≤ Ω_T/2 is called the baseband or
Nyquist band.
Figure 5.4 illustrates the frequency-domain effects of time-domain sampling. Assume ga(t) is a
bandlimited signal with a frequency response Ga(jΩ), as sketched in Figure 5.4(a), where Ωm is the highest
frequency contained in ga(t). The spectrum P(jΩ) of the periodic impulse train p(t) with a sampling
period T = 2π/Ω_T is indicated in Figure 5.4(b) and (d). Two possible spectra of Gp(jΩ) are shown
in Figure 5.4(c) and (e). It is evident from Figure 5.4(c) that if Ω_T > 2Ωm, there is no overlap between
the shifted replicas of Ga(jΩ) generating Gp(jΩ). On the other hand, as indicated in Figure 5.4(e), if
Ω_T < 2Ωm, there is an overlap of the spectra of the shifted replicas of Ga(jΩ) generating Gp(jΩ).
Consequently, if Ω_T > 2Ωm, ga(t) can be recovered exactly from gp(t) by passing it through an ideal
lowpass filter Hr(jΩ) with a gain T and a cutoff frequency Ωc greater than Ωm and less than Ω_T − Ωm,
as illustrated in Figure 5.5. However, if Ω_T < 2Ωm, due to the overlap of the shifted replicas of Ga(jΩ),
the spectrum Gp(jΩ) cannot be separated by filtering to recover Ga(jΩ) because of the distortion caused
by the part of the replicas immediately outside the baseband that is folded back or aliased into the baseband. The
frequency Ω_T/2 is often referred to as the folding frequency or Nyquist frequency.
The above result is more commonly known as the sampling theorem,³ which can be summarized as
follows. Let ga(t) be a bandlimited signal with Ga(jΩ) = 0 for |Ω| > Ωm. Then ga(t) is uniquely
determined by its samples ga(nT), −∞ ≤ n ≤ ∞, if

$$\Omega_T \ge 2\Omega_m, \qquad (5.10)$$

where

$$\Omega_T = \frac{2\pi}{T}. \qquad (5.11)$$

Equation (5.10) is often referred to as the Nyquist condition. Given {ga(nT)}, we can recover exactly
ga(t) by generating an impulse train gp(t) of the form of Eq. (5.5) and then passing gp(t) through an ideal
lowpass filter Hr(jΩ) with a gain T and a cutoff frequency Ωc greater than Ωm and less than Ω_T − Ωm,
i.e.,

$$\Omega_m < \Omega_c < (\Omega_T - \Omega_m). \qquad (5.12)$$

³Also called either the Nyquist sampling theorem or the Shannon sampling theorem.

Figure 5.4: Illustration of the frequency-domain effects of time-domain sampling: (a) spectrum of the original continuous-time signal ga(t), (b) spectrum of the periodic impulse train p(t), (c) spectrum of the sampled signal gp(t) with Ω_T > 2Ωm, (d) spectrum of the periodic impulse train p(t) with a sampling period smaller than that shown in (b), and (e) spectrum of the sampled signal gp(t) with Ω_T < 2Ωm. [Note: The spectrum of ga(t) is shown as not being an even function to emphasize the effect of sampling.]

Figure 5.5: Reconstruction of the original continuous-time signal from its sampled version obtained by ideal sampling.

The highest frequency Ωm contained in ga(t) is usually called the Nyquist frequency since it determines
the minimum sampling frequency Ω_T = 2Ωm that must be used to fully recover ga(t) from its sampled
version. The frequency 2Ωm is called the Nyquist rate.
If the sampling frequency is higher than the Nyquist rate, the sampling operation is referred to as
oversampling. On the other hand, if the sampling frequency is lower than the Nyquist rate, it is called
undersampling. Finally, if the sampling frequency is exactly equal to the Nyquist rate, it is called critical
sampling.
Typical sampling rates used in practice are, for example, 8 kHz in digital telephony and 44.1 kHz
in compact disc (CD) music systems. In the former case, a 3.4-kHz signal bandwidth is acceptable for
telephone conversation. Hence, a sampling rate of 8 kHz, which is greater than twice the acceptable
bandwidth, i.e., greater than 6.8 kHz, is quite adequate. In high-quality analog music signal processing, on the other hand,
a bandwidth of about 20 kHz has been determined to preserve the fidelity. As a result, here the analog
music signal is sampled at a slightly higher rate of 44.1 kHz to ensure almost negligible aliasing distortion.
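The aliasing effect is easy to verify numerically. The following MATLAB fragment is a minimal sketch, not a program from the text; the sinusoid frequencies and the 20π rad/sec sampling rate match those used in Figure 5.6, while the 50-sample length is an arbitrary choice.

% A minimal sketch (not from the text): sampling cos(6*pi*t) and cos(14*pi*t)
% at 10 Hz (Omega_T = 20*pi rad/sec) produces identical sequences, since the
% undersampled 14*pi rad/sec component aliases down to 6*pi rad/sec.
FT = 10;                     % sampling frequency in Hz
n = 0:49;                    % arbitrary number of sample indices
g1 = cos(6*pi*n/FT);         % samples of cos(6*pi*t) at t = n/FT
g2 = cos(14*pi*n/FT);        % samples of cos(14*pi*t) at t = n/FT
max(abs(g1 - g2))            % essentially zero: the two sequences coincide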
We now establish the relation between the discrete-time Fourier transform G(e^{jω}) of the sequence g[n]
and the continuous-time Fourier transform Ga(jΩ) of the analog signal ga(t). If we compare Eq. (5.3)
with Eq. (5.6) and make use of Eq. (5.1), we observe that

$$G(e^{j\omega}) = G_p(j\Omega)\big|_{\Omega = \omega/T}, \qquad (5.13a)$$

or equivalently,

$$G_p(j\Omega) = G(e^{j\omega})\big|_{\omega = \Omega T}. \qquad (5.13b)$$

Therefore, from the above and Eq. (5.9), we arrive at the desired result given by

$$G(e^{j\omega}) = \frac{1}{T} \sum_{k=-\infty}^{\infty} G_a\!\left(j\,\frac{\omega - 2\pi k}{T}\right), \qquad (5.14a)$$

which can be alternatively expressed as

$$G(e^{j\Omega T}) = \frac{1}{T} \sum_{k=-\infty}^{\infty} G_a(j\Omega - jk\Omega_T). \qquad (5.14b)$$

As can be seen from Eq. (5.13a) or (5.13b), G(e^{jω}) is obtained from Gp(jΩ) simply by scaling the
frequency axis Ω according to the relation

$$\omega = \Omega T. \qquad (5.15)$$

Now, the continuous-time Fourier transform Gp(jΩ) is a periodic function of Ω with a period Ω_T = 2π/T.
Because of the above normalization, the discrete-time Fourier transform G(e^{jω}) is a periodic function of
ω with a period 2π.

Figure 5.6: Effect of sampling on a pure cosine signal: (a) spectrum of cos(6πt), (b) spectrum of cos(14πt), (c) spectrum of cos(26πt), (d) spectrum of the sampled version of cos(6πt) with Ω_T = 20π > 2Ωm = 12π, (e) spectrum of the sampled version of cos(14πt) with Ω_T = 20π < 2Ωm = 28π, and (f) spectrum of the sampled version of cos(26πt) with Ω_T = 20π < 2Ωm = 52π.

5.2.2 Recovery of the Analog Signal


We indicated earlier that if the discrete-time sequence g[n] has been obtained by uniformly sampling a
bandlimited continuous-time signal ga(t) with a highest frequency Ωm at a rate Ω_T = 2π/T satisfying the
condition of Eq. (5.10), then the original continuous-time signal ga(t) can be fully recovered by passing
the equivalent impulse train gp(t) through an ideal lowpass filter Hr(jΩ) with a cutoff at Ωc satisfying
Eq. (5.12) and with a gain of T. We next derive the expression for the output ĝa(t) of the ideal lowpass
filter as a function of the samples g[n].
Now, the impulse response hr(t) of the above ideal lowpass filter is obtained simply by taking the
inverse continuous-time Fourier transform of its frequency response Hr(jΩ):

$$H_r(j\Omega) = \begin{cases} T, & |\Omega| \le \Omega_c, \\ 0, & |\Omega| > \Omega_c, \end{cases} \qquad (5.16)$$

and is given by

$$h_r(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} H_r(j\Omega)\, e^{j\Omega t}\, d\Omega
        = \frac{T}{2\pi} \int_{-\Omega_c}^{\Omega_c} e^{j\Omega t}\, d\Omega
        = \frac{\sin(\Omega_c t)}{\Omega_T t/2}. \qquad (5.17)$$
Observe that the impulse train gp(t) is given by

$$g_p(t) = \sum_{n=-\infty}^{\infty} g[n]\, \delta(t - nT). \qquad (5.18)$$

Therefore, the output ĝa(t) of the ideal lowpass filter is given by the convolution of gp(t) with the impulse
response hr(t) of the analog reconstruction filter:

$$\hat{g}_a(t) = \sum_{n=-\infty}^{\infty} g[n]\, h_r(t - nT). \qquad (5.19)$$

Substituting hr(t) from Eq. (5.17) in Eq. (5.19) and assuming for simplicity Ωc = Ω_T/2 = π/T, we
arrive at

$$\hat{g}_a(t) = \sum_{n=-\infty}^{\infty} g[n]\, \frac{\sin[\pi(t - nT)/T]}{\pi(t - nT)/T}. \qquad (5.20)$$

The above expression indicates that the reconstructed continuous-time signal ĝa(t) is obtained by shifting
in time the impulse response hr(t) of the lowpass filter by an amount nT and scaling it in amplitude by
the factor g[n] for all integer values of n in the range −∞ < n < ∞ and then summing up all the shifted
versions. The ideal bandlimited interpolation process is illustrated in Figure 5.7.
Now, it can be shown that when Ωc = Ω_T/2 in Eq. (5.17), hr(0) = 1 and hr(nT) = 0 for n ≠ 0
(Problem 5.6). As a result, from Eq. (5.20), ĝa(rT) = g[r] = ga(rT) for all integer values of r in the
range −∞ < r < ∞, whether or not the condition of the sampling theorem has been satisfied. However,
ĝa(t) = ga(t) for all values of t only if the sampling frequency Ω_T satisfies the condition of Eq. (5.10) of
the sampling theorem.
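A direct, if inefficient, way of evaluating Eq. (5.20) on a dense time grid is sketched below in MATLAB. This is not a program from the text; the 2-Hz test cosine, the sampling period, and the finite number of terms retained from the infinite sum are illustrative assumptions.

% Minimal sketch of the ideal bandlimited interpolation of Eq. (5.20),
% with Omega_c = Omega_T/2; the signal and rates below are illustrative.
T = 0.1;                              % sampling period (Omega_T = 20*pi rad/sec)
n = -20:20;                           % finite number of samples (sum truncated)
gn = cos(2*pi*2*n*T);                 % samples of a 2-Hz cosine
t = -1:0.001:1;                       % dense grid on which to reconstruct
ghat = zeros(size(t));
for k = 1:length(n)
    % sinc(x) in MATLAB is sin(pi*x)/(pi*x), matching the kernel of Eq. (5.20)
    ghat = ghat + gn(k)*sinc((t - n(k)*T)/T);
end
plot(t, ghat, '-', t, cos(2*pi*2*t), '--');
xlabel('Time'); legend('reconstructed', 'original');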
It should be noted that the ideal analog lowpass filter of Eq. (5.17) has a doubly infinite length impulse
response and, thus, is unstable and noncausal, and does not have a rational transfer function, making it
unrealizable. An analog lowpass filter is also needed to bandlimit the continuous-time signal before it
is sampled to ensure that the conditions of the sampling theorem are satisfied.

Figure 5.7: The ideal bandlimited reconstruction by interpolation.

Figure 5.8: Illustration of the frequency-translation effect of sampling.

The magnitude response
specifications for both the anti-aliasing lowpass filter and the analog reconstruction filter therefore must
be modified to make them realizable. We review in Section 5.4 some commonly used methods for the
determination of the transfer function of realizable and stable analog lowpass filters approximating the
ideal magnitude characteristic of Eq. (5.16). In Sections 5.6 and 5.10 we consider specifically the issues
concerning the design of the anti-aliasing filter and the reconstruction filter, respectively.

5.2.3 Implications of the Sampling Process


Consider again the sampling of the three continuous-time signals of Example 5.1. From Figure 5.6(d) it
should be apparent that from the sampled version g1p(t) of the continuous-time signal g1(t) = cos(6πt),

we can also recover any of its frequency-translated versions cos[(20k ± 6)πt] outside the baseband by
passing g1p(t) through an ideal analog bandpass filter with a passband centered at Ω = (20k ± 6)π. For
example, to recover the signal cos(34πt), it will be necessary to employ a bandpass filter with a frequency
response

$$H_r(j\Omega) = \begin{cases} 0.1, & (34 - \Delta)\pi \le |\Omega| \le (34 + \Delta)\pi, \\ 0, & \text{otherwise}, \end{cases} \qquad (5.21)$$

where Δ is a small number. Likewise, we can recover the aliased baseband component cos(6πt) from the
sampled version of either g2p(t) or g3p(t) by passing it through an ideal lowpass filter:

$$H_r(j\Omega) = \begin{cases} 0.1, & (6 - \Delta)\pi \le |\Omega| \le (6 + \Delta)\pi, \\ 0, & \text{otherwise}. \end{cases} \qquad (5.22)$$

There is no aliasing distortion unless the original continuous-time signal also contains the component
cos(6πt). Similarly, from the sampled versions of either g2p(t) or g3p(t), we can recover any one of the
frequency-translated versions, including the parent continuous-time signal, cos(14πt) or cos(26πt) as the
case may be, by employing suitable ideal bandpass filters.
The frequency-translation effect of sampling is further illustrated in the following two examples.

Generalizing the above discussions, consider the continuous-time signals g1(t), g2(t), and g3(t) with
bandlimited frequency spectra G1(jΩ), G2(jΩ), and G3(jΩ), as shown in Figure 5.10(a) to (c),
respectively. Each of these continuous-time signals, when sampled at a sampling frequency of Ω_T, develops
a continuous-time signal gℓp(t), ℓ = 1, 2, 3, with an identical periodic frequency spectrum, as indicated
in Figure 5.10(d). Therefore, by passing the sampled signal through an appropriate analog lowpass or
bandpass filter of bandwidth greater than Ω2 − Ω1 but less than or equal to Ω_T/2, we can recover either
the original continuous-time signal or any one of its frequency-translated versions. Note that as long as
the spectrum of the continuous-time signal being sampled at a sampling rate of Ω_T is bandlimited to the
frequency range kΩ_T/2 ≤ |Ω| ≤ (k + 1)Ω_T/2, there is no aliasing distortion due to sampling, and it or
its frequency-translated version can always be recovered from the sampled signal by appropriate filtering.
There will be aliasing distortion only if there are frequency components in a wider frequency range than
that indicated.

Figure 5.9: Illustration of the effect of undersampling.

5.3 Sampling of Bandpass Signals


The conditions developed in Section 5.2.1 for the unique representation of a continuous-time signal by
the discrete-time signal obtained by uniform sampling assumed that the spectrum of the continuous-time
signal is bandlimited in the frequency range from dc to some frequency Ωm. Such continuous-time signals
are commonly referred to as lowpass signals. There are applications where the continuous-time signal is
bandlimited to a higher range ΩL ≤ |Ω| ≤ ΩH, where ΩL > 0. Such a signal is usually referred to as
a bandpass signal and is often obtained by modulating a lowpass signal. We can of course sample such
a bandpass continuous-time signal with a sampling rate greater than twice the highest frequency, i.e., by
ensuring

$$\Omega_T \ge 2\Omega_H,$$

to prevent aliasing. However, in this case, due to the bandpass spectrum of the continuous-time signal,
the spectrum of the discrete-time signal obtained by sampling will have spectral gaps with no signal
components present in these gaps. Moreover, if ΩH is very large, the sampling rate also has to be very
large, which may not be practical in some situations.
We outline next a more practical and efficient approach [Por97]. Let ΔΩ = ΩH − ΩL define the
bandwidth of the bandpass signal. Assume first that the highest frequency ΩH contained in the signal is
an integer multiple of the bandwidth, i.e.,

$$\Omega_H = M(\Delta\Omega).$$

We choose the sampling frequency Ω_T to satisfy the condition

$$\Omega_T = 2(\Delta\Omega) = \frac{2\Omega_H}{M}, \qquad (5.23)$$

Figure 5.10: Further illustration of the effect of sampling.

which is smaller than 2ΩH, the Nyquist rate. Substituting Eq. (5.23) in Eq. (5.9), we arrive at the expression
for the Fourier transform Gp(jΩ) of the impulse-sampled signal gp(t):

$$G_p(j\Omega) = \frac{1}{T} \sum_{k=-\infty}^{\infty} G_a\big(j(\Omega - 2k(\Delta\Omega))\big). \qquad (5.24)$$

As before, Gp(jΩ) consists of a sum of the original Fourier transform Ga(jΩ) and replicas of Ga(jΩ)
shifted by integer multiples of twice the bandwidth ΔΩ, and then scaled by 1/T. The amount of the
shift for each value of k ensures that there will be no overlap between the shifted replicas, and hence no
aliasing. Figure 5.11 shows the spectrum of the original continuous-time signal ga(t) and that of the
sampled version gp(t), sampled at the rate given by Eq. (5.23) for M = 4. As can be seen from this figure,
ga(t) can be recovered from gp(t) by passing the latter through an ideal bandpass filter with a passband
given by ΩL ≤ |Ω| ≤ ΩH and a gain of T.

Figure 5.11: Illustration of the effect in the frequency-domain of sampling a bandpass signal below the Nyquist rate when the highest frequency is an integer multiple of its bandwidth: (a) spectrum of the original bandpass signal, and (b) spectrum of the sampled bandpass signal.

Figure 5.12: Illustration of the effect in the frequency-domain of sampling a bandpass signal below the Nyquist rate when the highest frequency is not an integer multiple of its bandwidth: (a) spectrum of the original bandpass signal, and (b) spectrum of the sampled bandpass signal.

Note that any of the replicas in the lower frequency bands can be retained by passing gp(t) through
bandpass filters with passbands ΩL − k(ΔΩ) ≤ |Ω| ≤ ΩH − k(ΔΩ), 1 ≤ k ≤ M − 1, providing a
translation of the original bandpass signal to lower frequency ranges. If the bandpass signal has been
obtained by modulating a lowpass signal, then the latter can be recovered by passing gp(t) through a
lowpass filter with passband 0 ≤ |Ω| ≤ ΔΩ, which retains the replica in the baseband. This approach is
often employed in digital radio receivers.
If ΩH is not an integer multiple of the bandwidth ΩH − ΩL, we can artificially extend the bandwidth
either to the right or to the left so that the highest frequency contained in the bandpass signal is an integer
multiple of the extended bandwidth. For example, if we extend the bandwidth to the left by assuming
the lowest frequency contained in the bandpass signal to be Ω0, then Ω0 is chosen such that the extended
bandwidth ΩH − Ω0 is an integer multiple of ΔΩ. In both cases the spectrum of the sampled signal obtained
by sampling ga(t) will have small spectral gaps between the replicas. This is illustrated in Figure 5.12
when the bandwidth is extended to the left and M is chosen as 3.

Figure 5.13: Typical magnitude specifications for an analog lowpass filter.

As in the previous case, any of the replicas in the lower frequency bands can be retained by passing
gp(t) through appropriate bandpass filters.
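To make the savings concrete, consider a hypothetical bandpass signal occupying 90 kHz to 120 kHz; these numbers are illustrative and not from the text. The bandwidth is 30 kHz and the highest frequency is exactly 4 times the bandwidth, so Eq. (5.23) permits a sampling rate of only 60 kHz instead of the 240 kHz demanded by the lowpass Nyquist condition, as the short computation below shows.

% Hypothetical example of choosing a bandpass sampling rate via Eq. (5.23)
FL = 90e3;  FH = 120e3;      % band edges in Hz (illustrative values)
dF = FH - FL;                % bandwidth: 30 kHz
M  = FH/dF;                  % M = 4, an integer as required
FT = 2*FH/M                  % sampling rate: 60 kHz, well below 2*FH = 240 kHz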

5.4 Analog Lowpass Filter Design

There are a number of established approximation techniques for the design of analog lowpass filters [Vla69],
[Dan74], [Tem73], [Tem77]. We describe four widely used design techniques here without their detailed
derivations. Further details of these methods can be found in texts on analog filter design. Extensive tables
for the design of analog lowpass filters are also available [Chr66], [Skw65], [Zve67]. As indicated earlier,
a commonly used technique for the design of IIR digital filters is based on the conversion of a prototype
analog transfer function that has been designed employing one of the methods discussed here. The digital
filter design techniques are the subject of discussion in Chapter 7.

5.4.1 Filter Specifications


Both the anti-aliasing filter and the reconstruction filter of Figure 5.1 are of the lowpass type, and ideally
they should have a magnitude response of the form shown in Figure 5.5. In practice, the magnitude response
characteristics in the passband and in the stopband cannot be constant and are therefore specified with some
acceptable tolerances. Moreover, a transition band is specified between the passband and the stopband to
permit the magnitude to drop off smoothly. For example, the magnitude |Ha(jΩ)| of a lowpass filter may
be given as shown in Figure 5.13. As indicated in the figure, in the passband defined by 0 ≤ Ω ≤ Ωp, we
require

$$1 - \delta_p \le |H_a(j\Omega)| \le 1 + \delta_p, \qquad |\Omega| \le \Omega_p, \qquad (5.25)$$

or in other words, the magnitude approximates unity within an error of ±δp. In the stopband, defined by
Ωs ≤ Ω ≤ ∞, we require

$$|H_a(j\Omega)| \le \delta_s, \qquad |\Omega| \ge \Omega_s, \qquad (5.26)$$

implying that the magnitude approximates zero within an error of δs. The frequencies Ωp and Ωs are,
respectively, called the passband edge frequency and the stopband edge frequency.

Figure 5.14: Normalized magnitude specifications for an analog lowpass filter.

The limits of the tolerances in the passband and stopband, δp and δs, are called ripples. Usually these
ripples are specified in dB in terms of the peak passband ripple αp and the minimum stopband attenuation
αs, defined by

$$\alpha_p = -20\log_{10}(1 - \delta_p)\ \text{dB}, \qquad (5.27)$$

$$\alpha_s = -20\log_{10}(\delta_s)\ \text{dB}. \qquad (5.28)$$

Often, the filter specifications are given in terms of the loss function or attenuation function a(Ω) in
dB, which is defined as the negative of the gain in dB, i.e., −20 log10 |Ha(jΩ)|.

The magnitude response specifications for an analog lowpass filter, in some applications, are given in a
normalized form, as indicated in Figure 5.14. Here the maximum value of the magnitude in the passband
is assumed to be unity, and the passband ripple, denoted as 1/√(1 + ε²), is given by the minimum value of
the magnitude in the passband. The maximum passband gain or the minimum passband loss is therefore 0
dB. The maximum stopband ripple is denoted by 1/A, and the minimum stopband attenuation is therefore
given by −20 log10(1/A).
In analog filter theory, two additional parameters are defined. The first one, called the transition ratio
or selectivity parameter, is defined by the ratio of the passband edge frequency Ωp and the stopband edge
frequency Ωs, and is usually denoted by k, i.e.,

$$k = \frac{\Omega_p}{\Omega_s}. \qquad (5.29)$$

Note that for a lowpass filter, k < 1. The second one, called the discrimination parameter and denoted by
k1, is defined as

$$k_1 = \frac{\varepsilon}{\sqrt{A^2 - 1}}. \qquad (5.30)$$

Usually, k1 << 1.

5.4.2 Butterworth Approximation


The magnitude-squared response of an analog lowpass Butterworth filter Ha(s) of Nth order is given by

$$|H_a(j\Omega)|^2 = \frac{1}{1 + (\Omega/\Omega_c)^{2N}}. \qquad (5.31)$$

It can be easily shown that the first 2N − 1 derivatives of |Ha(jΩ)|² at Ω = 0 are equal to zero, and as a
result, the Butterworth lowpass filter is said to have a maximally flat magnitude at Ω = 0. The gain of the
Butterworth filter in dB is given by

$$\mathcal{G}(\Omega) = 10\log_{10}|H_a(j\Omega)|^2\ \text{dB}.$$

At dc, i.e., at Ω = 0, the gain in dB is equal to zero, and at Ω = Ωc, the gain is

$$\mathcal{G}(\Omega_c) = -10\log_{10} 2 \cong -3.01\ \text{dB},$$

and, therefore, Ωc is often called the 3-dB cutoff frequency. Since the derivative of the squared-magnitude
response or, equivalently, of the magnitude response is always negative for positive values of Ω, the
magnitude response is monotonically decreasing with increasing Ω. For Ω >> Ωc, the squared-magnitude
function can be approximated by

$$|H_a(j\Omega)|^2 \simeq \frac{1}{(\Omega/\Omega_c)^{2N}}.$$

The gain 𝒢(Ω2) in dB at Ω2 = 2Ω1, with Ω1 >> Ωc, is then given by

$$\mathcal{G}(\Omega_2) = -10\log_{10}\!\left(\frac{\Omega_2}{\Omega_c}\right)^{2N} = \mathcal{G}(\Omega_1) - 6N\ \text{dB},$$

where 𝒢(Ω1) is the gain in dB at Ω1. As a result, the stopband gain decreases by an additional 6 dB per
octave or, equivalently, by 20 dB per decade, for each increase of the filter order by one. In other words, the
passband and the stopband behaviors of the magnitude response improve, with a corresponding decrease
in the transition band, as the filter order N increases. A plot of the magnitude response of the normalized
Butterworth lowpass filter with Ωc = 1 for some typical values of N is shown in Figure 5.15.
The two parameters completely characterizing a Butterworth filter are therefore the 3-dB cutoff fre-
quency Ωc and the order N. These are determined from the specified passband edge Ωp, the minimum
passband magnitude 1/√(1 + ε²), the stopband edge Ωs, and the maximum stopband ripple 1/A. From
Eq. (5.31) we get

$$|H_a(j\Omega_p)|^2 = \frac{1}{1 + (\Omega_p/\Omega_c)^{2N}} = \frac{1}{1 + \varepsilon^2}, \qquad (5.32a)$$

$$|H_a(j\Omega_s)|^2 = \frac{1}{1 + (\Omega_s/\Omega_c)^{2N}} = \frac{1}{A^2}. \qquad (5.32b)$$

Figure 5.15: Typical Butterworth lowpass filter responses.

Solving the above, we arrive at the expression for the order N as

$$N = \frac{1}{2}\cdot\frac{\log_{10}\big[(A^2 - 1)/\varepsilon^2\big]}{\log_{10}(\Omega_s/\Omega_p)} = \frac{\log_{10}(1/k_1)}{\log_{10}(1/k)}. \qquad (5.33)$$

Since the order N of the filter must be an integer, the value of N computed using the above expression is
rounded up to the next higher integer. This value of N can be used next in either Eq. (5.32a) or (5.32b)
to solve for the 3-dB cutoff frequency Ωc. It is a usual practice to determine Ωc using Eq. (5.32b), which
satisfies the stopband specification at Ωs exactly while the passband specification is exceeded, providing
a safety margin at Ωp [Tem77]. However, if Eq. (5.32a) is used to solve for Ωc, then the passband
specification at Ωp is met exactly while the stopband specification at Ωs is exceeded.
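The order computation of Eq. (5.33) and the choice of Ωc from Eq. (5.32b) are easy to carry out directly, as in the MATLAB sketch below. This is not an example from the text: the 1-dB/40-dB attenuations and the 1-kHz/2-kHz edges are illustrative assumptions, with the passband ripple converted to ε² using the normalized specification of Figure 5.14.

% Minimal sketch of Eqs. (5.32b) and (5.33); the specifications are illustrative.
Wp = 2*pi*1000;  Ws = 2*pi*2000;     % passband and stopband edges in rad/sec
ep2 = 10^(1/10) - 1;                 % eps^2 for a 1-dB passband attenuation
A2  = 10^(40/10);                    % A^2 for a 40-dB stopband attenuation
N  = ceil(0.5*log10((A2 - 1)/ep2)/log10(Ws/Wp))   % Eq. (5.33), rounded up
Wc = Ws/(A2 - 1)^(1/(2*N))           % Eq. (5.32b): stopband spec met exactly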
The expression for the transfer function of the Butterworth lowpass filter is given by

$$H_a(s) = \frac{\Omega_c^N}{D_N(s)} = \frac{\Omega_c^N}{\prod_{\ell=1}^{N}(s - p_\ell)}, \qquad (5.34)$$

where

$$p_\ell = \Omega_c\, e^{\,j[\pi(N + 2\ell - 1)/2N]}, \qquad \ell = 1, 2, \ldots, N. \qquad (5.35)$$

The denominator DN(s) of Eq. (5.34) is known as the Butterworth polynomial of order N and is easy
to compute. These polynomials have been tabulated for easy design reference [Chr66], [Skw65], [Zve67].
The analog lowpass Butterworth filters can be readily designed using MATLAB (see Section 5.4.6).
The .analog lowpass Butterworth filters can be readily designed using MATLAB (see Section 5.4.6}.

Figure 5.16: Typical Type 1 Chebyshev lowpass filter responses with 1-dB passband ripple.

5.4.3 Chebyshev Approximation


In this case, the approximation error, defined as the difference between the ideal brickwall characteristic
and the actual response, is minimized over a prescribed band of frequencies. In fact, the magnitude error is
equiripple in the band. There are two types of Chebyshev transfer functions. In the Type 1 approximation,
the magnitude characteristic is equiripple in the passband and monotonic in the stopband, whereas in the
Type 2 approximation, the magnitude response is monotonic in the passband and equiripple in the stopband.

Type 1 Chebyshev Approximation


The magnitude-squared response of the analog lowpass Type 1 Chebyshev filter Ha(s) of Nth order is
given by

$$|H_a(j\Omega)|^2 = \frac{1}{1 + \varepsilon^2 T_N^2(\Omega/\Omega_p)}, \qquad (5.37)$$

where TN(Ω) is the Chebyshev polynomial of order N:

$$T_N(\Omega) = \begin{cases} \cos(N\cos^{-1}\Omega), & |\Omega| \le 1, \\ \cosh(N\cosh^{-1}\Omega), & |\Omega| > 1. \end{cases} \qquad (5.38)$$

The above polynomial can also be derived via a recurrence relation given by

$$T_r(\Omega) = 2\Omega\, T_{r-1}(\Omega) - T_{r-2}(\Omega), \qquad r \ge 2, \qquad (5.39)$$

with T0(Ω) = 1 and T1(Ω) = Ω.


Typical plots of the magnitude responses of the Type 1 Chebyshev lowpass filter are shown in Figure 5.16
for three different values of filter order N with the same passband ripple ε. From these plots it is seen that
the squared-magnitude response is equiripple between Ω = 0 and Ω = 1, and it decreases monotonically
for all Ω > 1.

The order N of the transfer function is determined from the attenuation specification in the stopband
at a particular frequency. For example, if at Ω = Ωs the magnitude is equal to 1/A, then from Eqs. (5.37)
and (5.38),

$$\frac{1}{1 + \varepsilon^2 T_N^2(\Omega_s/\Omega_p)} = \frac{1}{A^2}. \qquad (5.40)$$

Solving the above, we get

$$N = \frac{\cosh^{-1}\big(\sqrt{A^2 - 1}/\varepsilon\big)}{\cosh^{-1}(\Omega_s/\Omega_p)} = \frac{\cosh^{-1}(1/k_1)}{\cosh^{-1}(1/k)}. \qquad (5.41)$$

In computing N using the above expression, it is usually convenient to use the identity cosh⁻¹(y) =
ln(y + √(y² − 1)). As in the case of the Butterworth filter, the order N of the filter is chosen as the nearest
integer greater than or equal to the number given by Eq. (5.41).
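The same illustrative specifications used earlier for the Butterworth order (1-dB passband ripple, 40-dB minimum stopband attenuation, Ωs/Ωp = 2) can be inserted into Eq. (5.41); the short sketch below is an illustration under those assumptions, not an example from the text.

% Minimal sketch of the Type 1 Chebyshev order formula, Eq. (5.41)
ep = sqrt(10^(1/10) - 1);            % epsilon for a 1-dB passband ripple
A  = 10^(40/20);                     % A for a 40-dB minimum stopband attenuation
invk1 = sqrt(A^2 - 1)/ep;            % 1/k1, inverse discrimination parameter
invk  = 2;                           % 1/k = Ws/Wp (illustrative)
N = ceil(acosh(invk1)/acosh(invk))   % considerably lower than the Butterworth order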
The transfer function Ha(s) is again of the form of Eq. (5.34), with the poles given by

$$p_\ell = \sigma_\ell + j\Omega_\ell, \qquad \ell = 1, 2, \ldots, N, \qquad (5.42)$$

where

$$\sigma_\ell = -\Omega_p\, \xi \sin\!\left[\frac{(2\ell - 1)\pi}{2N}\right], \qquad
  \Omega_\ell = \Omega_p\, \zeta \cos\!\left[\frac{(2\ell - 1)\pi}{2N}\right], \qquad (5.43a)$$

$$\xi = \frac{\gamma^2 - 1}{2\gamma}, \qquad \zeta = \frac{\gamma^2 + 1}{2\gamma}, \qquad
  \gamma = \left(\frac{1 + \sqrt{1 + \varepsilon^2}}{\varepsilon}\right)^{1/N}. \qquad (5.43b)$$

Type 2 Chebyshev Approximation


The magnitude-squared response of the analog lowpass Type 2 Chebyshev filter, also known as the
inverse Chebyshev approximation, exhibits a monotonic behavior in the passband with a maximally flat
response at Ω = 0 and an equiripple behavior in the stopband. The squared-magnitude response expression
here is given by

$$|H_a(j\Omega)|^2 = \frac{1}{1 + \varepsilon^2\left[\dfrac{T_N(\Omega_s/\Omega_p)}{T_N(\Omega_s/\Omega)}\right]^2}. \qquad (5.44)$$

Typical responses are as shown in Figure 5.17. The transfer function of a Type 2 Chebyshev lowpass filter
is no longer an all-pole function and has both poles and zeros. If we write

$$H_a(s) = C_0\, \frac{\prod_{\ell=1}^{N}(s - z_\ell)}{\prod_{\ell=1}^{N}(s - p_\ell)}, \qquad (5.45)$$

the zeros are on the jΩ-axis and are given by

$$z_\ell = j\,\frac{\Omega_s}{\cos\!\left[\dfrac{(2\ell - 1)\pi}{2N}\right]}, \qquad \ell = 1, 2, \ldots, N. \qquad (5.46)$$



Figure 5.17: Typical Type 2 Chebyshev lowpass filter responses with 10-dB minimum stopband attenuation.

If N is odd, then for ℓ = (N + 1)/2, the zero is at s = ∞. The poles are located at

$$p_\ell = \sigma_\ell + j\Omega_\ell, \qquad \ell = 1, 2, \ldots, N, \qquad (5.47)$$

where

$$\sigma_\ell = \frac{\Omega_s\, \alpha_\ell}{\alpha_\ell^2 + \beta_\ell^2}, \qquad
  \Omega_\ell = -\frac{\Omega_s\, \beta_\ell}{\alpha_\ell^2 + \beta_\ell^2}, \qquad (5.48a)$$

$$\alpha_\ell = -\xi \sin\!\left[\frac{(2\ell - 1)\pi}{2N}\right], \qquad
  \beta_\ell = \zeta \cos\!\left[\frac{(2\ell - 1)\pi}{2N}\right], \qquad (5.48b)$$

$$\gamma = \left(A + \sqrt{A^2 - 1}\right)^{1/N}. \qquad (5.48c)$$

The order N of the Type 2 Chebyshev lowpass filter is determined from the given ε, Ωs, and A using Eq. (5.41).

5.4.4 Elliptic Approximation


An elliptic filter, also known as a Cauer filter, has an equiripple passband and an equiripple stopband
magnitude response, as indicated in Figure 5.18 for typical elliptic lowpass filters. The transfer function
of an elliptic filter meets a given set of filter specifications, passband edge frequency Ωp, stopband edge
frequency Ωs, passband ripple ε, and minimum stopband attenuation A, with the lowest filter order N.
The theory of elliptic filter approximation is mathematically quite involved, and a detailed treatment of this
topic is beyond the scope of this text. Interested readers are referred to the books by Antoniou [Ant93],
Parks and Burrus [Par87], and Temes and LaPatra [Tem77].

Figure 5.18: Typical elliptic lowpass filter responses with 1-dB passband ripple and 10-dB minimum stopband attenuation.

The square-magnitude response of an elliptic lowpass filter is given by

$$|H_a(j\Omega)|^2 = \frac{1}{1 + \varepsilon^2 R_N^2(\Omega/\Omega_p)}, \qquad (5.50)$$

where RN(Ω) is a rational function of order N satisfying the property RN(1/Ω) = 1/RN(Ω), with the
roots of its numerator lying within the interval 0 < Ω < 1 and the roots of its denominator lying in the
interval 1 < Ω < ∞. For most applications, the filter order meeting a given set of specifications of
passband edge frequency Ωp, passband ripple ε, stopband edge frequency Ωs, and the minimum stopband
ripple A can be estimated using the approximate formula [Ant93]

$$N \cong \frac{2\log_{10}(4/k_1)}{\log_{10}(1/\rho)}, \qquad (5.51)$$

where k1 is the discrimination parameter defined in Eq. (5.30) and ρ is computed as follows:

$$k' = \sqrt{1 - k^2}, \qquad (5.52a)$$

$$\rho_0 = \frac{1 - \sqrt{k'}}{2\,(1 + \sqrt{k'})}, \qquad (5.52b)$$

$$\rho = \rho_0 + 2(\rho_0)^5 + 15(\rho_0)^9 + 150(\rho_0)^{13}. \qquad (5.52c)$$

In Eq. (5.52a), k is the selectivity parameter defined in Eq. (5.29).
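The approximate order formula of Eqs. (5.51) and (5.52) can be evaluated directly, as sketched below for the same illustrative 1-dB/40-dB specifications with Ωs/Ωp = 2; none of these numbers comes from the text.

% Minimal sketch of the elliptic order estimate, Eqs. (5.51)-(5.52)
ep = sqrt(10^(1/10) - 1);  A = 10^(40/20);
k  = 1/2;                              % selectivity parameter Wp/Ws, Eq. (5.29)
k1 = ep/sqrt(A^2 - 1);                 % discrimination parameter, Eq. (5.30)
kp = sqrt(1 - k^2);                    % k' of Eq. (5.52a)
rho0 = (1 - sqrt(kp))/(2*(1 + sqrt(kp)));          % Eq. (5.52b)
rho  = rho0 + 2*rho0^5 + 15*rho0^9 + 150*rho0^13;  % Eq. (5.52c)
N = ceil(2*log10(4/k1)/log10(1/rho))   % Eq. (5.51), rounded up to an integer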


Figure 5.19: The phase responses of typical normalized Bessel lowpass filters.

5.4.5 Linear-Phase Approximation


The previous three approximation techniques are for developing analog lowpass transfer functions meeting
specified magnitude or gain response specifications without any concern for their phase responses. In a
number of applications it is desirable that the analog lowpass filter being designed have a linear-phase
characteristic in the passband, in addition to approximating the magnitude specifications. One way to
achieve this goal is to cascade an analog allpass filter with the filter designed to meet the magnitude
specifications, so that the phase response of the overall cascade realization approximates a linear-phase
response in the passband. This approach increases the overall hardware complexity of the analog filter and
may not be desirable for designing an analog anti-aliasing filter in some A/D conversion applications or for designing an
analog reconstruction filter in D/A conversion applications.
It is possible to design a lowpass filter that approximates a linear-phase characteristic in the passband.
Such a filter has an all-pole transfer function of the form

$$H_a(s) = \frac{d_0}{B_N(s)} = \frac{d_0}{d_0 + d_1 s + \cdots + d_{N-1}s^{N-1} + s^N}, \qquad (5.54)$$

and provides a maximally flat approximation to the linear-phase characteristic at Ω = 0, i.e., has a
maximally flat constant group delay at dc (Ω = 0). For a normalized group delay of unity at dc, the
denominator polynomial BN(s) of the transfer function, called the Bessel polynomial, can be derived via
the recursion relation

$$B_N(s) = (2N - 1)\,B_{N-1}(s) + s^2 B_{N-2}(s), \qquad (5.55)$$

starting with B1(s) = s + 1 and B2(s) = s² + 3s + 3. Alternatively, the coefficients of the Bessel
polynomial BN(s) can be found from

$$d_\ell = \frac{(2N - \ell)!}{2^{N-\ell}\, \ell!\, (N - \ell)!}, \qquad \ell = 0, 1, \ldots, N - 1. \qquad (5.56)$$

These filters are often referred to as Bessel filters. Figure 5.19 shows the phase responses of some typical
Bessel filters. It should be noted that the Bessel filter has a poorer magnitude response than that of the
lowpass filter of the same order designed using any one of the previous three techniques.
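Equation (5.56) is straightforward to evaluate; the short sketch below (not from the text) generates the coefficients of BN(s) for N = 3 and reproduces B3(s) = s³ + 6s² + 15s + 15, consistent with the recursion of Eq. (5.55).

% Minimal sketch: Bessel polynomial coefficients from Eq. (5.56)
N = 3;
l = 0:N-1;
d = factorial(2*N - l)./(2.^(N - l).*factorial(l).*factorial(N - l));
BN = [1 fliplr(d)]   % coefficients of B_N(s) in descending powers of s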

5.4.6 Analog Filter Design Using MATLAB


The Signal Processing Toolbox in MATLAB includes a number of M-files to directly develop analog transfer
functions for each one of the above approximation techniques. We next review these functions.

Butterworth Filter

The M-files for the design of analog Butterworth filters are

[z,p,k] = buttap(N)
[num,den] = butter(N,Wn,'s')
[num,den] = butter(N,Wn,'type','s')
[N,Wn] = buttord(Wp,Ws,Rp,Rs,'s')

The M-file buttap(N) computes the zeros, poles, and gain factor of the normalized analog Butterworth
lowpass filter transfer function of order N with a 3-dB cutoff frequency of 1. The output files are the
length-N column vector p providing the locations of the poles, a null vector z for the zero locations, and
the gain factor k. The form of the transfer function obtained is given by

$$H_a(s) = \frac{P_a(s)}{D_a(s)} = \frac{k}{(s - p(1))(s - p(2)) \cdots (s - p(N))}. \qquad (5.57)$$

To determine the numerator and denominator coefficients of the transfer function from the zeros and poles
computed, we need to use the M-file zp2tf(z,p,k).
Alternatively, we can use the M-file butter(N,Wn,'s') to design an order-N lowpass transfer
function with a prescribed 3-dB cutoff frequency at Wn rad/sec, a nonzero number. The output data of this
M-file are the numerator and the denominator coefficient vectors, num and den, respectively, in descending
powers of s. If Wn is a two-element vector [W1, W2] with W1 < W2, the M-file generates an order-2N
bandpass transfer function with 3-dB bandedge frequencies at W1 and W2, with both being nonzero numbers.
To design an order-N highpass or an order-2N bandstop filter, the M-file butter(N,Wn,'type','s')
is employed, where type = high for a highpass filter with a 3-dB cutoff frequency at Wn, or type =
stop for a bandstop filter with 3-dB stopband edges given by a two-element vector of nonzero numbers
Wn = [W1, W2] with W1 < W2.
The M-file buttord(Wp,Ws,Rp,Rs,'s') computes the lowest order N of a Butterworth analog
transfer function meeting the specifications given by the filter parameters Wp, Ws, Rp, and Rs, where Wp
is the passband edge angular frequency in rad/sec, Ws is the stopband edge angular frequency in rad/sec,
Rp is the maximum passband attenuation in dB, and Rs is the minimum stopband attenuation in dB. The
output data are the filter order N and the 3-dB cutoff angular frequency Wn in rad/sec. This M-file can
also be used to calculate the order of any one of the four basic types of analog Butterworth filters. For
lowpass design, Wp < Ws, whereas for highpass design, Wp > Ws. For the other two types, Wp and Ws
are two-element vectors specifying the passband and stopband edge frequencies.
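A typical call sequence combining buttord and butter is sketched below; the edge frequencies and attenuations are illustrative assumptions, not a design from the text.

% Sketch: minimum-order analog Butterworth lowpass design (illustrative specs)
Wp = 2*pi*1000;  Ws = 2*pi*2000;     % edge angular frequencies in rad/sec
Rp = 1;  Rs = 40;                    % passband and stopband attenuations in dB
[N, Wn] = buttord(Wp, Ws, Rp, Rs, 's');
[num, den] = butter(N, Wn, 's');     % analog lowpass transfer function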

Type 1 Chebyshev Filter


The M-files for the design of analog Type 1 Chebyshev filters are as follows:

[z,p,k] = cheb1ap(N,Rp)
[num,den] = cheby1(N,Rp,Wn,'s')
[num,den] = cheby1(N,Rp,Wn,'type','s')
[N,Wn] = cheb1ord(Wp,Ws,Rp,Rs,'s')

The M-file cheb1ap(N,Rp) computes the zeros, poles, and gain factor of the normalized analog
Type 1 Chebyshev lowpass filter transfer function of order N with a passband ripple in dB given by Rp.
The normalized passband edge frequency is set to 1. The output files are the column vector p providing

the locations of the poles, a null vector z for the zero locations, and the gain factor k. The form of the
transfer function is as in Eq. (5.57).
As in the previous case, the numerator and denominator coefficients of the transfer function can be
determined using the M-file zp2tf(z,p,k). The rational form of the Type 1 Chebyshev lowpass filter
transfer function can also be determined directly using the M-file cheby1(N,Rp,Wn,'s'), where Wn
is the passband edge angular frequency in rad/sec and Rp is the passband ripple in dB. The output data
are the vectors, num and den, containing the numerator and the denominator coefficients of the transfer
function in descending powers of s. If Wn is a two-element vector [W1, W2] with W1 < W2, the M-file
computes the transfer function of an order-2N bandpass filter with passband edge angular frequencies in
rad/sec given by W1 and W2. The M-file cheby1(N,Rp,Wn,'type','s') is employed for the other
two types of filter designs, where type = high for the highpass case and type = stop for the
bandstop case. Wn is a scalar representing the passband edge frequency for the highpass filter design, and
is a two-element vector defining the stopband edge frequencies for a bandstop filter design.
The M-file cheb1ord(Wp,Ws,Rp,Rs,'s') determines the lowest order N of a Type 1 Chebyshev
analog transfer function meeting the specifications given by the filter parameters Wp, Ws, Rp, and Rs,
where Wp is the passband edge angular frequency, Ws is the stopband edge angular frequency, Rp is the
passband ripple in dB, and Rs is the minimum stopband attenuation in dB. The output data are the filter
order N and the cutoff angular frequency Wn. This M-file can also be used to calculate the order of any one
of the four basic types of analog Type 1 Chebyshev filters. For the lowpass design, Wp < Ws, whereas for
the highpass design, Wp > Ws. For the other two types, Wp and Ws are two-element vectors specifying
the passband and stopband edge frequencies. All bandedge frequencies are specified in rad/sec.
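The corresponding Type 1 Chebyshev design steps are sketched below for the same illustrative specifications used earlier; this is not a worked example from the text.

% Sketch: minimum-order analog Type 1 Chebyshev lowpass design (illustrative specs)
[N, Wn] = cheb1ord(2*pi*1000, 2*pi*2000, 1, 40, 's');
[num, den] = cheby1(N, 1, Wn, 's');  % 1-dB passband ripple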

Type 2 Chebyshev Filter


The M-files for the design of analog Type 2 Chebyshev filters are

[z,p,k] = cheb2ap(N,Rs)
[num,den] = cheby2(N,Rs,Wn,'s')
[num,den] = cheby2(N,Rs,Wn,'type','s')
[N,Wn] = cheb2ord(Wp,Ws,Rp,Rs,'s')

The M-file cheb2ap(N,Rs) returns the zeros, poles, and gain factor of a normalized analog Type 2
Chebyshev lowpass filter of order N with a minimum stopband attenuation of Rs in dB. The normalized
stopband edge angular frequency is set to 1. The output data are the length-N column vectors z and p,
providing the locations of the zeros and the poles, respectively, and the gain factor k. If N is odd, z is of
length N-1. The form of the transfer function obtained is given by

$$H_a(s) = \frac{P_a(s)}{D_a(s)} = k\, \frac{(s - z(1))(s - z(2)) \cdots (s - z(N))}{(s - p(1))(s - p(2)) \cdots (s - p(N))}. \qquad (5.58)$$

The M-file cheby2(N,Rs,Wn,'s') can be employed to determine the transfer function of a Type
2 Chebyshev lowpass filter when Wn is a scalar defining the stopband edge angular frequency in rad/sec,
or a bandpass filter when Wn is a two-element vector defining the stopband edge angular frequencies in
rad/sec. The M-file cheby2(N,Rs,Wn,'type','s') provides the transfer function of a Type 2
Chebyshev highpass filter when type = high or a bandstop filter when type = stop. In all cases,
the specified minimum stopband attenuation is Rs in dB. The output data are the vectors, num and den,
containing the numerator and denominator coefficients in descending powers of s.
The M-file cheb2ord(Wp,Ws,Rp,Rs,'s') determines the lowest order N of a Type 2 Chebyshev
analog transfer function meeting the specifications given by the filter parameters Wp, Ws, Rp, and Rs, as
defined for the Type 1 Chebyshev filter.
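A hedged usage sketch for the Type 2 Chebyshev case follows, again with the illustrative specifications used above; the frequency returned by cheb2ord is passed directly to cheby2.

% Sketch: minimum-order analog Type 2 Chebyshev lowpass design (illustrative specs)
[N, Wn] = cheb2ord(2*pi*1000, 2*pi*2000, 1, 40, 's');
[num, den] = cheby2(N, 40, Wn, 's'); % 40-dB minimum stopband attenuation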
324 Chapter 5: Digital Processing of Continuous-Time Signais

Elliptic (Cauer) Filter


The M-files for the design of analog elliptic filters are

[z,p,k] = ellipap(N,Rp,Rs)
[num,den] = ellip(N,Rp,Rs,Wn,'s')
[num,den] = ellip(N,Rp,Rs,Wn,'type','s')
[N,Wn] = ellipord(Wp,Ws,Rp,Rs,'s')

The M-file ellipap(N,Rp,Rs) determines the zeros, poles, and gain factor of a normalized analog
elliptic lowpass filter of order N with a passband ripple of Rp dB and a minimum stopband attenuation
of Rs dB. The normalized passband edge angular frequency is set to 1. The output files are the length-N
column vectors z and p, providing the locations of the zeros and the poles, respectively, and the gain factor
k. If N is odd, z is of length N-1. The form of the transfer function obtained is as given in Eq. (5.58).
The M-file ellip(N,Rp,Rs,Wn,'s') returns the transfer function of an elliptic analog low-
pass filter when Wn is a scalar defining the passband edge angular frequency in rad/sec, or a bandpass
filter when Wn is a two-element vector defining the passband edge frequencies in rad/sec. The M-file
ellip(N,Rp,Rs,Wn,'type','s') is used to determine the transfer function of an elliptic highpass
filter when type = high, and Wn is a scalar defining the stopband edge angular frequency in rad/sec, or a
bandstop filter when type = stop, and Wn is a two-element vector defining the stopband edge angular
frequencies in rad/sec. In all cases, the specified passband ripple is Rp dB and the minimum stopband
attenuation is Rs in dB. The output files are the vectors, num and den, containing the numerator and
denominator coefficients in descending powers of s.
The M-file ellipord(Wp,Ws,Rp,Rs,'s') determines the lowest order N of an elliptic analog
transfer function meeting the specifications given by the filter parameters, Wp, Ws, Rp, and Rs, as defined
for the Type 1 Chebyshev filter.
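An analogous sketch for the elliptic case, again with the illustrative specifications and not a design taken from the text:

% Sketch: minimum-order analog elliptic lowpass design (illustrative specs)
[N, Wn] = ellipord(2*pi*1000, 2*pi*2000, 1, 40, 's');
[num, den] = ellip(N, 1, 40, Wn, 's');  % 1-dB ripple, 40-dB stopband attenuation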

Bessel Filter

For the design of a Bessel filter, the available M-files are

[z,p,k] = besselap(N)
[num,den] = besself(N,Wn)
[num,den] = besself(N,Wn,'type')

The M-file besselap(N) is employed to compute the zeros, poles, and gain factor of an order-N
Bessel lowpass filter prototype. The output data of this M-file are the length-N column vector p,
providing the locations of the poles, and the gain factor k. Since there are no zeros, the output vector z is
a null vector. The form of the transfer function is as in Eq. (5.57).
The M-file besself(N,Wn) is used to design an Nth-order analog lowpass Bessel filter with a 3-dB
cutoff angular frequency given by the scalar Wn in rad/sec. It generates the length-(N+1) vectors, num and
den, containing the numerator and the denominator coefficients in descending powers of s. If Wn is a
two-element vector, then it returns the transfer function of an order-2N analog bandpass Bessel filter. For
designing the other two types of Bessel filters, the function besself(N,Wn,'type') is used. Here
type = high with Wn representing the 3-dB stopband edge frequency in rad/sec for the highpass
case, or type = stop with Wn a two-element vector defining the 3-dB stopband edge frequencies in
rad/sec for the bandstop case.
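A minimal Bessel design sketch is shown below; the order and edge frequency are arbitrary illustrative choices following the description above.

% Sketch: fifth-order analog lowpass Bessel filter (illustrative parameters)
[num, den] = besself(5, 2*pi*1000);   % edge frequency of 2*pi*1000 rad/sec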

Limitations
The zero-pole-gain form is more accurate than the transfer function form for the design of Butterworth,
Type 2 Chebyshev, elliptic, or Bessel filters. It is recommended that the filter design functions in
these cases be used only for filter orders less than 15, since numerical problems may arise for filter orders
equal to or greater than 15.

Analog Lowpass Filter Design Examples


We provide below several examples illustrating the use of some of the above functions in the design of analog
filters. In the first three examples, we repeat Examples 5.5 through 5.7 to determine the order of the transfer
function using the respective M-files. In the remaining examples, we determine the corresponding transfer
functions and then compute the frequency response using the M-file freqs(num,den,w), where num
and den are the vectors of the numerator and denominator coefficients in descending powers of s, and w
is a set of specified discrete angular frequencies. This function generates a complex vector of frequency
response samples from which magnitude and/or phase response samples can be readily computed.
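Gain plots of the kind shown in Figures 5.20 through 5.23 can be generated along the following lines. This sketch is not one of the worked examples; the fourth-order Type 1 Chebyshev design and the frequency grid used here are illustrative assumptions.

% Sketch: computing and plotting a gain response with freqs (illustrative design)
[num, den] = cheby1(4, 1, 2*pi*1000, 's');   % fourth-order, 1-dB ripple, 1-kHz edge
w = 0:2*pi*10:2*pi*6000;                     % angular frequency grid in rad/sec
h = freqs(num, den, w);                      % complex frequency response samples
plot(w/(2*pi), 20*log10(abs(h))); grid on;
xlabel('Frequency, Hz'); ylabel('Gain, dB');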

Figure 5.20: The gain response of the normalized fourth-order Butterworth lowpass filter of Example 5.11.


Figure 5.21: The gain response of the Butterworth lowpass filter of Example 5.12.


Figure 5.22: The gain response of the Type 1 Chebyshev lowpass filter of Example 5.13.

Figure 5.23: The gain response of the elliptic lowpass filter of Example 5.14.


5.4.7 A Comparison of the Filter Types


In the previous four sections we have described four types of analog lowpass filter approximations, three of
which have been developed primarily to meet the magnitude response specifications, while the fourth has
been developed primarily to provide a near linear-phase approximation. In order to determine which filter
type to choose to meet a given magnitude response specification, we need to compare the performances of
the four types of approximations. To this end, we compare here the frequency responses of the normalized
Butterworth, Chebyshev, and elliptic analog lowpass filters of the same order. The passband ripples of the
Type 1 Chebyshev and the equiripple filters are assumed to be the same, while the minimum stopband
attenuations of the Type 2 Chebyshev and the equiripple filters are assumed to be the same. The filter speci-
fications used for comparison are as follows: filter order of 6, passband edge at Ω = 1, maximum passband
deviation of 1 dB, and minimum stopband attenuation of 40 dB. The frequency responses computed using
MATLAB are plotted in Figure 5.24.
As can be seen from Figure 5.24. the Butterworth filter has the widest transition band, with a mono-
tonically decreasing gain response. Both: types. of Chebyshev fiJten; have a transition hand of equal width
that is smaHer than ~:hat of the Butterworth filter but greater than that of tl":e elliptic filter. The Type 1
Chebyshev filter provides a slightty faster roll-off in the tra.m;ition band than the Type 2 Chebyshev filter,
The magnitude response of the Type 2 Chebyshev filter in the passband i.s. nearly identical to that of the
Butterworth filter. The elliptic filter has the narrowest transition band, with an equiripple passband and an
equinpple stopband re<iponre.
The Butterworth and Chebyshev filters have a nearly linear phase response over about three-fourths
of the passband, whereas the elliptic filter has a nearly linear phase response over about one-half of the
passband. On the other hand, the Bessel filter may be more attractive if the linearity of the phase response
over a larger portion of the passband is desired at the expense of a poorer gain response. Figure 5.25 shows
the gain and phase responses of a sixth-order Bessel filter frequency scaled to have a passband edge at
Ω = 1 with a maximum passband deviation of 1 dB. However, the Bessel filter provides a minimum of
40 dB attenuation at approximately Ω = 6.4 and, as a result, has the largest transition band compared to
the other three types.
Another way of comparing the performances of the Butterworth, Chebyshev, and elliptic filters would
be to compare the order of these filters required to meet the same filter specifications. For example, consider
the specifications of a lowpass filter: passband edge at Ω = 1, maximum passband deviation of 1 dB,
stopband edge at Ω = 1.2, and minimum stopband attenuation of 40 dB. These specifications are met by
a Butterworth filter of order 29, a Chebyshev Type 1 or 2 filter of order 10, and an elliptic filter of order 6.
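The order estimates quoted above can be verified with the order-estimation functions of the MATLAB Signal
Processing Toolbox. The following is a minimal sketch of that check; only the specifications stated in the text
are used.

    Wp = 1; Ws = 1.2; Rp = 1; Rs = 40;          % specifications given above
    [Nb, Wnb] = buttord(Wp, Ws, Rp, Rs, 's');   % Butterworth order, returns 29
    [Nc, Wnc] = cheb1ord(Wp, Ws, Rp, Rs, 's');  % Type 1 Chebyshev order, returns 10
    [Ne, Wne] = ellipord(Wp, Ws, Rp, Rs, 's');  % elliptic order, returns 6

The corresponding transfer functions can then be obtained with butter, cheby1, and ellip, each called with
the 's' option for an analog design.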

5.5 Design of Analog Highpass, Bandpass, and Bandstop Filters
All of the four types of approximations discussed in the previous section dealt with the design of analog
lowpass filters meeting the prescribed specifications. Design of the other three classes of analog filters,
namely, the highpass, bandpass, and bandstop filters, can be carried out by simple spectral transformations
of the frequency variable [Tem77]. The design process involves the development of the specifications
of a prototype analog lowpass filter from the specifications of the desired analog filter using a frequency
transformation, design of the analog prototype lowpass filter, and then determination of the transfer function
of the desired analog filter by applying the inverse of the frequency transformation used to determine the
specifications of the prototype lowpass filter.
To eliminate the confusion between the Laplace transform variable of the prototype analog lowpass
transfer function H_LP(s) and that of the desired analog transfer function H_D(ŝ), we shall use different
symbols. Thus, we shall use s to denote the Laplace transform variable of the prototype analog lowpass

Figure 5.24: A comparison of the frequency responses of the four types of analog lowpass filters: (a) gain responses, (b) passband details, and (c) phase responses.

Figure 5.25: The frequency responses of a sixth-order analog Bessel filter: (a) gain response, and (b) unwrapped phase response.

filter H_LP(s) and ŝ to denote the Laplace transform variable of the desired analog filter H_D(ŝ). The angular
frequency variables in the s- and ŝ-domains are denoted by Ω and Ω̂, respectively.
The mapping from the s-domain to the ŝ-domain is given by the invertible transformation

    s = F(ŝ).

The transfer functions H_LP(s) and H_D(ŝ) are related through

    H_D(ŝ) = H_LP(s)|_{s = F(ŝ)},

    H_LP(s) = H_D(ŝ)|_{ŝ = F⁻¹(s)}.

5.5.1 Analog Highpass Filter Design


A prototype analog lowpass transfer function H_LP(s) with a passband edge frequency Ω_p can be trans-
formed into an analog highpass transfer function H_HP(ŝ) with a passband edge frequency Ω̂_p using the
spectral transformation

    s = Ω_p Ω̂_p / ŝ.    (5.59)

On the imaginary axis the above transformation reduces to

    Ω = −Ω_p Ω̂_p / Ω̂.    (5.60)
The above mapping implies that the passband of the lowpass filter in the positive frequency range 0 ≤ Ω ≤
Ω_p is mapped into the passband of the highpass filter in the negative frequency range −∞ ≤ Ω̂ ≤ −Ω̂_p,
and the passband of the lowpass filter in the negative frequency range −Ω_p ≤ Ω ≤ 0 is mapped into the
passband of the highpass filter in the positive frequency range Ω̂_p ≤ Ω̂ ≤ ∞. Likewise, the stopband of
the lowpass filter in the positive frequency range Ω_s ≤ Ω ≤ ∞ is mapped into the stopband of the highpass
filter in the negative frequency range −Ω̂_s ≤ Ω̂ ≤ 0, and the stopband of the lowpass filter in the negative
frequency range −∞ ≤ Ω ≤ −Ω_s is mapped into the stopband of the highpass filter in the positive
frequency range 0 ≤ Ω̂ ≤ Ω̂_s. The mapping of Eq. (5.59) ensures that the gain value of |H_LP(jΩ)| of
the prototype lowpass filter in its passband will appear in the passband |Ω̂| ≥ Ω̂_p of the desired highpass
filter. Likewise, the gain value of the prototype lowpass filter in its stopband |Ω| ≥ Ω_s will appear in the
stopband 0 ≤ |Ω̂| ≤ Ω̂_s of the desired highpass filter.
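In MATLAB, the transformation of Eq. (5.59) is carried out by the function lp2hp, which applies s = Ω̂_p/s to a
prototype normalized so that Ω_p = 1. The following is a minimal sketch; the third-order 0.5-dB Chebyshev
prototype and the 4-kHz highpass edge are assumed illustrative values, not taken from the text.

    [b, a] = cheby1(3, 0.5, 1, 's');   % prototype lowpass with Omega_p = 1
    Wo = 2*pi*4000;                    % desired highpass passband edge (assumed value)
    [bh, ah] = lp2hp(b, a, Wo);        % numerator and denominator of H_HP(s-hat)

The gain response of the resulting highpass filter can then be examined with freqs(bh, ah).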

Figure 5.26: (a) Gain response of the prototype analog lowpass filter, and (b) gain response of the desired analog highpass filter of Example 5.15.

5.5.2 Analog Bandpass Filter Design


A prototype analog lowpass transfer function H_LP(s) with a passband edge frequency Ω_p can be trans-
formed into an analog bandpass transfer function H_BP(ŝ) with a lower passband edge frequency Ω̂_p1 and
an upper passband edge frequency Ω̂_p2 using the spectral transformation

    s = Ω_p (ŝ² + Ω̂_o²) / (ŝ (Ω̂_p2 − Ω̂_p1)).    (5.61)

On the imaginary axis, the above transformation reduces to

    Ω = −Ω_p (Ω̂_o² − Ω̂²) / (Ω̂ B_w),    (5.62)
where B_w = (Ω̂_p2 − Ω̂_p1) denotes the width of the passband of the bandpass filter. It follows from the
above equation that the frequency Ω = 0 is mapped onto the frequency Ω̂_o, which is called the passband
center frequency of the bandpass filter. It also follows from Eq. (5.62) that the frequency Ω_p maps into
the frequencies Ω̂_p2 and −Ω̂_p1. Moreover, the frequency range −Ω_p ≤ Ω ≤ Ω_p of the lowpass filter is
mapped into the frequency ranges −Ω̂_p2 ≤ Ω̂ ≤ −Ω̂_p1 and Ω̂_p1 ≤ Ω̂ ≤ Ω̂_p2 of the bandpass filter. It can
be shown that

    Ω̂_p1 Ω̂_p2 = Ω̂_s1 Ω̂_s2 = Ω̂_o².    (5.63)

Thus, the two passband edge frequencies exhibit geometric symmetry with respect to the center frequency
Ω̂_o. Likewise, the stopband edge frequencies exhibit geometric symmetry with respect to the center
frequency. If the bandedge frequencies do not satisfy the condition of Eq. (5.63), one of them needs to be
changed to a new value so that it is satisfied while introducing some safety margin [Tem77]. For example,
if

    Ω̂_p1 Ω̂_p2 > Ω̂_s1 Ω̂_s2,

then either Ω̂_p1 can be decreased to Ω̂_s1 Ω̂_s2 / Ω̂_p2, or Ω̂_s1 can be increased to Ω̂_p1 Ω̂_p2 / Ω̂_s2, to satisfy the
condition of Eq. (5.63). In the former case, the new passband will be larger than the original desired
passband, whereas, in the latter case, the left transition band will be smaller than the original value. On
the other hand, if

    Ω̂_p1 Ω̂_p2 < Ω̂_s1 Ω̂_s2,

then either Ω̂_p2 can be increased or Ω̂_s2 can be decreased to satisfy the condition of Eq. (5.63). Moreover,
if the gain of the lowpass filter at some frequency is α dB, the same gain value is obtained for the bandpass
filter at two positive frequencies Ω̂_a and Ω̂_b, with the latter frequencies exhibiting geometric symmetry
with respect to Ω̂_o.
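A sketch of the corresponding MATLAB step follows; lp2bp applies the transformation s = (ŝ² + Ω̂_o²)/(B_w ŝ) to a
prototype normalized so that Ω_p = 1, matching Eq. (5.61). The bandedge values and the fourth-order elliptic
prototype below are assumed for illustration only.

    Wp1 = 2*pi*10e3; Wp2 = 2*pi*14e3;   % desired passband edges (assumed values)
    Wo = sqrt(Wp1*Wp2);                 % passband center frequency, from Eq. (5.63)
    Bw = Wp2 - Wp1;                     % passband width
    [b, a] = ellip(4, 1, 40, 1, 's');   % prototype lowpass with Omega_p = 1
    [bb, ab] = lp2bp(b, a, Wo, Bw);     % numerator and denominator of H_BP(s-hat)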

Figure 5.27: (a) Gain response of the prototype analog lowpass filter, and (b) gain response of the desired analog bandpass filter of Example 5.16.

5.5.3 Analog Bandstop Filter Design


An analog prototype lowpass transfer function H_LP(s) with a passband edge frequency Ω_p can be trans-
formed into an analog bandstop transfer function H_BS(ŝ) with a lower stopband edge frequency Ω̂_s1 and
an upper stopband edge frequency Ω̂_s2 using the spectral transformation

    s = Ω_s ŝ (Ω̂_s2 − Ω̂_s1) / (ŝ² + Ω̂_o²).    (5.64)
On the imaginary axis, the above transformation reduces to

    Ω = Ω_s Ω̂ B_w / (Ω̂_o² − Ω̂²),    (5.65)

where B_w = (Ω̂_s2 − Ω̂_s1) is the width of the stopband of the bandstop filter. Here the frequency Ω = 0
is mapped onto the frequency Ω̂_o, which is called the stopband center frequency of the bandstop filter. It
follows from Eq. (5.65) that the stopband Ω_s ≤ |Ω| ≤ ∞ of the lowpass filter is mapped into the frequency
ranges −Ω̂_s2 ≤ Ω̂ ≤ −Ω̂_s1 and Ω̂_s1 ≤ Ω̂ ≤ Ω̂_s2 of the bandstop filter. Moreover, as in the case
of the analog bandpass filter, the bandedge frequencies here also exhibit geometric symmetry with respect
to the center frequency, i.e.,

    Ω̂_p1 Ω̂_p2 = Ω̂_s1 Ω̂_s2 = Ω̂_o².

Since the design of the analog bandstop filter is very similar to that of the analog bandpass filter, we
leave it as an exercise for the reader (Problem 5.27 and Exercise M5.10).
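The MATLAB function lp2bs carries out this step; it applies the transformation s = B_w ŝ/(ŝ² + Ω̂_o²) to a
prototype normalized so that Ω_s = 1, which matches Eq. (5.64). A minimal sketch follows; the stopband edges
and the fifth-order Type 2 Chebyshev prototype are assumed illustrative values.

    Ws1 = 2*pi*2e3; Ws2 = 2*pi*3e3;     % desired stopband edges (assumed values)
    Wo = sqrt(Ws1*Ws2);                 % stopband center frequency
    Bw = Ws2 - Ws1;                     % stopband width
    [b, a] = cheby2(5, 40, 1, 's');     % prototype lowpass with stopband edge at 1
    [bs, as] = lp2bs(b, a, Wo, Bw);     % numerator and denominator of H_BS(s-hat)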

5.6 Anti-Aliasing Filter Design


According to the sampling theorem of Section 5.2.1, a bandlimited continuous-time signal g_a(t) can be
fully recovered from its uniformly sampled version if the condition of Eq. (5.10) is satisfied, i.e., if g_a(t)
is sampled at a sampling frequency Ω_T that is at least twice the highest frequency Ω_m contained in g_a(t).
If this condition is not satisfied, the original continuous-time signal g_a(t) cannot be recovered from its
sampled version because of the distortion caused by aliasing. In practice, g_a(t) is passed through an analog
anti-aliasing lowpass filter prior to sampling to enforce the condition of Eq. (5.10). This analog filter is
the first circuit in the interface between the continuous-time and the discrete-time domains and is studied
in this section.
Ideally, the anti-aliasing filter H_a(s) should have a lowpass frequency response H_a(jΩ) given by

    H_a(jΩ) = { 1,   |Ω| ≤ Ω_T/2,
                0,   |Ω| > Ω_T/2.    (5.66)

Such a "brickwall" type frequency response cannot be realized using practical analog circuit components
and, hence, must be approximated. A practical anti-aliasing filter therefore should have a magnitude
response approximating unity in the passband with an acceptable tolerance, a stopband magnitude response
exceeding a minimum attenuation level, and an acceptable transition band separating the passband and the
stopband, with a transmission zero at infinity. In addition, in many applications, it is also desirable to have
a linear-phase response in the passband. The passband edge frequency Ω_p, the stopband edge frequency
Ω_s, and the sampling frequency Ω_T must satisfy the relation

    Ω_p < Ω_s ≤ Ω_T / 2.    (5.67)

The passband edge frequency Ω_p is determined by the highest frequency in the continuous-time signal
g_a(t) that must be faithfully preserved in the sampled version. Since signal components with frequencies
greater than Ω_T/2 appear as frequencies less than Ω_T/2 due to aliasing, the attenuation level of the anti-
aliasing filter at frequencies greater than Ω_T/2 is determined by the amount of aliasing that can be tolerated
in the passband. The maximum aliasing distortion comes from the signal components in the replicas of the
input spectrum adjacent to the baseband.⁶ It follows from Figure 5.28 that the frequency Ω_o = Ω_T − Ω_p
is aliased into Ω_p, and if the acceptable amount of aliased spectrum at Ω_p is α_p = −20 log₁₀(1/A), then
the minimum attenuation of the anti-aliasing filter at Ω_o must also be α_p [Jac96].

⁶It is tacitly assumed here that the magnitude response of the anti-aliasing filter is monotonically decreasing in the stopband.
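The required anti-aliasing filter order follows directly from these relations. The lines below are a minimal
sketch; the 8-kHz sampling rate, 3.4-kHz passband edge, 0.5-dB passband ripple, and 40-dB aliasing suppression
are assumed values chosen only for the example.

    FT = 8000; Fp = 3400;                     % assumed sampling rate and passband edge, Hz
    Fo = FT - Fp;                             % frequency that aliases onto the passband edge
    [N, Wn] = buttord(2*pi*Fp, 2*pi*Fo, 0.5, 40, 's');   % required Butterworth order
    [b, a] = butter(N, Wn, 's');              % candidate anti-aliasing transfer function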
336 Chapter 5: Digital Processing of Con1inuous-Time Signals

Figure 5.28: Anti-aliasing filter magnitude response and its effect in the signal band of interest.

Table 5.1: Approximate minimum stopband attenuation of a Butterworth lowpass filter.

    Ω_s / Ω_p           2         3         4
    Attenuation (dB)    6.02N     9.54N     12.04N

In practice, the sampling frequency chosen depends on the specific application. In applications requir-
ing minimal aliasing, the sampling rate is typically chosen to be 3 to 4 times the passband edge Ω_p of the
anti-aliasing analog filter. In noncritical applications, a sampling rate of twice the passband edge Ω_p of the
anti-aliasing analog filter is more than adequate. For example, in pulse-code modulation (PCM) telephone
systems, the voice signal is first bandlimited to 4 kHz by an anti-aliasing analog filter with a passband edge
at 3.6 kHz and a stopband edge at 4 kHz. These specifications are typically met by a third-order elliptic
lowpass filter. The output of the filter is then sampled at 8 kHz.
Requirements for the analog anti-aliasing filter can be relaxed by oversampling the analog signal and
then decimating the high-sampling-rate digital signal to the desired low-rate digital signal. The decimation
process can be implemented completely in the digital domain by first passing the high-rate digital signal
through a digital anti-aliasing filter and then down-sampling its output. To understand the advantages of
the oversampling approach, consider the sampling of an analog signal bandlimited to a frequency Ω_m.
Figure 5.29 shows the spectra of the sampled versions of this signal sampled at two different rates, Ω_T
Figure 5.29: Analog anti-aliasing filter requirements for two different oversampling rates.

and Ω'_T = 2Ω_T, where Ω_T is slightly higher than the Nyquist rate of 2Ω_m. These figures also show the
desired frequency response of the analog anti-aliasing filter in both cases. Note that the transition band of
the analog anti-aliasing filter in the latter case is considerably more than 3 times that needed in the former
situation. As a result, the filter specifications are met more easily with a much lower order analog filter.
For the design of the anti-aliasing filter, any one of the four approximations described in Section 5.4
can be employed. Of the four types, the Butterworth approximation provides a reasonable compromise
between the desired magnitude response and a linear-phase response in the passband for a given filter order.
For an improved phase response at the expense of a poorer magnitude response, the Bessel approximation
can be used. On the other hand, for an improved magnitude response with a poorer phase response, either
the Chebyshev or the elliptic approximation can be used, with the latter providing the smallest aliasing
error for a given filter order. However, in the latter cases, it is necessary to ensure that the transfer function
has a zero at infinity. Otherwise, the tails of all the shifted spectra will add up to infinity.
Once the transfer function H_a(s) for the anti-aliasing filter meeting the requirements has been deter-
mined, it can be implemented in a number of ways, such as a passive RLC filter, an active-RC filter, or a
switched-capacitor filter [Lar93]. A detailed discussion of these implementations is beyond the scope of
this book, and we refer the reader to the texts listed at the end of this book [Dar76], [Tem77], [Tem73].

5.7 Sample-and-Hold Circuit⁸
As indicated earlier, the bandlimited output of the analog anti-aliasing filter is fed into a sample-and-hold
(S/H) circuit, which is the second circuit in the interface between the continuous-time and discrete-time
domains. It samples the analog signal at uniform intervals and holds the sampled value after each sampling
operation for sufficient time for accurate conversion by the A/D converter.
The basic S/H circuit, shown in Figure 5.30, operates as follows: During the sampling phase the
periodically operated analog switch S remains closed, allowing the capacitor C to track the input analog
signal x_a(t) and to charge up to a voltage equal to that of the input. During the hold period, the switch
remains open, permitting the charged capacitor to hold the voltage across it until the next sampling phase
begins. The operation of the switch is controlled by a digital clock signal. The voltage follower at the
output of the S/H circuit acts as a buffer between the capacitor C and the input stage of the A/D converter.
Typical input-output waveforms of an S/H circuit are as shown in Figure 5.31, where the dotted line
represents the input continuous-time signal x_a(t) and the solid line represents the output x_d(t), assuming
instantaneous change from the sample mode to the hold mode and vice versa.
⁸This section has been adapted from [Mit80] by permission of the author and the publisher.

Figure 5.30: The basic sample-and-hold circuit.

Figure 5.31: Input-output waveforms of a sample-and-hold circuit.

Practical S/H circuits are often much more complex than the basic circuit of Figure 5.30. They typically
include an additional operational amplifier at the input to provide better isolation between the source and
the capacitor and better tracking of the input signal. They may contain additional circuit components to
minimize the effect of the hold voltage decay, which otherwise may occur due to the leakage through the
input resistance of the output buffer amplifier and through the finite OFF resistance of the switch. The
major parameters characterizing the performance of a practical S/H circuit are acquisition time, aperture
time, and droop. The total time needed to switch from the hold mode to the sample mode and acquire the
input analog signal within a specified accuracy is defined as its acquisition time. This parameter depends
on the switching delay time, the time constant of the RC circuit, and the dynamic performance of the output
operational amplifier, determined primarily by its slew time and settling time. The time taken by the switch
to change from the sample mode to the hold mode is defined as the aperture time. The hold voltage drift
per second due to leakage out of the holding capacitor is called the droop. In addition to these parameters,
nonideal properties of the operational amplifiers used in the design of the S/H circuit should also be taken
into account in determining the overall performance of the circuit.

5.8 Analog-to-Digital Converter


The next step in the digital processing of an analog signal is the conversion of the output of the S/H circuit
in its hold mode to a digital form by means of an analog-to-digital (A/D) converter. For digital signal
processing applications, the output of the A/D converter is usually in binary code. The output is a sequence
of words, with each word representing a sample of the sequence. The wordlength of the A/D converter
output, given by the number of bits, limits the achievable dynamic range of the converter and its accuracy
Figure 5.32: Block diagram representation of an analog comparator.

in representing the input analog signal. The accuracy of conversion of an ideal A/D converter is expressed
in terms of its resolution, which is determined by the number of discrete levels that can be assumed by the
A/D converter output. For an output coded in natural binary form with an N-bit wordlength, the number of
available discrete levels is 2^N, and as a result, the resolution or accuracy is 1 part in 2^N, or 100/2^N percent.
There is a variety of A/D converters that are used in signal processing applications [Lar93]. In all of
these converters, the analog comparator is an important circuit component. The analog comparator is a
device that compares two analog voltages at its input and develops a binary output indicating which input
voltage level is larger. With respect to its circuit symbol shown in Figure 5.32, the input-output relation of
an analog comparator is thus as follows:

    V_o = { V⁺,   if V₁ > V₂,
            V⁻,   if V₁ < V₂,    (5.69)

where V⁺ > V⁻. Usually these circuits are designed such that their output voltage levels are compatible
with the logic levels of conventional digital circuits.
We briefly review next the operations of the following types of A/D converters: (1) flash A/D converter,
(2) serial-parallel A/D converter, (3) successive-approximation A/D converter, (4) counting A/D converter,
and (5) oversampling A/D converter. A detailed discussion of their design and implementation is beyond
the scope of this book.

5.8.1 Flash A/D Converters

In an N-bit flash converter, shown in block diagram form in Figure 5.33, the input analog voltage V_A is
compared simultaneously with a set of 2^N − 1 uniformly separated reference voltage levels by means of
a set of 2^N − 1 analog comparators, and the location of the adjacent comparators for which the outputs
change from V⁻ to V⁺ indicates the range of the input voltage. A logic encoder circuit is then used to
convert the location information into an N-bit binary code. The set of reference voltages is usually derived
by a potential-divider resistor string. In a flash A/D converter, all output bits are developed simultaneously,
and as a result, it is the fastest converter, with a conversion time given by the comparator switching time plus
the propagation delay of the encoder circuit. However, the hardware requirements of this type of converter
increase very rapidly (exponentially) with an increase in the resolution. As a result, flash converters are
employed for low-resolution (typically 8-bit or less) and high-speed conversion applications.

5.8.2 Serial-Parallel A/D Converter

Two N/2-bit flash converters in a serial-parallel configuration can be employed to reduce the hardware
complexity of an N-bit flash converter at a slight increase in the conversion time [Lar93]. One such scheme,
called a subranging A/D converter, is shown in Figure 5.34. Here, a coarse approximation of the input
analog voltage V_A composed of the N/2 most significant bits (MSBs) is first generated by means of one
of the N/2-bit flash converters. These MSBs are then fed into a D/A converter whose analog output is
subtracted from V_A. The difference voltage is scaled by an amplifier of gain 2^{N/2} and converted into
digital form by the second (fine) N/2-bit flash converter, which provides the least significant bits (LSBs).
Figure 5.33: Block diagram representation of a flash unipolar A/D converter.

Figure 5.34: Block diagram representation of a subranging A/D converter.

Figure 5.35: Block diagram representation of a ripple A/D converter.

The second scheme, which also utilizes two N/2-bit flash converters, is called the ripple A/D converter, as
shown in Figure 5.35. In this scheme, the coarse N/2-bit A/D converter has two functions. It generates
the N/2 MSBs, and it controls a reference voltage generator that develops a reference voltage V'_R for the
fine N/2-bit A/D converter generating the N/2 LSBs.
In both of the two serial-parallel A/D converters, while one of the N/2-bit A/D converters is operating,
the other is idle, permitting the use of a single N/2-bit A/D converter twice in one conversion period. A
generalization of the above two schemes employing N 1-bit A/D converters is called the pipelined A/D
converter [Lar93].
Figure 5.36: Block diagram representation of a successive-approximation A/D converter. (Adapted from [Mit80] by permission of the author and the publisher.)

5.8.3 Successive-Approximation A/D Converter

In this type of converter, essentially a trial-and-error approach is used successively to obtain the digital word
representing the input analog voltage V_A [Mit80]. The basic idea behind the operation of this converter
can be explained with the aid of its block diagram representation given in Figure 5.36. The conversion
procedure is an iterative process. At the kth step of the iteration, the digital approximation stored in the shift
register is converted into an analog voltage V_D by the D/A converter in the system. If V_D < V_A, then the
digital number is increased by setting to ONE the (k + 1)th bit to the right of the kth bit, which is assumed
to be a ONE. If, on the other hand, V_D > V_A, then the digital number is decreased by setting the kth bit to a
ZERO and the (k + 1)th bit to a ONE. The above process is followed for all k from k = 1, 2, ..., N. After
the Nth bit has been examined, the conversion process is terminated, with the contents of the shift register
representing the digital equivalent of the input analog voltage. In practice, to round off the approximation
to the nearest discrete level, an analog equivalent to one-half of the value of the LSB is added to the analog
input signal before conversion begins.
The successive-approximation A/D converter can be designed with high resolution and reasonably
high speed at a moderate cost and is therefore widely used in digital signal processing applications. The
performance of this type of A/D converter depends primarily on the performance of its constituent D/A
converter and the analog comparator.
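The bit-by-bit decision procedure is easy to mimic in software. The following is a behavioral sketch only, not
a model of the circuit of Figure 5.36; the input value, full-scale voltage, and wordlength are assumed for
illustration.

    VA = 0.6372; VFS = 1; N = 8;              % assumed input, full scale, and wordlength
    bits = zeros(1, N);                       % digital word, MSB first
    for k = 1:N
        bits(k) = 1;                          % tentatively set the kth bit to ONE
        VD = VFS * sum(bits .* 2.^(-(1:N)));  % analog equivalent from the internal D/A
        if VD > VA
            bits(k) = 0;                      % trial overshoots, so reset the bit to ZERO
        end
    end                                       % after N trials, bits holds the conversion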

5.8.4 Counting A/D Converter

In the counting A/D converter, shown in block diagram form in Figure 5.37, an N-bit counter begins
counting clock pulses when the conversion process is started [Mit80]. The analog equivalent V_D of the
digital word of the counter, formed by means of a D/A converter at each clock cycle, is compared with
the input analog voltage level V_A. The conversion process is terminated as soon as V_D > V_A. Here, the
conversion time depends on the value of V_A. It is maximum when the digital word in the counter becomes
all ONEs, and thus, for an N-bit binary counter, the conversion time is equal to (2^N − 1)T, where T is
the clock period. This is a long time for N > 10, and hence, this technique can be used only in low-speed
applications.
Figure 5.37: Block diagram representation of a counting A/D converter. (Adapted from [Mit80] by permission of the author and publisher.)

Figure 5.38: Block diagram representation of an oversampling sigma-delta A/D converter.

5.8.5 Oversampling Sigma-Delta A/D Converter

As the name implies, in this type of converter the analog signal is sampled at a rate much higher than
the Nyquist rate, resulting in very closely spaced samples. As a consequence, the difference between
the amplitudes of two consecutive samples is very small, permitting it to be represented in digital form
using very few bits, usually by one bit. The sampling rate is then decreased by passing the digital signal
first through a factor-of-M decimator to lower the sampling rate from M F_T to F_T. The decimator is
designed by cascading an anti-aliasing lowpass Mth band digital filter, to reduce its bandwidth to π/M,
and a factor-of-M down-sampler.⁹ The wordlength of the down-sampler output determines the resolution
of the oversampling A/D converter, and it is much higher than that of the high-rate digital signal due to
the effect of digital filtering. The basic block diagram representation of the oversampling A/D converter
is shown in Figure 5.38. This type of converter, often called a sigma-delta A/D converter, is discussed in
more detail in Section 11.12.

5.8.6 Characteristics of a Practical A/D Converter¹⁰

A practical A/D converter is a nonideal device and exhibits a variety of errors that are usually determined
in terms of the input analog values at which the transitions in the digital output take place, since these
transitions can be measured accurately. In order to understand the effects of these errors on the performance
of an A/D converter, consider first its operation as an ideal device. The input-output relation of an ideal
3-bit A/D converter is shown in Figure 5.39. The error introduced by an ideal A/D converter is simply the
difference between the value of the analog input and the analog equivalent of the digital representation;
this difference is called the quantization error. It follows from Figure 5.39 that the quantization error e[n]

⁹The design of a decimator is considered in Section 10.2.
¹⁰This section has been adapted from [Mit80] by permission of the author and the publisher.
Figure 5.39: Input-output characteristics of an ideal 3-bit A/D converter: (a) bipolar converter, and (b) unipolar converter.

Figure 5.40: Linearity errors in a practical A/D converter.

for an ideal converter satisfies

    −δ/2 < e[n] ≤ δ/2,    (5.70)
where δ is called the quantization step and is given by

    δ = R_FS / 2^N    (5.71)

for an N-bit wordlength, with R_FS denoting the full-scale range of the converter. Note that δ is precisely the value of the LSB.
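A minimal numerical sketch of the ideal rounding quantizer follows; the full-scale range, wordlength, and test
samples are assumed values.

    RFS = 1; N = 3; delta = RFS/2^N;       % quantization step (value of the LSB)
    x = RFS*(rand(1, 5) - 0.5);            % assumed bipolar test samples
    xq = delta*round(x/delta);             % ideal rounding quantizer output
    e = x - xq;                            % quantization error, no larger than delta/2 in magnitude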
A practical A/D converter exhibits a linearity error if the difference between two consecutive transition
values of the input is not equal over the complete range of the input, as illustrated in Figure 5.40. The
maximum value of this difference over the full range is called the differential nonlinearity (DNL)
error. Note from Figure 5.40(b) that in some cases severe nonlinearity in the input-output relation may
result in missing codes at the output.
The A/D converter exhibits a gain or scale-factor error if the difference between the last and first
transitions is not equal to the full-scale value minus 1/2 LSB. An offset error occurs if all transitions are
shifted by an equal amount from the ideal transition locations. These errors are illustrated in Figure 5.41.
The time needed by the A/D converter to generate the full digital equivalent of the input analog signal
is called its word-conversion time, whereas the time required to generate a single bit is called the bit-
conversion time. In the flash A/D converter, since all bits of the output digital word are generated at the
same time, the word-conversion time is essentially equal to the bit-conversion time. On the other hand, in
Figure 5.41: (a) Offset and (b) gain errors in a practical A/D converter.

Figure 5.42: Block diagram representation of an N-bit D/A converter.

the successive-approximation A/D converter, the word-conversion time is equal to the bit-conversion time
multiplied by the wordlength.
It should be noted that an overflow error occurs if the input analog voltage V_A exceeds the dynamic
range of the A/D converter. It is therefore important to ensure that V_A is properly scaled before it is fed
into the A/D converter.

5.9 Digital-to-Analog Converter

The final step in the digital processing of analog signals is the conversion of the digital filter output into an
analog form, which is accomplished by a digital-to-analog (D/A) converter followed by a reconstruction
filter. The basic idea behind the most commonly used D/A converters can be explained by means of the
simplified block diagram representation shown in Figure 5.42, where we have assumed, without any loss
of generality, that the digital sample is positive and represented in a natural binary code. Here the ℓth
switch S_ℓ is in its ON position if the ℓth binary bit a_ℓ = 1, and it is in the OFF position if a_ℓ = 0. The
output V_o of the D/A converter is then given by

    V_o = Σ_{ℓ=1}^{N} 2^{ℓ−1} a_ℓ V_R.    (5.72)
Figure 5.43: Schematic representation of an N-bit weighted-resistor unipolar D/A converter.

There are a variety of D/A converters that are used in signal processing applications [Lar93]. We discuss
below only the following types: (1) weighted-resistor D/A converter, (2) resistor-ladder D/A converter,
and (3) oversampling D/A converter.

5.9.1 Weighted-Resistor D/A Converter

The schematic of an N-bit weighted-resistor D/A converter is shown in Figure 5.43. Here the operation of
the switches is as shown in Figure 5.42. It can be shown that the output V_o of the D/A converter is given
simply by

    V_o = Σ_{ℓ=1}^{N} 2^{ℓ−1} a_ℓ [ R_L / ((2^N − 1)R_L + 1) ] V_R.    (5.73)

The full-scale output voltage V_o,FS is obtained when all a_ℓ's are ONEs. Then, from Eq. (5.73),

    V_o,FS = [ (2^N − 1)R_L / ((2^N − 1)R_L + 1) ] V_R.

In practice, usually (2^N − 1)R_L ≫ 1 and, as a result, V_o,FS ≅ V_R.

Usually a buffer amplifier is placed at the output to provide gain and prevent loading. For a D/A
converter with a moderate to high resolution, the spread of the resistor values becomes very large, making
this type of converter unsuitable for many applications.
Based on the same principle as discussed above, a weighted-capacitor D/A converter can be designed.
Such circuits are more popular in IC design.

5.9.2 Resistor-Ladder D/A Converter

This type of converter is probably the most widely used in practice. From its schematic representation
shown in Figure 5.44, it can be shown that the D/A converter output V_o is given by

    V_o = Σ_{ℓ=1}^{N} a_ℓ [ R_L / (2(R_L + R)) ] ( V_R / 2^{N−ℓ} ).    (5.74)

Because of the resistor values used and the ladder-like circuit connection, the structure is often referred to
as the R-2R ladder D/A converter. In practice, often 2R_L ≫ R and, hence, the full-scale output voltage is

    V_o,FS ≅ [ (2^N − 1) / 2^N ] V_R.

As in the previous case, a buffer amplifier is also placed at the output to provide gain and prevent loading.
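In the limit 2R_L ≫ R, Eq. (5.74) reduces to a binary-weighted sum of the input bits. The lines below are a
minimal numerical sketch of that limiting form; the reference voltage, wordlength, and input code are assumed
values, with a(1) taken as the LSB and a(N) as the MSB.

    VR = 1; N = 4;                            % assumed reference voltage and wordlength
    a = [1 0 1 1];                            % assumed input code, a(1) = LSB
    l = 1:N;
    Vo = VR * sum(a .* 2.^(-(N - l + 1)));    % gives 0.8125*VR for this code
    VoFS = VR * (2^N - 1)/2^N;                % full-scale output, 0.9375*VR here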
Figure 5.44: Schematic representation of an N-bit resistor-ladder (R-2R ladder) unipolar D/A converter.

Figure 5.45: Block diagram representation of an oversampling sigma-delta D/A converter.

5.9.3 Oversampling Sigma-Delta D/A Converter

The basic scheme employed in an oversampling sigma-delta D/A converter is shown in Figure 5.45. Here,
the sampling rate of the input b-bit digital signal is first increased from F_T to L F_T by a factor-of-L
interpolator implemented by an up-sampler followed by an Lth band digital lowpass filter.¹¹ The output
of the interpolator is fed into a digital sigma-delta quantizer, which creates a single-bit output by extracting
only the MSB. The MSB is then converted to analog form by a 1-bit D/A converter followed by an analog
lowpass filter. The LSBs form the error signal that is subtracted from the interpolator output in the summer
of the sigma-delta quantizer. A detailed analysis of the operation of the oversampling sigma-delta D/A
converter is provided in Section 11.13.

5.9.4 Characteristics of a Practical D/A Converter¹²

A practical D/A converter is characterized by a number of parameters. The effects of these parameters on
the performance of a D/A converter are best understood by first examining the input-output relation of an
idealized device. Figure 5.46(a) shows the input-output relation of a 3-bit unipolar D/A converter. Here
the analog outputs for all possible digital inputs are shown as vertical "bars."
The resolution of a D/A converter is defined in a manner identical to that of an A/D converter. For an
N-bit D/A converter with the input digital word coded in natural binary form, the resolution is therefore 1
part in 2^N − 1.
In an ideal D/A converter, the analog outputs as a function of the input discrete levels will be on a
straight line going through the origin, as shown in Figure 5.46(a), with the difference between the outputs
for two consecutive input digital signals being 1 LSB. In a practical D/A converter, the actual outputs
evaluated for each possible input digital signal may be unevenly distributed instead of being on a straight
line, as indicated in Figure 5.46(b). The integral linearity (INL) error is defined as the maximum deviation
from the straight line. A measure of the variation in the difference of the analog outputs corresponding to

¹¹The design of an interpolator is treated in Section 10.2.
¹²This section has been adapted from [Mit80] by permission of the author and the publisher.
Figure 5.46: Input-output relation of a unipolar D/A converter: (a) ideal converter, and (b) linearity errors in a practical D/A converter.

Figure 5.47: Input-output relation of a practical D/A converter: (a) offset error, and (b) gain error.

two consecutive digital inputs is called the differential nonlinearity. If the analog outputs are on a straight
line that is uniformly shifted by an equal amount for each input discrete value, as shown in Figure 5.47(a),
the D/A converter is said to have an offset error. If the differences between the actual analog outputs and
their corresponding ideal analog outputs increase linearly for increases in the digital inputs, as indicated
in Figure 5.47(b), the D/A converter is said to exhibit a gain error.
The accuracy of a practical D/A converter is defined by the maximum deviation of its measured output
from that of an ideal D/A converter. The word-conversion time of a D/A converter is given by the time
taken to decode a digital word.
The finite turn-on and turn-off times of analog switches in the D/A converter, in general, are not equal,
and as a result, they give rise to a dynamic error called a glitch. Consider the situation when at time t = t_n,
the N-bit word is given as the MSB being ONE and the remaining bits as ZEROs. Assume that at time
t = t_{n+1}, the MSB changes to a ZERO with all other bits becoming ONEs. If the turn-on time of the analog
Figure 5.48: Typical output waveforms: (a) ideal D/A converter, and (b) a practical D/A converter.

switch is greater than its turn-off time, then the Nth switch is turned off first, and momentarily all switches
in the D/A converter are in their OFF positions, resulting in a false analog output equal to zero before the
switches settle to their correct positions. The temporary state of all switches being in the OFF positions
causes a narrow pulse or spike of half the height of full scale to appear at the converter output. These
pulses appearing during the transition periods are called glitches. If such glitches are undesirable, they can
be eliminated by placing an S/H circuit at the output of the D/A converter, which holds the previous D/A
converter output until the glitches disappear and then acquires and holds the new output.

5.10 Reconstruction Filter Design

The output of the D/A converter is finally passed through an analog reconstruction or smoothing filter to
eliminate all the replicas of the spectrum outside the baseband. As indicated earlier in Section 5.2.2, this
filter ideally should have a frequency response such as given by Eq. (5.16). If the cutoff frequency Ω_c of
the reconstruction filter is chosen as Ω_T/2, where Ω_T is the sampling angular frequency, the corresponding
frequency response is given by

    H_r(jΩ) = { T,   |Ω| ≤ Ω_T/2,
                0,   |Ω| > Ω_T/2.    (5.75)

If we denote the input to the D/A converter as y[n], then from Eq. (5.20) the reconstructed analog equivalent
y_a(t) is given by

    y_a(t) = Σ_{n=−∞}^{∞} y[n] sin[π(t − nT)/T] / [π(t − nT)/T].    (5.76)

Since the ideal reconstruction filter of Eq. (5.75) has a doubly infinite impulse response, it is noncausal and
thus unrealizable. In practice, it is necessary to use filters that approximate the ideal lowpass frequency
response.
Almost always, a practical D/A converter unit contains a zero-order hold circuit at its output, producing
a staircase-like analog waveform y_z(t), as shown in Figure 5.48(b). It is therefore important to analyze
the effect of the zero-order hold circuit in order to determine the specifications for the smoothing lowpass
filter that should follow the overall D/A converter structure.
The zero-order hold operation can be modeled by an ideal impulse-train D/A output y_p(t) followed
by a linear, time-invariant analog circuit with an impulse response h_z(t) that is a rectangular pulse of
width T and unity height, as indicated in Figure 5.49. It follows from this figure that if Y_p(jΩ) denotes the
continuous-time Fourier transform of y_p(t), the output of the ideal D/A converter, then the continuous-time
Fourier transform Y_z(jΩ) of y_z(t), the output of the zero-order hold circuit, is simply given by

    Y_z(jΩ) = H_z(jΩ) Y_p(jΩ),    (5.77)
Figure 5.49: (a) Modeling of the zero-order hold operation, and (b) impulse response of the zero-order hold circuit.
Figure 5.50: Magnitude responses of (a) the zero-order hold circuit, (b) the output of the ideal D/A converter, and (c) the output of the practical D/A converter.

where

    H_z(jΩ) = (1 − e^{−jΩT}) / (jΩ) = e^{−jΩT/2} [ sin(ΩT/2) / (Ω/2) ].    (5.78)

The magnitude response of the zero-order hold circuit, as indicated in Figure 5.50(a), has a lowpass
characteristic with zeros at ±Ω_T, ±2Ω_T, ..., where Ω_T = 2π/T is the sampling angular frequency.
Figure 5.50(b) shows the magnitude response |Y_p(jΩ)|, which is a periodic function of Ω with a period Ω_T.
Since the frequency response Y_z(jΩ) of the analog output y_z(t) of the zero-order hold circuit is a product
of Y_p(jΩ) and H_z(jΩ), the zero-order hold circuit somewhat attenuates the unwanted replicas centered
at multiples of the sampling frequency Ω_T, as sketched in Figure 5.50(c). An analog reconstruction filter,
also called a smoothing filter, H_r(jΩ) thus follows a practical D/A converter unit and is designed to further
attenuate the residual portions of the signal spectrum centered at multiples of the sampling frequency Ω_T.
Moreover, it should also compensate for the amplitude distortion, more commonly called droop, caused
by the zero-order hold circuit in the band from dc to Ω_T/2.

The general specifications for the analog reconstruction filter H_r(jΩ) can be easily determined if the
effect of the droop is neglected. If Ω_c denotes the highest frequency of the signal y_p(t) that should be
preserved at the output of the reconstruction filter, then the lowest-frequency component present in the
residual images in the output of the zero-order hold circuit is of frequency Ω_o = Ω_T − Ω_c. The zero-order
hold circuit has a gain at frequency Ω_o given by

    20 log₁₀ |H_z(jΩ_o)| = 20 log₁₀ [ sin(Ω_o T/2) / (Ω_o T/2) ].    (5.79)

Therefore, if the system specification calls for a minimum attenuation of A_s dB of all frequency com-
ponents in the residual images, then the reconstruction filter should provide at least an attenuation of
A_s + 20 log₁₀ |H_z(jΩ_o)| dB at Ω_o. For example, if the normalized value of Ω_c is 0.7π, the lowest
normalized frequency of the residual images is 1.3π, where the gain of the zero-order hold circuit is
−7.2 dB. For a minimum attenuation of 50 dB of all signal components in the residual images at
the output of the zero-order hold, the reconstruction filter must therefore provide at least an attenuation of
42.8 dB at the frequency 1.3π.
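The numbers in this example are easy to verify. A minimal check in MATLAB, using only the normalized
frequency quoted in the text:

    w = 1.3*pi;                                  % lowest normalized image frequency
    zoh_dB = 20*log10(abs(sin(w/2)/(w/2)));      % zero-order hold gain, about -7.2 dB
    filt_dB = 50 + zoh_dB;                       % attenuation left to the filter, about 42.8 dB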
The droop caused by the zero-order hold circuit can be compensated either before the D/A converter
by means of a digital filter or after the zero-order hold circuit by the analog reconstruction filter. For the
latter approach, we observe that the cascade of the zero-order hold circuit and the analog reconstruction
filter must have a frequency response equal to that of an ideal reconstruction filter following an ideal D/A
converter. If we denote the frequency response of the ideal reconstruction filter as H_r(jΩ) and that of the
actual reconstruction filter as Ĥ_r(jΩ), then we require

    H_z(jΩ) Ĥ_r(jΩ) = H_r(jΩ),    (5.80)

where H_r(jΩ) is as given by Eq. (5.16). Therefore, from Eq. (5.80) the desired frequency response of the
actual reconstruction filter is given by

    Ĥ_r(jΩ) = { e^{jΩT/2} (ΩT/2) / sin(ΩT/2),   |Ω| ≤ Ω_c,
                0,                              |Ω| > Ω_c,    (5.81)

to ensure a faithful reconstruction of the original signal g_a(t). The modified reconstruction filter also has
a noncausal impulse response defined for −∞ < t < ∞ and is therefore unrealizable. As a result, an
analog filter approximating the magnitude response of Ĥ_r(jΩ) must be designed.
Alternatively, the effect of the droop can be compensated by including a digital compensation filter
G(z) prior to the D/A converter circuit, with a modest increase in the digital hardware requirements. The
digital compensation filter can be either an FIR or an IIR type. The desired frequency response of the digital
compensation filter is given by

    G(e^{jω}) = (ω/2) / sin(ω/2),   0 ≤ |ω| ≤ π.    (5.82)

Two very low order digital compensation filters are as follows [Jac96]:

    G_FIR(z) = −1/16 + (9/8) z^{−1} − (1/16) z^{−2},    (5.83a)

    G_IIR(z) = 9 / (8 + z^{−1}).    (5.83b)

Figure 5.51 shows the gain responses of the uncompensated and the droop-compensated D/A converters in
the baseband. Since the above digital compensation filters have a periodic frequency response of period Ω_T,
the replicas of the baseband magnitude response outside the baseband need to be suppressed sufficiently
to ensure minimal effect from aliasing. Even though the zero-order hold circuit in the D/A converter
provides some attenuation of these unwanted replicas [see Figure 5.50(c)], it may be necessary for the
analog reconstruction filter following the D/A converter to provide additional attenuation.
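A quick way to see how well such a low-order compensator tracks Eq. (5.82) is to plot both responses over the
baseband. The sketch below uses the FIR compensator of Eq. (5.83a) as reconstructed here and is meant only as
an illustration.

    w = linspace(1e-3, pi, 512);                  % baseband frequency grid
    Gideal = (w/2)./sin(w/2);                     % ideal droop compensation, Eq. (5.82)
    Gfir = freqz([-1 18 -1]/16, 1, w);            % FIR compensator of Eq. (5.83a)
    plot(w/pi, 20*log10(Gideal), w/pi, 20*log10(abs(Gfir))), grid on
    xlabel('\omega/\pi'), ylabel('Gain, dB')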
Figure 5.51: Gain responses of the uncompensated and compensated DAC in the baseband.

Figure 5.52: Illustration of the averaging operation caused by the S/H circuit.

5.11 Effect of Sample-and-Hold Operation

The frequency-domain analysis of the sampling of a continuous-time signal discussed in Section 5.2.1
assumed ideal sampling, generating an impulse train representation of the sampled signal. As indicated
in Figure 5.1, in most applications, the sampling operation is provided by an S/H circuit. In principle,
the S/H circuit samples the analog signal at each sampling instant and holds a constant value equal to the
sampled value for a finite and short period of time to permit the A/D converter to convert it into its digital
form. However, in practice, as indicated in Figure 5.31, the S/H circuit tracks the analog signal x_a(t) over
a small interval ε. The overall effect, as illustrated in Figure 5.52, is to develop an average of the analog
signal over this interval, which is held constant at the input of the A/D converter. We now analyze in the
frequency domain the effect of the averaging operation of the S/H circuit [Por97].
From Figure 5.52 it follows that the nth sample value x[n] of the impulse train output x_p(t) of a
practical S/H circuit is given by

    x[n] = (1/ε) ∫_{nT}^{nT+ε} x_a(t) dt.    (5.84)

To understand the effect of the above averaging operation, denote

    g_a(t) = ∫_{−∞}^{t} x_a(τ) dτ + K,    (5.85)

where K is a constant of arbitrary value. Then, from Eq. (5.84) we get

    x[n] = (1/ε) [ g_a(nT + ε) − g_a(nT) ].    (5.86)
Figure 5.53: Equivalent representation of a practical S/H circuit.

The impulse train g_p(t) with sample values g_a(nT) is obtained by an ideal sampling of the analog signal
g_a(t), and it follows from Eq. (5.85) and the differentiation property of the continuous-time Fourier
transform (CTFT) that the CTFT of g_p(t) is simply (1/jΩ) X_a(jΩ), where X_a(jΩ) is the CTFT of x_a(t).
Using the time-shifting property, the CTFT of the impulse train with sample values g_a(nT + ε) is obtained
by multiplying (1/jΩ) X_a(jΩ) by the corresponding linear-phase factor. Hence, from Eqs. (5.14a) and (5.86),
the discrete-time Fourier transform (DTFT) of the discrete-time signal x[n] appearing at the input to the
A/D converter can be expressed as

    X(e^{jω}) = (1/T) Σ_{k=−∞}^{∞} X̄_a( j(ω/T − k Ω_T) ),    (5.87)

where

    X̄_a(jΩ) = [ (1 − e^{−jΩε}) / (jΩε) ] X_a(jΩ) = e^{−jΩε/2} [ sin(Ωε/2) / (Ωε/2) ] X_a(jΩ).    (5.88)

It follows from the above equation that the averaging operation performed by a practical S/H circuit is
equivalent to passing the continuous-time signal x_a(t) through an LTI system with a frequency response
e^{−jΩε/2} [sin(Ωε/2)/(Ωε/2)], followed by an ideal impulse-train sampling, as indicated in Figure 5.53.
Note that the frequency response of this system is similar in form to that of the zero-order hold circuit
given in Eq. (5.78) and shown in Figure 5.50(a). Thus the system of Figure 5.53 acts like a narrowband
lowpass filter which performs the averaging operation. If the tracking period ε is much smaller than the
sampling period T, as is usually the case, the effect of the lowpass filter can be neglected and the practical
S/H circuit can be considered as an ideal sampler.
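The size of this aperture effect is easy to gauge numerically. In the sketch below the tracking time (ε in the
text, tau in the code) is assumed to be 1% of the sampling period; the resulting droop across the baseband is
negligible.

    T = 1; tau = 0.01*T;                         % assumed tracking interval, 1% of T
    Om = linspace(0, pi/T, 256);                 % baseband 0 <= Omega <= Omega_T/2
    H = ones(size(Om));
    H(2:end) = sin(Om(2:end)*tau/2)./(Om(2:end)*tau/2);   % magnitude of the aperture effect, Eq. (5.88)
    worst_dB = 20*log10(min(H));                 % about -0.0004 dB at the band edge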

5.12 Summary

Various issues concerned with the digital processing of continuous-time signals are studied in this chapter.
A discrete-time signal is obtained by uniformly sampling a continuous-time signal. The discrete-time
representation is unique if the sampling frequency is greater than twice the highest frequency contained
in the continuous-time signal, and the latter can be fully recovered from its discrete-time equivalent by
passing it through an ideal analog lowpass reconstruction filter with a cutoff frequency that is half the
sampling frequency. If the sampling frequency is lower than twice the highest frequency contained in
the continuous-time signal, in general, the latter cannot be recovered from its discrete-time version due
to aliasing. In practice, the continuous-time signal is first passed through an analog lowpass anti-aliasing
filter, with the cutoff frequency chosen as half of the sampling frequency, whose output is sampled to prevent
aliasing. It is also shown that a bandpass continuous-time signal can be recovered from its discrete-time
equivalent by undersampling, provided the highest frequency is an integer multiple of the bandwidth of the
continuous-time signal and the sampling frequency is greater than twice the bandwidth.
A brief review of the theory behind some popular analog lowpass filter design techniques is included,
and their design using MATLAB is illustrated. Also discussed are the procedures for designing analog
highpass, bandpass, and bandstop filters, and their implementations using MATLAB. The specifications of
the analog filters are usually given in terms of the locations of the passband and stopband edge frequencies

and the passband and stopband ripples. Effects of these parameters on the performances of the anti-aliasing
and reconstruction filters are examined.
Other interface devices involved in the digital processing of continuous-time signals are the sample-and-
hold circuit, comparator, analog-to-digital converter, and digital-to-analog converter. A brief introduction
to these devices is included for completeness.

5.13 Problems

5.1 Show that the periodic impulse train p(t) defined in Eq. (5.4) can be expressed as a Fourier series as given by
Eq. (5.5).

5.2 A continuous-time signal x_a(t) is composed of a linear combination of sinusoidal signals of frequencies 150 Hz,
450 Hz, 1.0 kHz, 2.75 kHz, and 4.05 kHz. The signal x_a(t) is sampled at a 1.5-kHz rate, and the sampled sequence is
passed through an ideal lowpass filter with a cutoff frequency of 750 Hz, generating a continuous-time signal y_a(t).
What are the frequency components present in the reconstructed signal y_a(t)?

5.3 A continuous-time signal x_a(t) is composed of a linear combination of sinusoidal signals of frequencies F1 Hz,
F2 Hz, F3 Hz, and F4 Hz. The signal x_a(t) is sampled at an 8-kHz rate, and the sampled sequence is then passed
through an ideal lowpass filter with a cutoff frequency of 3.8 kHz, generating a continuous-time signal y_a(t) composed
of three sinusoidal signals of frequencies 450 Hz, 625 Hz, and 950 Hz, respectively. What are the possible values of
F1, F2, F3, and F4? Is your answer unique? If not, indicate another set of possible values of these frequencies.

5.4 The continuous-time signal x_a(t) = 3 cos(400πt) + 5 sin(1200πt) + 6 cos(4400πt) + 2 sin(5200πt) is sampled
at a 4-kHz rate, generating the sequence x[n]. Determine the exact expression of x[n].

5.5 The left and right channels of an analog stereo audio signal are sampled at a 45.2-kHz rate, with each channel then
being converted into a digital bit stream using a 15-bit A/D converter. Determine the combined bit rate of the two
channels after sampling and digitization.

5.6 Show that the impulse response h_r(t) of an ideal lowpass filter as derived in Eq. (5.17) takes the value h_r(nT) =
δ[n] for all n if the cutoff frequency Ω_c = Ω_T/2, where Ω_T is the sampling frequency.

5.7 Consider the system of Figure 5.2, where the input continuous-time signal x_a(t) has a bandlimited spectrum
X_a(jΩ) as sketched in Figure P5.1(a) and is being sampled at the Nyquist rate. The discrete-time processor is an ideal
lowpass filter with a frequency response H(e^{jω}) as shown in Figure P5.1(b) and has a cutoff frequency ω_c = Ω_m T/3,
where T is the sampling period. Sketch as accurately as possible the spectrum Y_a(jΩ) of the output continuous-time
signal y_a(t).

Figure P5.1

5.8 A continuous-time signal x_a(t) has a bandlimited spectrum X_a(jΩ) as indicated in Figure P5.2. Determine the
smallest sampling rate F_T that can be employed to sample x_a(t) so that it can be fully recovered from its sampled
version x[n] for each of the following sets of values of the bandedges Ω_1 and Ω_2. Sketch the Fourier transform of the
sampled version x[n] obtained by sampling x_a(t) at the smallest sampling rate F_T and the frequency response of the
ideal reconstruction filter needed to fully recover x_a(t) for each case.
(a) Ω_1 = 200π, Ω_2 = 160π; (b) Ω_1 = 160π, Ω_2 = 120π; (c) Ω_1 = 150π, Ω_2 = 110π.

Figure P5.2

5.9 For each set of desired peak passband deviation α_p and minimum stopband attenuation α_s of an analog lowpass
filter given below, determine the corresponding passband and stopband ripples, δ_p and δ_s:
(a) α_p = 0.15 dB, α_s = 43 dB; (b) α_p = 0.04 dB, α_s = 57 dB; (c) α_p = 0.23 dB, α_s = 39 dB.

5.10 Show that the analog transfer function

    H_1(s) = a / (s + a),   a > 0,    (5.89)

has a lowpass magnitude response with a monotonically decreasing magnitude response, with |H_1(j0)| = 1 and
|H_1(j∞)| = 0. Determine the 3-dB cutoff frequency Ω_c at which the gain response is 3 dB below the maximum
value of 0 dB at Ω = 0.

5.11 Show that the analog transfer function

    H_2(s) = s / (s + a),   a > 0,    (5.90)

has a highpass magnitude response with a monotonically increasing magnitude response, with |H_2(j0)| = 0 and
|H_2(j∞)| = 1. Determine the 3-dB cutoff frequency Ω_c at which the gain response is 3 dB below the maximum
value of 0 dB at Ω = ∞.

5.12 The lowpass !ranskr f111dion H; l_s) ofEq. {5.89) and !he higl-:pass transfer function H2(•·' of E{j. (5.90) can be
apre1;,;ed in !he form

where il.J {s _l and kz{s) ale stabk analog allpl&> transfer function>.. Detcrmine A 1 (s) and A2 is).

5.13 Slhlw that the analog trano;fer function

H1(sl= "'
s 2 +bs+O,2
. b > 0, (5.911

ha• a bandpass magnitude response with 1Hu (}0}( = IH, (p:x:>}l = 0 and !H., (jQ0 .L 1. Determ:ne the frequencies
n; and f:!2 at wbk~the gam is J dR bt:low the maximum vaiueofOdB at Q.,,_ Show 1h<tl Q1!!2 = n~. TheJifferenc~
0"2- !.11 is. called the 3-dB bandwidth of the bandpa'>~ transfer function. Show that b = n2- n 1.
5.13. Problems 355

5.14 Show that the analog transfer function

h > 0, (5.92)

has a bandstop magni1ude respor.se with IH" (jO)! = ,Ha (j oc ): 1 and : Ha(jfl:o): = 0. Since the magnitude is
exao..:tly zero at Q 0 , it is called the notch frequency, and H2(5) is often called the notch transfer function. Determine
th~ lrcqucncics <J; and n 2 at w!lich t~ gain is 3 dB below the maximt:m "alue ofOdB at Q =
0 and Q =co. Show
that Q I ~!2 = nf,. The diffen:nce D2 - :Q 1 JS ;;alled the 3 ·dB notch bandwidth of the bandpass transfer function.
Show that b = n:- r.!t-

5.15 The bandpass tmm;fer ;unction H1 Cd nf Eq. (5.91) and the bandstnp transfer function H2(.~) of Eq. {5.92) cmt
be expreJ~ed in the form

">Vilo:re- A; (s) and A2 (s l are :;.table- analog all pass transfer functions. Detenmne Aq (s) and .4.2(S ).

5.16 Show that the first 1N - ! derivatives <Jf the squared-magmtude response i H<J. fjQ)j 2 of a Butterworth filter of
order N as given by Eq. (5.31) .are equal to zero at n = (I_

5.17 Using Eq. (5.33) determine the Iowest order of a lowpa'is flutterwonh filte:- with a 0.5---dB cutoff frequency a!
2.1 kHz and a minimum attenuation of 30 dB at 8 kHz.

5.18 Csmg Eq. (5.35) detenn:ne the pole locations and the coefficients of a fifth-Qrder Butterwurth polynomial wit!'.
unity 1-dB cutoff frequency.

5.19 Show that the Chebyshev polynomial TN(Q) defined in Eq. (538) satisfies tbe recurrence relation given i~
Eq. (5.39) with Tu(Q) = I, and T 1 (Q~ Q_ =
5..20 u.,.ing Eq. (5.41) determine the lowest order of a towpass lypE ! Cheby;hev filter wi!h a 0.5-dB cutoff frequency
at 2.1 kHz and a minimum dltenuation of 30 dB at 8kHz.

5.21 ijs:ng Eq. (5.51) determine the l<Jwcsr order of a low pass elliptic filter with a 0.5--dB cutoff frequency at 2.1 kHz
<'illd a 1t1inimum att-enuation of 30 dR at 8 kH.~.

5.22 Doermine the Bessel po!ynmri:ab B,_, (s) fOr the following v4lues of N: (a) N = 5, and (b} N = 6.
5.23 The transfer function of a second-order mtaiog Buf':erworth lowpass filter with a passband edge at 0.2 Hz and a
pa-;~han-d ripple of 0.5 dB is g;ven hy
4.52
HLp(s) = ---·~-;-cc;
s1 + 3s 452
Odennme the tran~fcr function H H p (.;-) of an analog highp3s~ filter with a pas~band edge at 2 Hz and a passband
nppk r>f 0_5 dB by applying tflc s~trultransformation ofEq. (5.59i.

5.24 Th<: tnmsfiT P.:<nc:ion of a second-order analog elliptic lowpa~'> filler with a passband edge ::n 0.16 Hz. and a
tm'>~,barrd
ripple of I dB is given by
O.il-56{:.· 2 + 17.95)
HLp(s}= .
~ 2 + L06•__,..l.l3
Dctenllinc t~.c tr:atH-.fer function H I:Jp(Y) of an analog bandpass fi Iter with a center frequency at 3 Hz •md ;;; bandwidth
nf O.'i Hz by apply:ng the spcclnJ trans~·orma1ion of Eq. (5.61 ).
356 Chapter 5: Digital Processing o• Continuous-Time Signa~s

5.25 A BuUer».wth analog higbpas.\ filter ili to be designed with the fOllowing spe;;ifu:arion.~: Fp = 5 MHz, f~, = 0.5
MHz, iY P = {13 dB, and a 1 = 45 dB. What are the bandedges and the order o~ the currespcnding analog lowpi:l~~
filte-r? What is the orde of the highpass filter? Venfy you• results using the fun(:ti.On bu ~tor d.

5.26 Ar. elhptic analog handpa~s filter is to be designed wnh the following specificacions: p;.tssband edges at 20 kH.-:
and 45kHz, stcpband e>.lge" a! 10 kHzand60kf-lz, peak passband ripplcof0.5 dB. and minimum stopband attenuation
of 40 dB. What a~ the band edges and the order of the corre:>ponding analog low pass filler? What is tbe -orde• uf the
hlndpass filter" Verify your resulr>, using the function e ~- _ i pcrG_

5.27 A Type 1 Cheby~ht'v .ma!og bandstop filter is to be designed with the following speciticatlom: pa~~band edges at
i :JMH-z and 70 MH;:, stopb-and edges. al 20 MHz and 45 MHz. peak p<~.:;;;;band rippleof0.5 dB, afld minimum stupbnnd
wtcnuatmn of 30 dB. What are the handedge. and the order of t.'w COITe!>ponding an1log Jowpas.." filter? What io; the
oi"!Jer of the banth-1:op filter"' Veri(y your result'> using the fUlK'Iion c-:---.o2bl or d.

5.28 Verify Table :'i_{.

5 . 1'J f.kri\·e ELf- (5.73.1.

5.31 An alternative to the zero-order ihlld circuit of F1gure 5.49 U5ed for signal rec-om.1:rucuon Jl the output n:· ll DIA
c;mverter is tru: lint-order hold cin:-uil which approximate~ y., (I l according l•) the following relation:

Yp(n.T/- Yp(nT- n
Yt (f! = .Vp(nTJ + T P - nT). nT S t:; In+ J)T.

As indkated by the above.equntion. the fir:o;1-otder hold circuit approximates l'u (!) by slraight-Iine segments. T!'.e slope
of the segment between 1 = nT and: = (r, + I)T is detenninetl from the sample v2.lue~ Yp(nT) and y.,(nT - T).
Dctennine t!'lc impulse re.pon~e h f\t) and the frequency re<.pon'>IC II f (J Q) of the ftP.it-mder hold<-ircui(, ;Jnd compare
im pcrfonnam.:e with lhat of the .r.em-order hnld circuit.

5.32 A more imp~oved ~ignal recor:~t.ruclion a! the oUlput of n DIA •.:onvener is provided hy a linear ioterpolarion
,;;rcuit which :1pproxirnates -,'.:!(1) by connecting succe~"Sive :-.ample points of yp(t) with str<~igtt-line segment:-.. The
input-oolput relation uf this cin:uit is _given by

}" (nT)-y (nT-T)


" ' T -
Yf{l) = lp.n 1") - P P "
•t- n T) , nT ::01 ::": (rt+ t}T.
T "
D.:tennine the impube re-.p.1nse h f (1) and t::.e frequency respon~e H t (jQ) of the linea• interpolation .::in:uit, :md
~.:ompareits performance with that of lhe fint--{]rder hold circuit.

5.14 MATLAB Exercises


M 5.1 W rik a MATLAB pro _gram !o compute rhe required order of a low pass Butterworth analog fiJt.er accordlug 10
Eq {.:' .:.<3). Using !hi~ progri:!m determine the lowest order of the low pass filkr of Problem 5.17.

!\-·t 5.2 Write a M.o, fLAB program to compute 1he R"quired order of a Jowpass Chebyshev analog filter according to
Eq. !:'S.lf.}. U~in,; !h1s pnYlram de1enn;ue the lowest on:ler of the lmvpass filter of Problem 5.20.

2\-(5.3 Write a ~1ATLAB program to compute the required order of a lnwpass elliptic amllog filter aec:mding to
Eq. (5.5l ). U:-.i11g !his p:og.ram determine the !owe~t order of the Jowpass filter of Problem 5.21.
5. 14. MAT:..AB Exercises 357

M SA Delerminc the transfer funclion of a Jowpass Butterworth analog fil[er v.ith ~pecifintions as given in Prob-
km 5.17 usiag Program 5_2. Plot the gain re~ponse and \cerify that the til~ designed meets the given speclficatinns_
Shnw all steps.

M 5.5 f.lelerrnine the tran~fer functlon of a lowpas;; Type I Chebyshev analog tiller with specifications as given in
Probkm 5.20 u~ing Prugmm 5 J. Plot !he gain respome ;md verify thai the filter designed met:ts !he given ~pecificatiom;_
Sthlw all step~.

:\15.6 Modify Prograr,J. 5.3 to 'ksigu low pass Type 2 Chebyshc\" analog filters. Using this. program, dett>nnine the
lr.tnsfer furn:tinn of a Jowpass Type 2 Chehyshev an,1log filter •.11ith specifications as given in Problem 5.20. Plot !he
gain res.pm:sc and verify that the filter designed meet~ the given :.pecitk."<llions. Shew aH steps-.

M 5.7 Detcnnine \he transfer funt:tion of a lowpass cliiptic am.log filter with specifications a>- given in Problem 5.21
u:siog Program 5 _4_ Pltlt th<'" gain response and verify that th.e filter designed meets the given specifications. Shov, all
step'>.

M 5.8 Design a Butterworth analog highpass fthec with specilications given in Problem 5.25. Show the tran~fn
functions of the ?fO!o'iype 2naloglo>.Vpa<JS aud the high pass filtt:I,.,_ Piot their g'Jln respun>.es <Hld ver"rl'y tha1 bL>~:h filters
mcel !heir respective specificllinns. -Show all steps.

'VI 5.9 Design an elliptic analog bandpass filter with specificaticn'i given ill Pmb!em 5.26. Show the tran~fer functions
of the p:-olotype Jnal<)g !owpass ;md the b.111dpass fil!er->. Plot rheir gain res'POflSC'> and verify that both ti!tcn meel
lOOr l"C'>pcctivc spet::ifn.-<tticns. Show aU step:-;.

l\1 5.10 D.::sign a Type I una!ng bands!op filter v.rith Sj'le('ilicatinn,; g;ven in Problem 5.27 _ Show tlu: tnm~fcr fun..:tions
of fnc pn;tn~ype ;ulltlog !o..,pass .and the h<!nd~top filter»_ Plut their g<Jin response~ and verify !hat bt:••h liltcrs meet
lheu !C~pcc!h-e speciticatons. Show all steps.

M 5.11 Write J i><fATL>\B program 10 verify the plots of Figure 5.52.


Digital Filter
6 Structures

Th·:: de~cription of the discrete-lim..:: system t'f Eq_ (2.64a) t>r !2.64-b) expresse~ the nth output sample a;;
a convolution o;um of the input \\'i:h the impulse rcspoo;;e of the sy<>tem, and i:~ some sense. it is the most
fundamcntlll characterization of an LTl digital tilter. The convolution sum. in principle, can be employed
to implement a digital filter with a known impulse response, and the implementation involves addition,
mLltiplication., and dcl:ly. which are fairly simple operation:._ For un LTl system with an infinite-length
im:::mlse response, the approachjs notprac!ica!. However. for ':he infinite impulse respome(IIR) LTJ digttal
filt[Or described by a con:-,tant coefficient difference equation of the form of Eq. (2.82), and for tf:e frnite
im:-rul~ respur..sc (FlR) LTI digital filter d.o-scribed by Eq. (2.Y7). the input-output rdation involves a tiniu~·
sum of produces. and a duect implementation based on these equations is quite practicul. In this text we
deal eoly with these two type-S of LTI digital filters
Now, the actual implementation of a digital filte~ could he etther in software or hardware form, depend-
ing on applications. In both type~ of implementation. the signal variables and the f.lter coefficients cannot
be represented with infinite prec1smn. As a result, a direc! implementation of a dig~tal filter tmsed on either
Eq (2.82) or 1:--:q. (2.97) may not p~nvide ~atisfactory ?erformance due to the finite precisinn arithmetic. It
is Lhus of interest to develop altcma:ive reali7atiom based o::~ other types. of time-doUU!in represemations
with equivalenl i:1put-ontput reladons to eirher Eq. (2)Q) or Eq. {2:97). depending on the type of digital
fllt·~r being implemented, and choose the realization that prm-idcc. satisfactory performa:Jce under finite
precision arithmelic.
ln this chapter, we consider t!1e realization problem of causal IIR and FIR trano.fer fur,ctions and out}inc
r<!alization methods based on both the tmJe-domain and the transform-domain representations. (n Chapter
9, we develop the methods for the ;maly\is of such strm_,tures when implemented wt:h finite precision
ari1hmeti.c and present <J.ddltionnl realizations that ha\·e been developed to minimize the- efk-cts of finite
word\cngth.
A o.mJctuml representaiion using ioterw!lnecteC bac.ic building blocks is the first step in the l:ardw<.~rc
or mftware implementation of an ITI digital filter_ The structural repre>,entation proYides d1e relations
between some ,errinent internal nriablcs with the input a_rul the output thiit in tum provide l.he keys to the
imple;ncntation. There are various forms of th~ structuml repre:>entation of a digital filter. We re\iew in
thi10 chapter two such representations, and then describe some popular schemes for the realization of real
cat: sal HR nnd FIR digita: filtero._ In addition. v..-e outline a method for the reali7.ation '-'f HR dig1tal filter
:;twcturcc. that c.ao be u,;ed for 1hc gener.>..tton o:· a pair of orthogonal .<-inusoidal s...--quences.

6. i Block Diagram Representation


As indicated earlier. the input-output relations ot an LTI digi:<:~l filler can be expressed ir. various wavs. ln
the time-domain, it is given by tffi: convo!utlun sum ·

359
360 Chapter 6: Dig:tal Filter Structures

Po
x!n I ~-,--~)-'----..0+~--~--T- Jfn]
'
$p
Hgure 6.1: A first-order LTl UigJt<~l llller.

=
y[n] = L h!k!xln- kl, (6.L l
l=-·=
ur by the linear constant coefficient difference equarion

:v M
y[n.! = - L duln - kl +L PkX[n- k(. {6.2)

A digital filter can be implemented on .:t general-purpose digital computer in software fOrm or with
special-purpo~e hardware. To this end, it is necessary to Ue;.cribe the input-output relationship by meillls of
a computational algorithm. To illus.trare wh<H we mean by a cmnputationaialgorithm,consider a fin;t-«"der
causal LTf HR digltal ti:Itcr described by

}'[n J = -dtyln - ll + p;p;[n j + P\.tln - lj. (6.3)

Using Eq_ (6.3} we can compule .v[nl for n = 0. 1. 2 ..... knowing the initial condition y[ -I] .and the
inputx[n]forn. =-1,0,1.2 ..

yiOI = -d~.vl-11 + pox!OI + p;xf-11.


y[l] = -d1 y[OJ-.;... pox! l J + P!X(O!.
_vi2! = -dt_vll I--:-- p 0 x[2l + Ptx\1!.

We can continue this cakuJat.ion for any value of n \H~ desir~. Each step of the calculation process requires
a knowledge of the previously calculated value of the output s.ample (delayed value of the output). the
present va[ue of the input sample, and th,; pre....-iou~ value of the input sample {delayed value of the input).
Knowing the data values, we :nuitiply each of them appropriately with coefficients -dl, po. and Pb and
then sum the products to compute the pre.sent value of the nutput. A,; a result. the difference equa[ion of
Eq. (6.3} can be interpreted as a valid computational algorithm.

6.1.1 Basic Building Blocks


The ccmputauonal algorithm of ·,m LTt ;ii.gital filter can be conveniently represented in block diagr.tm
t{mn w>ing the basic building blocks representing the unit delay, the multi.plier, the adder, and the pick-oft
node as shown earlier in Figure 25. Thus, a block diagram representation of the first-order digital tiller
descnbed by Eq. (6.3) !:; as indicated in Figure 6. L The ndder shown here is a three-input adder. Note aJs-n
the pick-off nodes at the input and the output.
6.1. Block Diagram Representation 361

There are sc\'eral advant;Jges in re;Jre!;enring !he digital filter in block diagram fonn: (1) it JS easy
co write down the comput-<:tional aJgorithm by inspec-tion, (2J it is easy to analyze the block diagram to
detmnine the explicit relation between the output and the input, (3) lt is easy to manipulate a block diagram
to derive other "equivalent" block diagrams yJelding dit-:erent compulationaJ algorithms, (4) it is easy to
det..:nninc tt.c har-dware rcqu-irernents. and finally (5) lt i~ t<Jsier to develop block diagrnm representations,
~'rom lhe transfer function directly leading to a varic.ly ot ..equivalent" representatior.s.

6.1.2 Analysis of Block Diagrams


Digital filrer structures represen1cd in block diagram form can often be analyzed by writing down the
expressions fur the output signals of c<Jch adder as a sum of ir:;, input signals, developing a sel of equations
relating the filte-r Input and output signals in terms of all internal signals. Eliminating the unwanted internal
vari3bles tt>.en result,;. in the cxpres&ion for the output s1gnal us a function of the input signal and the filter
parameter;, that are the- muhlplier coefficients.
The following two exampks illustrate the analysis approach.
362 Chapter 6: Digital F:lter Structures

-4>-~ Gl\~1 Y(z)

G2 lz)

.Figure 6.2: A s.in.gk-loop digital filter structme.

Figure 6.3: A cascnded lank:e digital filter :-;tructure.

w[nj u(n l
+
I
~A

Figure 6.4·. An elUimpfe uf a Celay-free loop.

6.1.3 The Delay-Free Loop Problem


For physical realizability of the digital filter structure, it is necessary that !he bJock diagrnm representation
comains no delay-free loops, i.e., feedback loops without any delay elements. Figure 6.4 illu;>trates a
typi~al delay-free loop that may appear unintentionally in a ~pecific structure. Analysis of this structure
yiefds
y[ni =B {A (w[nl + y[n!)-+- v[n]}.
The ab:)ve cxpres!->ion implies that the dete:-mination of the CL.urent value of yfn] requires the knowledge
of the same value that is physicaity impossible to achieve due to the finite time required to carry om all
arifrmetic operations in a digital machine.
A simple graph-theoretic-based method hac; beea proposed to detect the presence of delay~free loops
in a:1 arbitrary digital filler structure, along with the methods to locate and remove these Joops without
6.2. Equiva!ert Structures 363

l ~ AB ulnJ
+

_,
+ v!n]
yfnl

Figure 6.5: An cquiv::knt realizarwn of t:<e ~~ru{Clme of F1gurc 6.4 with no delay-free ]<)up.

altering the overall input-output rdations lSzc75l The remm·al is achieved by replacing the portiDn ofthe
s:ructure containing the delay-free loops by an equivalent realization with no delay-free loops. For exampl-e,
Figure 6.5 shows an equivalent realization of the block diagram of Figure 6.4 without the delay-free loop.

6-.1.4 Canonic and Noncanonic Structures


A d:igita! filter suucturc is ~aid to he canonic if the number of delavs in the- block diagmm representation
io eq~al ro the order or the difference equation (i.e., the order of the transfer function). Otherw1se, it is a
n::momonic ;;tructure. For ex.amp!c, the ~tructure of Figure 6.! is a noncanonic realization since lLemploys
two delays to realize t'1e first-orC.-er difference equatiOn of Eq. (6,.3).

5.2 Equivalent Structures


Our main objt<:tive in this chapter is to develop various realizations of a given transferfunctiPn. We define
two digital filter struc;ures to be equivalent if they have the same transfer function. We outline in th1s
chapter a.nd in. Chapk-'f 9 a number of med:.ocJs f{)r the ge:1eration of equivalent Mructures-. Howc\'eL a
f;;,irly simple way to gener7.tc an eq:livale-ntstructure from a given realization is via the IJVnspose operation.
which is as follows {Jac-70aj:

{i1 Reverse all paths,

tii1 Rt:place pi.d-off node". by adders, and vice versa, and

(1ii) l:ncrchange the input and the outp~t nodes.

All other method:· for dev.;_,!oping <-'quivalent structures :lfC based on a specific algorithm for each
s.tmcture. There are literally an infinile number of equivalent structures realizing the same tram.fer function,
and it i:; impussib!e tD ,Jevdop all sucb realizations. Moreover, a large variety of algoC:thms have been
:~dvan.ccd by variou~ author-;, and space limitations prevem us from reviewing each method in this text.
\Ve therefore restrict oursehes to a discussion of some ~ommonly used structures.
1! should be noted that under infinite precision arithmetic any given realization of a d1gital filter behaves
idcntica;ly to any other equivalent ~tructure. However. in practice, dtK to the finite wordlength !imitations, a
specific realization behaves !otally differently frorr: Its otherequivate:lt realizations. Hence, it ts -.mportam
to dmo,e .a structure that has the least quantizatinn effec!s fiDm the finite wordlength ~mplementation.
(hte W<ty to arrive at such a structure is to determine a lar<.;.e number cf equivalent struc!ures, analyze the
finite "Nordkngth eikcts of each one, and then selec! the one showi.:1g the least effects. In ccr<.:;in -.:ases.
it is j)fF,"iblc- to d~velop a struuurc that by construction ha<> the least quantization effect$. 1be analysis.
364 Chapter 6; Digital Filter Structures

<)f quantiL<~tion eiTtXb i<> the suhje<:< of Chapter Y. whicll aho descrihe;; additjo11::l .;.til.tcture:-. \recificaH;
developed to mimmi..e certain quantiLation ~:ffect;.. In thi:-, chapter. we di:-~.-u .. ~ som~ -.m~p!e !cal!L~lt;on ..
thH in many npplicatiom, :1re adequah:. \Ve (I<J, hm'>c\'n. n1mp;1re each ,,flhe reali/a;:i<Jns disw:-.sed here
wi:h regard to their computation;t] compk:>..il v detennincd m hTnh or the total number of muh.ipllcrs and
the tot;:! numbt:r of ::odders n:qu~rcd for thC"ir i rnpkm;:nt;tt[o;i Thi;, b!te:- i!.'-lk i-; Hnportarl! where the co~t
of imp!e;neotJ1ion Is critical.

6.3 Basic FIR Digital Filter Struc!ures


We tir-.t eon'>ider 1hc i-eJh7ation of FfR digiw.J filter~. Rcca:: that a <:.lU:<.i:!l FlR iiltcr of orJer N i~ charat-
lt.:>i7.:d by a lranc.f~r function II!.~ l,

H(::_)
'
= Lhlkj_· A (6.91
k=t:

wh1ch i" a p0lynom:iai1n ::- l of d.:-grtt N. ln the limc.:-Joma;n the input-output rdalion of the dxn.: FIR
tiher i~ gwen hy
y

y[nj = Lh!k)xil'- k].


k=O

where yfn 1 and xlnl arc the output and inp'Jt ,-.cu,uc:w.:-~. re">J-'<:ctively. Since FlR filters can be dcsigne.:l to
provide exact linear- ph:t~e over the whole frequency range a:ld are always BIBO stable indept•mlem of the
filter coefficients, such fi Iter.; arc (lften preferred in man:: ap;;iication<. \V~.:: now outline ;,c\'Cr<t! reali.-<ation
method" for srwh filters.

6.3.1 Direct Forms


An HR ttr:cr of order ilj is char<;~ctcri...:cd by ;V + l coefticitnh and. in general, requires .·V -i-- l multipJicrs
and N twO--input adders for impk:m:ntation. Struc~ures 1:1 \Vhich the mul!iplier ;;;oefficienfs are pre..:i:«!ly
:be c(lCfficlenr~ of the Iran;, fer function are caHeti dirt'cl fonn ~trudurc-.. A direct form reali7.af1on of :m
FIR filter can he ~eadily de·•doped ~-rom l-~ UdO), a~ mdic,lleG in Figure 6.6la} for .'V = 4. An amtlysis
of lhi~ structure yield<>

which is pre..:ise\y uf the form ,,f Eq_ (fdOJ.


Tht: st:-uctun; of Figure 6.6{ai J\ also called'' ~l!JfWt! d.d(:y line or a !rtms•.·ersaJflhc_r_ Its ln.m<;po:;c a;,
;,ketchcd in Figure 6.6(b) i-; the "ceond direct fomJ stmdure. Both direcr form <;tructure.~ are canonic with
1..:-;,pec~ 1odelay~.

6.3.2 Cascade Form


A higher-order FJR transfer funct:nn can .aho h>: r>:J.iiz~d "'~:.. ..:ascade ofFlR stttion.s with t:.ach section
characterized hy eithtT a tirst-ord~r or a second-order transfer function. To this end, we fact<Jr the HR
transfer function If { ;:) of Eq_ (6.lij ~ond writ-L' it in the form

H (:)=hi OJ nK

"--1
(I +- fl;A:-J ---:- i:i2;,_:- 2 ). (6.ll)
6.3. Basic FlR Digital Filter Structures 365

"t(><l--T~

h!Ol~
yin)
(a)

{b)

Figure 6.b: Dire(._'J fo!"!n FIR structures.

Figure 6.7: Cascade fonn FtR bi.Tm:turt for a oix.th-order FIR filter.

Wh<!re K = N i2 if N is even, and K = (N + l)/2 1f N i;; odd, with {3zK = 0. A cascade realization
of Eq. (6.11) for N = 6 is shown in Figur<! 6. 7 requiring thr<!-e second-order sections. Each second-order
stage in Figure 6.7 can of course also be realized in the transposed diTect form. Note that the cascade form
is ...·anonic. and also employs N two-input adders aml N + 1 mu1tipliers for an Nth-order FIR transfer
function.

6.3.3 Polyphase Realization


Anuther interesting realizanon of an FIR filter is based on the polyphase decomposition of its transfer
function and results in a paraUel structure [Bel76]. To illustrate this appcOacll, consider for simplicity a
cau.;;al FIR transfer functi-on H (z) of length 9:

H(z) =hlO!+h[l]z- 1 +h[2J.:- 1 +h!3Jz- 3 +h[41z-~


+ h[51z ·- 5 -t- hf6]z -o + h[7lz- 7 + h[8Jz- 8 . (6.12)

The above transfer function can be expressed as a sum of tv."O tenm;, with one term containing tlle even-
irldex:ed coefficient<> and the other containing the odd-indexed coefficients, as indicated below:

H(z.) = (h~OI + hl2}::- 2 + hf4)::--+ + hl6]z- 6 + h[S]z- 8)


+ (h[llz- 1 + hr3jz- 3 + h{5!z- 5 + h!7lz- 7 )
Chapter 6: Digital Filter Structures

+ Z: -I (h'ioI'+ h['[ -- 2+ h!5!7-4- + h[7]?_6·)"


.l;: ·- -, (6" !3)

By -J!ii!Jg the notatiom

Efl(Z) = h[O] + h~2!:::-i + h[4]z-? + h(6]z-·' + hfS]z- 4 ,


£1 t:) = h[ l J + h~3]z-: + h(5]z-l + h(?jz- 3 • {6.14)

"·T (;:m rewrite 1-.q. (6.1J) as


(6.15)

In a si:ni:ar manner. by grouping the term" of Eq. (6.12) differently. we c-an reexpr~ss it in the foffil

(6.16)

where now

Eo(z}=hlOJ~M3iz- 1 +h(6]z- 2•
E1 (z) = hll_l + hr41z-- + hl7]z l, (6.171
E2(z:) = h(2_1--c- h[S]z-- + h[S]z-2 .

The decomposition of H(z) in the formofEqs. (6.15} and (6. 16) is more c-ommonly known as thepolyphau
decomposition. In the genera! e<t~, an L-branch polyphase decomposition of the tran!'>fer function of
Eq. (6.9) of order N is of the form
L-:
~,
n ~Z
) = ~
~ Z
-mE-m ~z
, L) , (6.1 S)
~

'"=0

l:N+lJf.j
£,..,(::) = L hlLn + m]z-". (6,19)
n=tl
with h;n j = 0 for n > N. A realization of H(z) based no the decomposition of Eq. (6.18) is called a
P'Jlyphai>e· realization. Figure 6.8 shows the four-branch. th:: three-branch. aOO the two-branch polyphase
wallzatiom; of a transfer function_ As indicated in Eqs. (6_14) and {6.17). the expres&ion for the transfer
function Eo(z) ;s ddferent for each structure and~ are the expressions for £1 (z), etc.
The subfillers Em (zL) in the polyphase realization of an FIR transfer function are also FIR filters and
can be re.:~lized using any of the methoCs described earlier. However, to obtain a ca:mnic realization of
tbc overall structure. the delays i.n all subEiters mtmt be s!Mred. Figure 6.9 illustrates<: canonic polyphase
realization of a length-9 FIR transfer function obtHined hy delay-sharing. U should be noted that in
deveiupi:-~g thi"- realization, we have used the transpose of !he structure of Figure 6.1-:{b). Other canonic
polyphase realizatiOIL'- can be similarly derived (Problems 6.11 to 6_13).
The po!ypha.,.;~ structures arc often used in l!lultirate digifal signal processing apphcations fOf' compu-
mtiomtUy effic-ient realizations (see Section IOA3).
1
Lx~ istn.:ointcgcr?"I10fx_
6.3. Basic FIR Oigita• Filter Structures 367

(c)

Figure 6.8: Polypha:-.<: reali7atiou.~ of an FIR transfer function.

hl2J

h(11
''.;'__ ,
--+----.J --3 f---------.J _, f---+---1

~ h[Oj

Figure 6.9: Canoni;; thr~-branch polyphase n:aluation of a length-9 FIR filter.

6.3A Linear-Phase FIR Structures


We showed in Section 4A3 that a line:rr-phase HR filter of order N is either characterized by a symmetric
impulse response
h[n] = h[N- n}. {6.20)
or hy an antisymmetric impube rt.•sponse

h[ni = -h[N- nj. (6.20


The symmetry (or antisymmetry) property of a lint.o.>r-phase FtR filter can be e.xploited to reduc-e the
total number of multipliers into a!mos( half of that in the direct form implementations of the transfer
function. To this end cons1der the realization of a leogth-7 Type 1 FIR transfer function with a symmetric
impulse response:
368 Ch-apter 6: Digital Filter Structures

h[3]

(a)

hfO] h[l] lii31


+ +
(b)

Figure 6.10: Lmear-pbase F!R ~Cructurcs: U,) Typt 1< and (b} Type 2.

which can be rewritten in the form

Hi:)= h[O] ( l + z-- 6) + h[l] ( z- 1 + .;:- 5)


+hl2J ( z- 2 + .-:- 4 ) -- hf3]z _ _,- (6.22)

A :·ea!ization of H (z.) based on the decomposition of Eq. \6.22) j:; shown in Figure 6.10(a). A :-.imilar
decomposition can be upphed for tb~ realization of a T)'p.; 2 FfR transfer function. For example. for a
!ength-X Type 2 FIR transfer function. the pertinent dct:omposition is given by

H(z) ~ h[O] {t +,--')+hili (c-' +z-•)


+ hl21 (z-' + ,-') + h[3] {z-3 + ,-'). (6.23)

kaling to the n:alization :-.hown in Figure 6. iO(b).


It illould b~ no!cd tha: the structure of Figure 6.1 O(a) requires 4 multipliers. whereas a direct form
realizatiOn of the original length-7 FIR filter wouid ::-equire 7 multipliers_ Likewise, the structure of
Figure 6.IO(b) requires 4 multipliers, compared to 8 multipliers. in the direct form realization of the
original length-R FIR filter. A similar <>aving>- OCCUI"S in lhe case of an FIR filter with an anti'iymmetric
!tnpulse respon."'e.

6.4 Basic IIR Digital Filter Structures


The causal HR digital fil:ers we are concerned with in this text are characterized by a real rational transfer
fun·::tiun of the form of Eq. (4.48) or, et.juiva!ently, by the constant coefficient difference equation of
Eq. (2.82). From the difterence e.quation representation, it can be seen that the comp'.ltation of the nth
out;)ut sample requires lh.e knowledge of several past samples of the output sequence, or in other words,
the rcnl"!Z.atlon of a {;ausa\ IIR fiher requires some form of feedback. We outline here several simple and
stnightforward real'izru:ions of IIR filters_
6.4. BasiC IIR Digital Filter Structures 369

~ Y(z)
FiJ;U~ 6.11. A JXlSSlb\e liR l\lttr rea!i"Lali-on '>\.":heme.

v.:fnl

(a) (b)

Figure ti.12: (a) Realizanon of tht: transfer ftmction H; (~) = W:.-u X{:;:) .md (b) realization nf the tramfc-r fundkn
H;::l:J = Y(;;-)/W(.:;).

6.4.1 Direct Forms


An Nth-·order fiR digital filter transfer function h; charar.:teri1.ed by 2N + ! unique coefficients and. in
general, requires 2N + I multip!ie~ .and 21V tv.u·input adder'> for implementation. As in the CJ.se of FIR
fi Iter realization. IIR filter ;.!ructures in which the multiplier r.:oeffir.:iellts are precisely fue coefficients of the
transfer furu.:tinn are called dinrrform structures. We now llc-M::rihe the development of these structures.
C ucsider for simplicilj a third -order HR filter chamcte!i:Lt:d by a transfer function

(6.24)

wh1ch we can implement as a ca!>cade of two filter sections a~ shown in Figure 6.1 1 where

W(z)
H; (:::) = = P(.":) =PO+ fJlZ- 1 + PF-7.-+- P3Z-J. {6.25;
X(:)

(6.26,>

The filter »ecti.on H; (Z) of Eq. (6.25) is seen to be <Ill FIR filler .and can be realized as shown m
Figur<! 6.12(a}. We next consider the realization of Hz(z} giYen by Eq. (6.26). Note that a time-domain
reprcsectatlon nf this tmnsfcr function is given by

y[n] = u:.•1n]- d1.r/n ~ 1]- du!n- 2]- d:;y[n- 3]. (6.27)

result<ng in the realintion indicated in Fig;1re 6.12(bj.


A cascade cfthe slruo:tures ut Figure6_ 12(aj and {b) as indicated in Figun: 6.llleads w a realization of
the originalllR transfer function H (z) ufEq. (6.24). The r.::si.tlting structure is sketched in Fi2:ure 6.13(a)
and is comrnon.ly known as 6e d;rer_t (f.um 1 structure. Note that the overa!! realization i."> -noncanon.ic
since 1t employs six delay.-.. to implement a third-order transfer function. The transpo..<;e of this structure is
370 Chapter 6: Digital Filter Structures

(b)

F~ure 6.13: (<>) Direct form l, (b) direct form I1 • (>;;) and (d) additional noncanonic diyect form Mruclures.

-.ketched in Figure 6.13{b). Various other noncanonic direct form~ can be derived by simple block diagram
manipulations. Two such realizatmns aTe shown in Figure 6.l3(c) and (d).
To derive a canonic realizalion, we observe that in Figure 6.13(d) the signal variables at nodes 0 and
C).ue thl': same, and hence, the two top delays can be shared. Likev.i~e. the ~igna~ variables at nodes ®
and© are the same. which pcmuts the sharing of the two middle delays. Following the same argument,
we can share the two delays at the bottom leading to the final canonic structure shown in Figure 6.14(a),
which is calitxlthe diret:tform ll realization. The transpose of this is indicaled in Figure 6.14(b)_
The slructure.-, for direct form l and direct form II realizations of an Nth-order llR transfer function
>,hould he evident from the third-order structures of Figures 6.13 and 6.14.

6.4.2 Cascade Realizations


By expressing the numerator and !he denominator polynomials of the transfer function H (z) as a product
of polynomials of lower degree. a digital filter is ofte-n realized as a cascade of low-order filter ;;ect:ons.
Consider for example, H(z) = P\:::.)/D(zl expressed a:;

H(z) = P1(Z)Pz{z)P3 (z) _


(6.28)
- D1\;:)D2 (z)D3(Z)
Variom different cascade realizations of H(::J can be obtained by different pole-zero polynomial palnngs.
Some: examples of such realization;; are shown in Figure 6.15. Additional cascade realizations are obtained
by stmply changing the ordering of che -s.ectiom;. 2 Figure fd6 illustrates examples of different structures
6.4, Basic IIA Digital Filter Structures 371

(a) (b)

l'igure 6.14: Direct form ll and H1 ~tructurcs_

! P; (<)
~
P.: (,z)
- P_, (:::)
f-
1 D 1(<l D2(z) DJ\;:}

fj( zl 12 (:) P_, (z) i 1j \z!:l- f2 (z_} '3 (.:::!


D:.C::) Dl ( z_) D2V! l Dz(d l D 1(z)
f---
DJ!:::)
f-.

f! (:::) P,(z!' _j~;I•.~ P2 (z)


f---
P.,(z)
_o_
f-.
D;! ,,
~!
Dl\::;} D2 (z) D (z)
1

Fib,<ure 6.15: Examples of different equh·.ilent cascade realizations obtained by different pole-zero pairings.

~
1j (zl:
Dl(z) /
~ P2(;)
D~(z}
P (z)
3
D3(z)
~
fi(::J
I D 1(z)
fj(<.)
~l(zl
ri f1\z)
D2 (z)
f-.

f--" 'i !?-) P,(;) l)(z! f';(z}


! Dz(z!
"''" qw f-.
- P,(c \ ~
'-+ ~
f--- - · -
~(_z) Dl(:::} 11, (;:) D:,(z)

w•l_,
"' ,,
/j(.:::J P~ (z-)
- ~
(
f--- D1 ~z) i
~

D2 (z.) f-.
Figure 6.16: Different c-ascade realizations obtained by changing the ordering of the sccliuns.

obtained by cltanging the ordei.ng of the sections. There are altogether a total of 36 cascade realizations
fc.r the factored form indicated :n Eq. (6.28) based on pole-zero pairings and ordering {Problem 6.19). In
practice due to the finite wordlength effects, each such cascade realization bel-caves differently from others.
372 Chapter 6: Digital Filter Struc1ures

Figure 6.17. Cascade realization of a third-order JIR transfer function.

Usually, the polynomials are factored into a product of first-order and second-order polynomials. In
this case, H(z) is expressed as

H( ) ~ [1 (I+ /laC'+~"'-') (6.29)


z /){! .~ ~1 +aa: l +a2kZ 2 .

In the above, for a first-order factor, a:;; = f32k = 0. A possible realization of athird-ordertransferfunction

is shown in Figure 6.17.

6.4.3 Parallel Reatizations


An IIR transfer function can be realized in a parallel form by making use of the partial-fraction expansion
of the transfer function. A partial-fraction expansion of the transfer function in the form of E<l_. (3.l31)
leads to the parallel form f. Thus, assuming simple poles, H (z) is expressed in the form

H•:z) =Yo+ Lk ( I+ YO.i: + YHZ-1


n'J.l:Z 1 + a2k;:
2)· (6.30)

In the above, for a real pole, au = Y!k = 0.


A direct panial-fraction expansion of the transfer function H (z) expressed as a .ratio of polynomials in
z leads to the second basic fonn of the parallel structure, called the parallelfonn lllMit77cj. Assuming
simple poles, we arrive at

(6.31)

Here. for a real pole. azk = Szt = 0.


The two ba5-ic parallel realizations of a third-order IIR tr-lllsfer function are sketched in Figure -6.1 g_
6.4. Basic IIR Digital Filter S1ructures 373

+ y

Yoz
+

(a) (b)

Figure 6.18: Parallel reali£ations of a l.hlrd-order HR transfer function: (a) parallel form land (b) parallel form Il.

0.44

(b)

Figure 6.19: (a) Direct fonn U realization, and (b) cascade realization based on direct form H realization of each
section.

-
374 Chapter 6: Digital Filter Structures

-0.1

06
+

(L2f'-
+
-05
+ + +

-08 -U2 0-2


+ + +

-0. 8 0.25
''
(a) (b)

F1gure 6.20: (a) Parallel fonn I realization. and {b) parallel form II realizatioo.

__ .,
6.5 Realization of Basic Structures Using MATLAII
The basic FIR and llR structures described in the previous two sections can be easily developed using
MATLA.B. In thls section we describe this approach.
The ca.scaCe realization of aa FIR transfer function H(z) involves: its factorization in the fonn of
Eq. (6.11). Likewise, the cascade realization of an IIR transfer function H(z.) involves its factorization in
the form ofEq. (6.29). The factorization of a poly"Domial can be carried out in MATLAB using the function
roots. For example, r = roots (h) will retum the mots of the polynomial vector:-: containing the
coeffic1ents of a polynomial in z- 1 in ascending powers of z:- 1 in the output vector r. From the computed
roots, the coefficients of the quadratic factors can be determined.
A much simpler approach is to use the function zp2 so s which determines the second--<lt'der factors
directly from the specified transfer function H (;::).The function sos = zp2sos ( z, p, k} generates a
matrix. sos containing the coefficients of each second-order section of the equivalent transfer function
H(z) detennined from its zef.o-pcle form. sos is an L x 6 matrix of the form
6.5. Realization of Baste Structures Using MATLAB 375

sos =
I paz
POl

P~L
PI'
pn

p,L
P1l
P22

p,
dot
dnz

dnr
d!l
dn

dJ;,
a,]
dn
-
&u.
'

who<>e ith fO\i\.' contams the coefficients, {P«} and {du }. of the numerator and denominator polynomiab
of tbe ith second-order section with L denoting the total number of sections in the cascade. The form of
the ;:~verall tr<lns.fer function is tlrns given by

L L + -: , _,
H( Z )~nH( ·~npo;
, Z1
PliZ J _,_P2iZ-
.., •
do ...... d~-~ +d2 z -
t=l !=l ' ' .... '

We illustrate this approach in the following two t::xamples. We first consider the factorintion of an FIR
transfer function.

fttAii\\1r;Y~} f
lX

- n u"'
+Ju " t:Y:4L4.

-1
:::r ftL i:it'i-1'" i\711%.1 ;;;--· 1 10 C:·0{i0 r;.::; r; ·>0 t·fr\':0-
V>JC:· 0 :>ti ,;:;:pJ;,) 000:::- l f.t ht::n> tv:::'.: :: o{}tf\:it:
\HhitttHJt-. ;t;::, r.i< \v;<· t'f-><'i>D:><·'" f; ::1\.ft/·
376 Chapter 6: Digital Filter Structures

') 0: 7 t \: \/00\f ;; 8 ;:(;f{ff}


"Jj

The function z-;:::2 so s works only for a stable transfer function. As a result. the denominator pol)'nomial
must have all its roots inside the unit circle. Hence. the above methode an be used only for a minimam-phase
FIR transfer function. In the case of a nonminimum-phase FIR transfer function, me first- and second-order
factors can be obtained by computing the zexos of the transfer function using the function rcots and then
combining the complex conjugate root pairs using the function conv to form the second-order factors.
The factorization af an IIR transfer function is demonstrated next.

£
$L
1,
I

l
"'"\/,-

"
"

The two parallel realizations described above can be developed readily in MATLA:B using functions
residc:e and residuez. We illustrate their use in the following example.
5 .5. RE?"al izat.on o Basic S ruc:lu~s Using,.. ' TL -..B 377

l~ . • \MPLE 6.
i'c-Og:~:art 6_2
~ Par<c l Lei :!i.'Nloli :z.~~iontJ. of. &.rl !I~ To~.<11nsf.er F IJ nc;t;ion

num. .. ln,vut.: ( • •rw-nert.l• or c;:...,l ffir.1~nt v-cc;to:t;" • . ) ;


dP.n = inv.: ( • Or.l'lQmintH:or t.:oeff~c1ern: \'ecr:o.r • I.;
I rl. pi. ~i l .. :~:e.s .due2 (nun•. ae.n1 ;
L12 , p2. lQ ) "" :rof:c'si due I n• JTI,.fler..) ~
dis~·(· Parallel Fox r)
di~p\'RQ~ldue~ ~r~'l:dis?l~l}~
di~p~'Pule~ ~r~ a~·J~di~~[p l •:
iep,·cone~ant va1ue••;di8p l Kl';
dj,:;p{' Pnr i]el For.rr. 11: • ~
di$pi'R~i~un~ ~rn'l~~i~plr2)~
is~I'Polee 6L~ a~·)•di-plp2)~
di~~~·~on~ nt v~l~~·t;~;R~ l k2):

'I'M •np~~llbta fl!qt:te'l.ted try rile J1ftlfUl'IID &re 1bc VC!tkV!i nur- Uld -den, tontain.mg lhl: num.eamr II.Dd ~um1 ~Wor
11:iealll, n!lopl!letm:d)'. Fcit" our -:·urnpk., tlw:$-: .f1R Jil~ 17)'

de" ~ Ll Q,( 0.18 w6.2].

Pi!lrtt.llel Form 1
ilt!:!si .d.ut.!8 a.re
-0.2500 - .OOOU1
-0.25CO + ~-~000~
0. &oOO
wO.lOQO

?<Jli.!"!IO 11rc ~:~t


-0 .4000 • ~ - S~JJi
o. ·ooo - o.sBJ _i
0. 4(1 0~~
:()

Constan~ \alu~
· 0.100\:1

II
Rt'!l!lldU !
O.iOOrJ - C·.U~Bi
0.101~(.1 + . 1 ' 56-
0.2:100

Poles !lre. at::


-o.·~oo ~ c.~B311
- 0 . 4J00 - 0.5B3&i
O.dJOO

Cons:ca:-~:. valu~
378 Chapter 6: Digital Filter Structures

6.6 Allpass Filters


We now tum our attention to the reaiizru:ion of a very special type of IIR transfer function. the allpass
function, introduced earlier in Section 4.6. We recall from its definition in Eq. (4.126) that an HR allpass
transfer function A(z) has a unity magnitude response for all frequencies, i.e.. [A(ei"')l = 1 for all values
of w. The digilal :o~llpass filter is a versatile building block for signal processing applications. Some of its
possible applications have already been pointed out earlier. For example. as indicated in Section 4.6.3. it is
used often as a delay equalizer. in which case it is designed such that when cascaded with another discrete-
time system. the overall cascade has a constant group delay in the frequency range of interest. Another
application described in -Section 4.8.4 is in the efficient implementation of a set of transfer functions
sati~fying certain complementary properties. For example, a pair of power-complementary first--{lf"der
!~·pass and highpass transfer functions can be implemented simultaneously employing only a sbgle first-
order all pass filter. Likewise, a pair of power-complementary second-order bandpass and bandstop transfer
functions can be implemented simultaneously employing only a single second-order allpass filter. In fact,
we shall demonstrate later tn Section 6.10 that a large class of power-complementary transfer function pairs
can be realized as a parallel conr,ection of two all pass filters. It is thus of intere;.t to develop techniques for
the computationalfy efficient realization of all pass transfer functions, which is the subject of this section.
An Mth-orderreal-coefficient allpass transfer function is of the form ofEq. (4.127), with the numerator
being the mirror-image polynomial of the denominator. A direct realization of an Mth-mder allpass transfer
function requires 2M multipliers. Since an Mth-order allpass ~ransferfunction is characterized by M unique
coefficients, our objective here iii to develop realization methods requiring onJy M multipliers. We outline
here two different approaches to the minimum-multiplier realizations of aJipass transfer functions.
379

x, Multipl in-(.:::,~
two-pai1

.Fignn' 6.21: A ::-:nu!t;p!i~r-less two-p:1>· cou'irai,led by a single multiplier.

6.£01 Re;slization Based on the f\:lultip!ie-r Fxtract~on Appro:1ch


Since an arbitrary allpa<>S rrans.fcr function can be ex:;xes:;eJ a.-; the product of second-order andior fir;,t-
onkr allpas....; transfer functions, we consider the n:aiization of theso;: lower-order transfer function~ here_
l'>,en rhough <m a! Ipass transfer function can be realized using <=.:ly of the methods discussed in this chapter,
ocr objective here is to develop structure_<.; thz.t remain ailpa;.c. despite changes in the multiplier coetliclents
thn may oc;:ur due t-o coefficient quamizat:on [Mit74a~.
Com;Cer first the real i7ation uf a fir;.;t-order al:pa,s tnm-fcr functio-n given by

(6.341

Since the above transfer function is uniq;;dy t:hamctenzed by a single constant d;, we attempt !o realize
it LJsing a structur~ comai:1ing a <.,;ngle multiplier d1 in the form of Figure 6.21. Subslit:.tting G(:) = d1 in
Eq. (4.18 I b), we e"Xpress the input transfer fr.;nction A 1(.::) = Y!l X 1 in terms of the lran<.;fe-r parameters of
th~ two-pair a~
f;::t2ld[ !11- d 1ULt22- tnt:!!)
A.J(Zi=f·t+----~ (6.35)
:-thf22 djf~~
A cnmpa1is<Jn ofEq..;;. (6.34) and (6.35} yields
_,
t'' -
"" -
.
~ \6.36)

(6.37)
Substituting Eq. (6.36) ln Eq. (6.37}, we arrive at

whiclllcads to four possible solutions~

Type lA 2 (6.38a}
t 11 ::: t:1 = I:
Type !B t 11 ,_-I izz = ~-! tn = 1 +,;:- 1 • tn -1 -;:.:-I_ {6.3Sb)
Type lA 1 : r 11
Type IB,: t11- ~-;
-· tzz=--=- 1 t12 =' t21 = l - z - 2 :
122 = -::- 1 tr2 = l -z-l r21 = 1 +z- 1
(fi.3lkl
(6.38d)

\~/;;; nnw oJtlinc th~ development of the two-pair structure implementing the trans.fer parameters Of
Eq. (6.JXa)_ F~om lhese equativm, we amvc at

1
}-; ""' x 1 - :::- xz,
/; J ---:--- (l - : -'lX~
Y 1 =:: - l v ~, 2 = .:: -I,- 2 + X 2-
A reahLatinn of l.he ahove i:-.. sketched in Figure 6.22. By constmining the Xz. Y2 terminal-pair wi[h the
multiplier d1, we arrive at the ... ingk multiplier re:tli7::J.tion of the fin.t~order alJpas:; transfer fun"--"tion, a-;
3BO Chapter 6: Digital Filter Structures

x, + r,

Figure 6.22: Devc:opment of the Type IA first--<Jrdcr ;ll:pass structure.

(b)

{c)

Figure 6.2..1: 1-'mt-ordcr single-rn;;.itiplier all;Ja;;S structure~: (a) Type JA, (b) Type IB, {c) Type I At, {d) Type lBr

sketched in Figure 6.23(a). In a :>imilar fashion, the other Lhree single~rnultiplier allpass structures can be
re11i7.ed frum thei.r transfer parameter descriptions ofEqs. (6.38b} !o (638d) (Problem 6.35). The final
rcalizntJoni. are indicated in Figure 6.23(b) to (d). These ~tructures have been called the Type 1 a/lpw:s
network.1·. It should be nuted that the structures of Figure 6.23(c) and (d) are, resper.:lively, the transpose
of the structures of Figure 6.23(a) and (b).
\Vc next consider the reaiiz<Jtmn of a second-order aJJpass tran&fer function. Such a functi{ln is char-
ac'.er:ized by two unique coeftici~:nts and, hence, can be- realized by a structure with t~o multipher-. with
:nultipller constants d1 and d:. Variom; fonns of the second-order allpass transfer functions exist. Circuits
re~tll:zing the transfer funcrion of the form

2
,. = d 1d 2 + dtz- 1 + z-
A 2 (,,1 I ' d
{6.39)
---t- ; Z
l
+ dt, d 2Z "'
are c<Jiled the 'f.;pe 2 allpass nennJrb and are sketched in Figure 6.24. Additional Type 2 allpru;s _;truc-rure»
can b¢ Jcrived by transposing the structure;, ofFig:.~re 0.24.'
ATJother form of the second-Mder allpass transfer function is given by
_ d? +diZ-i T C 2
A2{:0) =- .., . (6.4D)
1 +dtz 1 +d z - 2
111~ corresponding circuili> are called the T)pt' 3 alipass structures and arc indicated in Figure 625.~
3 The bbe!mg of the ~tructures here is a.~ given in ~:vfi:74al
4 11le bbe:ing of the stn:cture'- here;,. as !!tv-en in :Mit743].
6.6. A!ipass Filters 381

{b)

x, X
' '
L.. .r'
,>---<+B

(c) (d)

FiJtule 6.24: Stxxmd-order two-multiplier Type- 2 all pass ~rructun:s. (:a) Type 2A, (b) Type 2D, (c_) Ty;>e 2B, and (d)
Ty.x: 2C

x,

(b)

x,

+ X
' '
~'
'------~ +)----J

(c_) (d)

FiJ!:url' &.25: Sccond-o.rdc: two-multiplier Type 3 a!lpass structure~. fa) Type 3Ao (b) Type- 3D, ~c) Type JC and (dJ
7ype :m.
382 Chapter 6: Digital Filter Structures

+
-OA 'i--~-)----,
-I
+

Fi~ure 6.26: A thrce-mulupher renlization of the all pass transfer func1ion. of Eq. (6.42).

6.6.2 Realization Based on the Two-Pair Extract~on Approach


The stability test algorithm described in Section 4. !2.2 also leads to an elegant realization of an Mth-order
;:;Epass transfer function AM (z) in the form of a casc-aded Two-pair [Vai87e}. This algorithm is based on the
development -of a series of (m - 1 )th-order ali pass transfer functions A,._ 1 (z) from an mfh-order aD pass
Lram.fer function Am(.::}:

A ( . d,. -r d.,-2Z-l --l- • ·- + d1z-1m-l) +z-m


+ dm-IZ-!
(6.43'r
rnZJ= !+dJz-· +d2Z 2 +-··+dm-lZ (m l}+dmz m
1

using the recursion


·H~ rAm(Z)-km] m = M, M - 1, ... !, (6.441
A,. -, --· zL l - k m A m (·.__} '

where k, = A,_, (oo) = dm. 1t has been sl:own in Section 4.12.2 that A.M(Z) is stable if and only if

k;;,<l, form=M.M-l, ... ,l. (6.45)

If the allpas.-. transfer function A,_ 1(z) is expressed !n the forrn

A , )
d-'
m-l
+d'm-2'·_-l+-··-d'z-(m-2)+z-(m-J;
J
m-li.Z = l+dl.,. l +···+d' .., ;m 2)+d; (m 1)' (6.46)
1'- m-2'" m-1-7

then the coefficients of A,_l (::) are simply related to the co..-fficients of Am(<:) through the expression

i=1,2, ... ,m-l. (6.47)


6.6. Allpass F1fters 383

~ [- I.~
Am r .:l

.Figure 6.27: Realiz.±uon of Am(Z.) by lwo-pair enraction.

To develop a n:ali.zatiOn of Am(d using the above al_g_('lrithm. \\e rewrite Eq. {6.44) as

km ..... ::.- 1 >'~m-tCd


li,(:::t = 1 +,~-~~--(').
m~ '"'m-l --.

We n:alize A"'(_;:) by extracting a two-pair con~trained by A,_ 1(z) in the fonn of Figure 6.27.
~ow from Eq. (4.18lb), A,,(;) can be expresi>ed in term>. of the transfer parameter;, of the two-p-.o~ir as

lJ J - (lJ! tn - li?lll )Am-!(;:)


A. m(;::) = ·------ {6.49}
1 -tnAm-tC::)
Comparing Eqs.. (6.48) and (6.49), we readily obtain

{6.50a)
- ;
(6.50b)
Substituting Eq. (6.50a) in Eq. (6.50b), we get

As can be seen frcm the above, there are a number of <>olutions for tu and t2 1 leading to different
realizations fur the two-pair. Sum:: po~sible. ;,ulutiorn are a;; ir,dicated below:

t;J = k,,... t 22 = --k,::-•, = 1,tn (6.52a)


f;t = k m· tn = '· -\ ,
-.'<,~Z 12) = (1 + k,.,J, {6.52b)
r1,=k,. tn=-k,z- 1. r12 =vi--k;,z- 1, r 2 1 = -JI -k~. (6.52c)

fJt=km. tJ.:=-km::- 1. tn=z- 1 t;>j=(l - km).


1 (6.52d)

The mput-oulput relations of the two-pair describW by Eq. {6.52a) are given by

Y1 = kmXJ + (l
~ k~);- 1 X;: . (6.53a)
Y2 = Xt- k..,z.- 1x2. (6.53b;

A direct real:i.cation of the above cquatwns lead>; to the three-multiplier two-pair shown in Flgure 6.28\a). in
a sJmilarmanncr. direct realizatitms based on.Eq;;. (6.52b) and (6.52c} result in the four-multiplier structures
of Figure 6.28(b) and (c), respective.ly. 5 A direct realization of Eq. (6.52d) results in .a three-multiplier
struchtre and is left as an ex.ercis.: (Problem 6.42).
3B4 Chapter 6; Digital Filter Structures

x, + y
2 x, y2

km -k m km -k m

Y, + x, y
I + x,
1-km

(a) (b)
Jl-k_!
x, + y
2

kn -k m

lj + x,
JH!
(c)
...
Fig~~n 6.28: Dnect realization of the tvm-pairs described by Eq. {6-52a) through (6.52ct (a) The two--pair described
hy Eq. (6.52a), (b}the two-pair described by Eq. (6.52))), and (c) t~ two-pair described by Eq. (6.52c).

y
2

..
x, x,
(a) (b)

Figure 6.29: (a) A two-multiplier realizatior, Q( the two-pair deM:ribe-d by Eq. (6.52a), and (b) a oae-multiplier
realization of !he two-pair deo.cnbcd by Eq. (6.52b).

A two-multiplier realization can be derived by manipulating the input-output relations ofEqS. {6.53.a)and
(6.53b). Using Eq. (6.53b). we can rewrite Eq. (6.53a) as
{6.54)

A realization of the two-pair ba.-:ed on Eqs.. (6.53b) and {65~) is given in Figure 6.29(a). The two-pair of
Figure 6.29{a) Is often referred to as a lattice Jfructure. A two-multiplier lani.ce realization ofEq. (6.52d)
can be derived accordingly and is left as an excercise (Problem 6.43) [Lar99J.
The two-pa1r described by Eq. (6 ..52b) can be realized using a single multipller. To this end we first
write its input-output relation from Eq. (6.52b) as
Y1 =k,..,X, +0-km)~-!Xz. {6.55a)
Yz=(l+km)X, -km.c- 1X2. {6.55b)
Defining
(6.56)


6.6. Allpass Filters 385

Y,
" {a)
+
-kM~l
AM(::}-+ .
'M
+
(b)

Figure l:i.30: (a} Realization of Am(Z) by extracting the lattice two-pair of Figure 6.29(a), and (b) cascaded lattice
realization of AM (d.

we can rewrite Eqs. {655a) and (6.55b) as

Y1 = V1 --r- z-i X2. (6.57a)


Yz = X1 + V1. (6.57b)

A realization based on Eqs. (6.56), (6.57a), and (6.57b) leads to the single-multiplier two-pair of Fig-
ure 6.29(b). Note that the two-input adder with an incomingmuhiplier with a coefficient - I is implemented
as a subtractor.
The reali:zation of the mth-order aHpass transfer function A_,. (z) is therefore obtained by constrain-
ing any one of the two-pairs of Figures 6.28 and 6.29 by the (m - I }th-order allpass transfer function
Am-!{Z)- For example. Figure 6.3{}(a) shows the realization of A,..(z) by extracting the lattice two-pair of
Figure 6.29(a}.
Following the abo-ve algor-ithm, we can next realize Am-1 (z) as a lattice two-palr constrained by
the allpass transfer function Am-l(Z). This process is repeated until the constraining transfer function is
Ao(z) = I. Tberompleterealizationoftheoriginal allpasstransferfunction AM(Z) based on die extraction
of the lattice two-pair of Figure 6.29(a) is therefore as indicated in Figure 6.30(b). It follows from our
discussion in Section 4.12.2 that AM(Z) is stable if the magnitudes of all multiplier coefficients In the
realization are less than unity, i.e., Jkm! < 1 form = M, M- 1, ... , 1.
Note that the above allpass structure requires 2M multipliers, which is twice that needed in the rea1-
izat.i.on of an Mth-order allpass transfer function. However, a realization of AM(Z) with M multipliers
ls ob~alned by extracting the two-pair of Figure 6.29(b). Here, a]so_ the stability of AM(Z) is ensured if
ik.,.i<lform=M,M-1 .... ,1.
386 Chapter 6: Digital Riter Structures

(a) (b)

(c)

Figure 6.31: Cascaded lattice realization of the tbird-ordet allpass transfer funcrion of Eq, (6.58): k.3 = dj = -0.2,
k2 = d2 = 0.2708333, andk! = df' = 0.3573771.

TheM-file po 1 y2rc in MAT LAB can be employed to realize ana1lpass transfer function in the cascaded
lattice fonn. We illustrate its application in the following example.
6.7. Tunable IIR Digital Filters 387

6.7 Tunable IIR Digital Filters


In Section 4.5.2., we described two first-order and two second-order IIR digital transfer functions with
tunable frequency response characteristics. As we show next. these transfer functions can be realized easily
using allpass structures, and the resulting realizations provide independent tuning of the filter parameters
sucb as the cutoff frequencies and band'Width [Mit90a].

6.7.1 Tunabte Lowpass and Highpass First-Order Fitters


In Example 4.14, we showed that the first-order Jcwpass transfer function Hu(z) of Eq. (4.109) and the
first-order highpass transfer function HH p(z) of Eq. (4.112) are a doubly-complementary pair and can be
expressed as

H
LPZz
1
()=.!_[1-a+z- -az-
1
1 -az <
]=.!.[ 2!+!
-a+z-i]
-az 1
=![I +A 1(z)], (6.63)

l[l+a-z- 1 -az-:] '[ 1 -a+z- 1


]
HHp(;;:_) =2 1 - az 1 =2 - l az 1

=! (1- AJ(Z)], (6.64)

where
-a+ z- 1
A1(2) = I
az 1
, (6.65)

is a first-order aiipass transfer function. A combined reaHzation of HLp(z) and HHp(zJ based on the
decomposition» of Eqs. (6.63) and (6.64-) is as indicated in Figure 6.32, in v;.1rich the allpass filter given
by Eq. {6.65) can be Tealiz.ed using any one of the four single-muJtiplieraHpass structures of Figure 6.23.
388 Chapter 6: Digital Filter Structures

1
'

Figure 6.31: Allpass-b.ased rcal.iz~aion of the doubly-complementary first-order lowpass and highpas;; fi!rers,

Figure fi.33: A tunable fin;t-orde:r lowpassihlghpass filler structure.

,----1
: n~U4 :

10,(, 1-- a,.oJl5


, ---
!j
I

if)4 ' i
-.,"'
---"
~--: -"
OAA 0 (ll! QJht
Xom-..lwed fr••<pen<}' '
Figure 6.34: Magnitude respon'iCS of the lowpasslhighpa:ss filter \trucwre of FigUTe 6.33 for two different values of
the pa~~uneter a.

Figure 6.31 shows one wch rc-a.lmnion in which the 3-dB cutoff frequency of both. filters can be simul-
tanwusly varied by changing the multiplier coefficient CL Figure 6.34 shows the composite magnitude
responses of the two filters for t\liO different values af a_

6.7 .2 Tunable Bandpass and Bandstop Second-Order Filters


The second-order bandpa:>s transfer function H B p (z} of Eq. (4. 113) and the second-Order bandstop transfer
fun.:twn Hes(z) of Eq. (4.118) also form a doubly-complementary pair and can be expressed as

HBp(::.) =; !l - A2\z)], (('.66)

H 8 s(zJ = ~ (1 + A::•:ul, (6.67)

wbere A2(:::} is a second-urder allpass transfer function given by


a- tJ(l + a)z- 1 + z- 2
A2(Z)=- . (6.68)
l fl(1+u).:: l..,...az 2
6.8. IIR Tapped Cascaded Lattice Structures 389

1
2

Figure (1.35: Allpass-based realization of doubly--eomplementary second-order bandpass/bandstop fi1teT.

1
IN 2


\ Figu.re 6.36: Tunable seeond-o:'der bandpasslbandsiop fi1ter structure.
..
I
(If ~

(a) (b)

J<'igure fi.37: Magnitude responses of the bandpass/bandstop fillcr stmcture of Figure 6.36 for different values of the
p<•r.:uneters u and fJ. (a) j3 = 0.5 a.Jld (~)a = 0.8.

T:tlerefore. the bandpass transfer function Hsp(Z) ofEq. (4.113) and the bandstoptransferfunction HBs(z)
ofEq. (4_118) can be realized together, as indicated in Figure 6.35, where the allpass filter A2(z) is given
b:r Eq. (6.68). A tunable bandpass/bandstop filter struc-ture is then obtained by realizing the allpass A2(z)
b:r means of a cascaded Iartice -;tructure described in Section 6.6.2, with the lattice two-pair cealized using
its single-multiplier equlvaleru of Figure 6.29(a). The final structure i.s indicated in Figure 6.36. Note tha.£
in the stmcture of Figure 6.36, the multiplier f3 coo trois. lhe center frequency and the multiplier a contmls
the 3-0B ban,dwidth. Figure 6.37 illustrates the parametric tuning property of the bandpass/bandstop filter
~tructurc of Figure 6.36.

6.8 . IIR Tapped Cascaded Lattice Structures


Seveml simple straighu-orw.ard realizations of FIR and HR transferfunctions have been outlined in Sections.
6.3 and 6.4;re-spectively. In most applications, these realizations work reasonably well even under finite
390 Chapter 6: Digital Filter Structures

wcutlcngth constraints. Hcwevcr, in some cases. d~gttal filter\ with more robust properties. are needed to
provide satisfactory perfonnances. We describe in this sectio:1 two such realizations.
Tile cascaded la~tic-e structure of Figure 630(b) can also realize an all-pole IIR digital filter which finds
applicution in the power spe~trum estimation of random signals (see Section 11.4.2). ltalsoforms the basis
of an often used method for the realization of an arbJrary Mth-order transfer function H(z) original!)'
propc-sed by Gray and Markel {Gra73]. We first den:onslrate the realimtion of an all-pole- HR transter
fuuct1on and then outline the Gr.ay and Markel realization method. We also provide a MATLAB program
implementing hoth HR s:n.<cmrc~.

6.B. t Realization of an AH-Pole HR Transf0r Function


W(· now -:;how chat the tram.. fer function w1(z)/ X j(z) of the cascaded lattice &rructure of Figure 6.3J(c} i:<
an all-pole tram:fer func:ion with the ,;arne denominator a.-; the aU pass transfer function A:o.(z) ofEq. (6.58).
w~, figt obsen-e thar a typical lattice two-;mir here is deseril:k-d by the equations:

W;(l) = w,+:tz) -k,z-- 1S;(zL


S,+J {::.) = k,· W 1 (zl + z- l S;(z}.

Fmm Lh(' above we obtrin the chain matrix description of the two-pair as

(6.69)

Tb.1s, \Ve can express the chain matnx of !he cascaded lnttict> structure of Figure 6.31(b) a;;

[
X,(,)
Y; (Z)
J [i
- k3

1 + kz(kt + k3}z- 1 ...;... k;,k;z · 2 Jq::: .. 1 -;- k]_(! + ktklk- 2 + kJC 1 l


~[ k] + k:;{l + klk3):- 1 ~ k; z.- 2 ktkJz- 1 + k2(k1 + k:J)C 2 + c 3 J[
WJ(Z)
Stiz.} j"
(6.70)

From Eq. (f-:.70) we finaHy arrive at

X J\.::) = ( l + lk1 (I + k7) + k:k,J]:- .. J + :k: + k 1h(l + k1 )]z- 2 + hz- 3) W1 {::).

where \Ve have u;.ed the relation S1ld = w, (;:}.The traru;l'er function ~V'; (z)/ X 1i.Z) is rhus an ail-pole
tran;.fer functio11 with the same denominator polynomial 3;:. the third-order aHpass transfer function of
Eq. (6.58). i.e.,

(6.71)
l+fkdl+k2)+k2ko,lz 1 +ik2+k 1k:-O+h}1:: 2 +k3 z 3
J
(6.72)

v.hne Wt: have u,.cd k1 = d;'. k2 = d;, and k.>, = d-,, anti Eq>.. {6.60aj. (6.60b}, and (6.62}.
It foJlnws from the above Ji.,cussion that, in the general ~·a,e, the cascaded laUice stmclure of Figure
6.30(hJ realizes ao Mth-order all-pole transfer function with [he same denominaror m; A.\f(Z} if the output
h Utpped from the input of the rightmost delay. Su::h an all-pole HR str:.tcture results from modeling an
autoregn:s.-..;we proce>;s a~ described in Sect:on 1 i .4. The multiplier coefficients {k, j of the -.:ascaded hutice
stmcture arc ca:led reflection coefficients. .
b.R llR Tapped Cascaded Lattice Structures 391

Figure 6.38: The Gray·Markd :.tro<.:ture fcof"'" third-order tmm;fer funcuon.

6_8_2 Gray-Markel Method


The method of realiLing .an IJR lran::;.fer function H(z) = PM(Z)/DM(Z) consi~ts of two steps. In the
first ~tep. an intermediate allpa.'>s tmnsfer function AM(t} = z.-M DM(C 1)/ DM(Z} = DM(Z)/ DM(":.) is
realized in the form of a cascaded lattice structure. 6 A :.;et of independent variables of this struc-ture are
theil ;;ummed in the second step, with appropriate we"1ghts to yield the desired numeratO> PM(;).
To illustr.a:e the method of renli7jng the numerator, w:; consider for simplicity the implementation of
a third-order liR tn:msfer functior.

PH+ PIZ-: + P2Z- 2 + P3Z-J


{6.73)
J -t-d;z 1.+d2 z 2 +dJ? 3

[n the iiTS.t ~tep. we fmm nn mtermedJ<lC~ ai:pass transfer function AJ(Z) = f 1 (z)/ Xt{<.) = i>:.\<.)/ D3(z.).
Realization of A.>((:) has been iJiti::,.trated in Example 6.8 resulting in the structure of Figure 6.31(c).
Our (•bjective is to sum the linearly independent signal variables Y1, S,, S2, and S3 with weights {u, l
as shown in figure td8 to arrive .at the desired numerator P3(z). To this end, we need to analyze the
digital iiller slructure of Figure- 6.3I(c) and determine the transfer functions 5 1 (z)/ X 1 (Z), S2{::)/ X: (.z.l,
and S::~(;:J/ X l(Z).
From Eq. (6. 71) we have
s, (z)
(6.74)
X 1 (z) = D 3\z.1 ·
Next, we observe from Figure h.33 that S:dz) = (d;' + z- 1JSt (z) and, hence, from the above we get

(6.75;

Finally, from F1gure 6.38 we have .\'J(Z) = d2 VV::(d + C 1S2{z) .and St (z) = W2(t.)- d~' c 1S1C::.j. From
these cquatiom, the reJatinn S2(.::) = (d;' + z- 1)S1 (z). and Eqs. (6.603), (6.60b), and !,6:62), we arrive at
c ' )
d.',,;::.= (d'2 T' d' ::: -' + :.'. -- 2 }~!
. .._. (.z l )!IC
< ld<mg
1

d~+d;z- 1 +z- 2
-
DJ(Zi

it .\hot~ld be noted !:hat the numentor of the transfer function S,: (";..)/ X 1(::}is precisely the numerator of the
all pas~ transfer funclion S; {<.) / W, ( z.).
392 Chapter 6: Digital Filter Structures

\Vc now form


Ya(z) }''!(ll S,_(z) S:(z) S1\:.:.!
(6.77)
--·-=at--- +etl--+~t•--+a,--,.
X 1 (;:) XJ(Z} X;(.7) -X,(:.:) XJ(Z.I
Substituting Eqs. {6.74) to tf:J.76) and the expression for }'I{;:)/ XJ(.::) = A-d~) as given by Eq. (6.58) in
Eq. (5. 77}, we arrive at

U') (d .. + d 1:::. ' + d 1-' -2 + z - -1)


Y.,(z) .
-,a;; (d'~ + a•'--1
1
_ '- ~-2,+ a3 (d t '+--·,-~
..,.__ . _..
_,_ ~-
{6.78)
X; (z) D}{;)

Comparing the numerators of the right-h:.md :-.ides ofEq">.l6.?3) and \fi.7X). and by equilting the cnefficie-nh
of like power:. of z- 1 • we thus ubt<Jin

a1d; +a2d; +a3dj' -<.t'4 =pn.


a1d2 +a:::d; +ay. = PI•
O'tdt +a::= P2•
O'j = f13·

Solving the abuve we get

a;= PJ·
<l':! = P2- O'J.:!t,

a:;= Pl - O'td::- a::d;.


a'"= pv- a1d-,- rx2d; - iXJd~'- (6.79)

which is in a form .;uitable for slcp-by-;,k'P eakulation of the feedfOnNard tap coefficients u;.

- EXAMPJ.E 6.10 Let us. realize the transfer function- of Eq. {6.32) in Example 6.3, repeated below for 'l-'Qnve-

f'3(z.) o.44c 1 +0.362z-2 +O.m:--3


H(;.:l= - - ~ , -f6JID)
· OJ<=> t + o.4.c I + o.I8z 1 -n.z.-::----3
- using the Gray-Markel method.
We-form the intermediate aUpa.'istransfer ~nction AJ{Z} having !he SlUlle denominator as H (.':)in Eq. {6.80). Its
--reall:zatlon has been .;:arried out in E!l:ID!ple 6.8 and is shWn-in Figure 6..3 t{c;), where d) = -0.2, (fl = 0.2708333.
trod dj' = 0.3513771.
To-~ the numerator. we compute fue tap coefficient;; ja-;} using Eq. '(6-. 79). ~bstiwting the vahi{:s of
~--di = o-.4. d2 =· o.l&. dj ;, -0.2: iiJ = 0~4541667. d'{ = 03573771. di ;,.
-6.27tJs,tn. and the valUe~> or the
'~~clerit$. -P{t_ ;,_, 0. Pt = 0.44, P!." ~ 0.36, -and P_'l- = 0.02 in Eq; (6. 79). we·smve at

a1 = 0:02. az = 0.352, a3 = -0.2765313. · «4 = -0.!90t6_-

--1'he fi~ :realization-is lhus as indi~ in FigtJ.re 6.38 .,..·ttb· ihe multiplier values as given above.

6.8.3 Rea!!2:at!on Using MATLio,B


Both the pole-zero and the all-pole IIR cascadcJ lattin! .strm:tureh. can be developcJ from their &pecified
kansfer functions using the M-file t f 21 atx in the Siwwl Pmn:sJin;:: Toolbox. Variom; forms of this
6.8. I lA Tapped Cascaded Lattice Structures 393

function relevant to IIR transfer function~ are

~k,alpha] = tf2la:..c{n'..lm,denl
k = tf2latci1 ,dec)
ik,v] = tf2latc\1 ,den)

where k -is ~he laHiL'<! pararlkter vector, alpha 1s the feedforward coefficient veetor, and the vectors
rn..m, and der1 contain, respeclrvely, rhe coefticients of the numerator and the denominator polynomial;; in
aM:endlng po.,.,crs of :::- 1 •
The function latc?.t l implements the reverse prn;.."ess and can be used to verify the realization .
Various form;,. of this function are

[nu.m,denj ~al.c7LftY. ,alphaJ


Lnurn,denJ .:..atc2t.. .: (:..;:, ' i i r ' l

We illustrate the use of the above two functions in the next several examples.

EXAMPLE 6.11_ Using .MARAB we determine the bntice and1he fcxd~ard Parainetets
Q( -t}Jt- Gmy~Markd
struciureforlhe~tni.tsferfunctiOnofEq-.·{6.80-). Tc-thisend,.Pi~n~m:tt_,\{~aii·lJet·~~:: .=.: ·
·. . . . ~-<'

% Program &_4
% Gr~Y-~rkel Cascaded La~tice Structur~
:% be.Y:elQP!l'let}t
% oo·n is ·t~ denominator cOeffi:oient:. ve-Ptoi··"'
% num is· the numerator ~f£icient vect"Ql:- ·
·%- k is the 1~':tice pa:rame'ter ve.Ctox .,_ ...
%· alpha is t.he· vector ·of· feedfo'iward m0.1.tiplo;t€-i'!i' .
%
for.nat -long
% Read in Lhe trans(er func~ion coefficient:.s
n'-lm .cnput( 'Num«>ratcr cc<O<f£icient. vector ~ •);
den input{'Denominator coefficient vector; ·:;
num num!den(l);
den den/den{l);
[k,alpha! ,_ tf2lat.c{num,den.l;
disp( 'Lattice parameters are' l ;dispik');
di:'>p\ 'Feed forward mul tiplie.r.-s are' 1; d:.sp { flipl r.(alpha'}};

The :input data called by the .above program are

nuro iO C.44 0.36 0 .. 02]


den ll 0.4 O .. .l8 -0.2)
which results in the following output data~ by the program: ..

Lat.t..:ce par,;nneters are


o . J57377C49180.33 o.27aB33333333.J.1 ...-<>-2

Fel'>::Horward multipliers are


0.02 0.352 0 .. 27653333]33333 -0.19~1€

which ~r-e seen lo he identical to those derived in Example 6..7 .


394 Chapter 6: Digital Fitter Structures

Fig.Uft ~= The FIR cascaded lattice structure.

Program 6_4 can also be employed to develop the cascaded lattice realization of an all-pole IIR transfer
function as illustrated in the following example.

Program 6_5, given below, can be used to verify the cascaded lattice strucrure developed using Program
6_4.
% Program 6_5
% Transfer Functio:c:. of Gray~Markel Cascaded
% Lattice Structure from the Lattice and
% Feedforward Parameters
% k is the lattice parameter vector
% alpha is the vector of feedforwarQ ~ultipliers
% den is the denominator coefficient vector
%: num is the numera::or coefficient vector
'format long
% Read in t~e lattice and fee~forward parameters
kl = input('Lattice parameter vector = ')j
alphal = inp:1t ( 'Feedforv~ard parameter vector = ');
[nurn,den: = latc2tf(Kl,alphal);
di..sp ('Numerator cnefficients are') ;disp (num)
disp('~enornir.ator coefficients are') ~disp(den)

The following example illustrates the application of Program 6_5_


6.9. FlR Cascaded Lattice Structures 395

6.9 FIR Cascaded Lattice Structures


There are two types of cascaded lattice structures for the realization of FIR transfer functions.. In this
section. we describe method." fnr their realizations.

6.9.1 Realization of Arbitrary FIR Transfer Functions


The cascaded Jattice structure that can be used to realize .an arbitrary FIR transfer fi;n.ction is shown .in
Figure 6.39 [Mak.75]. The realization algorithm is developed first followed. by its MATLAB implementation.

Realization Method
For the cascaded lattice realization, the J\"th-order FIR transfer function is assumed to be of the form
N
HN(Z) =I+ _Lpnz-". (6.81)
n=l

An arbitrary FIR transfer function H(z) of the fonn of Eq. {6.9) can be expressed in the form H(z) =
h[O]H,v(z), where p,. = h[n]/h[O], 0 .:::: n .:::: N. Hence, by placing a multiplier h[OJ at the input of the
structure realizing HN(Z) we obtain a realization cf H(z}.
The normalized transfer function H,v(z) is realized in the form of Figure 6.39. To develop the real-
ization algorithm, we first analyze lt to detennine the relations between the input X(z) = Xo(.;::) and the
intermediate variables Xi (z) and Y,(z), i.e., between the series of transfer functions Ht(Z} =X, (z)/ Xo(z)
and Gi (z) = Y; (z)/ Xo(z). 1 :S: i :S: N. Note that XN(Z) and YN(Z) are the output vari<ibles.
From Figure 6.39 we obtain
Xt(z) = Xo(z) + ktz- 1Xo(z), (6.82a)
Y1 (z) = ktXdz) + z-i Xo(z). (6.82b)
The corresponding transfer functions are given by
Xt(Z) -l
Ht(z) = Xo(z) = 1 +k1z , (6.83a)

Yt (z) 1
Gt(z)= Xo(z) =kt+z-, (6.83b)
3fl6 Chapter 6: Digital Fitter Structures

which are seen to be first-order FlR transfer functions. Moreover, it can be seen from Eqs. (6.83a) and
(6 83b) that
i[J(z.) = z- 1 H; (z"" 1) = z-• + k1 = Gd;:},
indicating that the FIR transfer function GJ{Z) of Eq. (6.83b) is the mirror image of the FJR transfer
function H!{Z) ofEq. (6.83a).
Next. we observe from Figure 6.39 that

X 2(2) = X 1(z l + kg-l Yt (z), (6.84a)


Y2(2) = k2X1(z) + z-l Y!(Z). (6.84b)

The corresponding transfer functions are given by

(6.85a)

{6.85b)

Substituti.ng Eqs. (6.83a) and (6.83b) in the above we observe that H1(zj and G2(z) are second-Drder FIR
transferfunctions. Moreover,sinceG!(Z) = if,(z),itfollowsfromtheaboveL'mtG2(Z) = z- 2 H2(z- 1 ) =
ih{z:•. implying that Gl(Z) is the mirror image of H2(z).
Now, the input-output relation of the ith lattice section is given by

X,{z) = + k,z-· 1 Yi- 1(z),


X,-1 (z) (6.86a)
Y;(z) = k,X;-!(Z) + z.- 1 Y;-t(z}. (6.86b)

The corresponding transfer functions, H,(z) = X; (z)i Xo{z) and G;(z) = Y;(z)/ Xo(z), are related through

H,-(z) = H,_ t(z) + k,::.·- 1G;- !(Z). {6.87al


G,-(z) = k,H,-_ 1(z;) + .:--lGi-t(z}, {6.87b)

where H,_l (z) = X1-1 (z)/ Xo(.:) and Gi-1 (z) = Yi-1 (z.l/ Xo(z)_ It should be noted that the relations nf
Eq:;.. (6.87a) and (6.87bj hold for all I ::: i ~ N.
If we assume that Hi- 1(:) and Gi-l (z) are (i - l)th-order F[R transfer functions, then it follows
from Eqs. (6.87a) and (6.87b) thar H;(z) and G,{z:) are ith-order FIR transfer functions. Moroover, if
Gi--l(<.} = Hi-l(Z). then it als.o follows from Eqs. (6.87a) and (6.87b) that G;(Z) = if;(z}. We have
alwady shown that these two observations hold for i = 1 <lnd i = 2 Therefore, by induction, they also
hold for all I ::: i ~ N.
To develop the realization .algorithm, the above process. is reversed. That is, given the expressions for
Hti.Z) and iii(Z). we find the expressions for H;-r{z) and H, _J(Z) fori= N, N- 1, ... , 2, I. We now
develop a recursion relation to compute these intermediate transfer functions.
Solvi.ng Eqs. (6.&7a) and (6.87b) fori = N, we obtain

1
H_"'-l(z.)= - Iz- 1HN(:::.l-kNz- 1iiN(z)}. (6.88)
(1 - kfv)z 1 l
iJ.v-!("z)= (l-k\}:::. l {-k,~vHNI,::)+JfN(Z)}. (6.89)
6.9. F~R Cascaded Lattice Structures 397

It can be easily verified that HN_J(z} of Eq. (6.89) is indeed the mirror image of HN-I {z) of Eq. (6.88).
Substituting the expression for HN(Z) of Eq. (6.81) in Eq. (6.88) we get

HN-J(Z)= (1-lk:,) {0-kNps)+(pt-kNPN-l)Z-l+

+ (PN-1 - kN Pl k-(!>i-l 1 + {PN - kN )z-N I· (6.90)

If we choose kN = p_.v. HN-1 (;::) reduces to an FIR transfer function of degree N - l and can be written
in the form
N-1
HN-t(z) = 1 + L, p~z-". (6.91)
n=l
where
, p., -kNPN-"
p,. = (6.92)
1- k1
Continuing the above recursion a1goriilim, all the multiplier coefficients of the structure of Figure 6.39
can be computed. The following example illustrates the procedure.
398 Chapter 6: Digital Filter Structures

Figure 0.40: Cascaded lattice realization of the FIR transfer function of Eq. (6.93). t, = 0.5, k:; = 1.0, k3
0.2173913, andk.j =
-0.0&.

Realization UsJng NATLAB


The function tf2la tc in MATLAB can again be employed to compute the multiplier coefficients of the
cascade lattice structure of Figure 6.39. To this end, we make use of Program 6_6 given below.
% Program 6_6
% FIR Cascaded La~tice Realization
'format long
num =- input {'Transfer fu!;.ction coe:':ficients "" ');
k "' tf2latc {nmr.) ;
disp('Lattice coefficients are'); disp{fliplrfk)');
The input data called by the program i:s the vector num of transfer function coefficients entered in ascending
powers ofz- 1•
We :illustrate its application in the following example.

The coCfficients of the FIR cascaded lattice can also be computed using the M-fi)e poly2rc. To this
end, the statement to use is
k ~ poly2rc(num};
The F1R cascaded lattice structure developed using Program 6_6 can also be verified using the function
latc2tf. To this end, the pertinent forms of this function are
num latc2tf(k,'fir')
num = latc2tf(k)
6.9. FiR Cascaded Lattice Structures 399

Figure 6.41: Power-symmetric FIR caM:aded lattice structure.

where r:um i:; the vector of transfer function coefficients in ascending power;; of z- 1 and k is the vectur of
the FIR lattice coefficients.

6.9.2 Power-Symmetric FIR Cascaded Lattice Structure


Another type of a cascaded lattice structure for the realization of a real-<X)effident FIR transfer function
HN(Z) of order N is shown in Figure 6.41 fVai86b]. However, for realizability, the transfer function must
satisfy the power-symmetric wndition given by Eq. (4. J46) and repeated below for convenience:

(6.94)
where Kv is a constant. We first analyze the structure of Figure 6.41 and then develop the synthesi<>
procedure.
We define H;(z) = X;(z)/ Xu(z) and G,(z) = Y, (z)/ Xo(z). From Figure 6.41 we observe that
Xt(z) = Xo(z) + k1z- 1Xo(z),
Yt(Z) = -kJXo(z) + .::·-l Xo(z). (6.95)

Titerefore.
HJ(Z)= I +k1 ::- 1 ,
Gt(z)= -kt +z-'. (6.96)
It can be easily verified that
(6.97)
Next, from Figure 6Al, i.t follows that the transfer functions Hi(Z) and Gt(Z) can be expressed -in terms of
H; -2{Z) and G; -2{Z) as
H,(z) = Hi-z(z) + k,-z- 2G;-z(z).
G;(Z) = -k;Hi-l(Z) + z- Gi-l(Z)-
2
(6.98)
(t can be easily shown that
(6.99)
prL)vided G;-z(z) = z-U-l) H;-2( -z- 1).
However, as can be seen from Eq_, (6.97), Eq. (6.99) holds for
i = l. Hence, the relation of Eq. (6.99) holds for all odd integer values of i. This result a1so implies that
N mast be an odd integer.
Il is a simple exercise to show that both H;(;:) and G;(z} satisfy the power-symmetry condition of
Eq. (6.94). In addition, H, (z) and G;(z) are power-complementary, i.e.,

+ ~G;(ejw)r
2
IH,(e 1 w)j = c,_ (6.100)

where c,, is a constant.


400 Chapter 6: Digital Filter Structures

To dev-elop the synthesis equation, we invert Eq. (6.98) to arrive at

(1 +kJlH;-2(Z.) =Hi(?)- k;G;(z),


2
{1 + k!)z- G;-z(z.} = k;H,(z) + Gt(z). (6.I0l)

Note that at the i th step, the multiplier coefficier:t k, is chosen to eliminate the coefficient of z-•, the
highest power of z- 1• m H;(::) - k; G; (:.). For this choke of k; the coefficient of z-i+ 1 also vanishes
making H,--2(2) a polynomial of degree i - 2.
We start the process by setting i = N, and compute Gx(z} using Eq. {6.99). Next. we detennine the
transfer fWlctions HN-2(<J and GN-2(Z) using the above recurrence relations. This process is repeated
until all coefficients of the lattit:e have been determined.
We illustrate the above synthesis melhod in the following example.

II&
%0
_, ,,
-
• tAR""* +·

=--tt'l-
' ~·
T•

t
*"+• c.. fltl'N - i

im tAm
-
'ft!iJ;;?tt!j)R
d
6.10. ParaHel Allpass Realization ot ltR Transfer Functions 401

l
---{'

Fi~un; 6.42: RealiL:nion of J J.,ubly-cmnplemen!JJ;-" tram.I"Cr function pair using a pan1llEl all.pa~ slructure.

6.10 Parallel AHpass Realization of IIR Transfer Functions


rn &ctioH 6.6, we h11ve d~monstrated that<..~ pair of doubly-complementary first-order lowpass and highpass
tnm.;fer functions can be realized as shown :n Fig..m~ 6.32. L1kewise, we have shown thal a pair of doubJy ..
c-nmpiernenwry :;ccond ..orJer bandpas:'i anU bant.btop transfer function,<; can be realized, as iadicatcd in
f<igurc 6.35. These two structures. can he considered as 8pecial ca.:;es of a structure composed of a parallel
connection of two stable allpass litters of the form of Fi.gure 6.42, where one uf the aBpass sections is a
zeroth-order transfer function. We c.msider in this section the reaHzation of an Nth-order tr.rnsfer function
G(z) in !he form of Flgur~ 6.42. ulung ·with its power-complementary transfer function If (z) [Vai86al. As
we shall point c>ut later, >:uch ::;tructures have a numtxr of very attractive properties from an impiementation
p.lint of view.
From Figure 6.42 we nb;.crve that

G(::) = i {Ao(::) +A 1{z)J, {6.l02aJ


H(::.l = { {A.n(d- A tiz)}. (6.!02b)

.It is a simple exercise to show that a nece:.sary condition for G(z) and H(z) to have the sum of allpass
decomposilions of the form of Eqs. (6.102a) and (6.ID2b) is that each be a bounded real (BR) IJR transfer
function (Problem 4.94)J Th(• BR condition can be ea:-:;il:t satisfied by any -.table transfer function by a
simple scaling. Let
. , P(zl '
po+p1::. '+·-·+p.vz- N
G \:""I=--- (6.103)
D{zj 1 7dlZ. l +·-:-:-.-+dN!. l'-.'

be an Nth-order BR transfer f~_;nction with irs power-complementary transfer function given by

HI-)= Q(.zl = qo+q;~-t+· ·+qNZ-N


(6.104)
~ D(z) J +d1z· 1 +· ·+dN::: N-
The power-com?lement;;ry property implies tha!

(6.105)

From Eq, (6. 102a) il follo'.':S that G(z) must have a symmetric m.unerator, i.e..

{6.I06)

and from Eq. (6.!02b} it fdlow,; :hat Ht::) must have an anti->ymmetric numerator. l.e ..

---- ---· ··--


1 See St>niun 4.4.5 fcor the dt>fimtwa ol c. OO:.~ruled rt>:>l transfer fundion.
402 Chapter 6: Digtta\ Fi\ter Structures

We develop next the procedure to .idemify the two allpass transfer functions fmm a specified tran'>fer
func-tion G(;::). Kmv. the symmetric property of the numerator of G(z) as given by Eq. (6. W6) impliefi that
(6.108)

Uke\vise, U:e antisymmetric property of the numerator of H(z) as given by Eq. (6.l07) implies that

(6.109)

By amolytic continuation, we can rewrite Eq. (td05) as

(6.110)

Substituting G(z) = P(z)fD(:.) .and H(:d = Q(z)/D(z) in the above equation, and making use of
Eqs. {6.108) and (6.109). we arrive at

{6.111)

From the relations of Eqs. {6.Hl8} and (6.109), we czn write

P(z) - Q(z) = z-Nr P(z-- 1 ) + Q{z:- 1 H- (6.112)

lfwe denore the zerO':i of [P(z) + Q(z)J as z = 1;1;, 1 :S k .::::: N, then it fol!ows from Eq. (6.1ll) that
z = l f·h. 1 :s_ k :S N, are the zeros off P(z)- Qtz) J, Le., [P(z)- Q{z)jls 1he mirror-image polynomial
of [P\z) + Q(::)J_ From Eq, (6.1 12) it a!so foUows that the zeros of rP(z) + Q(z)] inside the unit cirde
arc uros of D(::;_!. wherea-s- the zeros oi I P{z) + Q(z)J muside the unit circle are zeros of D(z-• ), since
G(z) .and H (<J have been a-;sumed to be stable transfer functions. Let z = ~I« l ::'5 k _s r, be the r zeros
of[P(z) + Qf.::}1 insidethemlitcircle. and the remaining N - r zeros, z = l;~:.,r + 1 _s k :S N, be outside
the unit <:irde. Hence, it can be seen from Eq. {6.112) that theN zeros of D(z) are given by

I::;k::;;;r,
r+l_sk:::;N.

Arl we need tu do now is to identify the above zeros with the appropriate allpass transfer functions
.4.o(z) l'.fld A: (::l. To this enct we obtain from Eqs. (6.102a) and {6.J02b)

(6. l l3aJ

(6.113b)

Thcrdore, the IWO allpa~s transfer functions can be expressed as

{6.114-a)

(6.ll4b)
6.1 0. Parallel Alfpass Realization of IJA Transfer Functions 403

Jn order to arrive at the above expressions for the two .allpass transfer functions, it is ne...--essary lD
determine the transfer function H(z) that is po'>':er-complementary to G(z). Denoting the 2Nth--degree
polynomial P 2 (z)- z-N D(Ct}D(z) as U(z),
:N
P\z)- z-N D(z-
1 )D(z) = U(z) = Lunz-". (6.115)
!1=0

we can rewrite Eq. (6.111) as


2N
2
Q (z) =L UnZ -n. (6.116}
.~

Solving the above equation for the coefEcients qk of Q(z) we finally arrive at

qo =Fo. q1 = -,-- "' (6.117aJ


-qo
"t-1
Ut- L.i-1 qeqn-t
qk = -qN-k = 2qo • k ::: 2, (6.1 17b)

where we have used the antisymmetric property of the coefficients. After Q(z) has been determined, we
fonn the polynomial [P{z) + Q(z)], find its zeros z = ~band then determine the two allpass transfer
functions using Eqs. (6.114a} and (6.114b).
It can be shown that IIR digital transfer functions derived from the ana1og Butterworth, Chebyshev,
and eJiiptic filters 8 via the bili~ transformation approach discussed in Section 7.2 can be decomposed
in the form of Eqs. (6.1 02a} and {6.1 02b) [Vai86a]. Moreover, fO£ lowpass-highpass filter pairs, the order
N of the transfer function must be odd with the orders of Ao(z) and A1 (z) differing by L Likewise. for
bandpass-bandstop filter pairs, the order N of the transfer function must be even with the orders of Ao(z)
ami A1(z) differing by 2 [Vai86a).
In the e&e of odd-order digital Butterworth, Chebyshev, and elliptic lowpass or highpas.s transfer
functions, there is a simple approach to identify the poles of the allpass transfer functions Ao(z) and A1 (.z)
from the pole!i J.._4:. 0 s k S N - 1. of the parent lowpass transfer function G(z) or H{z). Let fJK. denote
the angle of the pole A_t. If we assume that the poles are numbered such that 8k < Bk+!, then the poles of
Ao(z) are given by A2.t and the poles of A 1 (z) are given by A2Hl [Gas85]. Figure6.43 illustrates this pole
interlacing property of the two a11pass. transfer functions.

Sut/'IC ~ • !m ~ 'H Y"' d 'ju:>" "'i""r l!tfit ~>'it !L\tlhlH1 , '""CUY


""T'L·• 1i"¥\'YA11 '"11'" "' "'! 1ttJ i\:U::/1 dY-«H¢ +dfif '\il+ (L\11 hrm diF"r h 4-!f hY'"'H" "1 fd t
404 Chapter 6: Digital Filter Structures

lmz

/
---4-~-------o~·~-----h-t--~,
.
/.

-1 0 ~ ...

'. 1
' ''
'

Figuft 6..43: Pole interlacing property in the case of a sevenlh-order digital Buttetvlorth iowpa'is filter. The poles
marked "+~bel eng to the all pass function Ao(z) and the poles marked·· x" belong to the aUpass function A 1 (z:).

Tiw parallel allpass structure has two other very attractive properties. As indicated in Section 6.5. an
Mth-order allpass transfer function can be realized using onfy M mu1tipliers. Therefore, the realization
of an Nth-order UR transfer function G{z) based on the allpass decomposition of Eq. (6.102a) requires
only r + (N- r) = N multipliers. On the other hand, in general, a direct form realization of an Nth-
order llR transfer function uses 2N ...;... 1 multipliers. Moreover, with an additional adder and no additional
mu1tlp\iers, one can easily implement its pm\fer-compJementary transfer function H (z), also of Nth-order,
-6. ~;. D~ital Sine-Cosine Generator 405

'·-

VAn liM"
Norrn<liled f<t<j<l<fiq

l''iguw6..44: Magnilude :esponses of a pairot prr#er-compkmentary seventh-order Bunerworth low pass ar.d hlghpa~<;
tnms!er functions with a 3-dB cutoff frequency at 0.3lf.

as indicated in Figure 6.42. As a re<>ult. the parallel.alipass realization of an llR transfer function, if it exlsls,
i1> -compatationally very efficient. We ,;haH show later in Section 9.9.2, that the parallel aUpass re.alizati<-m
also ha,;; very low pas-sband sensitivity with respeci to mu\tiplier coefficients if the ailpas.s sections are
realized m a ;rtructumlly loss less form.

6,11 Digital Sine-Cosine Generator


\V~ ll0W cons.icier the realization of digital sinusoidal oscillators. In particular, we consider here ~he design
of digltal sine-cosine generators that produce two sinusoidal sequences that are exactly 90 degrees out of
phase with each other [Mit75]. These ci.rcuits have a number of applications, such as the computation
of the discrete Fourier transfonn and certain digital commani.c<ltion !i.ystems. In some applications, it is.
more convenient to use numerical algorithms for the computation of the sine or cosine functions. The>;e
an: discussL-d in St:ction fUL 1.
Lt:t SJ fnJ m1t.l s2[n j denote the two ou;puts of a digital sine-cosine generatJr given by

:q[n] =a sin(ntl), (6.12la)


><[»j =fJcos(nt!_), (6.12lb)

From the above we arrive al

sd1; +II =t:¥Sin((n + t)8)


= 0! sm(n&l cos8 + ct cos(nfl) sin e. (6.l22a)
s2[n. ..._ lj = tkos((n --r 1)0)
= fi cos( nO) cos e - fi sin(n6') sine. (6.J22b)

Mak.in~ u~e ofEqs. (6.l2la) and {6.121b), we can rewrite F4s. (6.122a) and (6.122b) in a matrix form as

s 1 ]n+l]] [ C0>0
(6.123)
[ szin+il = _flsin!J
"
To generate s1 tn i from s1 \n + 11 we need a delay, a'ld similarly, to generate s2ln] from s2 fn + lj we need
n delay. Thus, mch a ~>tr.>eture mu:-;t have at lea:.l two delay.\.
406 Chapter 6: Digital Filter Structures

Figure 6.45: A general second-order digital filter structure with no input node.

In O£der to arrive at a realization of the sine-cosine generator, we need to compare Eq. (6.123) with
the equivalent expression of a generd.l second-order structure with no delay-free loops. Such a structure is
characterized by the fol1uwing equation..o;

[,[n ++I]]~ [0 A] [''[n ++11] ~ [C D] [s1[n]]


sz[n 11 0 0 szln 1) E F S2[n]
{6.124)

and can be implemented using five multipliers, as indicated in Figure 6.45.


From Figure 6.45 we readily arrive at the time-domain description of this structure:

AF
F
+ D] [''[n]].fz[nl -
(6.125}

Comparing Eqs. (6.123) and (6.125}, we get

E =_!!_sine. F=cosO, (6.126a}

A.E +C =cosO, (6.I26b)

From the aNn:e two equations, we obtain after some algel»a

CcosB+Df!.sinH = 1. (6.126c)
a
Equations {6.126a) and (6.126c) ensure that the structure of Figure 6.45 will be a sine-cosine generator.
Expressing the muJti:plier constants A and D as a functiun of the multiplier constant C. we obtain after
some simple manipulations, the five-multipJier characterization of the s.inc-cosine generator as

r
+L_tsinfl
c <1(\ -C·CO&f{)
f.lsi"iJ
cos.B
l f
Ls1[nj
]
sz[n1 ·
(6.117)
a'

To reduce the total number of multipliers to 4, we can choose the folJowing specific values for the multJplicr
constant C: cos.B, 0, + l, and - L For example, if C = cos G, then Eq. (6_127} reduces to

s;Ln+l]l=f cose< ~sinB][srfn]] (6.128)


[ sz[n+l]J l-£sin8 cosB -'2[n1 '
a
6.11 . Dlgi1al Sine-Cosine Generator 407

-I

Figure 6.46: A single multiplier sine-cosine generator.

~lfn]

5
I} !0 20 30 4C so
Time i r!dex n Time inde,., n
(a) (b)

Figure 6...47: S1ne-cosine sequence-; generated by th~ structure of Figure &.46- for cost; = 0-9.

whtch for f3 = ~. leads to rhe foUI-multiplier sine-cosine generator described by Gold and Rader
[Go!69b]. On the other hand, if we seta tinB = ±j3, thenEq. {6.128} leads to a three--multiplierreaJization
(Problem 6.54). Another three-multiplier sine-cosine generator is obtained if we set a = ±p sin (J in
Eq. (6.128) (Problem 6.55).
A single-multiplier structure can be derived by setting fi sin (J fa = 1 - cos{) or, equivalently, f3 =
a tan(O /2), in Eq. (6.128) resulting in

s;ln+ IJl _ [
[ :dr. + l ] j -
cO>& co~ A+ 1 J [sJfn]J {6.129)
cos8- l cos£!: s:![n} •

leading to the realization of Figure 6.46. Anothe:- single-multiplier realization can be derived by setting
-fJ sin B ,fa = (1 + cosil) or. equivalently, a = fi tan{&/2}, in Eq. (6.128) (Problem 6.56). Various other
single-multiplier sine-cosine generators can be developed for C = 0 and C =±I and are left as exercise~
(Problems 6.57 and 6.58).
It should be noted that to start the generation of the sinusoidal sequences at lea<1t one of 1he signal
variables j"l [nl and .l"2(n] should have initially a nonzero value. Moreo'i:er, the actual amplitudes and the
relative phases of the cosinuwidal and the sinwmidal seqtJence.-. generated by the sine-cosine generator
rlepe'ld on the initial values chosen for the signal variables :>Jini and_f2[n].
Figure 6.47 shov.·s the plots of the sequences J 1[n] and -'2ln] generated hy simulating Eq. (6.129) in
MATLAB for rostl = 0.9. As can he seen fmm this figure, st(n] and s2/._nj are, respectively, cosinusoidal
and ;;_inusoidal sequences. 1'\ote that the maximum value of the amplitudes of the two sequences can be
made equal by scaling one of the sequences appropriately.
The single-multiplter structure;; retaln their characteristic roots on the unit circle under finite wordlength
constrain!:<;. On the other hand, in other realizations of the sine-cosine generators, roots may go inside
408 Chapter 6: Digital Filter Structures

'Th.ble 6.1: Computational :::omplel!.ity comparison of vario\13 realizations of an FIR filter of order N,

Structure No. of multipliers I No. of two-input adders

-----
Direct form N +I N
Cascade form N+l N
Polyphase N+l N
Cascaded l.attice 2(N + 1i 2N 1
II
T

Linear phase
1
l N;2J N

or outside the unit circle due to the quantization of the multiplier coefficients, causing !he oscillations to
decay or tn build up as n increases. In addition. due to product round-off errors, the sequences generated by
th<": sine-cosine generntor may not retain their sinusoidal behaviors, even in the case of a single-multiplier
generator. It is therefore advisable to reset the variables St[nJ and sz[.n] after some iterations at prescribed
time in.,;tants. so that the accumulated errors do nolbecome unacceptable.

6.12 Computational Complexity of Digital Filter Structures


The computational complexity of a digital fitter structure is given by the total number of multipliers and
the total number of two-input adders required for its implementation which roughly provides an indkation
of its cost of implementation. We summarize this measure here for various realizations discussed in this
chapter. Jt should be emphasized, though, that the computational complexity measure is not the only
.criterion that should be used in selecting a particular strncnu-e for implementation of a given transfer
function. 1be perfonnances of all equivalent realiz:ttions under finite wordlength constraints should also
be COllsidered together with the costs of implementation in selecting a structure.
Tables 6.1 and 6.2 show the computatirma! complexity measures of all realizations discuss.ed in this
chapter. As can be seen from these tables, in general. the direct form realizations for the case of both lhe FIR
and the JIR transfer functions require the least number of multipliers and two-inpm adders. However. in
the case of the llR traJJsfer function, the parallel aHpass realizaticm is the most efficiem if such a realization
exists. Likewise, it is possible to realiu certain special types of FIR transfer functions with a lot fewer
mult:iplieN and adders than that Indicated in Table 6.1.

6.13 Summary
This: chapter considered the realization of a causal digital tcansfer function. Such realizations. called
stmctures:. are mruaUy represented in block diagram fonns thar are formed by interconnecting adder~
multipliers. and unit delays. A digital filter structure represented in block diagram form can be aLaly7.ed
(()develop its input-outpct relationship either in the time-domain or in the transfonn-domain. Often. for
analysis purposes, it is coovenient to represent the digital filte! structure as a signal flow-graph..
Seve:ral basic FIR and liR digital filter structure.s are then reviewed. These structures, in most cases.
can be developed from tile transfer function description of the digital filter essentially by inspection.
6.14. Problems 409

Table ().2: Computational complex.ity comparison of various realizations of an HR. filter of order N.

Strucmre No. of multipliers No. of two-inpul adders

Direct form
2N +I 2N
II and II,
Cascade form 2N+l 2N
Pa.t"allei form 2N+l 2N
Gray~Markei 3N+l 3N
2N+I 4N
Parallel aUpass N 1 5(N + 1)

The digital aUpass filter is a versatile building btock and has a numbei of attractive digital s.ignal
processing appli<:ations. Since the numerator and the denom:nator polynomials of a digital transfer function
exhibit mirror-image symmetry, an Mth·order digital allp<'..''s filter can be realized with only M distinct
multipliers. 1Wo different approaches to the minimum-multiplier realization of a digital allpass transfe:
function are described. One approach is based on a realization in the form of a cascade of first- and second-
order allpass filters. The second approach results in a cascaded Iauice rlo"alizalion. The final realizations
in both cases remain allpass independent of the actual values of the multiplier coefficients and are thus
less sensitive to multiplier coefficient quantization. An elegant application of the first~ and second~orde
minimum multiplier allpass structures, consldered here, is in the implementation of some simple transfer
funclions with parametrically tunable properties.
The Gray-MarUl method to realize any arbitrary transfer function using the cascaded lattice form of
all pass structure is outlined. The realization of a large class of arbitra.-y Nth-order transfer functions using
a parallel connection of two allpa."<s filters is described. The final structure is shown to require only N
multipliers. In Section 9.9.2 we demonstrate the low passband sensitivity propeny of these structures to
small changes in the multiplier coefficients.
The cascaded lattice realization of an FIR transfer function is considered. The realization of a digital
sine-eosine generator is then described and varicus osciH<Uor structures are systematically developed.
Tile chapter concludes with a comparison of the computational complexities of FIR and IIR digital filter
structures.
'The digUal filter realizatiQfl methods outlined in this chapter assume the existence of the corresponding
transfer function. The following chapter considers the development of such transfer functions meeting the
given frequency response specifications. The analysis of the finite wordlength effects on the performance
of the digital filter structures is treated in Chapter 9.

6.14 Problems
6.1 Tt:edigitaJ filter s1rut::ture of Figure P6.l has a delay-free loop a'ld is therefore unreallzabJe. Detennine a realizable
equivalent structure with identical iflput-output relations and without any delay-free loop. (Hint: Express the output
._.ariables y(nl and u;(n] in term.<; of the inpUI variable" x[n] and «[n] only and develop the correre:sponding block
diagrnm :-epresentatinn from these input-output relations.)
410 Chapter 6: Digital Filter Structures

A
xfn] --;(+tl---i)----r~w(nl
D

.v[n] ~-L---:<
c
FigtueP6.1

6.2 Determine by inspe<:cion whether or not the digital filter structures in Flgure P6.2 have delay-free loops. Identify
these loops if they exist. Develop equivalent stroctures without delay-free loops.


X(.::) + + Y(z:)

(a) (b)

Figure P<J.l

6.3 An unsrahle discrete-time system G 1(z) can be made stable by placing it in the fQIWani path of a single-loop
feedback structure as indJcated in Figure 6.2 with a multiplier iu the feedback path with an appropriate value. Let
G1 (;::) =:-1-l and Gzlz) = K. Determine the range of value:> of K for which the feedback .struclure is stable.

6.4 Analyze the digital filter structure of Figlll'e P6.3 and dei:ermlne its transfer function H(z) = Y {z)/ X {z}.
(a) fs this a canonic structure?
{b} What slwuld be the v&lue of the multiplier-=oeffident K so that H(z) has a unity gain atw = 0?
(c) What £hould be the \>-alue of the multiplier coefficient K so that H(z) has a unity gain at w = .rr?
(d} Is there a difference belween these two values of K? If not. why not?

~+)--+Y(z)
6.14_ Pr-oblems 411

65 Analyze the digital filter strocture of Ftgure P6.4, where aU multiplier coefficients are real, and determine the
transfer function H (z) = Y(z)/ X(<:). \\'"hat are the range of values of the multi pEer coefficients for which the filter
is. ,;)lBO stable?

Figure P6.4

6.f~ By using 1he block diagram analysis approach. determine the transfer functi0u H(z) = Y(z)/ X (z) of the digital
filler structure of Figure P55.

.,
I +
~ ,-'
a%;\ tr
F"tgure P6.5

6.7 By using the block diagnun analysis approach, detennine the tran.<>fer function Hlz) = r (Z.'/ X{;:) of the digital
filter structure of Figure P6.6.

Figure P6.6
412 Chapter 6: Digital Filter Structures

6.8 By us:ing the block diagram anaiy:<.is approach, determine the transfer function H(z) = Y(:}/ X (z) of the digital
filter strn;ture of Figme P6.7.

Figun:! P6.7

6.'1 Determine the transfer function of the digital fitter stru<.:ture of Figure P6.8 !Kin72j.

y[n1

Figure P6.8

6.l0 Reali~e the FIR transfer function

H(z) = {1- 0.7;:- 1 )5 = 1- 3.5.:;- 1 + 4.9;,;- 2 - 3.43z- 3 + l.2005z-4 -{U6807z- 5


in the following funns:
(a) Two different direct fonns.
(b) Cascade of §ye fint-01"der ~c:tions,

(C} Cascade of one tint-order section and tv.-o second--onler sections. and
(d} Cascade of one second-order section and one third-order section.
O.:mpare !he computatiorul :::orn.plexity of each of the above reaJiz:at:ions.

6.tl (\w.sider J.!ength-8- FIR transfer function given by

H(zj = hlOJ + h{lJ;.:- 1 + h[2]z:- 2 _,... hl}Jz-~ + hl4J;;- 4


+h[5Jz- 5 + h[6].;:- 6 + 1:(7jz- 7 .
\a) Develop a tin-ee-br'anch polyphase realiVI.tion of H (z) in the fonn of Figure 6.8(b) and determine the expressions
for the polyphase transfer functions Eo(z), E 1{;;:), and £ 2 {zj.
6.14. Problems 413

(b} From thi;: realization develop a canonic three~brnnch polypha<;e realization.

6.12 {a) Develop a two-branch polyphase :realization of H{z) of Problem 6.11 ill the form of Figure 6.8(c} and
determine the eJ<:pressiutJs for the polyphase transfer function;; Eo(:) and E 1(<:).
{b] From ttm realization develop a canonic two-branch polyphase realization of H (z).

6.13 (a) Develop a four-branch polyphase realization of H(z) of Problem 6.11 in the fonn of Figure 6.8(a) and
detenn!IJe the e1<:pre5sions fer lhe polyphase tran:;fer functJon5 Eo(z). E l (:), E2 (:;::), and E3(z).
(h} From lhis realization de\•elop a canonic f1)Ur-branch polYJi:-.ase .realizalion of H (z).

6.14 Develop a minimum.-multipller reahzation of a lengt.":l-9 Type 3 FIR transfer function.

6.15 Develop a minimum. multiplier rea1iz-alion of a length-:0 1}-pe 4 F1R rnmsfer function.

6.16 The nested form -of the FIR transfer function ofEq. (6.9) is given by

H(z) = ho +b:::.-l l
( -+.b:!z- 1 (1 +b]t-l (I+--· +bN-JZ- 1(1 +bNZ-l) ·· ·))).
E).pre~'>
the coefficient.<; b; in terms of the meffi.cients h {k ], Develop the 11ested form realization of a 7th-order FIR
tnmsfer function H(z) based on the above expansion [Mah&2].

6.17 Let Ht"z) be a Type I linear-phase FIR filter of order N with G(z) denoting its delay-complemen!ary filter.
Develop a realizalion of both filters using only N delay~ and (N + 2)/2 multipliers.

6.18 Show that a Type 1 tinea.£-phase f1R transfer function H(z) oflengtb 2M+ 1 can be e.>~pressed as

HW ~ ,-M [h[M] +Eh[M- +,-•)]. nJ(," (6.130)

By U:.;-cllg the relation

.-t-2T(.::+.::-!)
zt +..._ - l z •
where Tt(x) is lhe lth.arder Chebyshev poly:nomial 10 in x, e:..press H{z) in the fonn

H(z.} = z-M .Eainl ( z. +lz-!) ". {6.131)

Determine the relation between a[nj and h{n!. Develop a realization of H(:;J based on Eq. (6.131) in the form of
Figure P6.9. where Ft (z.- 1) and F].(Z -l ) are causnl structures. Determ.ine the form of F 1 (z-l) and Fz(:-l ). The
structure of Figure P6.9 is ~..-ailed the Taylor structure for linear-phase FIR filters {Sch72j_

Figure P6.9 The Taylor strucrure shown for M = 4.

l°For" definition of me Cbehy5hev polynomial and a recursive equation for generating ruch a JlQ!ynomial, see Section 5.4.3.
414 Chapter 6: Digital Filter Structures

6.19 Show that there are 36 distmct cal\cade ;eah7ations of the trnnsfer function H(z) of Eq. (628) obtained by
diffcccnt pole-zero pairings and diffhem urde.:-ings {Jf t:-te md;vidual sections.

4.20 Consider a red coe-fficient HR transft!r fum;tJOn HV) with ;t~ numerator and denominator e;<;pressed as a product
,,f p.clyncmiah,
Hlc) ~
-
n --.
.
K

'
P;(:)
lJ·I ,(;.)
·~-
when; /'1 ( :;) ucd D; !;::) an~ either first-order nr ;;econd-order polynomials with real cxfficients. De tennille the tutal
number of d1~tinct cascade reah72-lions t!iat <.:Jn be -obtained by different pole-zem pairings and different onlering~ C1f
tht: auJi,·idual sec!ion~.

6.21 Develop J ~-;mo:tic direct form realization of the mmsfer ftmdcon

3 + 4::-2- 2z ·-5
H(z) = ---
!+3: 1 +5z-2+4z-4"
;u·U then dck~mme its trans;x•~ configuration.

6.21 Develop rW•J diffcr!::nl cascade canonic real!.ations Gf the following causal JJR tnlnsfer functions:

(a'; H 1 tz)= - -(O"·c'c---'O~.='"'c_c,


, ·''.c'2::C+"-'3'-'L~'=':.l
· n +2.1:: l-3::: 2 )(1 · 0.67.:: 1)·

6.:23 Rcali:Le the uansfer1\mcdon.~ ef Prnhlem 622 in parallel io-:ms [and H.

6.:Z4 Develop a <.:ascade realization {lf Che tmnsfer function of Eq. (6.32) using the factorizatior. given in Eq. [6.33).
Compare the cnmpuwtiona] complexity of this realization w11h the one shown in Ftgure 6.19(b).

6.:~5 Consider Ihe <:ascade of lhrec causal finl-order LTI diS<..---rete-tlme systems shown in Figure P6.10 where

3+0.2z- 1
H2.(z:"; = ' . H3(Z)=-==
1 0.3.:: . i 1O.lz -

(a) Determmc the !r.m~fer fundton of the ove-mlJ '>)'stem as a raLo of two polynomials in z-l.
(b) Determine the difference equation .charaderizing the overall :.y~>tem,
k) Develop the realization of !he overall.~yste:n with eac!l sa·tion reali.:ed in direct form II.
(d) De•·elnp .a pacalkl fonn I realia:hon of t::e (Jvera!l sy,.,tcm.
(e) Determine :he impJlse re:;por.~e of tTh;: overall system in dc~et! form.

Figure P6.1tJ
6.14. Problems 415

6.26 A :.:au~al LTJ di;;crete-time '-\'Stem devdDps an output ylnl = (0.4)r.III.r.l- 0.3(0A)n-lrfn ~ 1]. for an input
.r{n] = {IJ 2)"J.l[n]
{al Dctcnnine Ilk: tran:>kr fund ion of ~rn: S\·s;:err:,
{b) Detcn:cine the difference e[juation d.""~.aractcri7ing !he »y..;tc.Gl.
{c) Develop a canonic direct for::l realiL.alwn of the ~ystem \\ ith no more than three nmltiphen.,
{d) Devdop a parallel fonn I realization of the sy~tem,

{C) Determine the im;:-ulse [l;:~{R'II~e ur the \y~tem in cloi;ed lorm,


(I) Deterr:une the outpt:l yjn! of the :-.y-;tcm foi" an m;llll xln J = (0.3)'' ;_.l{nj ~ 0.4~0.3)"-1 p.fn - 11.

6.27 The Mmcturc :.ho-"'n in Figure Ph. I! wa..; developed in the ;:uursc uf a rcali2ation of !he HR digita: tran~fer
f>.<nc:iu::.
3z~ + 18.5:: -t- 17.5
HJ;:i = (~-+~0~)-c(~:-c~,c;,-
Hown<:r, by a m;s:akt' in the !abding., :wn llf the multiplier codlic.ents in this structure have incorrect value,;;, Fired
tlJest two multiplier~ anc deterrr.in::- chcir c~d values.

:i>'-·$----{'1--,-,
! ~ L___7,___j I
.'((;:) --+'I-+ ~-----------~(f!~
1
,
-2'-J I
Y(.:)

l<igure P6.11

6. !8 Figure Pli. 12 show~ an incomrlt"le rtalizatwn of the cauhl!l JIR :ransfer function
3::15;: ·- 2;
H(;:.J = - - -~~.
lz+IJ.5)(4: •·0
Dekrm::-~e the vakes of tile nnJtiplitr n-.effiUtnts A and B.

+ Y(z)

Figure 1'6.12
416 Chapter 6: Digital Filter Structures

629 Uevelop a two-multiplier can<. mi.,:: realiation of the sccond-(;rder tr.ansfer function

with multiplier coefficients a and fJ.

6.J(l Develop a two-multiplier canonic realiu.1ion of each of the following ~econd-order transfer functions where the
multiplier cueffu:icm.s are a: and -az, re;;pecti·,.dy [Hir? 3 j:

(a) Hc(z} = •<clc_;:-ca"-''c+':'a''c'c'c'c~-·-:'c_T,'~'


I fq:C I +t:QZ 2

0 - a2)(l- z- ~• 2

(hl Hzlz) =
l--a1z 1 +DilZ 2 .

6.31 In this problem we dev-elop an .alternative ca.scarled Lattice realization of an Nth-order IIR trmu;f& function
jMit77bl
(6.132}

The ftn.t stage of the realization pn_>cess is shown in Figure P6.13.


(a) Show that if the two-pali cha:!n parameter:; are chosen as
-1
A=i, c = Pll· D = PN1 •

then H.v ;(z) is an (N- l)th-order IlR transfer function cJfthe fonn

f6.J34)

with t:ocfficients giv-en by

, P'idk+l - Pk-' 1
Pk = PodN- PN
k=O,l, . . N-1, (fd35a)

' Pk.dN- PNdk


(iI ; = k=l,2 •... ,N-l. (6.135b)
podN PN

(b) Develop a Janice realization of the t""-o-pair.


(<..') Cuntmuing the above pr(lCes;;, we can realize HN (<:) a& a caso.:ade connecti011 of N lattice secriQlls COil strained
hy a tran-,fer function Ho(l) = I. '-''hat are the total number of multipliers and two-input adders in the final
realization of H.v(z}?

Figure Ni.13

6..32 Re<~li7.e the transfer fundi on of Eq. (6.32) using the cav.:aded lattice re-.J.Ezation me !hod of Problem 6.3 J.

6.33 Realize the tran...fer functions of Problem 6.22 uWng the cascaded ]attce realization method of Problem 6.31.
6.14. Problems
417

6.J4 Sht>v> that the cascaded !aUice realization method of Problem 6.31 results in the cascaded latllce struclure
d.e;;crihed in Section 6:6.2 when H /1' (:) is an all pass ttansfer function.

6.]5 J kvd<..>p t~ .~mn::tures of TYpes lB. 1A 1 • and lBr tint-order a!lpas.s transfer !Uw.:tions 5hown in Figure 6.2J(h}.
fc;.Imil {d), from Eqs. (6.38b). (63&::). and (6.38J), respectively

6..::1.6 \3} Deve;np a <.:ascade realization of the third-oWer all pas~ tr.tnsfer function

with ea<.:h allpa.% se~1ion reali7..ed in Type lA form. By sharing the delays belween adja;::ent allpa5-'i sections.
show that [he total number of delays m the overa:l structure can be reduced from 6 to 4 fMit74a].
(b) Repeat part (a) with each altpa~s section realized in Type lA1 form.

6.37 ti,.nalyze the dig ira! filter stncture of Figure P6.14 and show that i! re-.tlizes a first--<.JTder all pass transfer !Sto9<tj_

Figure P6.14

6.38 Develop formaliy the realizations of a second-order Type 2 alipass transfer funccionofEq. (6.39) shown in Figure
6.24u~ing the muh:iplier extrnctwn approach. Are there ;:.ther Type 2 altpass structures"!

6.39 Shew thul a ca-;.cade >}f two Type 2D second-order allpass strucmres can be reatired with &X delays by sharing
the delays hctweeu adjacent sections. What i;; the minimum number of multipliers needed to implement a t:as.cade of
M Type 20 ~econd-order a!lpa:ss structures?

6.411 Devdup fmmaily the c<!alizatirntS of <1 second~order Type 3 altpas.s transfer function of Eq. (6.40) shown m
Fig'lfC 6.2 using the multiplier extraction approach. Are !here other T)'-pe 3 allpass stru;;tures?

6.4:! Show that a cascade of two Type 3H second-order alfpa.s...-; structures can be realized with s:x delays by sllariog
!he lklay~ between adjacent se<.:tions V>'bu is the minimum number of multiptiers needed to implement a ca~cade of
M --::ype 3H second-order allpa>L~ strudures?

6.41~ Develc>p a three-:nultiplier realization of ;he two-pair des.cribed by Eq. (6.52-d).

ti.~· Develop a lattice reahza[ion of the two-p;;ir given by Eq. (6.52d). Determine the transfer function of an all-pole
~""CGnd-urdcr .c.:ascadcd iattia; filler realized using this lattice structure and the transfer furu:tioo of an ali-pole second-
order cascaded lactice filter realized u:.mg ~he lattice structt.--re uf Figure 6.29(a,l. Evalu.at.:: the appooximate e:o;pres.sions
lor the gmn of ho!h second-urder filters ut re.,;om;fil'e when the pole>. are close to the unit circle. Show that the gain of
the fi!'llt a!l-pole filter i~ appro:~~.Imate!y independent of the pole radius, whereas that of the second filter is not [Lar99].

6.44 Realize ea;;h of the following l!R trnn.~fe~ funct:ioru. m the Gray-Markel fmm and check the BIBO litabtlity of
each twnsfer futK1Jon:
418 Chapter 6: Digital Fi\ter Structures

z--;..z- 1 -.r-2;::- 2
(a! H1 (z) = ---
-1 -lz:_l + ~z-2"
! + 2z-t + 3.:- 2
l 7'
4.:" -
2 + 5;:: -I + 8~ 2 + 3C3
(c) H}(z)= ;-+0.75;:: --7-0 ..5.:: 2 +0.25;: 3'
l + 1.6::- 1 +O.nz- 2
(J) H::Jz) = , "
- z 1 - 0.25z 2 ~ 0.25.:: '

(e) Hs(z) =

6.45 Realize each of lhe HR transfer functions of Proble:-n 6.22 in the Gruy-Markel form and che·d:. lhe:r BJBO
swbility_

6..-Jfi Rca:iz..e the IJR transfer function

Hlz) = - 1 2
(l - 0.683z ')(1 - 1.446 L: + o. 7957 z ;
in the following forms: (a) direo: canonic form, (h) C'"-~de form. (c) Grny-Markel form, arni (d) cascaded lallice
slructun: de<;eribed in Problem 6.31. Compare their hardware requirements.

6~ In lhi;; pmhlem. Ule realizati1Jn of a real-coeffid~nl transfer function using compk~ ari!hmetic is ilJusuated
fReg87a]. Let G(z) be an Nth-order-real coeftk-1cnt transfer func:ion with simple poles in cofl'_plex-<.·onjugate pairs
ami with numerator degree less than or equal :0 t:hat (>f the denominator.
{a) Show that G(z) .:an be expressed as a sum oftw<J cnmplell. coerlidenttransfer ;unctions of order N /2..

G(z) = H(z:.) + H"'(z"'). it..l36)

where the coefficients of H"' (z*) a.-e complex conjugate of their corresponding coefficients of H(zl.
(b) Generalire the above decomposition to the case when G(:r) has one or more simple real pole;,.
(c) Consider a realization of H(z) thai ha.~ one real input x{nj and a complex output y[n]. Show that the transfer
furu.:lion from the input to the real part of the output is simply G(z).
~Him: Usc d partial-fraction expansion to obtain the decomposition of Eq. (6.136).1

6.48 Develop a realizati{)fl of a fir<.t-order complex coefficienl transfer function H (z) given by

A+jB
H(z)= ,
l -"- Ia + jfi)z 1

where A, B, .::t, tmd ji are re.a.J co.o:-;tants. Show the real and imaginary parts of ill I signal \ariables separately. Detcnn;ne
th.: transfer furn.·tiom. from the input to the real and imaginary part~ of the output.

6.49 Devel0p a ;:.;;scaded lattice realL-;ation of an Nth-order compiex coe:ffui.enl all pass transfer function AN { :).

6.50 RealiLC the following trans-f~ functions m the form of a parnllel connection of two all pass niter-.:
2+2z"l
(a_) !!; (;::) = 3 l
+z-
6. 14. Problems 419

(h)

(o

(;.! j

6.51 Cun:slder 3. .::ausallength-6 HR filter Cescribed by the ce>nvolution sum


5
yfn"!= L:h[kjx[n -k], n ~o.
k~

\\-here _)-In j a:1d x [n j denote, respectively, the ototpul and the input M:quen~.

(nl Let the output and inpllt sequences be blo.:ked into lenglh-2 vecl:ors
r y[UJ , x[2lj 1
Yt-'
- LYf2l+ t] j •
· X,:-= [ .tl2-€+1lJ·
Show that tile above FlR filter ;:;an be- equivalently des<.:ribed by a block convolution sum given by
3
Ye = LH~X.e-~.
~=0

where H, is a 2 x 2 matri~ composed of L"'Ie impulse response coefficients. Determine rhe block con\·olution
matnccs H,. An implementation of the FIR filter based on the above block convolution sum is shown in
hgure P6.15 where the block laheled "SIP" is a serinl-to-para.llel convcrtcr and the block marked ..PiS" IS a
parallel-to-serial converter.
(b} Develop the block convolutlun sum description cl the abO\·e FIR filter for an input and output block iength o~
3
(C) Develop the block convolution sum description of the above FIR filter fOI an jnput and output block length of
4.

Block
delay

Figure P6.15

6.52 Con~ider a cam;al HR titer described by a difference equation

L' d.;;y[n- kJ =
4
,L p,tsln- k;. 11 ~ 0,
k"'---0 k=O

where y{n! and .x{r. j denote, respectively, the omput and the 1nput sequear.:es.
Chapter 6: Digital Filter Stn..ictures

ta) Let the output and input sequences be blocked into length-::! vectors

, [ yf2f)
\c = _1-[2£+ 11
l ,Xt =
[ x[2fj
x(2i+ l]
J
·

Show that the abo•;e liR filter can be equivalently descr:iOCd by a block differe=e equation given by [Rur 72j

2
LDrYt-~ = EP~Xc-n
~={) r=O

where Dr ami P,- are 2 x 2 matrices composed of the drfferen-.-e equation coeffictents [dx) and {Pk /, respectJvely.
Detennine the bl«k difference equation matrices Dr and P ,-. An implementation of the ~IR filter based on the
above b.kx:k difference equalion is shown in Figure P6.16.
(b) Develup the block difference equation de:scription of the above IIR filter for an input and output block length
of3.
(c) Develop the block difference equation descriprinn of tho! above HR filter for an input and output block length
of4.

4n] y{n}

Figure P6.16

6.53 Develop a canonic realization of the block digital filter of F1gure P6.16 empklying only twe> block delay~.

654 De...-elop the three-multiplier strm;ture of a digita> sirre-cosirre generator obtained by ;o;ett:ng a sine = ±fJ in
Eq. (6."128}.

ti.SS De.,.elop the three-multiplier structure of a digital sine-cosine generator obtained by setting a ±p sin8 in
Eq. (0.128).

6.S6 Develop a onc-rnultipher structure of a di_glt'dl !>ine-cru.ine generator obtained by setting -/3 sin(} fa = I i cos 6
in Eq. (6_ I 28).

6..57 Develop a one-mult:pller ;.tnKture of a digiud sine-cosine generator -ubtained by setting C = 0 in Eq. !6.127)
and then choosing a and f3 properly.

6.58 lJevelop a one-multiplier structure of a c.igi!al sine-cosJrre generator obtaine-d by setting C = -1 in Eq. {6.127)
and then choosing a and fJ property. Show the Hrr<ll structure.
6.15. MATI..AB Exercises 421

6.59 Signals generated by multiple sources or roulliple sensors. called a multichannel signal, are usuali)' tram;mitted
thn:n.:gh independent channels in close pmximity to each other. As a result, each c-omponent of the multichannel
:;igm.l often gets conupted by signals from adjacent channels during transmio;sion resulting in cross-talk. Separation
of the. multichannel signal at the rece;ver is thus of practical interest- A model representing the cross-talk. betwt:ell a
pair of channels for a two-channel signal is depicted in Figure P6.17(a) and the corresponding discrete-time system
fer c:'"Jannel separation is as s...':town in Figure P6.17(b) [Yel96]. Determine two possible sets Qf condition~ for peflect
channel separation.

~[nJI--,LGa--/i-2-l
(-;:-):----/-<{++~

(a) (b)

Figure P6.17

6.15 MATLAB Exercises


M 6..1 Using MAI"LAB develop a cascade re~>!izarion cl each of the foi]QWing linear-phase FIR transfer fum:Lions:
(a) llt(z:} = 4.5 + 4.05.;:-l -- 35.325;- 2 - 71.6:85z-3 ...._ 63.99z-4- 7l.6185z-s- 35.325.:-6 + 4.05.C7
+4.5z- . '
(b) H2(z:) = 6 + 6:- 1 - 24.66z- 2 ...... 72.96.c 3 - SK62z- 4 + 88.52c5- 72.967-6 + 24.6fu- 7 - 6z-k
-6:-9 .

.\f 6.2 C"omader t.>re fourth-order JIR transfer function

9+ 33-- 1 +57~-2 + :B~- 1 + 12C 4


G(zl='" '-<-
. 6 l2z l+Jlz 2-5z 3+z 4
(a) Using MATLAB express G(c) in factored form,
(b) Ikvelop !"-'"0 different cascade realizatmns of G(z ),
(c) Develop two different parallel form re.Uizations of G(z). Realize each second-:xder section in direct tOnn U.

M 6..3 Coru.uicr the fourth-order TIR traru.fer function given below:

H(;::) =:;--c''2~-~2,,,-ccl~-r;o3o'c_c,~+~'c--'~
5+3z l+2:: 2+2:: 3+z 4·
(a) Using ~TLAB exp•ess H{z) m factored fonn,
<b) Develop tw<> different caieade realizations of H (z),
~c) Realize H{_:) in parallel form~ I and II.
Realize each second-order section in direct form II.
422 C'lapter 6: Digital Rtter Structures

M 6.4 Using Program 6_4 develop a Gray-Markcl cascaded lattice realization of the IIR trdnsfer functiGn G(z} of
Problem M6.2.

~ 6.5 L'sing Program 6_4 develop a Gmy-Markel caSL«dcd laU!<:C" realization of the IIR transfer function H (Z) of
Probl.er:> Mti.3.

M 6.6 Csing Program 6_ fo develop a cascaded lattice realization of each of the FIR tnm.~fer functl011s of Problem
MO.!.

:\16.7 (a) Realize the following UR Jowpass l\'1lm:.fer functmn G(;:) in tbe form of a parallel allpa_<;$ structm-c:

0.02!9(1 + 5.:::- 1 + !Oz - 2 + wz- 1 + 5.:-4 -r z- 5)


G(zl = c,-~0~.9~8~5~3,~.~.~+"-;0~.9~7~3~&~,,c,,=o~.~3~&6407,-::i3~~~.~o~.l~I~I~2,:::C,f-~o~.o:o:-JI~J~,=<s ·

(b) From !he aU pass decompu~i!ion determine it~ powu-t:ompkmentary tran.~fer function H (::./.
{c) Plot lhe square of the magnitude re>ponses of the CJriginaJ transfer function G{z) and its power-compleml:'ntary
tnmsfer function H\zj derived in part (b) and verify that their sum is equal to one at all frequencies.

M 6.8 (a) Realize the following IIR highpass transfer function G(z) in the form of a parallel allpass structure:

G(z) = 0.0903(1 − 0.8971z^{-1} + 1.8122z^{-2} − 1.8122z^{-3} + 0.8971z^{-4} − z^{-5}) / (1 + 1.7028z^{-1} + 2.4423z^{-2} + 1.7896z^{-3} + 0.9492z^{-4} + 0.2295z^{-5}).

(b) From the allpass decomposition determine its power-complementary transfer function H(z).
(c) Plot the square of the magnitude responses of the original transfer function G(z) and its power-complementary
transfer function H(z) derived in part (b) and verify that their sum is equal to one at all frequencies.

M 6.9 (a) Realize the following IIR bandpass transfer function G(z) in the form of a parallel allpass structure.
(b) From the allpass decomposition determine its power-complementary transfer function H(z).
(c) Plot the square of the magnitude responses of the original transfer function G(z) and its power-complementary
transfer function H(z) derived in part (b) and verify that their sum is equal to one at all frequencies.

M 6.10 Using MATLAB simulate the single-multiplier sine-cosine generator of Problem 6.56 with cos θ = 0.9 and
plot the first 50 samples of its two output sequences. Scale the outputs so that they both have a maximum amplitude
of ±1. What is the effect of initial values of the variables s_i[n]?

M 6.11 Using MATLAB simulate the single-multiplier sine-cosine generator of Problem 6.57 with cos θ = 0.9 and
plot the first 50 samples of its two output sequences. Scale the outputs so that they both have a maximum amplitude
of ±1. What is the effect of initial values of the variables s_i[n]?

M 6.12 Using MATLAB simulate the single-multiplier sine-cosine generator of Problem 6.58 with cos θ = 0.9 and
plot the first 50 samples of its two output sequences. Scale the outputs so that they both have a maximum amplitude
of ±1. What is the effect of initial values of the variables s_i[n] on the outputs?
7 Digital Filter Design
An important step in the development of a digital filter is the determination of a realizable transfer function
G(z) approximating the given frequency response specifications. If an IIR filter is desired, it is also
necessary to ensure that G(z) is stable. The process of deriving the transfer function G(z) is called digital
filter design. After G(z) has been obtained, the next step is to realize it in the form of a suitable filter
structure. In the previous chapter, we outlined a variety of basic structures for the realization of FIR and
IIR transfer functions. In this chapter, we consider the digital filter design problem.
First, we review some of the issues associated with the filter design problem. A widely used approach to
IIR filter design based on the conversion of a prototype analog transfer function to a digital transfer function
is discussed next. Typical design examples are included to illustrate this approach. We then consider the
transformation of one type of filter transfer function into another type, which is achieved by replacing the
complex variable z by a function of z. Four commonly used transformations are summarized. A popular
approach to FIR filter design is next described. Finally, we consider the computer-aided design of both
IIR and FIR digital filters. To this end, we restrict our discussion to the use of MATLAB in determining the
transfer functions.

7.1 Preliminary Considerations


There are two major issues that need to be answered before one can develop the digital transfer function
G(z). The first and foremost issue is the development of a reasonable filter frequency response specification
from the requirements of the overall system in which the digital filter is to be employed. The second issue
is to determine whether an FIR or an IIR digital filter is to be designed. In this section, we examine these
two issues. In addition, we review the basic approaches to the design of IIR and FIR digital filters and the
determination of the filter order to meet the prescribed specifications. We also discuss the scaling of the
transfer function.

7.1.1 Digital Filter Specifications


As in the case of the analog filter, either the magnitude and/or the phase (delay) response is specified for
the design of a digital filter for most applications. In some situations, the unit sample response or the step
response may be specified. In most practical applications, the problem of interest is the development of a
realizable approximation to a given magnitude response specification. As indicated in Section 4.6.3, the
phase response of the designed filter can be corrected by cascading it with an allpass section. The design
of allpass phase equalizers has received a fair amount of attention in the last few years.
We restrict our attention in this chapter to the magnitude approximation problem only. We pointed out
in Section 4.4.1 that there are four basic types of filters, whose magnitude responses are shown in Figure
4.10. Since the impulse response corresponding to each of these is noncausal and of infinite length, these


Figure 7.1: Typical magnitude specifications for a digital lowpass filter.

ideal filters are not realizable. One way of developing a realizable approximation to these filters would be
to truncate the impulse response as indicated in Eq. (4.72) for a lowpass filter. The magnitude response
of the FIR lowpass filter obtained by truncating the impulse response of the ideal lowpass filter does not
have a sharp transition from passband to stopband but, rather, exhibits a gradual "roll-off."
Thus, as in the case of the analog filter design problem outlined in Section 5.4.1, the magnitude
response specifications of a digital filter in the passband and in the stopband are given with some acceptable
tolerances. In addition, a transition band is specified between the passband and the stopband to permit the
magnitude to drop off smoothly. For example, the magnitude |G(e^{jω})| of a lowpass filter may be given as
shown in Figure 7.1. As indicated in the figure, in the passband defined by 0 ≤ ω ≤ ωp, we require that
the magnitude approximates unity with an error of ±δp, i.e.,

1 − δp ≤ |G(e^{jω})| ≤ 1 + δp,  for |ω| ≤ ωp. (7.1)

In the stopband, defined by ωs ≤ ω ≤ π, we require that the magnitude approximates zero with an error
of δs, i.e.,

|G(e^{jω})| ≤ δs,  for ωs ≤ |ω| ≤ π. (7.2)

The frequencies ωp and ωs are, respectively, called the passband edge frequency and the stopband edge
frequency. The limits of the tolerances in the passband and stopband, δp and δs, are usually called the peak
ripple values. Note that the frequency response G(e^{jω}) of a digital filter is a periodic function of ω, and
the magnitude response of a real-coefficient digital filter is an even function of ω. As a result, the digital
filter specifications are given only for the range 0 ≤ |ω| ≤ π.
Digital filter specifications are often given in terms of the loss function, A(ω) = −20 log10 |G(e^{jω})|,
in dB. Here the peak passband ripple αp and the minimum stopband attenuation αs are given in dB, i.e.,
the loss specifications of a digital filter are given by

αp = −20 log10(1 − δp) dB, (7.3)

αs = −20 log10(δs) dB. (7.4)
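As a quick numerical illustration of Eqs. (7.3) and (7.4), the following MATLAB fragment converts an assumed pair of peak ripple values (not taken from any example in the text) into the corresponding dB specifications:

delta_p = 0.01;                    % assumed peak passband ripple
delta_s = 0.001;                   % assumed peak stopband ripple
alpha_p = -20*log10(1 - delta_p)   % peak passband ripple in dB (about 0.087 dB), Eq. (7.3)
alpha_s = -20*log10(delta_s)       % minimum stopband attenuation in dB (60 dB), Eq. (7.4)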

Figure 7.2: Alternate magnitude specifications for a digital lowpass filter.

As in the case of an analog lowpass filter, the specifications for a digital lowpass filter may alternatively
be given in terms of its magnitude response, as in Figure 7.2. Here the maximum value of the magnitude
in the passband is assumed to be unity, and the maximum passband deviation, denoted as 1/√(1 + ε²), is
given by the minimum value of the magnitude in the passband. The maximum stopband magnitude is
denoted by 1/A.¹
For the normalized specification, the maximum value of the gain function or the minimum value of the
loss function is therefore 0 dB. The quantity αmax given by

αmax = 20 log10(√(1 + ε²)) dB (7.5)

is called the maximum passband attenuation. For δp << 1, as is typically the case, it can be shown that

αmax ≈ 2αp. (7.6)

The passband and stopband edge frequencies, in most applications, are specified in Hz, along with the
sampling rate of the digital filter. Since all filter design techniques are developed in terms of normalized
angular frequencies ωp and ωs, the specified critical frequencies need to be normalized before a specific
filter design algorithm can be applied. Let FT denote the sampling frequency in Hz, and Fp and Fs
denote, respectively, the passband and stopband edge frequencies in Hz. Then the normalized angular
edge frequencies in radians are given by

ωp = Ωp/FT = 2πFp/FT, (7.7)

ωs = Ωs/FT = 2πFs/FT. (7.8)
¹The minimum stopband attenuation is therefore 20 log10(A) dB.



7.1.2 Selection of the Filter Type


The second issue of interest is the selection of the digital filter type, i.e., whether an IIR or an FIR digital
filter is to be employed. The objective of digital filter design is to develop a causal transfer function H(z)
meeting the frequency response specifications. For IIR digital filter design, the IIR transfer function is a
real rational function of z^{-1}:

H(z) = (p0 + p1 z^{-1} + p2 z^{-2} + ··· + pM z^{-M}) / (d0 + d1 z^{-1} + d2 z^{-2} + ··· + dN z^{-N}). (7.9)

Moreover, H(z) must be a stable transfer function, and for reduced computational complexity, it must be
of lowest order N. On the other hand, for FIR filter design, the FIR transfer function is a polynomial in z^{-1}:

H(z) = Σ_{n=0}^{N} h[n] z^{-n}. (7.10)

For reduced computational complexity, the degree N of H(z) must be as small as possible. In addition, if
a linear phase is desired, then the FIR filter coefficients must satisfy the constraint

h[n] = ±h[N − n]. (7.11)

There are several advantages in using an FIR filter, since it can be designed with exact linear phase
and the filter structure is always stable with quantized filter coefficients. However, in most cases, the order
NFIR of an FIR filter is considerably higher than the order NIIR of an equivalent IIR filter meeting the
same magnitude specifications. In general, the implementation of the FIR filter requires approximately
NFIR multiplications per output sample, whereas the IIR filter requires 2NIIR + 1 multiplications per
output sample. In the former case, if the FIR filter is designed with a linear phase, then the number
of multiplications per output sample reduces to approximately (NFIR + 1)/2. Likewise, most IIR filter
designs result in transfer functions with zeros on the unit circle, and the cascade realization of an IIR filter
of order NIIR with all of the zeros on the unit circle requires [(3NIIR + 3)/2] multiplications per output
sample. It has been shown that for most practical filter specifications, the ratio NFIR/NIIR is typically of
the order of tens or more and, as a result, the IIR filter usually is computationally more efficient [Rab75].
However, if the group delay of the IIR filter is equalized by cascading it with an allpass equalizer, then the
savings in computation may no longer be that significant [Rab75]. In many applications, the linearity of
the phase response of the digital filter is not an issue, making the IIR filter preferable because of the lower
computational requirements.

7.1.3 Basic Approaches to Digital Filter Design


In the case of IIR filter design, the most common practice is to convert the digital filter specifications
into analog lowpass prototype filter specifications, to determine the analog lowpass filter transfer function
Ha(s) meeting these specifications, and then to transform it into the desired digital filter transfer function
G(z). This approach has been widely used for many reasons:

(a) Analog approximation techniques are highly advanced.
(b) They usually yield closed-form solutions.
(c) Extensive tables are available for analog filter design.

(d) Many applications require the digital simulation of analog filters.

In the sequel, we denote an analog transfer function as

Ha(s) = Pa(s)/Da(s), (7.12)

where the subscript "a" specifically indicates the analog domain. The digital transfer function derived
from Ha(s) is denoted by

G(z) = P(z)/D(z). (7.13)

The basic idea behind the conversion of an analog prototype transfer function Ha(s) into a digital
IIR transfer function G(z) is to apply a mapping from the s-domain to the z-domain so that the essential
properties of the analog frequency response are preserved. This implies that the mapping function should
be such that

(a) The imaginary (jΩ) axis in the s-plane be mapped onto the unit circle of the z-plane.

(b) A stable analog transfer function be transformed into a stable digital transfer function.

To this end, the most widely used transformation is the bilinear transformation described in Section 7.2.
Unlike IIR digital filter design, FIR filter design does not have any connection with the design
of analog filters. The design of FIR filters is therefore based on a direct approximation of the specified
magnitude response, with the often added requirement that the phase response be linear. As pointed out in
Eq. (7.10), a causal FIR transfer function H(z) of length N + 1 is a polynomial in z^{-1} of degree N. The
corresponding frequency response is given by

H(e^{jω}) = Σ_{n=0}^{N} h[n] e^{-jωn}. (7.14)

It has been shown in Section 3.2.1 that any finite-duration sequence x[n] of length N + 1 is completely
characterized by N + 1 samples of its discrete-time Fourier transform X(e^{jω}). As a result, the design of
an FIR filter of length N + 1 may be accomplished by finding either the impulse response sequence {h[n]}
or N + 1 samples of its frequency response H(e^{jω}). Also, to ensure a linear-phase design, the condition of
Eq. (7.11) must be satisfied. Two direct approaches to the design of FIR filters are the windowed Fourier
series approach and the frequency sampling approach. We describe the former approach in Section 7.6.
The second approach is treated in Problem 7.6. In Section 7.7 we outline computer-based digital filter
design methods.

7.1.4 Estimation of the Filter Order


For the design of an IIR digital lowpass filter G(z) based on the conversion of an analog lowpass filter
Ha(s), the filter order of Ha(s) is first estimated from its specifications using the appropriate formula
given in Eq. (5.33), (5.41), or (5.51), depending on whether a Butterworth, Chebyshev, or equiripple filter
approximation is desired. The order of G(z) is then determined automatically from the transformation
being used to convert Ha(s) into G(z). There are several M-files in MATLAB that can be used to directly
estimate the minimum order of an IIR digital transfer function meeting the filter specifications for the class
of approximations discussed in Section 5.4. These are discussed in Section 7.10.1.
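A minimal sketch of this type of computation, using the Signal Processing Toolbox order-estimation functions (presumably among the M-files referred to here), is shown below; the specifications happen to be those used later in the lowpass design of Section 7.3:

Wp = 0.25; Ws = 0.55;                   % band edges normalized so that 1 corresponds to pi
Rp = 0.5;  Rs = 15;                     % passband ripple and stopband attenuation in dB
[Nb, Wnb] = buttord(Wp, Ws, Rp, Rs)     % Butterworth order estimate
[Nc, Wnc] = cheb1ord(Wp, Ws, Rp, Rs)    % Type 1 Chebyshev order estimate
[Ne, Wne] = ellipord(Wp, Ws, Rp, Rs)    % elliptic order estimate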
For the design of FIR lowpass digital filters, several authors have advanced formulas for estimating the
minimum value of the filter order N directly from the digital filter specifications: normalized passband
edge angular frequency ωp, normalized stopband edge angular frequency ωs, peak passband ripple δp, and
peak stopband ripple δs [Her73], [Kai74], [Rab73].

A rather simple approximate formula developed
by Kaiser [Kai74] is given by

N ≅ (−20 log10(√(δp δs)) − 13) / (14.6(ωs − ωp)/2π). (7.15)

Note from the above formula that the filter order N of an FIR filter is inversely proportional to the transition
band width (ωs − ωp) and does not depend on the actual location of the transition band. This implies that
a sharp cutoff FIR filter with a narrow transition band would be of very long length, whereas an FIR filter
with a wide transition band will have a very short length. Another interesting property of Kaiser's formula
is that the length depends on the product δpδs. This implies that if the values of δp and δs are interchanged,
the length remains the same.
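A direct transcription of Eq. (7.15) into MATLAB is straightforward; the specification values below are assumed for illustration only:

dp = 0.01; ds = 0.001;        % assumed passband and stopband ripples
wp = 0.3*pi; ws = 0.4*pi;     % assumed band edge angular frequencies
N = ceil((-20*log10(sqrt(dp*ds)) - 13)/(14.6*(ws - wp)/(2*pi)))   % Kaiser's estimate, Eq. (7.15)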
The formula of Eq. (7.15) provides a reasonably good estimate of the filter order in the case of FIR
filters with a moderate passband width and may not work well in the case of very narrow passband or
very wide passband FIR filters. In the case of a narrowband filter, the stopband ripple essentially controls
the order, and an alternative approximate formula provides a more reasonable estimate of the filter order
[Par87]:

N ≅ (−20 log10(δs) + 0.22) / ((ωs − ωp)/2π). (7.16)

On the other hand, in the case of a very wide band filter, the passband ripple has more effect on the order,
and a more reasonable estimate of the filter order can be obtained using the following formula [Par87]:

(7.17)

We illustrate the application of Kaiser's formula of Eq. (7.15) in the following two examples.

EXAMPLE 7.2

Kaiser's formula can also be used to estimate the length of highpass, bandpass, and bandstop FIR
filters.

EXAMPLE 7.3

The formula due to Herrmann et al. [Her73] gives a slightly more accurate value for the order and is
given by

N ≅ [D∞(δp, δs) − F(δp, δs)[(ωs − ωp)/2π]²] / [(ωs − ωp)/2π], (7.18)

where

D∞(δp, δs) = [a1(log10 δp)² + a2(log10 δp) + a3] log10 δs − [a4(log10 δp)² + a5(log10 δp) + a6], (7.19a)

and

F(δp, δs) = b1 + b2[log10 δp − log10 δs], (7.19b)

with

a1 = 0.005309, a2 = 0.07114, a3 = −0.4761, (7.19c)

a4 = 0.00266, a5 = 0.5941, a6 = 0.4278, (7.19d)

b1 = 11.01217, b2 = 0.51244. (7.19e)

The formula given in Eq. (7.18) is valid for δp ≥ δs. If δp < δs, then the filter order formula to be used is
obtained by interchanging δp and δs in Eq. (7.19).
For small values of δp and δs, both Eqs. (7.15) and (7.18) provide reasonably close and accurate results.
On the other hand, when the values of δp and δs are large, Eq. (7.18) yields a more accurate value for the
order.
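The order estimate of Eqs. (7.18) and (7.19) can be coded in the same way; the sketch below simply transcribes the formulas, again with assumed specifications:

dp = 0.01; ds = 0.001; wp = 0.3*pi; ws = 0.4*pi;     % assumed specifications
a = [0.005309 0.07114 -0.4761 0.00266 0.5941 0.4278];
b = [11.01217 0.51244];
Dinf = (a(1)*log10(dp)^2 + a(2)*log10(dp) + a(3))*log10(ds) ...
       - (a(4)*log10(dp)^2 + a(5)*log10(dp) + a(6));  % Eq. (7.19a)
F = b(1) + b(2)*(log10(dp) - log10(ds));              % Eq. (7.19b)
df = (ws - wp)/(2*pi);                                % normalized transition width
N = ceil((Dinf - F*df^2)/df)                          % order estimate, Eq. (7.18)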

EXAMPLE 7.4

Note that the filter order computed using Eq. (7.18) is slightly higher than that obtained using Eq. (7.15)
in Example 7.3. Both formulas provide only an estimate of the required filter order. The frequency response
of the FIR filter designed using this estimated order may or may not meet the given specifications. If
the specifications are not met, it is recommended that the filter order be gradually increased until the
specifications are met. Estimation of the FIR filter order using MATLAB is discussed in Section 7.10.2.

7.1.5 Scaling the Digital Transfer Function


After a digital filter has been designed following any one of the techniques outlined in this chapter, the
corresponding transfer function G(z) has to be scaled in magnitude before it can be implemented. In
magnitude scaling, the transfer function is multiplied by a scaling constant K so that the maximum
magnitude of the scaled transfer function G1(z) = K G(z) in the passband is unity, i.e., the scaled transfer
function has a maximum gain of 0 dB. For a stable transfer function G(z) with real coefficients, the scaled
transfer function K G(z) is then a bounded real (BR) function.⁴
For a frequency-selective transfer function G(z), if Gmax is the maximum value of |G(e^{jω})| in the
frequency range 0 ≤ ω ≤ π, then K = 1/Gmax, which results in a maximum gain of 0 dB in the passband
of the scaled transfer function. For example, in the case of a lowpass transfer function with a maximum
magnitude at dc, it is usual practice to use K = 1/G(1), implying a dc gain of 0 dB for the scaled transfer
function. Likewise, in the case of a highpass transfer function with a maximum magnitude at ω = π, K
is selected equal to 1/G(−1), yielding a gain of 0 dB at ω = π for the scaled transfer function. For a
bandpass transfer function, it is common to use K equal to 1/|G(e^{jωo})|, where ωo is the center frequency.
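A minimal MATLAB sketch of this scaling step is shown below, assuming that the coefficient vectors num and den of G(z) are already available:

[H, w] = freqz(num, den, 512);     % frequency response of G(z) on a dense grid over [0, pi)
K = 1/max(abs(H));                 % scaling constant K = 1/Gmax
num_scaled = K*num;                % numerator of the scaled transfer function K*G(z)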

7.2 Bilinear Transformation Method of IIR Filter Design


A number of transformations have been proposed to convert an analog transfer function Ha(s) into a digital
transfer function G(z) so that essential properties of the analog transfer function in the s-domain are
preserved for the digital transfer function in the z-domain. Of these, the bilinear transformation is most
commonly used to design IIR digital filters based on the conversion of analog prototype filters.

7.2.1 The Bilinear Transformation


The bilinear transformation from the s-plane to the z-plane is given by [Kai66]

s = (2/T) · (1 − z^{-1})/(1 + z^{-1}). (7.20)

The above transformation is a one-to-one mapping, i.e., it maps a single point in the s-plane to a unique
point in the z-plane, and vice versa. The relation between the digital transfer function G(z) and the parent
analog transfer function Ha(s) is then given by

G(z) = Ha(s)|_{s = (2/T)(1 − z^{-1})/(1 + z^{-1})}. (7.21)

⁴See Section 4.4.5 for a definition of a bounded real (BR) function.

The bilinear transformation is derived by applying the trapezoidal numerical integration approach to the
differential equation representation of Ha(s) that leads to the difference equation representation of G(z)
(see Example 2.36). The parameter T represents the step size in the numerical integration. As we shall
see later in this section, the digital filter design procedure consists of two steps: first, the inverse bilinear
transformation is applied to the digital filter specifications to arrive at the specifications of the analog filter
prototype; then the bilinear transformation of Eq. (7.20) is employed to obtain the desired digital transfer
function G(z) from the analog transfer function Ha(s) designed to meet the analog filter specifications. As
a result, the parameter T has no effect on the expression for G(z), and we shall choose, for convenience,
T = 2 to simplify the design procedure.
The corresponding inverse transformation for T = 2 is given by

z = (1 + s)/(1 − s). (7.22)

Let us now examine the above transformation. Note that for s = jΩ0,

z = (1 + jΩ0)/(1 − jΩ0), (7.23)

which has a unity magnitude. This implies that a point on the imaginary axis in the s-plane is mapped onto
a point on the unit circle in the z-plane. In the general case, for s = σ0 + jΩ0,

z = (1 + s)/(1 − s) = [(1 + σ0) + jΩ0] / [(1 − σ0) − jΩ0]. (7.24)

Therefore,

|z|² = [(1 + σ0)² + Ω0²] / [(1 − σ0)² + Ω0²]. (7.25)

Thus, a point on the jΩ-axis in the s-plane (σ0 = 0) is mapped onto a point on the unit circle in the z-plane,
as |z| = 1. A point in the left-half s-plane with σ0 < 0 is mapped onto a point inside the unit circle in
the z-plane, as |z| < 1. Likewise, a point in the right-half s-plane with σ0 > 0 is mapped onto a point
outside the unit circle in the z-plane, as |z| > 1. Any point in the s-plane is mapped onto a unique point in
the z-plane, and vice versa. The mapping of the s-plane into the z-plane via the bilinear transformation is
illustrated in Figure 7.3 and is seen to have all the desired properties. Also, there is no aliasing due to the
one-to-one mapping.
The exact relation between the imaginary axis in the s-plane (s = jΩ) and the unit circle in the z-plane
(z = e^{jω}) is of interest. From Eq. (7.20) with T = 2 it follows that

jΩ = (1 − e^{-jω})/(1 + e^{-jω}) = j tan(ω/2), (7.26)

which has been plotted in Figure 7.4. Note from this plot that the positive (negative) imaginary axis in
the s-plane is mapped into the upper (lower) half of the unit circle in the z-plane. However, it is clear that
the mapping is highly nonlinear since the complete negative imaginary axis in the s-plane from Ω = −∞
to Ω = 0 is mapped into the lower half of the unit circle from ω = −π (i.e., z = −1) to ω = 0 (i.e.,
z = +1), and the complete positive imaginary axis in the s-plane from Ω = 0 to Ω = +∞ is mapped into
the upper half of the unit circle from ω = 0 (i.e., z = +1) to ω = +π (i.e., z = −1). This introduces
a distortion in the frequency axis called frequency warping.

Figure 7.3: The bilinear transformation mapping.

Figure 7.4: Mapping of the angular analog frequencies Ω to the angular digital frequencies ω via the bilinear transformation.

The effect of warping is more evident in Figure 7.5, which shows the transformation of a typical analog filter
magnitude response to a digital filter magnitude response derived via the bilinear transformation. Thus, to develop a
digital filter meeting a specified magnitude response, we must first prewarp the critical bandedge frequencies (ωp and
ωs) to find their analog equivalents (Ωp and Ωs) using the relation of Eq. (7.26), design the analog prototype Ha(s)
using the prewarped critical frequencies, and then transform Ha(s) using the bilinear transformation to
obtain the desired digital filter transfer function G(z).
It should be noted that the bilinear transformation preserves the magnitude response of an analog filter
only if the specification requires piecewise constant magnitude. However, the phase response of the analog
filter is not preserved after transformation. Hence, the transformation can be used only to design digital
filters with prescribed magnitude responses with piecewise constant values.
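The prewarping step implied by Eq. (7.26) is easily checked numerically; a minimal MATLAB sketch with an arbitrarily chosen digital cutoff frequency is:

wc = 0.4*pi;                 % assumed digital cutoff frequency in radians
Omega_c = tan(wc/2);         % prewarped analog frequency, Eq. (7.26)
wc_back = 2*atan(Omega_c);   % mapping the analog frequency back recovers the digital cutoff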

Figure 7.5: Illustration of the frequency warping effect.

7.2.2 Design of Digital IIR Notch Filters

We consider next the design of a second-order IIR notch filter as an example of the application of the
bilinear transformation method [Hir74]. Now, a second-order analog notch filter has a transfer function

given by

Ha(s) = (s² + Ω0²) / (s² + Bs + Ω0²). (7.32)

Its magnitude response is then

|Ha(jΩ)|² = (Ω0² − Ω²)² / [(Ω0² − Ω²)² + B²Ω²], (7.33)

which approaches unity, i.e., a gain of 0 dB, at Ω = 0 and ∞. The magnitude has a zero value at
the notch frequency Ω = Ω0. If Ω1 and Ω2, Ω2 > Ω1, denote the frequencies at which the gain is down
by 3 dB, it can be shown that the 3-dB notch bandwidth defined by (Ω2 − Ω1) is equal to B.
Applying the bilinear transformation to Ha(s) of Eq. (7.32), we arrive at

G(z) = Ha(s)|_{s=(1−z^{-1})/(1+z^{-1})}
     = [(1 + Ω0²) − 2(1 − Ω0²)z^{-1} + (1 + Ω0²)z^{-2}] / [(1 + Ω0² + B) − 2(1 − Ω0²)z^{-1} + (1 + Ω0² − B)z^{-2}], (7.34)

which can be rewritten as

G(z) = (1/2) · [(1 + α) − 2β(1 + α)z^{-1} + (1 + α)z^{-2}] / [1 − β(1 + α)z^{-1} + αz^{-2}], (7.35)

where

α = (1 + Ω0² − B) / (1 + Ω0² + B), (7.36a)

β = (1 − Ω0²) / (1 + Ω0²). (7.36b)
It is a simple exercise to show that the notch frequency ω0 and the 3-dB notch bandwidth Bw of the
digital notch filter of Eq. (7.35) are related to the constants α and β through

α = (1 − tan(Bw/2)) / (1 + tan(Bw/2)), (7.37a)

β = cos ω0. (7.37b)

Equations (7.37a) and (7.37b) are the desired design formulas to determine the constants α and β for a
given notch frequency ω0 and a 3-dB notch bandwidth Bw. It should be noted that Eq. (7.35) is precisely
the transfer function of the second-order notch filter given in Eq. (4.118) and introduced earlier in Section
4.5.2 without any derivation.
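A small MATLAB sketch based on the design formulas of Eqs. (7.37a) and (7.37b) is given below; the notch frequency and bandwidth are assumed values chosen only for illustration:

w0 = 0.4*pi;  Bw = 0.1*pi;                  % assumed notch frequency and 3-dB bandwidth
alpha = (1 - tan(Bw/2))/(1 + tan(Bw/2));    % Eq. (7.37a)
beta  = cos(w0);                            % Eq. (7.37b)
num = 0.5*(1 + alpha)*[1 -2*beta 1];        % numerator of G(z) in Eq. (7.35)
den = [1 -beta*(1 + alpha) alpha];          % denominator of G(z) in Eq. (7.35)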

Figure 7.6: Gain and phase responses of the notch transfer function of Example 7.8.

EXAMPLE 7.8

7.3 Design of Lowpass IIR Digital Filters


We illustrate now the development of a lowpass IIR digital transfer function meeting given specifications
using the bilinear transformation method. To this end, we first obtain the specifications for a prototype
lowpass analog filter from the specifications of the lowpass digital filter using the inverse transformation.
We then determine the analog transfer function Ha(s) meeting the specifications of the prototype analog
filter. Finally, the analog transfer function Ha(s) is transformed into a digital transfer function G(z) using
the bilinear transformation.
Specifically, consider the design of a lowpass IIR digital filter G(z) with a maximally flat magnitude
characteristic. The passband edge frequency ωp is 0.25π, with a passband ripple not exceeding 0.5 dB. The
minimum stopband attenuation at the stopband edge frequency ωs of 0.55π is 15 dB. Thus, if |G(e^{j0})| = 1,
then we require that

20 log10 |G(e^{j0.25π})| ≥ −0.5 dB, (7.38a)

20 log10 |G(e^{j0.55π})| ≤ −15 dB. (7.38b)

We first prewarp the digital bandedge frequencies to obtain the corresponding analog bandedge frequencies.
From Eq. (7.26) the pertinent analog bandedge frequencies Ωp and Ωs corresponding to the two
digital frequencies ωp and ωs are given by

Ωp = tan(ωp/2) = tan(0.25π/2) = 0.4142136,

Ωs = tan(ωs/2) = tan(0.55π/2) = 1.1708496.

From Eq. (5.29) the inverse transition ratio is

1/k = Ωs/Ωp = 1.1708496/0.4142136 = 2.8266814.

From the specified passband ripple of 0.5 dB, we obtain ε² = 0.1220185, and from the minimum stopband
attenuation of 15 dB, we obtain A² = 31.622777. Therefore, from Eq. (5.30) the inverse discrimination
ratio is

1/k1 = √(A² − 1)/ε = 15.841979.

Substituting these values in Eq. (5.33) we obtain the filter order N as

N = log10(1/k1) / log10(1/k) = log10(15.841979) / log10(2.8266814) = 2.6586997.

The nearest higher integer, 3, is thus taken as the filter order.


The filter order is used next to determine the 3-dB cutoff frequency Ωc. To this end, either Eq. (5.32a)
or Eq. (5.32b) can be used. However, it is preferable to use the latter since this ensures the smallest ripple
in the passband or, in other words, the smallest amplitude distortion to the signal being filtered in its band
of interest. Substituting the values of ε², Ωp, and N in Eq. (5.32a), we arrive at

Ωc = 1.419915(Ωp) = 1.419915 × 0.4142136 = 0.588148.

Using the function buttap of MATLAB, we obtain the third-order normalized lowpass Butterworth
transfer function as

Han(s) = 1 / [(s + 1)(s² + s + 1)],

which has a 3-dB frequency at Ω = 1 and therefore has to be denormalized to bring the 3-dB frequency to
Ωc = 0.588148. The denormalized transfer function is given by

Ha(s) = Han(s/0.588148) = 0.203451 / [(s + 0.588148)(s² + 0.588148s + 0.345918)].
Applying the bilinear transformation to the above, we finally arrive at the desired expression for the digital
lowpass transfer function:

G(z) = Ha(s)|_{s=(1−z^{-1})/(1+z^{-1})}
     = 0.0662272(1 + z^{-1})³ / [(1 − 0.2593284z^{-1})(1 − 0.6763804z^{-1} + 0.3918018z^{-2})]. (7.39)
The corresponding magnitude and gain responses are plotted in Figure 7.7.
The above digital lowpass filter can be designed directly in the z-domain using the M-files buttord
and butter.
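A minimal sketch of this direct design, using the specifications of this section, might read:

[N, Wn] = buttord(0.25, 0.55, 0.5, 15);   % order and 3-dB cutoff (frequencies in units of pi)
[num, den] = butter(N, Wn);               % Butterworth lowpass designed directly in the z-domain
[G, w] = freqz(num, den, 512);            % frequency response for verifying the specifications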

Figure 7.7: Magnitude and gain responses of the lowpass filter design based on the bilinear transformation method.

7.4 Design of Highpass, Bandpass, and Bandstop IIR Digital Filters
In the previous section we outlined the design of lowpass IIR digital filters. We now consider the design
of the other three types of IIR digital filters. To this end, two approaches can be followed.
The first approach consists of the following steps:
Step 1: Prewarp the specified digital frequency specifications of the desired digital filter GD(z) using
Eq. (7.26) to arrive at the frequency specifications of an analog filter HD(s) of the same type.
Step 2: Convert the frequency specifications of HD(s) into those of a prototype analog lowpass filter HLP(s)
using an appropriate frequency transformation discussed in Section 5.5.
Step 3: Design the analog lowpass filter HLP(s) using the methods described in Section 5.4.
Step 4: Convert the transfer function HLP(s) into HD(s) using the inverse of the frequency transformation
used in Step 2.
Step 5: Transform the transfer function HD(s) using the bilinear transformation of Eq. (7.20) to arrive at
the desired digital IIR transfer function GD(z).

The second approach consists of the following steps:
Step 1: Prewarp the specified digital frequency specifications of the desired digital filter GD(z) using
Eq. (7.26) to arrive at the frequency specifications of an analog filter HD(s) of the same type.
Step 2: Convert the frequency specifications of HD(s) into those of a prototype analog lowpass filter HLP(s)
using an appropriate frequency transformation discussed in Section 5.5.
Step 3: Design the analog lowpass filter HLP(s) using the methods described in Section 5.4.
Step 4: Convert the transfer function HLP(s) into the transfer function GLP(z) of an IIR digital filter
using the bilinear transformation of Eq. (7.20).
Step 5: Transform GLP(z) into the desired digital transfer function GD(z) using the appropriate spectral
transformation discussed in Section 7.5.

We illustrate the first approach in this section with the aid of examples.

Design of Highpass IIR Digital Filter

We consider the design of a Type 1 Chebyshev IIR digital highpass filter in the following example.

EXAMPLE 7.9
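Such a design can also be carried out directly in MATLAB; a minimal sketch using the Signal Processing Toolbox functions cheb1ord and cheby1, with assumed band edges and ripples rather than those of the example, is:

[N, Wn] = cheb1ord(0.7, 0.45, 1, 40);    % highpass specification: Wp > Ws (assumed values, units of pi)
[num, den] = cheby1(N, 1, Wn, 'high');   % Type 1 Chebyshev highpass transfer function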

Design of Bandpass IIR Digital Filter

The design of a Butterworth bandpass IIR digital filter is treated in the following example.

Figure 7.8: Gain response of the Type 1 Chebyshev highpass IIR digital filter of Example 7.9.

EXAMPLE 7.10
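A minimal MATLAB sketch of a Butterworth bandpass design of this kind, with assumed band edges and ripples rather than those of the example, is:

[N, Wn] = buttord([0.45 0.65], [0.3 0.75], 1, 40);   % two-element edge vectors select a bandpass design (assumed values)
[num, den] = butter(N, Wn);                          % Butterworth bandpass transfer function of order 2N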

Figure 7.9: Gain response of the Butterworth bandpass IIR digital filter of Example 7.10.

Design of Bandstop IIR Digital Filter

We consider next the design of an elliptic bandstop IIR digital filter.
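A minimal MATLAB sketch of such an elliptic bandstop design, with assumed specifications, is:

[N, Wn] = ellipord([0.3 0.75], [0.45 0.65], 1, 40);  % bandstop: stopband edges lie inside the passband edges (assumed values)
[num, den] = ellip(N, 1, 40, Wn, 'stop');            % elliptic bandstop transfer function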

Figure 7.10: Gain response of the elliptic bandstop IIR digital filter of Example 7.11.

7.5 Spectral Transformations of IIR Filters


Often, in practice, it may be necessary to modify the characteristics of a filter to meet new specifications
without repeating the filter design procedure. For example, after a lowpass filter with a passband edge at
2 kHz has been designed, it may be required to move the passband edge to 2.1 kHz. It is also possible to
design a digital filter with highpass or bandpass or bandstop characteristics by transforming a given digital
lowpass filter. We describe here the spectral transformations that can be used to transform a given lowpass
digital IIR transfer function GL(z) to another digital transfer function GD(ẑ) that could be a lowpass,
highpass, bandpass, or bandstop filter [Con70]. Figure 4.10 shows the magnitude responses of these four
types of ideal filters.
To eliminate the confusion between the complex variable z of the lowpass transfer function GL(z)
and that of the desired transfer function GD(ẑ), we shall use different symbols. Thus we shall use z^{-1} to
denote the unit delay in the prototype lowpass digital filter GL(z) and ẑ^{-1} to denote the unit delay in the
transformed filter GD(ẑ). The unit circles in the z- and ẑ-planes are defined by

z = e^{jω},  ẑ = e^{jω̂}.

We denote the transformation from the z-domain to the ẑ-domain as

z = F(ẑ). (7.40)

Then, GL(z) is transformed to GD(ẑ) through

GD(ẑ) = GL(z)|_{z = F(ẑ)}. (7.41)

To transform a rational GL(z) into a rational GD(ẑ), F(ẑ) must be a rational function of ẑ. In addition,
to guarantee the stability of GD(ẑ), the transformation should be such that the inside of the unit circle of
the z-plane is mapped into the inside of the unit circle of the ẑ-plane. Finally, to ensure that a lowpass
magnitude response is mapped into one of the four basic types of magnitude responses, points on the unit
circle of the z-plane should be mapped to points on the unit circle of the ẑ-plane.
Now, in the z-plane, a point on the unit circle is characterized by |z| = 1, a point inside the unit circle
is given by |z| < 1, and a point outside the unit circle is defined by |z| > 1. Thus, from Eq. (7.40),
|F(ẑ)| = |z| and, therefore,

|F(ẑ)| > 1, if |ẑ| > 1,
|F(ẑ)| = 1, if |ẑ| = 1, (7.42)
|F(ẑ)| < 1, if |ẑ| < 1.

Thus, from the above and Eq. (4.132), it follows that F^{-1}(ẑ) = 1/F(ẑ) is a stable allpass function. From Eq. (4.129)
we observe that the most general form of F^{-1}(ẑ) with real coefficients is thus given by

F^{-1}(ẑ) = ± ∏_{l=1}^{L} (1 − αl*ẑ)/(ẑ − αl), (7.43)

where the αl are either real or occur in complex conjugate pairs, with |αl| < 1 for stability.

7.5.1 Lowpass-to-Lowpass Transformation


To transform a prototype lowpass filter GL(z) with a cutoff frequency ωc to another lowpass filter GD(ẑ)
with a cutoff frequency ω̂c, we use the transformation

z^{-1} = F^{-1}(ẑ) = (ẑ^{-1} − α)/(1 − αẑ^{-1}), (7.44)

with α real. On the unit circle, the above transformation reduces to

e^{-jω} = (e^{-jω̂} − α)/(1 − αe^{-jω̂}),

from which we arrive at

tan(ω/2) = ((1 + α)/(1 − α)) tan(ω̂/2). (7.45)

A plot of the relation between ω and ω̂ is given in Figure 7.11 for three values of α. Note that the mapping
is nonlinear except for α = 0, resulting in a warping of the frequency scale for nonzero values of α.
However, if GL(z) has a piecewise constant lowpass magnitude response, then the transformed filter GD(ẑ)
will likewise have a similar piecewise constant lowpass magnitude response due to the monotonicity of
the transformation of Eq. (7.45). The relation between the cutoff frequency ωc of GL(z) and the cutoff
frequency ω̂c of GD(ẑ) follows from Eq. (7.45):

tan(ω̂c/2) = ((1 − α)/(1 + α)) tan(ωc/2),

Figure 7.11: Mapping of the angular frequencies in the lowpass-to-lowpass transformation for three different values of the parameter α.

which can be solved for α, yielding

α = [tan(ωc/2) − tan(ω̂c/2)] / [tan(ωc/2) + tan(ω̂c/2)] = sin((ωc − ω̂c)/2) / sin((ωc + ω̂c)/2). (7.46)

It should be noted that the lowpass-to-lowpass transformation can also be used to transform a highpass
filter with a cutoff at ωc to another highpass filter with a cutoff at ω̂c (Problem 7.30), a bandpass filter
with a center frequency at ωo to another bandpass filter with a center frequency at ω̂o (Problem 7.31), and
a bandstop filter with a center frequency at ωo to another bandstop filter with a center frequency at ω̂o
(Problem 7.32).
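For example, the parameter α of Eq. (7.46) for a given pair of cutoff frequencies can be computed in MATLAB as follows; the two cutoff values are assumed for illustration:

wc = 0.25*pi;  wc_new = 0.35*pi;                      % prototype and desired cutoff frequencies (assumed)
alpha = sin((wc - wc_new)/2)/sin((wc + wc_new)/2)     % design parameter of Eq. (7.46)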

Figure 7.12: Gain responses of the prototype lowpass filter (solid line) and the transformed lowpass filter (dashed line).

Figure 7.13: Gain response of the highpass filter of Example 7.13.

7.5.2 Other Transformations
Table 7.1 lists other useful transformations such as the lowpass-to-highpass, lowpass-to-bandpass, and
lowpass-to-bandstop transformations, in addition to the lowpass-to-lowpass transformation discussed
above. It should be noted that these spectral transformations can be used only to map one frequency
point ωc in the magnitude response of the lowpass prototype filter into a new position ω̂c, with the same
magnitude response value, for the transformed lowpass and highpass filters, or into two new positions,
ω̂c1 and ω̂c2, with the same magnitude response values, for the transformed bandpass and bandstop filters.
Hence, it is possible only to map either the passband edge or the stopband edge of the lowpass prototype
filter onto the desired position(s), but not both.

Table 7.1: Spectral transformations of a lowpass filter with a cutoff frequency ωc.

Lowpass:
  z^{-1} = (ẑ^{-1} − α)/(1 − αẑ^{-1}),
  α = sin((ωc − ω̂c)/2)/sin((ωc + ω̂c)/2),
  ω̂c = desired cutoff frequency.

Highpass:
  z^{-1} = −(ẑ^{-1} + α)/(1 + αẑ^{-1}),
  α = −cos((ωc + ω̂c)/2)/cos((ωc − ω̂c)/2),
  ω̂c = desired cutoff frequency.

Bandpass:
  z^{-1} = −[ẑ^{-2} − (2αβ/(β + 1))ẑ^{-1} + (β − 1)/(β + 1)] / [((β − 1)/(β + 1))ẑ^{-2} − (2αβ/(β + 1))ẑ^{-1} + 1],
  α = cos((ω̂c2 + ω̂c1)/2)/cos((ω̂c2 − ω̂c1)/2),
  β = cot((ω̂c2 − ω̂c1)/2)·tan(ωc/2),
  ω̂c2, ω̂c1 = desired upper and lower cutoff frequencies.

Bandstop:
  z^{-1} = [ẑ^{-2} − (2α/(1 + β))ẑ^{-1} + (1 − β)/(1 + β)] / [((1 − β)/(1 + β))ẑ^{-2} − (2α/(1 + β))ẑ^{-1} + 1],
  α = cos((ω̂c2 + ω̂c1)/2)/cos((ω̂c2 − ω̂c1)/2),
  β = tan((ω̂c2 − ω̂c1)/2)·tan(ωc/2),
  ω̂c2, ω̂c1 = desired upper and lower cutoff frequencies.

The lowpass-to-bandpass transformation given in Table 7.1 can be simplified if we consider the case
when the bandwidth of the passband of the prototype lowpass filter is the same as that of the transformed
bandpass filter, i.e., ωc = ω̂c2 − ω̂c1. Applying this constraint to the respective spectral transformation in
Table 7.1, we arrive at the modified spectral transformation given by

z^{-1} = −ẑ^{-1}(ẑ^{-1} − α)/(1 − αẑ^{-1}). (7.48)

The parameter α is determined from the desired location of the center frequency ω̂o of the bandpass filter
derived via the transformation of Eq. (7.48), which maps the zero frequency of the lowpass filter, i.e.,
ω = 0, to ω̂o. From Eq. (7.48) we get

e^{-jω} = −e^{-jω̂}(e^{-jω̂} − α)/(1 − αe^{-jω̂}). (7.49)

Substituting ω = 0 and ω̂ = ω̂o in the above equation, we arrive at

α = cos ω̂o. (7.50)

It should be noted that the lowpass-to-highpass transformation can also be used to transform a highpass
filter with a cutoff at ωc to a lowpass filter with a cutoff at ω̂c (Problem 7.33).

7.6 FIR Filter Design Based on Windowed Fourier Series


So far we have considered only the design of real-coefficient IIR digital filters that are described by a real
rational transfer function that is a ratio of polynomials in z^{-1} with real coefficients. Since the transfer
function of an analog filter is also a real rational function in the complex frequency variable s, it
is more convenient to design IIR digital filters by the conversion of a prototype analog transfer function,
and a commonly used method based on this approach was discussed. In addition, we outlined a method
to transform one type of IIR digital filter into another type. We now turn our attention to the design of
real-coefficient FIR filters. These filters are described by a transfer function that is a polynomial in z^{-1}
and therefore require different approaches for their design.
A variety of approaches have been proposed for the design of FIR digital filters. A direct and straightforward
method is based on truncating the Fourier series representation of the prescribed frequency response
and is discussed in this section. The second method is based on the observation that for a length-N FIR
digital filter, N distinct equally spaced frequency samples of its frequency response constitute the N-point
DFT of its impulse response, and hence the impulse response sequence can be readily computed by
applying an inverse DFT to these frequency samples (Problem 7.59).

7.6.1 Least Integral-Squared Error Design of FIR Filters


Let Hd(e^{jω}) denote the desired frequency response function. Since Hd(e^{jω}) is a periodic function of ω
with a period 2π, it can be expressed as a Fourier series,

Hd(e^{jω}) = Σ_{n=−∞}^{∞} hd[n] e^{−jωn}, (7.52)

where the Fourier coefficients {hd[n]} are precisely the corresponding impulse response samples and are
given by

hd[n] = (1/2π) ∫_{−π}^{π} Hd(e^{jω}) e^{jωn} dω, −∞ ≤ n ≤ ∞. (7.53)

Thus, given a frequency response specification Hd(e^{jω}), we can compute hd[n] using Eq. (7.53) and, hence,
determine the transfer function Hd(z). However, for most practical applications, the desired frequency
response is piecewise constant with sharp transitions between bands, in which case the corresponding
impulse response sequence {hd[n]} is of infinite length and noncausal.
Our objective is to find a finite-duration impulse response sequence {ht[n]} of length 2M + 1 whose
DTFT Ht(e^{jω}) approximates the desired DTFT Hd(e^{jω}) in some sense. One commonly used approximation
criterion is to minimize the integral-squared error

Φ = (1/2π) ∫_{−π}^{π} |Ht(e^{jω}) − Hd(e^{jω})|² dω, (7.54)

where

Ht(e^{jω}) = Σ_{n=−M}^{M} ht[n] e^{−jωn}. (7.55)

Using Parseval's relation (Table 3.2) we can rewrite Eq. (7.54) as

Φ = Σ_{n=−∞}^{∞} |ht[n] − hd[n]|²
  = Σ_{n=−M}^{M} |ht[n] − hd[n]|² + Σ_{n=−∞}^{−M−1} hd²[n] + Σ_{n=M+1}^{∞} hd²[n]. (7.56)

It is evident from Eq. (7.56) that the integral-squared error is minimum when ht[n] = hd[n] for −M ≤
n ≤ M, or in other words, the best finite-length approximation to the ideal infinite-length impulse response
in the mean-square error sense is simply obtained by truncation.
A causal FIR filter with an impulse response h[n] can be derived from ht[n] by delaying the latter
sequence by M samples, i.e., by forming

h[n] = ht[n − M]. (7.57)

Note that the causal filter h[n] has the same magnitude response as that of the noncausal filter ht[n], and
its phase response has a linear phase shift of ωM radians with respect to that of the noncausal filter.

7.6.2 Impulse Responses of Ideal Filters


Four commonly used frequency-selective filters are the lowpass, highpass, bandpass, and bandstop filters
whose ideal frequency responses are shown in Figure 4.10. It is straightforward to develop their corresponding
impulse responses. For example, the ideal lowpass filter of Figure 4.10(a) has a zero-phase
frequency response

HLP(e^{jω}) = 1 for |ω| ≤ ωc, and 0 for ωc < |ω| ≤ π. (7.58)

The corresponding impulse response coefficients were determined in Example 3.3 and are given by

hLP[n] = sin(ωc n)/(πn), −∞ ≤ n ≤ ∞. (7.59)

As can be seen from the above equation, the impulse response of an ideal lowpass filter is doubly infinite,
not absolutely summable, and therefore unrealizable. By setting all impulse response coefficients outside
the range −M ≤ n ≤ M equal to zero, we arrive at a finite-length noncausal approximation of length
N = 2M + 1, which when shifted to the right yields the coefficients of a causal FIR lowpass filter:

ĥLP[n] = sin(ωc(n − M))/(π(n − M)) for 0 ≤ n ≤ N − 1, and 0 otherwise. (7.60)

It should be noted that the above expression also holds for N even, in which case M is a fraction.
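Equation (7.60) translates directly into a few lines of MATLAB; the cutoff and length below are assumed values chosen only for illustration:

N = 21;  M = (N-1)/2;  wc = 0.4*pi;       % assumed filter length and cutoff frequency
n = 0:N-1;
hLP = sin(wc*(n - M))./(pi*(n - M));      % truncated ideal lowpass impulse response, Eq. (7.60)
hLP(n == M) = wc/pi;                      % replace the 0/0 sample at the center by its limiting value
[H, w] = freqz(hLP, 1, 512);              % magnitude response exhibits the Gibbs ripples of Section 7.6.3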
Likewise, the impulse response coefficients hHP[n] of the ideal highpass filter of Figure 4.10(b) are
given by

hHP[n] = 1 − ωc/π for n = 0, and −sin(ωc n)/(πn) for |n| > 0. (7.61)

Correspondingly, the impulse response coefficients hBP[n] of the ideal bandpass filter of Figure 4.10(c)
with cutoffs at ωc1 and ωc2 are given by

hBP[n] = sin(ωc2 n)/(πn) − sin(ωc1 n)/(πn), |n| ≥ 0, (7.62)

and those of the ideal bandstop filter of Figure 4.10(d) with cutoffs at ωc1 and ωc2 are given by

hBS[n] = 1 − (ωc2 − ωc1)/π for n = 0, and sin(ωc1 n)/(πn) − sin(ωc2 n)/(πn) for |n| > 0. (7.63)

All the above design methods are for single-passband or single-stopband filters with two magnitude
levels. However, it is quite straightforward to generalize the method to the design of multilevel FIR filters
and obtain the expression for the impulse response coefficients. The zero-phase frequency response of an
ideal L-band digital filter HML(z) is given by

HML(e^{jω}) = Ak, for ωk−1 ≤ ω ≤ ωk, k = 1, 2, ..., L, (7.64)

where ω0 = 0 and ωL = π. Figure 7.14 shows the zero-phase frequency response of a typical multilevel
filter. Its impulse response hML[n] is given by

hML[n] = Σ_{l=1}^{L} (Al − Al+1) · sin(ωl n)/(πn), (7.65)

Figure 7.14: A typical zero-phase multilevel frequency response.

with AL+1 = 0.
Two other types of FIR digital filters that find applications are the discrete-time Hilbert transformer
and the differentiator. The ideal Hilbert transformer, also called a 90-degree phase shifter, is characterized
by a frequency response

HHT(e^{jω}) = j for −π < ω < 0, and −j for 0 < ω < π. (7.66)

It finds application in the generation of analytic signals (see Section 11.7). The impulse response hHT[n] of
the Hilbert transformer is obtained by computing the inverse discrete-time Fourier transform of Eq. (7.66)
and is given by

hHT[n] = 0 for n even, and 2/(πn) for n odd. (7.67)

The ideal discrete-time differentiator is employed to perform the differentiation operation in discrete
time on the sampled version of a continuous-time signal. It is characterized by a frequency response given
by

HDIF(e^{jω}) = jω, −π ≤ ω ≤ π. (7.68)

The impulse response hDIF[n] of the ideal discrete-time differentiator is determined by an inverse discrete-time
Fourier transform of Eq. (7.68) and is given by

hDIF[n] = 0 for n = 0, and cos(πn)/n for |n| > 0. (7.69)

Like the ideal lowpass filter, all the above five ideal filters are also characterized by doubly infinite impulse
responses that are not absolutely summable, making them unrealizable. They can be made realizable by
truncating the impulse response sequences to finite lengths and shifting the truncated coefficients to the
right appropriately.

7.6.3 Gibbs Phenomenon


The causal FIR filters obtained by simply truncating the impulse response coefficients of the ideal filters
given in the previous section exhibit an oscillatory behavior in their respective magnitude responses, which
is more commonly referred to as the Gibbs phenomenon. We illustrate here the occurrence of the Gibbs
phenomenon by considering the design of lowpass filters. Figure 7.15 shows the magnitude responses of a
lowpass filter with a cutoff at ωc = 0.4π designed using the formula of Eq. (7.60) for two different values
of the filter length.

Figure 7.15: Magnitude responses of lowpass filters designed using the truncated impulse response of Eq. (7.60): (a) length N = 21, and (b) length N = 61.

Figure 7.16: Magnitude response of a length-51 differentiator designed by truncating the impulse response of Eq. (7.69).

The oscillatory behavior of the magnitude response on both sides of the cutoff frequency is clearly visible
in both cases. Moreover, as the length of the filter is increased, the number of ripples in both
passband and stopband increases, with a corresponding decrease in the widths of the ripples. However,
the heights of the largest ripples, which occur on both sides of the cutoff frequency, remain the same
independent of the filter length and are approximately 11 percent of the difference between the passband
and stopband magnitudes of the ideal filter [Par87].
A similar oscillatory behavior is also observed in the frequency responses of the truncated versions of
the impulse responses of other types of ideal filters described in the previous section (Exercises M7.13 to
M7.15). For example, Figure 7.16 shows the magnitude response of a length-51 differentiator designed
by truncating the impulse response coefficients of Eq. (7.69).
The reason behind the Gibbs phenomenon can be explained by considering the truncation operation as
multiplication by a finite-length window sequence w[n] and by examining the windowing process in the
frequency domain. Thus, the FIR filter obtained by truncation can be alternatively expressed as

ht[n] = hd[n] · w[n]. (7.70)

From the modulation theorem of Table 3.2, the Fourier transform of Eq. (7.70) is given by

Ht(e^{jω}) = (1/2π) ∫_{−π}^{π} Hd(e^{jφ}) Ψ(e^{j(ω−φ)}) dφ, (7.71)

where Ht(e^{jω}) and Ψ(e^{jω}) are the Fourier transforms of ht[n] and w[n], respectively. Equation (7.71)
implies that Ht(e^{jω}) is obtained by a periodic continuous convolution of the desired frequency response
Hd(e^{jω}) with the Fourier transform Ψ(e^{jω}) of the window. The process is illustrated in Figure 7.17, with
all Fourier transforms shown as real functions for convenience.

Figure 7.17: Illustration of the effect of windowing in the frequency domain.

From Eq. (7.71) it follows that if Ψ(e^{jω}) is a very narrow pulse centered at ω = 0 (ideally a delta
function) compared to variations in Hd(e^{jω}), then Ht(e^{jω}) will approximate Hd(e^{jω}) very closely. This
implies that the length 2M + 1 of the window function w[n] should be very large. On the other hand, the
length 2M + 1 of ht[n], and hence that of w[n], should be as small as possible to make the computational
complexity of the filtering process small.
Now, the window used to achieve simple truncation of the ideal infinite-length impulse response is
called a rectangular window and is given by

wR[n] = 1 for 0 ≤ |n| ≤ M, and 0 otherwise. (7.72)

The presence of the oscillatory behavior in the Fourier transform of a truncated Fourier series representation
of an ideal filter is basically due to two reasons. First, the impulse response of an ideal filter is
infinitely long and not absolutely summable, and as a result, the filter is unstable. Second, the rectangular
window has an abrupt transition to zero. The oscillatory behavior can be easily explained by examining
the Fourier transform ΨR(e^{jω}) of the rectangular window function of Eq. (7.72):

ΨR(e^{jω}) = Σ_{n=−M}^{M} e^{−jωn} = sin((2M + 1)ω/2) / sin(ω/2). (7.73)

A plot of the above is sketched in Figure 7.18 for M = 4 and 10. The frequency response ΨR(e^{jω}) has
a narrow "main lobe" centered at ω = 0. All the other ripples in the frequency response are called the
"sidelobes." The main lobe is characterized by its width 4π/(2M + 1) defined by the first zero crossings
on both sides of ω = 0. Thus, as M increases, the width of the main lobe decreases, as desired. However,
the area under each lobe remains constant, while the width of each lobe decreases with an increase in
M. This implies that with increasing M, ripples in Ht(e^{jω}) around the point of discontinuity occur more
closely but with no decrease in amplitude.
Recall also that ideally the Fourier transform of the window function should closely resemble an
impulse function centered at ω = 0, with its length 2M + 1 being as small as possible to reduce the
computational complexity of the FIR filter.

Figure 7.18: Frequency responses of the rectangular window for M = 4 and M = 10.

An increase in the length of the rectangular window function
reduces the main lobe width but unfortunately increases the computational complexity.
The rectangular window has an abrupt transition to zero outside the range −M ≤ n ≤ M, which is the
reason behind the appearance of the Gibbs phenomenon in the magnitude response of the windowed ideal
filter impulse response sequence. The Gibbs phenomenon can be reduced by either using a window that
tapers smoothly to zero at each end or by providing a smooth transition from the passband to the stopband.
Use of a tapered window causes the height of the sidelobes to diminish, with a corresponding increase in
the main lobe width, resulting in a wider transition at the discontinuity. We review in the next two sections
a few of these windows and study their properties. Elimination of the Gibbs phenomenon by introducing
a smooth transition in the filter specifications is considered in Section 7.6.6.

7.6.4 Fixed Window Functions


Many tapered windows have been proposed by various authors. A discussion of all these suggested windows
is beyond the scope of this text. We restrict our discussion to three commonly used tapered windows of
length 2M + 1, which are listed below [Sar93]:⁶

Hann:⁸ w[n] = (1/2)[1 + cos(2πn/(2M + 1))], −M ≤ n ≤ M, (7.74)

Hamming: w[n] = 0.54 + 0.46 cos(2πn/(2M + 1)), −M ≤ n ≤ M, (7.75)

Blackman: w[n] = 0.42 + 0.5 cos(2πn/(2M + 1)) + 0.08 cos(4πn/(2M + 1)), −M ≤ n ≤ M. (7.76)

A plot of the magnitude of the Fourier transform of each of the above windows in the dB scale is shown
in Figure 7.19 for M = 25.

⁶The expressions for the window functions given here are slightly different from those given in the literature.
⁸In the literature, this window is often called the Hanning window or the von Hann window.

Figure 7.19: Gain responses of the fixed window functions.

cbaracteri?.ed by a large main lobe centered at ro = 0 followed by a series of sidelobes with decreasing
ar;tplitudes. Two parameters that somewhat predict the performance of a window in FIR filter design are
its main lobe width and the relative sidelobe level. The maln lobe width AML is the distance betv.reen the
nearest zero crossings on both sides of the main lobe, and the relative sidelobe level A ..t is the difference
in dE between the amphtudes of the largest sidelobe and the main lohe.
Th understand the effect of the window function on FIR filter design, we show in Figure 7.20 a typical
relation among H 1 (eiw), 'lr(ei"'''· and Hd(elw}, the frequency responses of the windowed lowpass filter,
the window function, and the desired ideal lowpms filter, respectively [Sar93I. Since the corresponding
impulse responses are symmetric with respect ton = 0, the frequency resi?onses are of zero-phase. From
this figure, we observe that for the wlndO<Ned filter, H 1 (e 1(w, +"')) + H!{eJ(w..-w)) :;::::: 1, around the cutoff
frequency w ... As a result. H 1 (eiw.c):::: 0.5. Moreover. the passband .and stopband ripples are the same.
In addition. the distance between the maximum passband deviation and the minimum stopband value is
approximately equal to the width J'.,ML of the main lobe of the windm.•, with the center at We· The width
of rhe transition band, defined by ~w = w~ - wp • .is less 1han AML· Therefore, to ensure a fast transition
from the passband to the stopband, the w:ndow should have a very small main lobe width. On the other
hand, to reduce the passband and stopband Tippie 8, the area under the sidelobes should be very small.
Unfortunately, these two requirements are contradictory.
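These two parameters can be estimated numerically from a densely sampled spectrum of the window. The fragment below is a minimal MATLAB sketch of such a measurement; the use of a length-51 Hamming window and the 8192-point FFT grid are illustrative assumptions, not values taken from the text.

% Sketch: numerical estimate of main lobe width and relative sidelobe level
M = 25; w = hamming(2*M+1);              % illustrative window
Nfft = 8192;                             % dense frequency grid
W = abs(fft(w, Nfft)); W = W/max(W);     % normalized magnitude spectrum
omega = 2*pi*(0:Nfft-1)/Nfft;
half = W(1:Nfft/2);
kmin = find(diff(half) >= 0, 1);         % first local minimum marks the main lobe edge
DeltaML = 2*omega(kmin);                 % main lobe width (both sides of omega = 0)
Asl = -20*log10(max(half(kmin:end)));    % largest sidelobe relative to the main lobe, in dB

For the length-51 Hamming window this yields values close to the corresponding entries of Table 7.2.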
In the case of the window functions of Eqs. (7.72) and (7.74) to (7.76), the value of the ripple δ does
not depend on the filter length or the cutoff frequency ω_c, and is essentially constant. In addition, the
transition bandwidth is approximately given by
$$\Delta\omega \approx \frac{c}{M}, \tag{7.77}$$
where c is a constant for most practical purposes [Sar93].

Figure 7.20: Relations among the frequency responses of an ideal lowpass filter, a typical window, and the windowed filter.

Table 7.2: Properties of some fixed window functions.⁹

Type of        Main lobe       Relative sidelobe   Minimum stopband   Transition
window         width Δ_ML      level A_sl          attenuation        bandwidth Δω
Rectangular    4π/(2M+1)       13.3 dB             20.9 dB            0.92π/M
Hann           8π/(2M+1)       31.5 dB             43.9 dB            3.11π/M
Hamming        8π/(2M+1)       42.7 dB             54.5 dB            3.32π/M
Blackman       12π/(2M+1)      58.1 dB             75.3 dB            5.56π/M


Table 7.2 summarizes the essential properties of the above window functions.
For designing an FIR filter using one of the above windows, first the cutoff frequency ω_c is determined
from the specified passband and stopband edge frequencies, ω_p and ω_s, by setting ω_c = (ω_p + ω_s)/2.
Next, M is estimated using Eq. (7.77), where the value of the constant c is obtained from Table 7.2 for
the window chosen. The following example illustrates the effect of each of the above windows on the
frequency response of an FIR lowpass filter designed using the windowed Fourier series approach.
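The complete procedure is easily expressed in MATLAB. The following fragment is a minimal sketch only; the bandedge values and the choice of the Hamming window are illustrative assumptions.

% Sketch: lowpass FIR design by the windowed Fourier series approach
wp = 0.3*pi; ws = 0.5*pi;                   % illustrative bandedge specifications
wc = (wp + ws)/2;                           % cutoff frequency
M  = ceil(3.32*pi/(ws - wp));               % Hamming window: Delta omega = 3.32*pi/M (Table 7.2)
n  = -M:M;
hd = sin(wc*n)./(pi*n); hd(M+1) = wc/pi;    % ideal lowpass impulse response
h  = hd.*hamming(2*M+1).';                  % windowed impulse response
[H, w] = freqz(h, 1, 512);
plot(w/pi, 20*log10(abs(H))); grid;
xlabel('\omega/\pi'); ylabel('Gain, dB');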

⁹This table has been adapted from [Sar93] with the values shown in the table for ω_c = 0.4π and M = 128.

Figure 7.21: Impulse response of the truncated ideal lowpass FIR filter of length 51 with a cutoff at π/2.

Figure 7.22: Gain responses of a lowpass FIR filter designed using the fixed window functions.

¹⁰These window functions have been generated using the M-files hanning, hamming, and blackman (see Section 7.10.4).

7.6.5 Adjustable Window Functions


As indicated above, the ripple δ of the filter designed using any one of the fixed window functions is fixed.
Several windows have been developed that provide control over δ by means of an additional parameter
characterizing the window. We describe here two such windows.
The Dolph-Chebyshev window of length 2M+1 is defined by [Hel68]
$$w[n] = \frac{1}{2M+1}\left[\frac{1}{\gamma} + 2\sum_{k=1}^{M} T_{2M}\!\left(\beta\cos\frac{k\pi}{2M+1}\right)\cos\frac{2nk\pi}{2M+1}\right], \quad -M \le n \le M, \tag{7.78}$$

where γ is the relative sidelobe amplitude expressed as a fraction,
$$\gamma = \frac{\text{amplitude of sidelobe}}{\text{main lobe amplitude}}, \tag{7.79}$$
$$\beta = \cosh\left(\frac{1}{2M}\cosh^{-1}\frac{1}{\gamma}\right), \tag{7.80}$$
and $T_\ell(x)$ is the ℓth-order Chebyshev polynomial in x defined by
$$T_\ell(x) = \begin{cases}\cos\bigl(\ell\cos^{-1}x\bigr), & \text{for } |x| \le 1,\\[0.5ex] \cosh\bigl(\ell\cosh^{-1}x\bigr), & \text{for } |x| > 1.\end{cases} \tag{7.81}$$

The above window can be designed with any specified relative sidelobe level and, as in the case of the
other windows, its main lobe width can be adjusted by choosing the length appropriately. The filter order
N = 2M is estimated using the formula [Sar93]
$$N = \frac{2.056\,\alpha_s - 16.4}{2.285(\Delta\omega)}, \tag{7.82}$$
where Δω is the normalized transition bandwidth. In the case of a lowpass filter with normalized angular
passband and stopband edge frequencies ω_p and ω_s, Δω = ω_s − ω_p.
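A minimal MATLAB sketch of this order estimate is given below; the window itself can then be generated with the M-file chebwin of the Signal Processing Toolbox. The numerical specifications are illustrative assumptions.

% Sketch: Dolph-Chebyshev window design from Eq. (7.82)
alphas = 50;                                 % desired relative sidelobe level in dB
wp = 0.3*pi; ws = 0.4*pi;                    % illustrative bandedges
dw = ws - wp;                                % normalized transition bandwidth
N  = ceil((2.056*alphas - 16.4)/(2.285*dw)); % filter order, Eq. (7.82)
w  = chebwin(N+1, alphas);                   % length-(N+1) Dolph-Chebyshev window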
Figure 7.23 shows the gain response of a Dolph-Chebyshev window for M = 25, i.e., a window length
of 51, and a relative sidelobe level of 50 dB.¹¹ As can be seen from this plot, all sidelobes are of equal
height. As a result, the stopband approximation error of filters designed using this window has essentially
an equiripple behavior. Another interesting property of this window is that for a given window length it has
the smallest main lobe width compared to other windows, resulting in filters with the smallest transition
band.
The most widely used adjustable window is the Kaiser window given by [Kai74]
$$w[n] = \frac{I_0\!\left(\beta\sqrt{1-(n/M)^2}\right)}{I_0(\beta)}, \quad -M \le n \le M, \tag{7.83}$$
where β is an adjustable parameter and $I_0(u)$ is the modified zeroth-order Bessel function, which can be
expressed in a power series form
$$I_0(u) = 1 + \sum_{r=1}^{\infty}\left[\frac{(u/2)^r}{r!}\right]^2, \tag{7.84}$$
which is seen to be positive for all real values of u. In practice, it is sufficient to keep only the first 20
terms in the summation of Eq. (7.84) to arrive at a reasonably accurate value of $I_0(u)$.
¹¹The coefficients of the window have been computed using the M-file chebwin (see Section 7.10.4).

Figure 7.23: Gain response of a length-51 Dolph-Chebyshev window with a relative sidelobe level of 50 dB.

The parameter β controls the minimum attenuation α_s = −20 log₁₀(δ_s) in the stopband of the windowed
filter response. Formulas for estimating β and the filter order N = 2M, for specified α_s and normalized
transition bandwidth Δω, have been developed by Kaiser [Kai74]. The parameter β is computed from¹²
$$\beta = \begin{cases} 0.1102(\alpha_s - 8.7), & \text{for } \alpha_s > 50,\\ 0.5842(\alpha_s - 21)^{0.4} + 0.07886(\alpha_s - 21), & \text{for } 21 \le \alpha_s \le 50,\\ 0, & \text{for } \alpha_s < 21. \end{cases} \tag{7.85}$$
The filter order N is estimated using the formula
$$N \cong \frac{\alpha_s - 8}{2.285(\Delta\omega)}, \tag{7.86}$$

where Δω is the normalized transition bandwidth. It should be noted that the Kaiser window provides no
independent control over the passband ripple δ_p. However, in practice, δ_p is approximately equal to δ_s.
We illustrate the design of a linear-phase lowpass filter using the Kaiser window in the following
example.
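Before turning to that example, the computations of Eqs. (7.85) and (7.86), together with the window of Eq. (7.83), can be sketched directly in MATLAB using the built-in modified Bessel function besseli; the specifications below are illustrative assumptions.

% Sketch: Kaiser window parameters from Eqs. (7.85) and (7.86)
alphas = 60; wp = 0.3*pi; ws = 0.4*pi;      % illustrative specifications
dw = ws - wp;
if alphas > 50
    beta = 0.1102*(alphas - 8.7);
elseif alphas >= 21
    beta = 0.5842*(alphas - 21)^0.4 + 0.07886*(alphas - 21);
else
    beta = 0;
end
N = ceil((alphas - 8)/(2.285*dw));          % order estimate, Eq. (7.86)
M = ceil(N/2); n = -M:M;
w = besseli(0, beta*sqrt(1 - (n/M).^2))/besseli(0, beta);   % Kaiser window, Eq. (7.83)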

¹²Determined empirically.

Table 7.3: Coefficients of the truncated ideal lowpass filter, the Kaiser window, and the windowed lowpass filter of
Example 7.16.

n    Truncated ideal filter    Kaiser window        Windowed filter
0     0.4                       1.0                  0.4
1     0.30273069145626          0.99018600306076     0.29975969337690
2     0.09354892837886          0.96116367723306     0.08991583299184
3    -0.06236595225258          0.91416906485180    -0.05701302424933
4    -0.07568267286407          0.85118713849723    -0.06442011774899
5     0                         0.77484399922791     0
6     0.05045511524271          0.68826531740711     0.03472650590734
7     0.02672826525110          0.59490963898165     0.01590090263114
8    -0.02338723209472          0.49838664295215    -0.01165588409163
9    -0.02752097195057          0.40227124342555    -0.01353109463058
10    0                         0.30992453405416     0
11    0.02752097195057          0.22433197338245     0.00617383394107
12    0.01559148806314          0.14796795346662     0.00230704058020

Figure 7.24: (a) Gain response of the Kaiser window of Example 7.16, and (b) gain response of the lowpass filter
designed using this window.

7.6.6 Impulse Responses of FIR Filters with a Smooth Transition


We showed earlier that the FIR filter obtained by truncating the infinite-length impulse response of a
digital filter developed from a frequency response specification with sharp discontinuities exhibits oscillatory
behavior, called the Gibbs phenomenon, in its frequency response. One way of reducing the heights
of the ripples to acceptable values is to truncate the infinite-length impulse response by tapered window
functions. Another approach to eliminate the Gibbs phenomenon is to modify the frequency response
specification of the digital filter to have a transition band between the passband and the stopband and to
provide a smooth transition between the bands [Orm61]. We now discuss this second approach for lowpass
filter design. Similar modifications can be carried out for the design of the other types of filters.

¹³The coefficients of the window have been computed using the M-file kaiser (see Section 7.10.4).

Figure 7.25: (a) Lowpass filter specification with a transition region and (b) the specification of its derivative function.
The simplest modification to the zero-phase lowpass filter specification is to provide a transition band
between the passband and the stopband responses and to connect these two with a first-order spline function
(straight line), as indicated in Figure 7.25(a). An inverse discrete-time Fourier transform of the modified
frequency response $H_{LP}(e^{j\omega})$ leads to the expression for its corresponding impulse response coefficients
$h_{LP}[n]$. However, as illustrated below, a simpler method is to compute $h_{LP}[n]$ from the inverse discrete-
time Fourier transform of the derivative of the specified frequency response $H_{LP}(e^{j\omega})$.
Let $G(e^{j\omega})$ denote $dH_{LP}(e^{j\omega})/d\omega$ with a corresponding inverse DTFT g[n]. Its specification will be
thus as indicated in Figure 7.25(b). It follows from the differentiation-in-frequency property of the DTFT
given in Table 3.2 that $h_{LP}[n] = jg[n]/n$. From the inverse DTFT g[n] of $G(e^{j\omega})$ given in Figure 7.25(b),
we thus arrive at the impulse response of the modified lowpass filter [Par87]:
$$h_{LP}[n] = \begin{cases}\dfrac{\omega_c}{\pi}, & n = 0,\\[1ex] \dfrac{\sin(\Delta\omega n/2)}{\Delta\omega n/2}\cdot\dfrac{\sin(\omega_c n)}{\pi n}, & |n| > 0,\end{cases} \tag{7.87}$$
where Δω = ω_s − ω_p and ω_c = (ω_p + ω_s)/2.


A still smoother transition between the passband and the stopband of the lowpass filter can be provided
by specifying the transition function by a higher-order spline. The corresponding impulse response for a
Pth-order spline as the transition function is given by [Bur92], [Par87]
$$h_{LP}[n] = \begin{cases}\dfrac{\omega_c}{\pi}, & n = 0,\\[1ex] \left(\dfrac{\sin(\Delta\omega n/2P)}{\Delta\omega n/2P}\right)^{\!P}\dfrac{\sin(\omega_c n)}{\pi n}, & |n| > 0.\end{cases} \tag{7.88}$$
As the above example points out, the effect of P on the frequency response of the truncated filter is not
that obvious. For a given filter length N and transition bandwidth Δω, the optimum value of P minimizing
the integral-squared approximation error has been shown to be given by [Bur92]
$$P = 0.624(\Delta\omega)N. \tag{7.89}$$
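A minimal MATLAB sketch of Eqs. (7.88) and (7.89) is given below; the bandedges and the filter length are illustrative assumptions.

% Sketch: impulse response with a Pth-order spline transition, Eqs. (7.88) and (7.89)
wp = 0.3*pi; ws = 0.5*pi; N = 40;           % illustrative specifications
dw = ws - wp; wc = (wp + ws)/2;
P  = round(0.624*dw*N);                     % optimum spline order, Eq. (7.89)
n  = -N/2:N/2;
h  = (sin(dw*n/(2*P))./(dw*n/(2*P))).^P .* sin(wc*n)./(pi*n);
h(n == 0) = wc/pi;                          % value at n = 0, Eq. (7.88)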

Figure 7.26: Magnitude responses of FIR lowpass filters with smooth transition between bands obtained using spline
transition functions.

Various other transition functions have been investigated (see, for example, Problem 7.52) [Bur92],
[Par87]. These transition functions can also be employed for the design of other types of filters. So far, no
specific guidelines have been advanced to select the optimum transition function for a given filter design
problem. As a result, it should be selected by a trial-and-error procedure.

7.7 Computer-Aided Design of Digital Filters


The filter design algorithms described in Sections 7 2, 7 .5, and 7.6 can be easily implemented on a computer.
ln addition, a number of filter design algorithms have been advanced that rely on some type of iterative
optimization techniques that are used to minimize the error between the desired frequency response and
th.~t of the computer-generated fitter. In this section, we consider the computer-aided design of FIR and llR
di,sital filters. First, we describe two specific design approaches based on iterative optimization techniques.
A v-driety of software packages are presently commercially available that have made the de5ign of digital
fil':en rather simple to implement on a computer. In this section, we consider only the application of
M"'-Tl.AB.
The basic idea behind the computer-based iterative technique is as follows. Let H(ei"') denote the
frequency response of the digital transfer function H (z) to be designed so that it approximates the desired
frequency response D(ejw). given as a pieL-ewise linear function of w. in some sense. The objective is to
OO:ermine iteratively the coefficients of the transfer function so that the difference between H(dw) and
Dleiw) for aU values of w over closed subintervah of 0 ~ w 5: 1r is minimized. Usua1ly. this difference
is :>pecitied as a weighted error function i'(co) given by

(7.90)

where W(eiw) is some user-specified positive \o\o"eighting function.


A commonly used approximation measure, called the Chebyshev or minimax criterion, is to minimize
the peak absolute value of the weighted error $\mathcal{E}(\omega)$:
$$\varepsilon = \max_{\omega \in R} |\mathcal{E}(\omega)|, \tag{7.91}$$
where R is the set of disjoint frequency bands in the range 0 ≤ ω ≤ π on which the desired frequency
response is defined. In filtering applications, R is composed of the passbands and stopbands of the filter

to be designed. For example, for a lowpass filter design, R is the disjoint union of the frequency ranges
[0, ω_p] and [ω_s, π], where ω_p and ω_s are, respectively, the passband edge and the stopband edge.
A second approximation measure, called the least-p criterion, is to minimize the integral of the pth power
of the weighted error function $\mathcal{E}(\omega)$:
$$\varepsilon = \int_{\omega\in R} \left[W(e^{j\omega})\left(H(e^{j\omega}) - D(e^{j\omega})\right)\right]^p d\omega, \tag{7.92}$$
over the specified frequency range R with p a positive integer. The least-squares criterion obtained from
Eq. (7.92) with p = 2 is used often for simplicity. If the weighting function $W(e^{j\omega})$ is 1 over the frequency
range [0, π], we have shown in Section 7.6.1 that the FIR filter obtained by simply truncating the Fourier
series of the desired amplitude response $D(e^{j\omega})$ has the least integral-squared error. However, the resulting
FIR filter exhibits large peak errors near the bandedges due to the Gibbs phenomenon. Hence, $W(e^{j\omega}) = 1$
is usually not used.
It can be shown that as p → ∞, the least pth solution approaches the minimax solution. In practice,
the integral error measure of Eq. (7.92) is approximated by a finite sum given by
$$\varepsilon = \sum_{i=1}^{K}\left[W(e^{j\omega_i})\left(H(e^{j\omega_i}) - D(e^{j\omega_i})\right)\right]^p, \tag{7.93}$$
where ω_i, 1 ≤ i ≤ K, is a suitably chosen dense grid of digital angular frequencies. The least-squares
criterion obtained from Eq. (7.93) with p = 2 is used often for simplicity.
In the case of linear-phase FIR filter design, $H(e^{j\omega})$ and $D(e^{j\omega})$ are zero-phase frequency responses.
On the other hand, for IIR filter design, these functions are replaced with their magnitude functions. The
design objective is thus to iteratively adjust the filter parameters so that ε defined by either Eq. (7.91) or
Eq. (7.93) is a minimum.

7.7.1 Design of Equiripple Linear-Phase FIR Filters


The linear-phase FIR filter obtained by minimizing the peak absolute value of the weighted error ε given by
Eq. (7.91) is usually called the equiripple FIR filter, since here, after ε has been minimized, the weighted
error function $\mathcal{E}(\omega)$ exhibits an equiripple behavior in the frequency range of interest. We briefly outline
below the basic idea behind the Parks-McClellan algorithm, the most widely used and highly efficient
algorithm for designing the equiripple linear-phase FIR filter [Par72].
In Section 4.4.3, we defined the four types of linear-phase FIR filters. The general form of the frequency
response $H(e^{j\omega})$ of a linear-phase FIR filter of length N+1 is given by
$$H(e^{j\omega}) = e^{-j\omega N/2}\,e^{j\beta}\,\breve{H}(\omega), \tag{7.94}$$
where the amplitude response $\breve{H}(\omega)$ is a real function of ω. The weighted error function in this case
involves the amplitude response and is given by
$$\mathcal{E}(\omega) = W(\omega)\left[\breve{H}(\omega) - D(\omega)\right], \tag{7.95}$$
where D(ω) is the desired amplitude response and W(ω) is a positive weighting function. W(ω) is chosen to
control the relative size of the peak errors in the specified frequency bands. The Parks-McClellan algorithm
is based on iteratively adjusting the coefficients of the amplitude response until the peak absolute value of
$\mathcal{E}(\omega)$ is minimized.

If the minimum value of the peak absolute value of $\mathcal{E}(\omega)$ in a band ω_a ≤ ω ≤ ω_b is ε₀, then the absolute
error satisfies
$$\left|\breve{H}(\omega) - D(\omega)\right| \le \frac{\varepsilon_0}{|W(\omega)|}, \quad \omega_a \le \omega \le \omega_b.$$
In typical filter design applications, the desired amplitude response is given by
$$D(\omega) = \begin{cases}1, & \text{in the passband},\\ 0, & \text{in the stopband},\end{cases}$$
and the amplitude response $\breve{H}(\omega)$ is required to satisfy the above desired response with a ripple of ±δ_p in
the passband, and a ripple of ±δ_s in the stopband. As a result, it is evident from Eq. (7.95) that the weighting
function can be chosen either as
$$W(\omega) = \begin{cases}1, & \text{in the passband},\\ \delta_p/\delta_s, & \text{in the stopband},\end{cases}$$
or as
$$W(\omega) = \begin{cases}\delta_s/\delta_p, & \text{in the passband},\\ 1, & \text{in the stopband}.\end{cases}$$

By a clever manipulation, the expression for the amplitude response for each of the four types of
linear-phase FIR filters can be expressed in the same form and, as a result, basically the same algorithm
can be adapted to design any one of the four types of filters. To develop this general form for the amplitude
response expression, we consider each of the four types of filters separately.
For the Type 1 linear-phase FIR filter, the amplitude response given by Eq. (4.81) can be rewritten
using the notation N = 2M in the form
$$\breve{H}(\omega) = \sum_{k=0}^{M} a[k]\cos(\omega k), \tag{7.96}$$
where
$$a[0] = h[M], \qquad a[k] = 2h[M-k], \quad 1 \le k \le M. \tag{7.97}$$
For the Type 2 linear-phase FIR filter, the amplitude response given by Eq. (4.86) can be rewritten in
the form
$$\breve{H}(\omega) = \sum_{k=1}^{(2M+1)/2} b[k]\cos\!\left(\omega\bigl(k - \tfrac{1}{2}\bigr)\right), \tag{7.98}$$
where
$$b[k] = 2h\!\left[\tfrac{2M+1}{2} - k\right], \quad 1 \le k \le \tfrac{2M+1}{2}. \tag{7.99}$$
Equation (7.98) can be expressed in the form
$$\breve{H}(\omega) = \cos\!\left(\tfrac{\omega}{2}\right)\sum_{k=0}^{(2M-1)/2} \tilde{b}[k]\cos(\omega k), \tag{7.100}$$
where
$$b[1] = \tilde{b}[0] + \tfrac{1}{2}\tilde{b}[1], \qquad b[k] = \tfrac{1}{2}\bigl(\tilde{b}[k] + \tilde{b}[k-1]\bigr), \quad 2 \le k \le \tfrac{2M-1}{2}, \qquad b\!\left[\tfrac{2M+1}{2}\right] = \tfrac{1}{2}\tilde{b}\!\left[\tfrac{2M-1}{2}\right]. \tag{7.101}$$

The amplitude response for the case of the Type 3 linear-phase FIR filter given by Eq. (4.91) can be
rewritten in the form
$$\breve{H}(\omega) = \sum_{k=1}^{M} c[k]\sin(\omega k), \tag{7.102}$$
where
$$c[k] = 2h[M-k], \quad 1 \le k \le M. \tag{7.103}$$
Equation (7.102) can be expressed in the form
$$\breve{H}(\omega) = \sin(\omega)\sum_{k=0}^{M-1} \tilde{c}[k]\cos(\omega k), \tag{7.104}$$
where
$$c[1] = \tilde{c}[0] - \tfrac{1}{2}\tilde{c}[2], \qquad c[k] = \tfrac{1}{2}\bigl(\tilde{c}[k-1] - \tilde{c}[k+1]\bigr), \quad 2 \le k \le M-2, \qquad c[M-1] = \tfrac{1}{2}\tilde{c}[M-2], \qquad c[M] = \tfrac{1}{2}\tilde{c}[M-1]. \tag{7.105}$$

Likewise, the amplitude response for the case of the Type 4 linear-phase FIR filter given by Eq. (4.96)
can be rewritten in the form
$$\breve{H}(\omega) = \sum_{k=1}^{(2M+1)/2} d[k]\sin\!\left(\omega\bigl(k - \tfrac{1}{2}\bigr)\right), \tag{7.106}$$
where
$$d[k] = 2h\!\left[\tfrac{2M+1}{2} - k\right], \quad 1 \le k \le \tfrac{2M+1}{2}. \tag{7.107}$$
Equation (7.106) can be expressed in the form
$$\breve{H}(\omega) = \sin\!\left(\tfrac{\omega}{2}\right)\sum_{k=0}^{(2M-1)/2} \tilde{d}[k]\cos(\omega k), \tag{7.108}$$
where
$$d[1] = \tilde{d}[0] - \tfrac{1}{2}\tilde{d}[1], \qquad d[k] = \tfrac{1}{2}\bigl(\tilde{d}[k-1] - \tilde{d}[k]\bigr), \quad 2 \le k \le \tfrac{2M-1}{2}, \qquad d\!\left[\tfrac{2M+1}{2}\right] = \tfrac{1}{2}\tilde{d}\!\left[\tfrac{2M-1}{2}\right]. \tag{7.109}$$

If we now examine Eqs. (7.96), (7.100), (7.104), and (7.108), we observe that the amplitude response
for all four types of linear-phase FIR filters can be expressed in the form
$$\breve{H}(\omega) = Q(\omega)\,A(\omega), \tag{7.110}$$
where the first factor Q(ω) is given by
$$Q(\omega) = \begin{cases}1, & \text{for Type 1},\\ \cos(\omega/2), & \text{for Type 2},\\ \sin(\omega), & \text{for Type 3},\\ \sin(\omega/2), & \text{for Type 4},\end{cases} \tag{7.111}$$
and the second factor A(ω) is of the form
$$A(\omega) = \sum_{k=0}^{L} \tilde{a}[k]\cos(\omega k), \tag{7.112}$$
where
$$\tilde{a}[k] = \begin{cases}a[k], & \text{for Type 1},\\ \tilde{b}[k], & \text{for Type 2},\\ \tilde{c}[k], & \text{for Type 3},\\ \tilde{d}[k], & \text{for Type 4},\end{cases} \tag{7.113}$$
with
$$L = \begin{cases}M, & \text{for Type 1},\\[0.5ex] \dfrac{2M-1}{2}, & \text{for Type 2},\\[0.5ex] M-1, & \text{for Type 3},\\[0.5ex] \dfrac{2M-1}{2}, & \text{for Type 4}.\end{cases} \tag{7.114}$$
Substituting Eq. (7.110) in Eq. (7.95), we arrive at a modified form of the weighted approximation
error given by
$$\mathcal{E}(\omega) = W(\omega)\left[Q(\omega)A(\omega) - D(\omega)\right] = W(\omega)Q(\omega)\left[A(\omega) - \frac{D(\omega)}{Q(\omega)}\right]. \tag{7.115}$$
Using the notations $\tilde{W}(\omega) = W(\omega)Q(\omega)$ and $\tilde{D}(\omega) = D(\omega)/Q(\omega)$, we can rewrite the above equation as
$$\mathcal{E}(\omega) = \tilde{W}(\omega)\left[A(\omega) - \tilde{D}(\omega)\right]. \tag{7.116}$$

The optimization problem now becomes the determination of the coefficients ã[k], 0 ≤ k ≤ L, which
minimize the peak absolute value ε of the weighted approximation error $\mathcal{E}(\omega)$ of Eq. (7.116) over the
specified frequency bands ω ∈ R. After the coefficients ã[k] have been determined, the corresponding
coefficients of the original amplitude response are computed, from which the filter coefficients are then
obtained. For example, if the filter being designed is of Type 2, we observe from Eq. (7.113) that b̃[k] =
ã[k], and from Eq. (7.114), M = (2L+1)/2. Knowing b̃[k] and M, we determine next b[k] using
Eq. (7.101). Substituting these values of b[k] in Eq. (7.99), we finally arrive at the filter coefficients h[n].
In a similar manner, the filter coefficients for the other three types of FIR filters can be determined from
ã[k].
Parks and McClellan solved the above problem by applying the following theorem from the theory of
Chebyshev approximation [Par72]:

Alternation Theorem. The amplitude function A(ω) of Eq. (7.112) is the best unique approximation
of the desired amplitude response obtained by minimizing the peak absolute value ε of $\mathcal{E}(\omega)$ given by
Eq. (7.115) if and only if there exist at least L+2 extremal angular frequencies, ω₀, ω₁, ..., ω_{L+1}, in
a closed subset R of the frequency range 0 ≤ ω ≤ π such that ω₀ < ω₁ < ··· < ω_L < ω_{L+1} and
$\mathcal{E}(\omega_i) = -\mathcal{E}(\omega_{i+1})$ with $|\mathcal{E}(\omega_i)| = \varepsilon$ for all i in the range 0 ≤ i ≤ L+1.

Let us examine the behavior of the amplitude response for a Type 1 equiripple lowpass FIR filter
whose approximation error $\mathcal{E}(\omega)$ satisfies the condition of the above theorem. The peaks of $\mathcal{E}(\omega)$ are at
ω = ω_i, 0 ≤ i ≤ L+1, where $d\mathcal{E}(\omega)/d\omega = 0$. Since in the passband and the stopband, Q(ω) and D̃(ω) are
piecewise constant, it follows from Eq. (7.116) that
$$\frac{d\mathcal{E}(\omega)}{d\omega} = \frac{dA(\omega)}{d\omega} = 0,$$
or, in other words, the amplitude response A(ω) also has peaks at ω = ω_i. Using the relation
$$\cos(\omega k) = T_k(\cos\omega),$$
where $T_k(x)$ is the kth-order Chebyshev polynomial defined by
$$T_k(x) = \cos\bigl(k\cos^{-1}x\bigr),$$
the amplitude response A(ω) given by Eq. (7.112) can be expressed as a power series in cos ω,
$$A(\omega) = \sum_{k=0}^{L} \alpha[k](\cos\omega)^k,$$
which is seen to be an Lth-order polynomial in cos ω. As a result, A(ω) can have at most L−1 local
minima and maxima inside the specified passband and stopband. Moreover, at the bandedges ω = ω_p
and ω = ω_s, $|\mathcal{E}(\omega)|$ is a maximum, and hence A(ω) has extrema at these angular frequencies. In addition,
A(ω) may also have extrema at ω = 0 and ω = π. Therefore, there are at most L+3 extremal frequencies
of $\mathcal{E}(\omega)$. Similarly, in the case of a linear-phase FIR filter with K specified bandedges and designed using
the Remez algorithm, there can be at most L+K+1 extremal frequencies. An equiripple linear-phase
FIR filter with more than L+2 extremal frequencies has been called an extra-ripple filter.
To arrive at the optimum solution, we need to solve the set of L+2 equations
$$A(\omega_i) + \frac{(-1)^i\varepsilon}{\tilde{W}(\omega_i)} = \tilde{D}(\omega_i), \quad 0 \le i \le L+1, \tag{7.117}$$
for the unknowns ã[i] and ε, provided the L+2 extremal angular frequencies are known. To this end,
Eq. (7.117) is rewritten in matrix form as
$$\begin{bmatrix} 1 & \cos(\omega_0) & \cdots & \cos(L\omega_0) & 1/\tilde{W}(\omega_0)\\ 1 & \cos(\omega_1) & \cdots & \cos(L\omega_1) & -1/\tilde{W}(\omega_1)\\ \vdots & \vdots & & \vdots & \vdots\\ 1 & \cos(\omega_L) & \cdots & \cos(L\omega_L) & (-1)^L/\tilde{W}(\omega_L)\\ 1 & \cos(\omega_{L+1}) & \cdots & \cos(L\omega_{L+1}) & (-1)^{L+1}/\tilde{W}(\omega_{L+1}) \end{bmatrix} \begin{bmatrix}\tilde{a}[0]\\ \tilde{a}[1]\\ \vdots\\ \tilde{a}[L]\\ \varepsilon\end{bmatrix} = \begin{bmatrix}\tilde{D}(\omega_0)\\ \tilde{D}(\omega_1)\\ \vdots\\ \tilde{D}(\omega_L)\\ \tilde{D}(\omega_{L+1})\end{bmatrix}, \tag{7.118}$$
which can, in principle, be solved for the unknowns if the locations of the L+2 extremal frequencies are
known a priori. The Remez exchange algorithm, a highly efficient iterative procedure, is used to determine
the locations of the extremal frequencies and consists of the following steps at each iteration stage:

Step 1: A set of initial values for the extremal frequencies are either chosen or are available from the
completion of the previous stage.

Step 2: The value of ε is then computed by solving Eq. (7.118), resulting in the expression
$$\varepsilon = \frac{c_0\tilde{D}(\omega_0) + c_1\tilde{D}(\omega_1) + \cdots + c_{L+1}\tilde{D}(\omega_{L+1})}{\dfrac{c_0}{\tilde{W}(\omega_0)} - \dfrac{c_1}{\tilde{W}(\omega_1)} + \cdots + \dfrac{(-1)^{L+1}c_{L+1}}{\tilde{W}(\omega_{L+1})}}, \tag{7.119}$$
where the constant c_i is given by
$$c_i = \prod_{\substack{\ell=0\\ \ell\ne i}}^{L+1} \frac{1}{\cos(\omega_i) - \cos(\omega_\ell)}. \tag{7.120}$$

Step 3: The values of the amplitude response A(ω) at ω = ω_i are then computed using
$$A(\omega_i) = \frac{(-1)^{i+1}\varepsilon}{\tilde{W}(\omega_i)} + \tilde{D}(\omega_i), \quad 0 \le i \le L+1.$$

Step 4: The polynomial A(ω) is determined by interpolating the above values at the L+2 extremal
frequencies using the Lagrange interpolation formula:
$$A(\omega) = \sum_{i=0}^{L+1} A(\omega_i)\,P_i(\cos\omega),$$
where
$$P_i(\cos\omega) = \prod_{\substack{\ell=0\\ \ell\ne i}}^{L+1} \frac{\cos\omega - \cos\omega_\ell}{\cos\omega_i - \cos\omega_\ell}, \quad 0 \le i \le L+1.$$

Step 5: The new weighted error function $\mathcal{E}(\omega)$ of Eq. (7.116) is computed at a dense set S (S ≥ L) of
frequencies. In practice, S = 16L is adequate. Determine the L+2 new extremal frequencies from
the values of $\mathcal{E}(\omega)$ evaluated at the dense set of frequencies.

Step 6: If the peak values ε of $\mathcal{E}(\omega)$ are equal in magnitude, the algorithm has converged. Otherwise, go
back to Step 2.
Figure 7.27 demonstrates how the weighted error function changes from one iteration stage to the next
and how the new extremal frequencies are determined. Finally, the iteration process is stopped after the
difference between the value of the peak error ε calculated at any stage and that at the previous stage is
below a preset threshold value, such as 10⁻⁶. In practice, the process converges after very few iterations.
The basic principle of the Remez exchange algorithm is illustrated in the following example [Par87].
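The computation of Step 2 is straightforward to express in MATLAB. The fragment below is a minimal sketch of Eqs. (7.119) and (7.120); the candidate extremal frequencies, desired values, and weights are illustrative assumptions.

% Sketch: Step 2 of the Remez exchange algorithm, Eqs. (7.119) and (7.120)
omega = [0 0.2 0.4 0.6 0.8 1.0]*pi;     % L+2 candidate extremal frequencies (here L = 4)
Dt = [1 1 1 0 0 0];                      % Dtilde at the candidate frequencies
Wt = [1 1 1 2 2 2];                      % Wtilde at the candidate frequencies
x  = cos(omega); Lp2 = length(omega);
c  = zeros(1, Lp2);
for i = 1:Lp2
    c(i) = 1/prod(x(i) - x([1:i-1, i+1:Lp2]));   % Eq. (7.120)
end
sgn  = (-1).^(0:Lp2-1);
eps0 = (c*Dt.')/((sgn.*c)*(1./Wt).');            % Eq. (7.119)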

Figure 7.27: Plots of the desired response D̃(ω), the amplitude response A_k(ω), and the error ℰ_k(ω) at the end of
the kth iteration. The locations of the new extremal frequencies are given by ω_i^{k+1}.


Figure 7.28: Illustration of the Remez algorithm.

7.8 Design of FIR Digital Filters with Least-Mean-Square Error


For the design of a linear-phase FIR filter with a minimum mean-square error criterion, the error measure
of Eq. (7.93) reduces to
$$\varepsilon = \sum_{i=1}^{K}\left|W(\omega_i)\left[\breve{H}(\omega_i) - D(\omega_i)\right]\right|^2, \tag{7.129}$$
where $\breve{H}(\omega)$ is the amplitude response of the designed filter, D(ω) is the desired amplitude response, and
W(ω) is the weighting function. Now, as shown in Section 7.7.1, the amplitude response for all four types
of linear-phase FIR filters can be expressed in the form
$$\breve{H}(\omega) = Q(\omega)\sum_{k=0}^{L}\tilde{a}[k]\cos(\omega k), \tag{7.130}$$

where Q(ω), ã[k], and L are given by Eqs. (7.111), (7.113), and (7.114), respectively. Hence, the mean-
square error of Eq. (7.129) is a function of the filter parameters ã[k]. To arrive at the minimum value of ε,
we set
$$\frac{\partial\varepsilon}{\partial\tilde{a}[k]} = 0, \quad 0 \le k \le L,$$
which results in a set of (L+1) linear equations that can be solved for ã[k].
Without any loss of generality, we consider here the design of a Type 1 linear-phase FIR filter. In this
case, Q(ω) = 1, ã[k] = a[k], and L = M. The expression for the mean-square error then takes the form
$$\varepsilon = \sum_{i=1}^{K}\left\{\sum_{k=0}^{M} W(\omega_i)\,a[k]\cos(\omega_i k) - W(\omega_i)D(\omega_i)\right\}^2. \tag{7.131}$$

Using the notation
$$\mathbf{H} = \begin{bmatrix} W(\omega_1) & W(\omega_1)\cos(\omega_1) & \cdots & W(\omega_1)\cos(M\omega_1)\\ W(\omega_2) & W(\omega_2)\cos(\omega_2) & \cdots & W(\omega_2)\cos(M\omega_2)\\ \vdots & \vdots & & \vdots\\ W(\omega_K) & W(\omega_K)\cos(\omega_K) & \cdots & W(\omega_K)\cos(M\omega_K)\end{bmatrix},$$
$$\mathbf{a} = \begin{bmatrix}a[0] & a[1] & \cdots & a[M]\end{bmatrix}^T,$$
and
$$\mathbf{d} = \begin{bmatrix}W(\omega_1)D(\omega_1) & W(\omega_2)D(\omega_2) & \cdots & W(\omega_K)D(\omega_K)\end{bmatrix}^T,$$
we can express Eq. (7.131) in the form
$$\varepsilon = \mathbf{e}^T\mathbf{e},$$
where
$$\mathbf{e} = \mathbf{H}\mathbf{a} - \mathbf{d}.$$
The minimum mean-square solution is then obtained by solving the normal equations [Par87]:
$$\mathbf{H}^T\mathbf{H}\,\mathbf{a} = \mathbf{H}^T\mathbf{d}.$$
If K ≫ M, which is typically the case, the above equation should be solved using an iterative method
such as the Levinson-Durbin algorithm [Lev47], [Dur59], as the direct solution is often ill-conditioned.
A similar formulation can be carried out for the other three types of linear-phase FIR filters. Note that
the design approach outlined here can be used to design a linear-phase FIR filter meeting any arbitrarily
shaped desired response.
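The following MATLAB fragment is a minimal sketch of this least-squares design for a Type 1 lowpass filter. It solves the normal equations directly with the backslash operator rather than with the iterative procedure mentioned above; the cutoff frequency, order, and grid size are illustrative assumptions.

% Sketch: Type 1 linear-phase FIR design by the least-mean-square criterion
M  = 20; K = 500;                         % illustrative order parameter and grid size
wk = pi*(0:K-1)'/(K-1);                   % dense frequency grid
D  = double(wk <= 0.4*pi);                % desired lowpass amplitude response
W  = ones(K, 1);                          % uniform weighting
H  = (W*ones(1, M+1)).*cos(wk*(0:M));     % matrix H of the formulation above
d  = W.*D;
a  = (H'*H)\(H'*d);                       % normal equations
h  = [a(M+1:-1:2)/2; a(1); a(2:M+1)/2];   % impulse response via Eq. (7.97)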

7.9 Constrained Least-Square Design of FIR Digital Filters


For many specialized filter design problems, the amplitude response $\breve{H}(\omega)$ of the FIR filter is required to
satisfy some side constraints. For example, it might be desired that the frequency response have a null at
a specified frequency ω₀. FIR filters with constraints on their frequency response can be designed using
the least-mean-squares approach by incorporating the constraints into the design algorithm. To illustrate
this approach, assume, without any loss of generality, that the filter to be designed is a Type 1 linear-phase
FIR filter of order N = 2M with an amplitude response given by Eq. (7.96), which is constrained to have
a null at ω₀. This can be written as a single equality constraint Ga = d, where
$$\mathbf{G} = \begin{bmatrix}1 & \cos(\omega_0) & \cos(2\omega_0) & \cdots & \cos(M\omega_0)\end{bmatrix},$$
$$\mathbf{a} = \begin{bmatrix}a[0] & a[1] & \cdots & a[M]\end{bmatrix}^T,$$
$$\mathbf{d} = [0]. \tag{7.132}$$

In the general case, if there are r constraints, then G is an r × (M+1) matrix and d is an r × 1 column
vector. Such a filter can be designed by using the constrained least-square method. This method minimizes
the square error
$$\varepsilon = \left(\frac{1}{\pi}\int_0^{\pi} W(\omega)\left[\breve{H}(\omega) - D(\omega)\right]^2 d\omega\right)^{1/2} \tag{7.133}$$
subject to the side constraints
$$\mathbf{G}\mathbf{a} = \mathbf{d}. \tag{7.134}$$
In Eq. (7.133), as before, D(ω) is the desired amplitude response and W(ω) is the weighting function. The
side constraints of Eq. (7.134) need not be linear, but the solution is more easily obtained if they are.
To minimize ε² subject to the constraints, we first form the Lagrangian
$$\Phi = \varepsilon^2 + \boldsymbol{\mu}^T\left[\mathbf{G}\mathbf{a} - \mathbf{d}\right], \tag{7.135}$$

where
$$\boldsymbol{\mu} = [\mu_1, \mu_2, \ldots, \mu_r]^T$$
is the vector of the so-called Lagrange multipliers. We can derive the necessary conditions for the mini-
mization of ε² by setting the derivatives of Φ with respect to the filter parameters a[k] and the Lagrange
multipliers μ_i to zero, resulting in the following equations:
$$\mathbf{R}\mathbf{a} + \mathbf{G}^T\boldsymbol{\mu} = \mathbf{c}, \qquad \mathbf{G}\mathbf{a} = \mathbf{d}, \tag{7.136}$$
where the coefficients of the vector c = [c[0], c[1], ..., c[M]]ᵀ are given by
$$c[0] = \frac{1}{\pi}\int_0^{\pi} W(\omega)D(\omega)\,d\omega, \qquad c[k] = \frac{1}{\pi}\int_0^{\pi} W(\omega)D(\omega)\cos(k\omega)\,d\omega,$$
and the (i, k)th element R_{i,k} of the matrix R is given by
$$R_{i,k} = \frac{1}{\pi}\int_0^{\pi} W(\omega)\cos(i\omega)\cos(k\omega)\,d\omega.$$
The two matrix equations of Eq. (7.136) can be written as
$$\begin{bmatrix}\mathbf{R} & \mathbf{G}^T\\ \mathbf{G} & \mathbf{0}\end{bmatrix}\begin{bmatrix}\mathbf{a}\\ \boldsymbol{\mu}\end{bmatrix} = \begin{bmatrix}\mathbf{c}\\ \mathbf{d}\end{bmatrix}. \tag{7.137}$$

Solving the above equation, we get
$$\boldsymbol{\mu} = \left(\mathbf{G}\mathbf{R}^{-1}\mathbf{G}^T\right)^{-1}\left(\mathbf{G}\mathbf{R}^{-1}\mathbf{c} - \mathbf{d}\right), \qquad \mathbf{a} = \mathbf{R}^{-1}\left(\mathbf{c} - \mathbf{G}^T\boldsymbol{\mu}\right). \tag{7.138}$$
When the integrals needed to form R and c cannot be calculated simply, then R and c can be approximated
using the discrete forms, as in Section 7.8.
In the special case when the error function is not weighted, i.e., W(ω) = 1, R becomes an identity
matrix, and the c_i are simply the coefficients of the Fourier series expansion of D(ω). As a result, Eq. (7.138)
reduces to
$$\boldsymbol{\mu} = \left(\mathbf{G}\mathbf{G}^T\right)^{-1}\left(\mathbf{G}\mathbf{c} - \mathbf{d}\right), \qquad \mathbf{a} = \mathbf{c} - \mathbf{G}^T\boldsymbol{\mu}.$$
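A minimal MATLAB sketch of Eq. (7.138) for a lowpass design with a single null constraint is given below. The discrete approximations R ≈ HᵀH and c ≈ Hᵀd, as well as all numerical values, are assumptions made for the sketch.

% Sketch: constrained least-squares lowpass design with a null at w0
M  = 20; K = 500; wc = 0.4*pi; w0 = 0.6*pi;   % illustrative values
wk = pi*(0:K-1)'/(K-1);
D  = double(wk <= wc);                        % desired amplitude response
H  = cos(wk*(0:M));                           % unweighted (W = 1) basis matrix
R  = H'*H; c = H'*D;                          % discrete forms of R and c (assumption)
G  = cos(w0*(0:M)); d = 0;                    % single constraint of Eq. (7.132)
mu = (G*(R\G'))\(G*(R\c) - d);                % Lagrange multiplier, Eq. (7.138)
a  = R\(c - G'*mu);                           % constrained solution, Eq. (7.138)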
One useful application of the constrained least-square approach is the design of filters by a criterion that
takes into account both the square error and the peak-ripple error (or Chebyshev error). The constrained
least-square approach to filter design allows a compromise between the square error and the Chebyshev
criteria, and produces the filter with least-square error and the best Chebyshev error filter as special cases.
The least-square error filter design is based on the assumption that the size of the peak error can be
ignored, whereas filter design according to the Chebyshev norm assumes the integral-squared error is
irrelevant. In practice, however, both of these criteria are often important [Ada91]. The design problem
can thus be formulated as the minimization of the square error subject to constraints on the peak error, and
the solution can be obtained by an iterative constrained least-square algorithm.
In all of the FIR filter design methods outlined earlier, the frequency response specifications include
transition (or "don't care") bands to either reduce the oscillations near the bandedges due to the Gibbs
phenomenon for least-squares approximation or to allow the use of the minimax approximation. There are
applications where the spectrum of the desired signal is in a narrow band of the frequency range 0 ≤ ω ≤ π,
while the spectrum of the interfering signal (or noise) occupies the whole range, and there is no transition
band separating the two spectra.
The constrained least-square filter design method can be used to design both linear-phase and minimum-
phase FIR filters without specifying explicitly the transition bands [Sel96]. It minimizes the weighted
integral-square error of Eq. (7.133) over the whole frequency range such that the local minima and maxima
of $\breve{H}(\omega)$ remain within the specified lower and upper bound functions L(ω) and U(ω). As ε defined above
is simply the L₂ norm of the error function $[\breve{H}(\omega) - D(\omega)]$, it has also been referred to as the L₂ error.
For lowpass filter design with a cutoff frequency ω_c, the functions L(ω) and U(ω) are defined by
$$L(\omega) = 1 - \delta_p, \quad U(\omega) = 1 + \delta_p, \quad \text{for } 0 \le \omega \le \omega_c,$$
$$L(\omega) = -\delta_s, \quad U(\omega) = \delta_s, \quad \text{for } \omega_c \le \omega \le \pi. \tag{7.139}$$

Because this design problem has inequality constraints, an iterative algorithm is employed to minimize
the error ε of Eq. (7.133) subject to the constraints on the values of $\breve{H}(\omega_i)$, where the frequency points ω_i
are contained in a constraint set S = {ω₁, ω₂, ..., ω_m} with ω_i ∈ [0, π]. Let the set S be partitioned into
two sets: the set S_ℓ containing the frequency points ω_i, 1 ≤ i ≤ q, where the equality constraint
$$\breve{H}(\omega_i) = L(\omega_i)$$
is imposed, and the set S_u containing the frequency points ω_i, q+1 ≤ i ≤ m, where the equality constraint
$$\breve{H}(\omega_i) = U(\omega_i)$$
is imposed. Then the equality constrained problem is solved at each iteration.
When the Lagrange multipliers are all nonnegative, the Kuhn-Tucker conditions [Fle87] state that the
solution of the equality constrained problem minimizes ε of Eq. (7.133) while satisfying the inequality
constraints
$$\breve{H}(\omega_i) \ge L(\omega_i), \quad 1 \le i \le q, \qquad \breve{H}(\omega_i) \le U(\omega_i), \quad q+1 \le i \le m.$$
The constrained least-square design algorithm therefore consists of the following steps:

Step 1: Initialization. Choose the constraint set to be an empty set, i.e., S = ∅.

Step 2: Minimization with Equality Constraints. Solve Eq. (7.138) for the Lagrange multipliers by
minimizing the mean-square error ε of Eq. (7.133) satisfying the equality constraints $\breve{H}(\omega_i) = L(\omega_i)$
for ω_i ∈ S_ℓ and $\breve{H}(\omega_i) = U(\omega_i)$ for ω_i ∈ S_u.

Step 3: Kuhn-Tucker Conditions. If a Lagrange multiplier μ_j is negative, then remove the corresponding
frequency ω_j from the constraint set S, and return to Step 2. Otherwise, calculate the coefficients
a[k] using Eq. (7.138) and proceed to Step 4.

Step 4: Multiple Exchange of Constraint Set. Set the constraint set S equal to S_ℓ ∪ S_u. Note that at the
frequency points ω_i in S_ℓ, $\partial\breve{H}(\omega)/\partial\omega|_{\omega=\omega_i} = 0$ and $\breve{H}(\omega_i) \le L(\omega_i)$. Likewise, at the frequency
points ω_i in S_u, $\partial\breve{H}(\omega)/\partial\omega|_{\omega=\omega_i} = 0$ and $\breve{H}(\omega_i) \ge U(\omega_i)$.

Step 5: Convergence Check. The algorithm converges if $\breve{H}(\omega_i) \ge L(\omega_i) - \Delta$ for all ω_i in S_ℓ, and if
$\breve{H}(\omega_i) \le U(\omega_i) + \Delta$ for all ω_i in S_u. Otherwise, go back to Step 2.

In Step 5, Δ is a very small number, typically 10⁻⁶, chosen a priori based on the desired numerical
accuracy. For an additional discussion on the algorithm and its properties, see [Sel96], [Sel98].

7.10 Digital Filter Design Using MATLAB


The Signal Processing Toolbox of MATLAB includes a variety of M-files for the design of both IIR and FIR
digital filters. We illustrate the use of some of these functions in this section.

7.10.1 IIR Digital Filter Design Using MATLAB


The IIR digital filter design process involves two steps. In the first step, the filter order N and the frequency
scaling factor Wn are determined from the given specifications. Using these parameters and the specified
ripples, the coefficients of the transfer function are then determined in the next step. We describe the
MATLAB implementation of these two steps below.

Order Estimation
For IIR digital filter design using the bilinear transformation method, the MATLAB statements to use are as
follows:

[N,Wn] = buttord(Wp,Ws,Rp,Rs)
[N,Wn] = cheb1ord(Wp,Ws,Rp,Rs)
[N,Wn] = cheb2ord(Wp,Ws,Rp,Rs)
[N,Wn] = ellipord(Wp,Ws,Rp,Rs)
7" 18. D1g1tal Filter Design Using MATLAB 473

For lowpass filters, Wp and Ws are the normalized passband and stopband edge frequencies, respectively,
with Wp < Ws. These frequency points must be between 0 and 1, where the sampling frequency is equal
to 2. If the sampling frequency FT, the passband edge frequency Fp, and the stopband edge frequency Fs
are specified in Hz, then Wp = 2*Fp/FT and Ws = 2*Fs/FT. The other two parameters, Rp and Rs, are
the passband ripple and the minimum stopband attenuation specified in dB, respectively. The outputs of
these functions are the filter order N and the frequency scaling factor Wn. Both of these two parameters are
needed in the MATLAB functions for filter design meeting the desired specifications. For the Butterworth
filter, Wn is the 3-dB cutoff frequency. For a Type 1 Chebyshev filter and an elliptic filter, Wn is the passband
edge frequency, whereas for a Type 2 Chebyshev filter, it is the stopband edge frequency.
For highpass filters, Wp > Ws. For bandpass and bandstop digital filters, Wp and Ws are vectors of
length 2 specifying the transition bandedges, with the lower-frequency edge being the first element of the
vectors. In the latter two cases, Wn is also a length-2 vector, and N is half of the order of the filter to be
designed.
The use of the M-files for order estimation is illustrated in the next two examples.
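As a minimal illustration of the calling syntax (the specifications are arbitrary and not those of the examples referred to above), an elliptic lowpass order estimate might be obtained as follows:

% Sketch: elliptic lowpass order estimation
Wp = 0.25; Ws = 0.35;                 % normalized bandedges (illustrative)
Rp = 0.5;  Rs = 40;                   % passband ripple and stopband attenuation in dB
[N, Wn] = ellipord(Wp, Ws, Rp, Rs);   % filter order N and frequency scaling factor Wn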


Filter Design
For IIR filter design based on the bilinear transformation, the Signal Processing Toolbox of MATLAB
includes functions for each one of the four magnitude approximation techniques, i.e., Butterworth, Type 1
and 2 Chebyshev, and elliptic approximations. Specifically, the following M-files are available:

[b,a] = butter(N,Wn)
[b,a] = cheby1(N,Rp,Wn)
[b,a] = cheby2(N,Rs,Wn)
[b,a] = ellip(N,Rp,Rs,Wn)

These functions directly determine the digital lowpass filter transfer function of order N with a frequency
scaling factor Wn that must be a number between 0 and 1, with a sampling frequency assumed to be equal to
2 Hz. These two parameters are those determined in the order estimation stage. The additional parameters
for the Chebyshev and the elliptic filters are Rp, specifying the passband ripple in dB, and Rs, specifying
the minimum stopband attenuation in dB. The output files of these functions are the length-(N+1) column
vectors b and a, providing, respectively, the numerator and denominator coefficients in ascending powers
of z⁻¹. The form of the transfer function obtained is given by
$$G(z) = \frac{B(z)}{A(z)} = \frac{b(1) + b(2)z^{-1} + \cdots + b(N+1)z^{-N}}{1 + a(2)z^{-1} + \cdots + a(N+1)z^{-N}}. \tag{7.140}$$
After these coefficients have been computed, the frequency response can be computed using the M-file
freqz(b,a,w), where w is a set of specified angular frequencies.¹⁴ The function freqz(b,a,w)
generates a complex vector of frequency response samples from which magnitude and/or phase response
samples can be readily computed.
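A minimal sketch combining the two design steps for a Butterworth lowpass filter is shown below; the specifications are illustrative assumptions.

% Sketch: Butterworth lowpass design and gain response evaluation
[N, Wn] = buttord(0.25, 0.35, 0.5, 40);
[b, a]  = butter(N, Wn);
[h, w]  = freqz(b, a, 512);
plot(w/pi, 20*log10(abs(h))); grid;
xlabel('\omega/\pi'); ylabel('Gain, dB');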
The following four IIR lowpass filter design functions can be employed to determine the zeros and
poles of the transfer function:

[z,p,k] = butter(N,Wn)
[z,p,k] = cheby1(N,Rp,Wn)
[z,p,k] = cheby2(N,Rs,Wn)
[z,p,k] = ellip(N,Rp,Rs,Wn)

The output files of these functions are the zeros and poles of the transfer function given as the N-length
vectors z and p, and the scalar gain factor k. The numerator and denominator coefficients of the transfer
function can then be determined using the function zp2tf.
We illustrate the design of a digital lowpass filter using MATLAB in the next example.

¹⁴The range of frequencies w in freqz(b,a,w) should be between 0 and π, whereas the range of frequencies in any of the filter
functions is between 0 and 1.

Ellip'"" HR l~' Fllter

;g -20-
ff-"\'J;
c
-40-

-ijJ''
OH
"
(a) (b)

Figure 7.29: IIR elliptic !ow-pass filter of Exarr.ple 7 .21. (a} gain response, and (b) passband details.

Other types of digital filters can be designed by simple modifications of the filter function commands
given in Program 7_1. Bandpass and bandstop digital filters of order 2N are designed by using Wn as a
two-element vector, Wn = [w1 w2], where the frequency range w1 < ω < w2 is the passband for
a bandpass filter and the stopband for a bandstop filter. For designing highpass digital filters, the string
'high' is used as the last argument for the filter function, e.g.,

[b,a] = cheby1(N,Rp,Wn,'high');

Similarly, the string 'stop' is included as a final argument for designing bandstop digital filters, e.g.,

[b,a] = ellip(N,Rp,Rs,Wn,'stop');

The following two examples illustrate the design of a highpass and a bandpass digital filter, respectively.
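For instance, a Type 1 Chebyshev highpass design might be sketched as follows; the specifications are illustrative assumptions (note that Wp > Ws for a highpass filter):

% Sketch: Type 1 Chebyshev highpass design
[N, Wn] = cheb1ord(0.55, 0.45, 1, 40);
[b, a]  = cheby1(N, 1, Wn, 'high');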


Figure 7.30: IIR Type 1 Chebyshev highpass filter of Example 7.22: (a) gain response, and (b) passband details.

For the design of higher-order filters, the functions computing the zeros and poles of the transfer
function are more accurate than the functions computing the transfer function coefficients. Moreover,
numerical accuracy could be a problem in designing filters of order 15 or above [Kra94].

7.10.2 FIR Digital Filter Order Estimation Using MATLAB


As in the case of IIR digital filter design, the FIR digital filter design process also consists of two steps. In
the first step, the filter order is estimated from the given specifications. In the second step, the coefficients
of the transfer function are determined using the estimated order and the filter specifications. We consider
in this section the order estimation problem. In the next two sections we treat the FIR filter design problem.
Either of the two formulas of Eqs. (7.15) and (7.18) can be used to estimate the order of an FIR filter.
Program 7_4 given below can be employed to estimate the order using Kaiser's formula of Eq. (7.15).

Figure 7.31: Gain response of the IIR Butterworth bandpass filter of Example 7.23.

% Program 7_4
% Computation of the order of a linear-phase
% FIR lowpass filter using Kaiser's formula
%
dp = input('Type in the passband ripple = ');
ds = input('Type in the stopband ripple = ');
Fp = input('Type in the passband edge in Hz = ');
Fs = input('Type in the stopband edge in Hz = ');
FT = input('Type in the sampling frequency in Hz = ');
num = -20*log10(sqrt(dp*ds)) - 13;
den = 14.6*(Fs - Fp)/FT;
N = ceil(num/den);
fprintf('Filter order is %d \n', N);

The following example illustrates the application of the above program.

The Signal Processing Toolbox of MATLAB includes the M-file remezord which determines the
FIR filter order employing the formula of Eq. (7.18). There are two different options available with this
function:

[N,fpts,mag,wt] = remezord(fedge,mval,dev)
[N,fpts,mag,wt] = remezord(fedge,mval,dev,FT)

where the input data are the vector fedge of bandedges, the vector mval of desired magnitude values
in each frequency band, and a vector dev specifying the maximum allowable deviation between the
magnitude response of the designed filter and the desired magnitude. The length of fedge is two times
that of mval minus 2, while the length of dev is the same as that of mval. The output data are the
estimated value N of the filter order, the normalized frequency bandedge vector fpts, the frequency band
magnitude vector mag, and the weight vector wt meeting the filter specifications. The output data can then
be directly used in filter design using the function remez. The sampling frequency FT can be specified
using the second version of this function. A default of 2 Hz is employed if FT is not explicitly indicated,
in which case the frequency points in fedge must have values between 0 and 1.

For FIR filter design using the Kaiser window, the window order should be estimated using the formula
of Eq. (7.86). To this end, the M-file kaiserord in the Signal Processing Toolbox of MATLAB can be
employed. The basic forms of this M-file are

[N,Wn,beta,ftype] = kaiserord(fedge,mval,dev)
[N,Wn,beta,ftype] = kaiserord(fedge,mval,dev,FT)
c = kaiserord(fpts,mval,dev,FT,'cell')

This M-file estimates the minimum filter order N using Eq. (7.86) and the parameter β = beta using
Eq. (7.85). It also determines the normalized frequency bandedges Wn. The input data required are the
filter specifications given by the vector fpts of specified bandedges, the vector mval of the specified
amplitudes on the frequency bands defined by fpts, and the vector dev of the specified ripples in each
frequency band. The length of fpts is two times that of mval minus 2. The second form of kaiserord
specifies the sampling frequency FT. The sampling frequency is assumed to be 2 if FT is not included in
the function argument.
The function kaiserord, in some cases, can generate a value for N which is either greater or smaller
than the required minimum value. If the FIR filter designed using N does not meet the specifications, the
order should be either gradually increased or decreased by 1 until the specifications are met. The filter
order N estimated using Eq. (7.86) has been found to be within ±2 of the required value for a broad range
of filter specifications.

The output data generated by this function can be directly used to design the FIR filter based on the
windowed Fourier series approach using the function fir1 discussed later in this section. The third form
of kaiserord generates a cell array c whose elements are the parameters needed to run fir1.
We demonstrate its application in the following example.
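A minimal sketch of the typical calling sequence, pairing kaiserord with fir1, is shown below; the specifications are illustrative assumptions.

% Sketch: Kaiser window FIR design via kaiserord and fir1
fedge = [1500 2000]; mval = [1 0]; dev = [0.01 0.01]; FT = 8000;   % illustrative
[N, Wn, beta, ftype] = kaiserord(fedge, mval, dev, FT);
b = fir1(N, Wn, ftype, kaiser(N+1, beta), 'noscale');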

For designing an equiripple linear-phase FIR filter employing the Parks-McClellan algorithm (Section
7.7.1), we can use the MATLAB M-file remez. There are various versions of this function:

b = remez(N,fpts,mag)
b = remez(N,fpts,mag,wt)
b = remez(N,fpts,mag,'ftype')
b = remez(N,fpts,mag,wt,'ftype')

It can design any type of multiband linear-phase filter. Its output is a vector b of length N+1 containing
the impulse response coefficients of the linear-phase FIR filter with bandedges specified by the vector
fpts. The frequency points of the vector fpts must be in the range between 0 and 1, with sampling
frequency being equal to 2, and must be specified in increasing order. The first element of fpts
must be 0 and the last element must be 1. The bandedges between a passband and its adjacent stopband
must be separated by at least 0.1. If the same values are used for these frequency points, the program
automatically separates them by 0.1. The desired magnitudes of the FIR filter frequency response at the
specified bandedges are given by the vector mag, with the elements given in equal-valued pairs. The
desired magnitudes between two specified consecutive frequency points fpts(k) and fpts(k+1) are
determined according to the following rule: for k odd, the magnitude is a line segment joining the
points {mag(k), fpts(k)} and {mag(k+1), fpts(k+1)}, whereas for k even, it is unspecified,
with the frequency range [fpts(k+1), fpts(k+2)] being a transition or "don't care" region. The
vectors fpts and mag must be of the same length, with the length being even. Figure 7.32 illustrates the
relationship between the vectors fpts and mag given by

fpts = [0 0.2 0.4 0.7 0.8 1.0]
mag = [0 0 1 1 0.25 0.25]

The desired magnitude response in the passband(s) and the stopband(s) can be weighted by an ad-
ditional vector wt included as the argument of the function remez. The function can design equiripple
Types 1, 2, 3, and 4 linear-phase FIR filters. Types 1 and 2 are the default designs for order N even
and odd, respectively. Types 3 (N even) and 4 (N odd) are used for two specialized filter designs, the
Hilbert transformer and the differentiator. To design these two types of FIR filters the flags 'hilbert' and
'differentiator' are used for 'ftype' in the last two versions of remez.
There are several other options available with the function remez:

b = remez(...,{S})
[b,err] = remez(....)
[b,err,res] = remez(....)

Figure 7.32: Illustration of the relationship between vectors fpts and mag.
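A multiband filter corresponding to the specification of Figure 7.32 could thus be designed with a call such as the following; the order 30 is an illustrative choice.

% Sketch: multiband equiripple design for the fpts and mag vectors of Figure 7.32
fpts = [0 0.2 0.4 0.7 0.8 1.0];
mag  = [0 0 1 1 0.25 0.25];
b = remez(30, fpts, mag);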

The density of the frequency grid needed to evaluate the error function ℰ(ω) can be increased from
the default value of S = 16L by using the statement b = remez(....,S), where S is a one-by-one
cell array containing an integer greater than 16. Use of a larger value usually results in a filter with exactly
equiripple error at the cost of an increased computation time. The maximum ripple height can be obtained
using the statement [b,err] = remez(....). Optional results computed by remez can be obtained
using the statement [b,err,res] = remez(....), which can provide the following information if
needed: the vector of frequency grid points in res.fgrid; the desired response in res.des; the weights
used on fgrid in res.wt; the actual frequency response in res.H; the error at each point of the frequency
grid in res.error; the vector of indices into fgrid of extremal frequencies in res.iextr; and the vector of extremal
frequencies in res.fextr.
Incorrect use of remez results in self-explanatory diagnostic messages. The use of the function remez
is illustrated next for the design of frequency-selective filters, differentiators, and Hilbert transformers.

Lowpass FIR Filter Design Examples

We consider first the design of a lowpass linear-phase filter using the function remez. Recall from our
discussion in Section 4.4.4 that the transfer function of a Type 3 FIR filter has zeros at both z = 1 and
z = −1, and the transfer function of a Type 4 FIR filter has a zero at z = 1. These filters cannot be used
to design a lowpass filter. On the other hand, the transfer function of a Type 2 FIR filter has a zero at
z = −1 and hence it can be used to design a lowpass filter along with the Type 1 FIR filter, which has no
restrictions.
The following example illustrates the design using MATLAB.

% f"7TAJ'f 011' "i


%
7 10 Digi1al F Iter Oeslg""' Using M.~n..AB 431

f Oo!.'l!la t 1 ong
fedge: input("H;~~nd ed~~s Lft liz • ')~
m..•al - iiJIP\.H 4 'De::nred IJiolgn :L.C:.uO.,. '1.•~1 ul'!'n ~n ~11.t:h baind "' • I •
dev - input I 'D• S i roo r .i.liiP ]-I! in c:ach lJlu'ld • • I ;
F1" • ~npL..I: ( 'S.!Ullilli n~ f :t t..oqU~J'lCy .1 n H2 ... ' ~ r
[ .N'. l pt!l:. !nd.t;. we] .. .1 ~me zo.t :d ( :t e<loe, val r de". :f't' ~ •
b temez•~ fprs.m~9.w~1;
~i~pt'FTR F lt~ Coo!fici~n ~·•; di~plb)
rh.vl ~
fr qz (b. l. ~5G~:
pl.otto'lPi.20'"'log1013Jbs<!'"Jl•l ·~n:!d,
xl~bcl(·\~n/\pi•l; yl~bell'G~in, dB'~;

fedge = [800 1000]
mval = [1 0]
dev = [0.0559 0.01]
FT = 4000

"rbe Ollf9ul Jhw. gstenfal hy 1hUi p~ ._,. pvn~ !)rlrn~o•,

~rR fil~er coet!i~l~n~~


columns 1 tnrough ~
O . OOS~8d21565206 -0.024276560 5J2S
-o.o 1616711~~682 o.oo~'Bl9JBJ3?Ja

Columns 5 thrauyh B
0. 0223S1.B7646567 -0.0019192 004-415
-0.0Jl721 J448o63 -O.Ol0B2775442,G9

Col~ e ~ t~~au~h i2
o.041750708lp597 Q.03S,907~964251
-o . oso4o'J4.4J0.20'i73 -o. o ·r6.&1!17471 gs

Columna 13 ~hrough 1~
0.0562•6517~~105 0.31190360510'~~
o.~~l70G201~l6~4 o.31190J6D5l04~~

Co 1 uJnu.!li 11 l!.h:rc·-gh 20
0.0~623651756105 -n.OS7G87974714,ij
-O.D504044J02C513 0.035 9072964251

Colll1J\n!l. 2.: tl"lrou~h 2


Q.O~ 7502083259~ -~ . 010Al71~442~63
-O.DJ1121~1448~6l -0 . 0019792100~415

Column~ 2S through ~3
c.n2235 B7646561 o.0074839383173B
-C.017~J5~~12~6B2 -0.02~~76!~0~5325

colunu1 29
-o.oos~a 2156~206

Figure 7.33: FIR equiripple lowpass filter of Example 7.27 for N = 28: (a) gain response, and (b) passband details.

Figure 7.34: FIR equiripple lowpass filter of Example 7.27 for N = 30: (a) gain response, and (b) passband details.

Bandpass FIR Filter Design Examples

In this case, all four types of linear-phase FIR filters can be used. The following example considers the
design of a bandpass linear-phase FIR filter using MATLAB and investigates the relation between the filter
order and the ripple ratio on the frequency response of the filter.

Figure 7.35: Gain response of the FIR equiripple bandpass filter of Example 7.28: (a) N = 26, weight ratio = 1, (b)
N = 110, weight ratio = 1, and (c) N = 110, weight ratio = 1/10.


The absolute error for each of the above three bandpass filter design examples has been plotted in
Figure 7.36. As can be seen from Figure 7.36(a), the absolute error, as expected, has the same peak value
in all three bands. Since L = 13, and there are four bandedges, there can be at most L − 1 + 6 = 18
extrema in this design. The error plot exhibits 17 extrema. In Figure 7.36(b), we again observe that the error
plot has the same peak value in each band. On the other hand, as can be seen from Figure 7.36(c), the peak
absolute value of the error in the passband is 10 times the peak absolute error in the stopbands, as expected.

Figure 7.36: Absolute error of the FIR equiripple bandpass filter of Example 7.28: (a) N = 26, weight ratio = 1, (b)
N = 110, weight ratio = 1, and (c) N = 110, weight ratio = 1/10.

The Remez algorithm can create some unusual results in the design of linear-phase FIR filters with
more than two bands. We investigate this problem in the following example.

Figure 7.37: FIR equiripple bandpass filter of Example 7.29: (a) amplitude response with original bandedge specifi-
cations, (b) absolute error, and (c) amplitude response with a slightly modified stopband edge.

Even though the Remez algorithm guarantees equiripple error in the specified bands, as the response
in a transition band is left unspecified, it cannot ensure that the gain response of a filter with more than
two bands will decrease monotonically from its value in the passband. If the gain response of the filter
designed exhibits a nonmonotonic behavior, it is recommended that either the filter order or the bandedges
or the weighting function be adjusted until a satisfactory gain response is obtained. For example, the
bandpass filter of the above example has an acceptable gain response, as shown in Figure 7.37(c), if the
second stopband edge is moved from 0.6π to 0.55π with all other parameters kept at their original values.

FIR Dtfferentiator Design Examples


Now, from Eq. (7 .69). we observe thal an ideal differentiator is characterized by an antisymmetric impulse
response which implies that either a Tvpe 3 or a 1)'pe 4 FIR filter can he used for its realization. However,
from Eq. (7.68). for an ideal differeniiator ii (n} = ;r. Hence, a Type 3 FIR filter cannot be used for it>.
realization as its transfer func!ion has a zero at;:: = -1, which forces the amplitude response to go to zero
at -:,1 = :n:. As a result, only a Type 4 FIR filter can be used for irs design.
486 Chapter 7: Digital Filter Design

ln most practical application-:. signals. of interest are in a freque-ncy range 0 ~ uJ::: wp. Consequently.
a lowpas.s dd'erentiator with a handlimited f:equency re'>?On,;e

(7.14IJ

can he employed. In the above equation. {l ~ w :;: w, and w_,. .:::; w S. sr represent, respectively, the
passband and dJc stopband of the differemialnr_ The freqllercy U.!p is usually called its bandwidt!J. Hence.
both Ty;>e 3 and Type 4 FfR filtc:s can be used to design a Jowpa.""'-s differentiator. For the dcsigr: pha;,c,
we choose the weighting function a.s

P(w) = ±-
a:::d the de:>Jfed amplitude respo:-~se as

Dkv) = l.

The func-tion ::--emez::n-d c<:.nnot be used to cstunate tht: order of an F1R differentmtor as the formula
nf Eq. (7 .18) has been developed for conventional filter:. wi:h two or more bantls with constant gain levels.
However. the function n~I:'.ez can be employed to design an equiripple FIR differcntiator as demonstmted
in the following two examplt:s.

tt! 4 110! kf'l*,i0W!In"""IWWY pf


f¥1il( 4fru pw!i:1£71W!t U:WW:Wt¢00

hr "' H
Y:t:t:w "' f<> J{
A:Liilifi "" X!J S'L

li!X"~ i!iPftl #h:t1!hf0k M

ii.fm (t;W4T}1sim f :£.~ ••• ! J


:r:w ''T! :in

t:::!IIG>& : .?.tu· rv.v;¢. it


~ {! {tfcGr ;- ;:;q,~ Y''<»:. 0 t: 0 '('(:'ZJr300i/':L£Yffj;;:;\;
<· :::TV/7 t.) FTY"fV"; ff !f\ydf +04\ +0"1"'
:.Yn h;p11r- T '} Y{:k;t o: L\H· "'
fr 4'r f'.'i,;zi:J

\; .:;p..v; ;;,r,; ·:·W.r.?; :: :· >


•' t.h·4f,""" .z:?.. ;t:~; ... t-•:
7.10. Digital Filter Jesign Using MATLAB 487

~
-
/~
'
'
,I
/ i Jl
/ i
'' i
'
''
/
'\)'

(a) (b)

Figur-e 7.38: FIR equiripple differcntiator nf Example 7.30 oi length 11: 1:a) magnitude response, and (b} absolute
=-
C.J5 -~------,

''
'"tV'/\l\flYl'l\f{'f f
0.6 OJ\ 1

(a) (b)
}i"igure 7.39' FIR equiripple lo"'P"~" diffcr-entiator of fuampk 7.31 of kngtb 51: (a} magnitude response, am.l (b}
abl'olutc error.

Rabiner and Schafer fRab74a] have im"eStigated the relations between the filter ordeT N. the bandwidth
Wp. and the peak absoltlte errorE of a lowpass differentiator through extensive designs. Their results.
available in the form of design <:harts, can be used to estimate the filter order for a specified bandwidth and
the peak absolute etTOr in dB.
488 Chapter 7: Digital Filter Design

FIR Hilbert Transformer Design Examples


Like the ideal rliffe:rentiator, as can be seen from Eq. (7.67), the ideal Hilbert transfonner abo has an
antisymmetric irnpube response implying either a Type 3 or a Type 4 FIR filter for its realization. However,
from Eq. (7 .68} we observe that the magnitude response of an ideal Hilbert transformer is unity for all w
which cannot be satisfied by a Type 3 FIR filter whose magnitude response has a zero at w = 0 or by a
Type 4 FIR filter whose magnitude respor;se has a r-ew at both w = 0 and w = ::r. In pratlice, the signals
of interest are in a finite range ML :::= lwl ~ WH, and as a consequence, the Hilhert transfonne-r can be
designed with a bandpass amplitude respcnse given by

D{w) = 1, (7.142)

with the weighting function P(w) set to unity in the band of interest.
A5 indicated by Eq. (7.67), the impulk! response samples of an ideal Hilbert tramJormer satisfy the
condition
for n even. (7 .143)

It can be show:~ that the above attractive property can be maintained by a l}·pe 3 linear-phase F1R filter if
the desired amplitude response if((l)) is symmetric with respect to n/2. Le.,

if(w) = flt_:rr- w).

However, the condition ofEq. (7.143) cannot be met by a Type 4 linear-phase filter with an antisymmetric
amplitude resvonse (Problem 7.64}.
As in the ca.<>e of the FlR differentiator design, the function rernez:ord cannot be used to estimate
the order of an FIR Hilbert lnm<>former. Rabiner and Schafer lRab74bj have investigated the relations
between the order N, passband ripple Sp. and the nonnalized transition bandwidth WL of a bandpass
Hilbert transformer using extensive desigr.s. Based on their investigations, the following formula has been
developed to estimate the order ofthe Hill::tert transformer:

N :;;::::: _ 3.833 1og, 0 Jp


<VL

The design of an equiripple FIR bandpass Hilbert transformer using rer.ez is demonstrated in the
following example.

i£\:10£0001 AL .{ +
:t+ f \! ..f;·::"?":tih\!'f <;;'0% ¥ {i .t \}
7.10. Digital Filter Design Using MATc...t.B 489

-~

r-
f\
I . I. I
(\
I
1' ~r- ...____---·~._ •. . - - ·------ - • .______....-,\ c.o~ ,'1 \
' '
.r lj_(JJ·l
-8
(I 8•
!
l ! I t' \ . \
E ott
~
~(U
I
! "
-0.\ll;
\
I \
\ i
.I\ Ii I
\I
'',
I, \I
Ill' _,.,1 v v v
'\,I I\
\ '
'
" (1.2
-~.o'\1,---- 0.2 0.4
"-'
(t;hr:

(a) (b)

Figun! 7.40: FIR equiripple bandpass Hilbert transformer of Example 7.32 of length 21: (a) magnitude response. and
(b) absolute error.

s:c:Jz,fff;.fr ,X z
0

7.10.4 Window-Based FIR Filter Design Using MATLAB


The window-based FIR filter design proce;;s involves three steps. ln the first step, the order of the FIR
filter lo he designed is estimated using either Eq. (7.15) or (7.18). For FIR filter design using the Kaiser
window, it is recommended that the order he estimated using the formula of Eq. (7.86). In the second
step. the type -of window to be used is selected and its coeffi;;ients are then computed. Finally, in the last
step. the d!!sired impulse response of the ideal filter is computed which is then multiplied by the window
coefficients generated in the first step to yield !he coefficients of the FIR filter.
FIR filter order estimation using !VlATLAB has been described in Section 7.102. We now discuss the
windCMJ generation using MATLA3.
490 Chapter 7: Digital Filter Design

Window Generation
The Signal Processing Toolbox of M..~TLAB includes the following functions for generating the windows
discussed earlier in Section 7.6:

w - bl.ackrnan(Li, w = hamrning(L), w"" f:anning(:U;


w ~ chebwin(L,Rs), w =kaiser (L.beta)

The above functions generate a vector w of \vindow coefficients of odd length L. 15 The parameteT beta in
the Kaiser window is the same as the parameter fJ of Eq. (7 .83) and can be computed using Eq. (7 .85}.
The following example ilJustrates the generation of the coefficients of the Kaiser window.

Filter Design
The functions available ;n MATLAB for Ihe design of F1R filters using the windowed Fourier series approach
are firl and fir2. The function firl is used to design conventionallowpass. highpass, bandpass,
bandstop, and multlband FIR filters. whereas. the function Eir2 is employed to design PIR filters with
arbitrarily shaped magnitude response.
The various fonns of the function firl are as follows:

b firlfN,Wn)
b firl(N,Wn,'ftype'}
15
11 should be nuled that the HaD I! and Blackman y,indows generated by MATL.\B have zero-valued fir~! and last cocl'fi.ciem~. As a
re;.ul!, ll' keep the length the same fnT a:! windows, !hese two wiru.i()W~ dmu(d he genentt..d using a value of L that is gn:att:r by 2
than lhe valt;c uf L nrect foc the Hamming and the Kaisel" window:;.
7.10. Digital Filter Design Using MATLA.s 491

---··--1

Figure 7.41: Gai:: response of the Kal.<;er winGmv g.:neiatetl m fua..mple 7.3].

b r .i..r 1 (N, ;...,'--:, v1.:i n•:::::)\""'}


t• ~irl (~.w~. 'f!.ype',w~~dowJ
:) ,::;..r-:..; . . . . ,':-_oc,._-a·e"l

Th~ ha:sic fonn. b = i ~;::. '01, Wn I. generates th..: vector C of length H+ l cDnt«ining tho.: 1mpuls.e
JC.>p<ln'><: cocflicitnts ot a lowpass F1R filter of order:-; with a nonnahzed cutoff frequency of h"n hctwcen
0 and ! . The ;o;ampling frequency is assumed to be .;qual tu 2. The transfer func!ion of the de..<;.1gncd filkr
; . ; of the torm:

~cr the Ces1gn of bandpass filters. VJn is a length-2 vector. '!m = l Wl W? i ,containing the pm;sband e.Jge
tn...qucm:ie:-. v.. lwre 'Ill < 'N 2. For the des1gn of a rnuhiband !liter, VJ~l is a multielement vector .::ont:tiuing
handedge frequenci.::s of increasmg magnitudes, V..'r. = [ 1-;-1 '.«'2 f'l} ':<<1 T!"l5 ... , :.·;:y, 1 , where the F:.,. _;_
hands are tktlned by ( 0, >,..;1}, t Wl, i12 \, ... fWm, 1}. The designed filter, obtained after windowing.
is sc:.tleJ to enwn: !hat the center of the first passband has«: default magnitude of 1,
T~e option b = fir 1 !. :1, 'Jhl. ' f t•{p<?' ) ,;pecifies one of four other types of filk-rs. For de<cignin.?
a t.ighpas<;. fitter with a cutoff freq\tency 'rlL, :: -:::y""Pe is hi q :1. ln the case of a bauds top filter, f t:ype is
c:tof v.ith ~">'r; a kngth-2 vector, Wn = L':Jl ;J2 J, containing the f..topband edge frec,uendes where Wl
< 1,;;::_ Since a Type 2 F1R tilrer cannot be used lo design a higflpass or a bandstop filter due to the presence
of a zero at :: = -l, the order .::-1 must be an even integer for these two .::ases. Jf r.; i:s specified as ;m odd
inte-ger. t,;, r·l autumatically increases it by I to make the order even. f ~ype is .DC-1 if the first band of
a nmltihand filter :,hould be a passband, o\herwise it is DC-D.
The Hamming window for filter design is u~ed as a default if no window is specified in the argument
of f i :·l _ The 0ptiun b = ::: i. r"l 101, Wn, ,... l :;dow: is employed with the vector wj nduw of length No ;
containing the coefficients of the specified window. The option b = fir 2. •:h:, ~Vn, ' ft.ype' , wi nclol-.')
Js used \Vith spec1f1ed filter type and %indnw coefficient~. The default scaling is tumerl off in the option
f' = fir~( . . . . ,'nascalf.·').
The vanou~ forms of the function f i! 2 are as follows:

h fir2 IN, f,::")


;:. ~::.~2 ;N, f,:'I,WinQo\>1)
f.:..::-2;r;, f,rr,nr:;l.1
;:;. r::: !t;, f, ;:r, :c.r:;t,':>indow;
f; r 2 { t\. f , en, npt. ::. -='-P :
492 Chapter 7: Digital Filter Design

As in the previous case, rhe basic form b "' f i r2 ( ~, f, Y.t) is used to design a lowpass FIR filter of
order N whose magnitude response samples at the specified frequency points given by the vector f match
the specified magnitude response samples defined by the vector f'L The output vector b is of length N -rl
containing the FIR filter coefficients ordered in a<;cending pow-ers of z- 1 • The vectors f and m must be of
the same length. Elements of f must be in the range from 0 to t in increasing order, with the sampling
frequency as'>umed to be of value 2. Moreover. the first and last elements of f must be 0 and l, respectively.
Duplicate frequency points in f are permissible for approximating magnitude response specifications with
jumps. Any one of the wlndow functions available in the MATLAB Signal Processing TllOlbox can be used
for designing F1R filters by making use of its corresponding M-file to generate the window coeffid.ents and
entering them into the Junction fir 2 by means of the length :t-:: + 1 vector windmv in the .argument. A,;
before, if the window coefficients are not explicitly included, the Hamming window is used as the default.
Intheoptionsb = fir21N,f,m,npt)andb = fir2(N,f,m,npt,window),nptspecifies
the number of points for the grid onto which the frequency response is interpolated by f i r2. The default
value of npt is 512. ln the remaining two option'>, la~ specifies the size of the region inserted between
duplicate frequen'.:y points in f.
In the first step of the a1gorithm, the desired amplitude response is interpolated onto a dense grid of
size npt wilh grid points evenly spaced in the frequency range (0, 1). The filter t.""nefficients are then
determined by applying an inverse DFT to the sample values on the grid and then multiplying the IDFT
values by the coefficients of the window.

AR Filter Design Examples


We illustrate the use cf the above two functions in designing linear-phase FIR filters in the next several
examples. Without any loss of generalit}', we restrict our attention tofilterdesign using the Kaiser window.
The first example considered is for a lov.pass filter design.

fl fi +7
:l'?nt +Stt:L: i ¥'%!1iR :v:n;:;:;H t\tfl&up ~+4tLittti81U ""' 0 t,p
&1\ ttl810' 1; 'Sf\ "" ,w .* i

t Th :r
::;::;;

fnW tit: t f t:if'k


F \ 0\illbt/Qk¥J" 0 i;>; i '! ·!&up1 (t 0 ::iii

The next example considers the design of a highpm;s filter using the Kaiser window. lr should be noted
that for highpass and bandstop filter design, the function firl requires that the order of the filter N be
even. lftheorder generated using kai serord is not even, lt shrn::J.1d be incre.ased by 1 before the window
and filter coeff.cients are generated.
7.10. Digital Filter Design Using MATlAB 493

:ij-- l
·2G~ !
~
-'" -4Gt
I
.s I
I
-<>GI

-l;j_
{l tl 2

Figure 7A2: Gain response of the lowpas<; filter of Example 7.34.

~~- -~-- -~~~------

-20
!
/

o_: ().4

Figure 7.43: Gain response of the highpa_;;s lilter cf Example 7.3S.

rm 111m w y, N
"r '"'
(j fr:YL
A
tdzyw
*'• = fJ.if:t. «l%1
t~:x-%
I ; ¥1 f0\•
<< '1 '::--}"j!:lllr"
'WF
~~
"' .,, 3J &

'! ii>LS '' 0 ·7"


ifh'k\f q} I <

.idr'r < ---,;


V0fr urf010 n& 1-11¢ tl!"f«dt?v ?4 -;- #J

ln the following example we consider the application of the fUnction fir 2 of MATLAB in designing
an FIR filter with passbands ha\•jng different gain levels.

:& :Vf'tS&KJidW\ 7.~)


'& i'""lii K\f.'S' ·X f 0!is:.i b •• !~ 4"':i lt.JUI 't:2A '0$j"{{ b.t(ff{!\\} '1/tf% Thj ii!;~Gr#

'
494 Chapter 7: Digital Fiber Design

0.2 'J.Ir (!_ll

Figu~ 7.44: Magnitude response of the multilevel filter uf Example 7 36-_

/Yi< \/f {' ; "" !' } if ., \'


I> f ; \ "! t ltt"; 70'\ +, t?">~H?:;
1,7;,,:>t;<::r< "' Y;rtn:tt. f.\,'; }t
' ' JHtr1}4 r f'% mfFf : } f {)<!; )A:,;

i1wc :liJ1Wr zwerny!f41!4! ww ,;;w;mJtmrn! m * wwmm c:


""
7" 10.5 Least-Squares Error FIR Filler Design Using MATLAB
The function fi2:"ls in the Signal Processing Toolbox of "MATLA3 can be used to design any type of
multiband hnear-phase FIR filter based on the least-squares method. The various options available wirh
this function are

b firls{N,fpts,magl
b firls!N,fpts,rnag, wt)
b fir:s{N,fpts,rnag, 'ftype')
b firls (N, :pts,mag ,wt, 'ftype')

The specified order is given by N. The vectors fpts, mag, and Ht, and the string :: t:ype are defined the I
same way as in the case of the fy.nction reme z in Section 7. 10.3.
Typical filter design applications of the function I irls are considered below.

tt 'F':A'<:4fJil;ll!

'*
!
u'F tu;VfiA, t
7.10. Digital Filter Design Using MAnAS 495

-.-·
Uf--

'
_J \
\
""" I' -,f'.
s-4o' l''~r'
"' ' , n '•' ',, •./'\i\1\
0 _,,:;ri I 'i '1.· .,, :1' I; 'I \

gn'':)' ---;;c;---c;";---"'---!;;---
0.1 '].1 j;;,. ""

Figure 7 AS: Gain response of the lowp;J.s~ filleT of E:o:.ample 7.37.

'?':;,'"$'A{ { /t f:./{0' £:sd0 :hkttr"''b;l P< t't. "' :Wf' "' ' { £
-!\401 :; 11;nx•d ;,r; r: t>fi 1101<,1 ::<: c;;A±e v .-& ; .: \hiiL?¢..\tf * * q /
w G "' --'Nj,;,q f. 1 ' ·Jy&w' 1f} U::= 'MIIT' t'\(f{\-:: '\1\0'1" t "~"' _, ' } ;
01 r:-v: :s<Jj, :s ;:Miia.- ih\ 1
{ i yt"fe.<i<-'- ; './;:;t-:1
{+r/ JY'+, ;;;n+ J >>::I} •} • •SX:% 1 ft1 ! : 1 tl% ;:-./;
:a t4ld»&} t -.<,;:-£:sdpg,tr y· ;; { •d:Mii! f •/0{ Mr ;±R 'r

l'i :;. 2':


' ,., ' v
-!\419 U- .o I
tiS: N U: Lt

For linear-phase FIR filter design using the constrained least-squares method. the functions f ircls
and fire lsl are available. The latter is employed for the design oflowpassand highpass filters. Variou&
options available with this function are
b llrclsl(N,wc,dp,ds)
b firclsl{N,A·c,dp,ds, 'high')
b =irclsliN,wo,dp,ds_.wt.)
b i:irclsl \N,Y.:o, dp,ds, w~. 'high')
b ::=irclsl (N,vw,dp,ds,wp,ws, Kl
b i:irclsl(N,v;o,dp,ds,wp,ws,K, 'high')
b ::': ircl sl (N, HO, dp, ds, . . . , 'design_f::_ag' )

The basic fonn b :: ircls l 1N, wo, dp, ds} generales the length-(N+ 1} vector boffiltercoefficienrs
with a normalized cutoff frequency of wo in the range between 0 and I where the sampling frequency is
assumed to be 2. The passband and the slJpband ripples are given by dp and ds, respectively. The string
' h igb. ' is- included in the progr.am statement for t:Jighpass filter design.
49€ Chapter 7: Digital Filter Design

The parameter wt in the argument of the function c~ ,~ 1 specifies a frequency above which (for wt >
wo i or he low which (for w t < h'O) the tilter designed is gi.Gu-..mreed to meet the given pa!?.'>band or stopband
edge requirement. For rhe lo,.,.pas,s case, ifO < Nt < "HO < 1. the amplitude response of the designed filter
is within dp over the frequency mnge {) < w < wl; otherwise. the amplitude response is within ds over
the frequency range wL < w < l. For the highp<JSS case, 1f 0 < v.:t. < vio < 1, the amplitude response of
the designed filter is within as over the frequency range 0 < w <wt; otherwise, the amplitude response is
within dp over !.he frequency range wt < w < L
The parar:wter K in the function argument indicates rh.: relative weight of !.he £2 nmru of the passband
error and the .C::! norm of the ~tophand error. Thus the error that 1s minimized is given by

Thus by choosing a higher value of K. the stopband error can be heavily wei_ghted relative to the passband
en or.
The string 'design_flag' in the function argument is for monitoring the filter design process. It
is 'trace' for a textual dhpl.ay of the design table being used. 'plots' for displaying the full-band
magnitude response., passband and stopband detdifs of the filter, and 'be t.h ' for displaying both the
textt<a.l information and plots. AU plots are updated at each iteration step.
The function ~ircl sis used for the constrained lea....,t-Mjuares design of multiband linear-phase FIR
fi.lrers. The two forms of this function are

b fircls{N.f,3.ntp,up,lo)
b fircls(N,f,a8p,up,lo, 'design_flag' J
The basic form b fircls (N If, affii::, up, lo) generates a length-(N -r1) vector of filter coefficients
meeting the amplitude response specification~ given by the veclors f and amp. 1be vector f contains
the frequency points defining the transition frequencies in increasing order in the range 0 and I where the
sampling frequency is assumed to he 2. T.'le fin;t and last frequency points must be a 0 and a 1, respectively.
The desired piecewise conshmt values of the amplitude response are specified through the VeL"1or amp,
whose length is equal to the length of f m.inu:-; I, .i.e .• the number of frequency band<i.
The vectors up and l o specify, respectively, the upper and lower bounds of the amplitude response in
each frequency band, and have the same length as that of the vector amp.
The string display _f Jag is used to monitor the de·~ign process the same way as in the function
I I

fj rclsl.

- - *S¥G'8(bv011P!10 00 grV£14 ;gt


1'ir "'' (1 ,:10
0/1¢
Wv
ptd!Ot twdtm:

7", '
'"
\{\ i," L ' ',0 tr ' ";
;F
; " <,
' "/; \4", :; "
497
7,11, Summary

------,
'·:t-=-~
U Ql M A6 Q& I

'
niL .
ot ------------------------- ______________ - --l
/ _ _ _._,.,,,\
\
.n ·,,~----c,c,~--
\ i 0.4

iYfff(Hl
-•>L_-;;;-----~- ---~-~
0 0-l Q4

(a) - 06 0.~

(b)

Figure 7.46: FIR 1owpass filter of Exampie 7.38: (a) gain response. and (h) plots of magnitude response. passb<md.
and stopband details showing frequency points selected.

f tlL
+~04 u. J 1-
q i •> !)£ '(), ){j h {f# j t
.if-\"' ,j),jft:/;:«ffL,:t' 00"\>o:i:LjL

AiiE 1M!. ')'*"


lVtOT!!r+vw.
tw 1ftt
til%111!t-0

7.11 Summary
This chapter considered the design of both infinite impulse respons-e aiR) and finite impulse response
(FIR) causal digital filters. The digital filter design problem is concerned "v:ith the development of a suit-
able transfer function meeting the frequency response specifications, which, in this chapter, i-s restricted
to magnitude (or, equivalently. gain) response specifications. These specifications are usually given in tenns
490 Chapter 7: Digital Filter Design

'
, __ ___,_~·-"-----
-1~ -- --,~,--

>: '""'
l; 0',
"'-om:·
-~
~ '~':
, ,_.
~

"',-,..,"
~-'

""''u:
'~ -M"-' /\ /'-
.. c\
.. -'- ,-1
- --,; ' / '
-M>:--~c'--;:-;--
•J 0.2 f)_ '• ()_{>
'•
"'
(a) (b)

Fi~:ure 7.47: FJR bandp~ ftlter of E_,;_ample 7 39: (a) gain rc~ponse. and (bj plots of Llagnilude re.-.ponse, passband,
an<l sWpl:w.nd delails showmg frequency point> se:ected.

of the desired passband edge and stopband. edge frequencies and the allowable deviatior..s from the desired
pass-band and stopband magnitude (gain) levels.
IIR filter design i"l usually carried out by transfomting a prototype ana1og transJer function by means
of a suimble rr;apping of the complex frequency variables into the complex variable.;:. The widely used
bilinear transform method, di.-;cmsed in this chapter, is based on this approach.
F1R filter design, on the other hand, is carried out directly from the digital filter specifications. The
method outlined in this chapter is based on the truncated Fourier series expansion of the desired frequency
re;.;po'lse. Fomulas for computing the Fourier series coefficJeots of wme ideal frequency response speci-
fk~ltiom, are induded. To reduce the effect of the Gibbs phenomenoo. the truncation can be carried out by
applying a suitable window functJOn. Some commonly used window functions are reviewed along with
their propertie&. The effect of the Gibbs phenomenon can also be reduced by providing a smooth lransition
between the passband and the stopband.
Finally, tbe chapter con..siders the computer-aided design of digital filters. To this end it primarily
t.lis~u~se:'l design algorithms that .are available in the Signal Processing Toolbox ofMA TLAB as functions. For
the design of IIR digital filters, MATLAB p:-ovides functions for designing digital Butter'korth, Chebyshev,
:md c~liptic filters. For the design of FIR filters, it includes functions for the windowed Founer series.
approach, the Parks-McClellan algorithm. the leasr-square;.; algoritlun, and the constrained least-squares
algorithm. The Parks-McClellan ulgorithm develops a transfer function with equiripple passband and
~10[lband magnitude responses .and makes u:;e of the Remez optimization algorithm.

7.12 Problems
7.1 Iktermmc tfuo peak ripple value~ 3 p J.nJ E, foread: of the fuUo·,ving sets-of peak passband ripple a P and mimmum
Moph::t!lU all~nual!on Cts:
{aj"'r> = 0.!5 dll, u 0 = 4; dR. (b)ap = 0.23 ::ill, as= 73dB.

7.2 Detenninc tbe peak pll•<sb;md npplc o- v nnd mmimum stopb;md attenuation a 5 in dB for ~h of the following
'eb ut peak rirp~e va;ues r., omd J_,-:
(a~'!Jr,...,. tUH. 8., =0.1J!. (b)t'ip = 0.035. i!i,- =0023.
7.12. Problems 499

7.3 L.et H{;:) he the tnmsfer fundion of alowpa."s dig;tal filtcr wilh a pas;,band edge at wp. stopband edge at w 5 ,
rassband ripple nf Op. ar,d >lopbamt ripple of J.,. as lndicaled in Figure 7 .1. C'nns1der a Ca$Cade of two iderrtical fUters
with a transfer fu-:Jction H (z). What a:-e the passband l>nd swpband ripples of the cascade .at Wp and w,. respectively?
Generalize the rL'Sults for a ca&eade cf M identical sections

7.4 Let HL pi;::) denote the transfer function of a real-coellicien: lowpass ~iller with a pahhand edge at w,.,, stopband
edge at"'·', past.ba-r..d ripple of 8 P• anJ &topband r.pple of li_, as mdicated in Figure 7. L Sketch the magnitude respom.e
clthe h1ghpas.> transfer furn:tion HLp( -z) f['>r -JJ: ::: w _::: -n: ami detennine its passband and swrband ecges in term\
of u;p Gnd u;5

7.5 Consider rhe t:ansfer function G {z) = HLP f_ei'''"z}, where HLP (z) i~ lhe luwra~s transfe: functior. of Prohlem
7A. Sketch it~ magnitude respon~e for --:r :::
lV S -;r, and de!er.nme i1s passband and stopband edge frequencies in
knn:; of Wp, ru,, ru;d {V 9 .

i.6 The impo~sc invariance m~thod 1~ anodle: 2.;pproach to the design of a cm~al UR digital tilter G(z) bastd on the
!ran~formation o: a prototype cau>.al analog tr.msfer function H.,(~). If ha (f) is the jrnpube response of Ha(x), in the
crnplllse invariancc method, we require that tO.: unit sample re~pome g[n 1 of G(z) be given by the sampled ve~i<m of
h,, ( tl sampled at uniform intervals ol T .';ecunds, i.e.,

g[n] = h,.{.'lTL n=O, 1.2,,


(a) Show that G(z:) and H,;;:{s} arc related through

G~z:) = .Z{gfn)j = Z!h,{t.1 )j

~ l f:
T k=-oo
H"(q;2'k)l
"- T '=!liT)Inz
(7, 144}

{b) Show that the tiansfor-:nation


l
s= TIre.:. (7.145}
ha1-- the desirable propertie;; enumerated in Section 7 .1.3.
{c:• Develvp the Condition under which the frequency respun:-;c Gl_eJ"'; of G(z:) will be a scaled replica of the
frcquencyre~p<Jnse Haljr.l)of H,(.~J-

(d) Show thJt :he normnlized digital angular frequency w is rdale-d to the analog angular frequency U a~

{7.!46;
7.7 Lei
A
Ha(s) = (7_ 147)
s+a
be a cau;,al first-order analog tr.:m.~fer function, Sho1"i that the c:ws<~! fir~--<>rde-r d:gital :ransfer fu11ction G(z} obtained
frum l-i" (J) via the the impulse inv:rri:mce mf'thod i>< given by

(7.148)

fJo, Let
Jc
H 0 (.,·)= - - - - - (7_!49)
{s+fl)2~;_2
be a causal sec-end-order analog lran~fer function. Slv.ow that the causa\ "-CCond-urder digltallnml>l'er function G{z)
obtained from Hu(s) via the impuhe invanance method is given by

17.150)
500 Chapter 7: Digital Riter Design

7.9 L;;t
s.._f]
(7.!5!)
~~ -i- fj)L~ })
be ;~ nl:lql <;cx;,)nd-nrder ap_alog tnm!>fcr function. Show that tfx ~-au<;al ~ond-nrder digital tran~fer function G{; l
uhtamed !mm Ho1h J via !he the impulse mvariance method is gncn by

(7.152)

1.10 Show !hal the digital transfer function G(:) obtaHted from dl arbitrary rational analog trumd6 furn;tion Ha(s)
with simple pole~ vca ~he impulse invari!u>ee lilt'!hod is given by

G(.::) = L Res1dueo> [
1
:7.l53l
all p<:M> ur
11"\<l

7.11 Venfy the relation between Eqs. (7.147) and {?.l4S} using the above formula.

7.12 Detennim !he digital transfer fuoctiuns obtained by transforming the following causal analog tnmsfer functions
ushlg the impulse im-ariunce method. Assume T 0.2 sec. =

l6(s + 2) + lO.~·i- S
4s 2
(b) H~;;(s; = _,
(s'" + 2s + 3)is + 1) ·

7. U Th~- 101lm-ing causa! fiR digital tran~fer fum:tions were de.~igncd using t~ impulse invariance method wid~
r =!) __-:;sec_1)-.;:tcrmioe the:r respc:::tive ;mrern causal analog tran~fer function~.

2:.- k z 1 -:: e-fJ_;> cw;.(0.9)


----~--- (b) G,(~J = :-"2
;:-e' (]_'! ::-e- 1·::· 2: e 0.6 cos(0.9) +c l .2 ·

7.) 4 The follov;.ing causal UR d:gital transfer functions were de:>igned u;;.ing the bilinear tnmsfonnnrion metlmd with
T ~=- 2_ D.otermir,e their re~pective- parent Causal analog tc;msfer fu'K:timh.

R(z 3 +3z 2 + :!.;: + 1}


1].~ + 1)(7.;:2 +6.: +3)-

7.15 An tlR digital Jowpas::. filter is to be designed by lr.m~formi'll? an analog l•:~wpass filter with a passband edge
ne~UCIKY Fp at O.S !.:Hz usm,g the impulse invammce rr.ethod with T = 0.5 ms. \\'hat is the normnlized pa.~~band
t:dg-;; angular fr~ucncy M;: uf the digital tilkr if there is no aliasing" Wlmt would be the normalized passband edge
,mgulnr frequeocy (<Jp of the digil<il filter if it is dt-t.igr.ed u ..;iog the tlilinear tansformation with T = 0.5 m~ '!

7.H> An !IR l{)wp<i~~ digilal filter ha~ a norn1aliztxl pa:.sband edge frequency'" = 0.3.t>. What is the pas!>band edge
frcquer.cy in Hz of the protmype ana leg !o'l>.pa.ss filter if the digira! filte-r has been designed using the impuhe j;wariance
-1lelhod "'ith T = O.l ms? \Vba\' is !he pa-ssband edge hequency in Hz of tf_e pmfotype analog lowpass fil!er if the
dig,~_al !ilter ha.o., ~:1 dc~igncd u;;:ing :he b-ilirlliar transfoflllatim1 mcth.__--d with T 0.1 ms? =
7.17 De~ign an IIR low pass d1gital filt.er (7\ z} wi:h a maximally tl:tt magnitude responlieand meeting the:~-peclficatlon~
given by Eys. (7.:;8al and (7.38bl using che impuh:c invnriaoce method How illJes this filter compare with thatdes1gned
n~c the f-Ji!incar tnns.fnrmatiml mcthuJ in Se..-"tkn 7.3' 1
7.12. Problems 501

7.18 This prohkm illu~t!.~lt'.; hov. ali.J><mg un he ~uitahiy <-~,ploik'i 111 <.lfder to reali7e :ntere;;ting :'~tr.::enc)' re~porr~.;
,-hara~·tcri~>tj~·"- An tdcal causill aJulog lo->.IIF'J1>' tiller w1th ;m lmpLd:,;,.· rc~pen:..e h,,(f) ha~ a frcquenL·y r('1;pOn\e gl"<cn
b~-
~,
I f,,l/><,= Iti.
L lUi<
mhtTWlSe.
nc.

Ld H;•<·i'"i ,md H2(eJ"'; h.: the frey_u~ncy re>;xmse-. of digitoll filter< obt;~incti by sampling haU) at t =. nT.
where T -= .':rr,'::':P.:c and ::r_; 2,. f<.:'>ptdiVe!). A:-e'-ume the tnm.\J<:"r fund1un~ are later '!Ormaliz:crl so that H1 te!J) -co
H-;o_i.-1°; = i.
(a} Sketch the frequem:y rto<pon:>c~ U; (••J"'-'; ~nd G:tc'"') u(lh.: two tli.!_!lla~ tilrer ~tructures shuwn m Figure P7. I.
~hl \Vh<it type of tilte1.~ ;;,reG 11 ~~ ;~rd G21.-! tlowpa%. highp«-~'- etc.)?

Figure P7.1

7.19 Th1~ prohlem illu>tr.ut.;,. the mct'w-d ol Jigitill tilter de~ign hy the SUfJ-rt•.>pmne imwriwr(T methn:l jHay 70·,
[Wh:7 i I Lel: .1{,:,;} 0.0 ~ rcal-<.--..~ftloent C.tu~ul ..;mJ stable anait•g tr,m"fer functiC>Il Its unit step respon~ iJ,,,u ~ ri
l'> given hy the ~11ve-:-~e Lnplace tran~fnnn of H"(s)(s. Let G(::;) be ac:n1sal diptal tram;krfum:ikm wtth a unil ~tt'?
''-'~?Oil"e ,;;J1- rn! such Ihat
n = 0. L 2,. {7.1'i41
Dtwrmm::-lh~: o,pression for C {:::) if

0.155)

uni.-:how that it"'<< JjT acJd


Hr.Jf<u)
0,
'"" Vv' i7.i-"i61
~ >
1'-'-' 'iT'
rhco
Ci(<:j"': - Hu(ju). rm 'cui < ;7. 157)
2T

7.20 An LTf ._-onlinuou,-hmt: syskm J..::M.:rihed l)y <> li1•eaJ .._·onsb!lt cocffkkm differential eq"ati<.:m is often "nlved
flU!llttice~lly h~ Jcveloping an <:XJU~\oakm !i:1e.n Ciln~wm cndiiclt'tll tlitfercnce ett-uation by rcpi<Jeing the denvati•<e
qx·wton in tho.' J:Fcrenti:JJ eqnati;m !Jy thdr •1pproxin:w.tt- Jif!cr:nce equatinn repre~entalion, A <.:mnmonly u»ed
U1fkrcnce ~XJU::tfon r~presemdion nfP.Jc fir;;.t t1~1ivaLve al time 1 =nTIS given by

wherL· I '" lhe ->.J.n;pJiilg pcnnd ;md 'In j = 1- 1n I J The ~·nne-~port.ling m.appmg from !he ,;-d<.mHm to :he :-don:mi.n
i> obtl'iined hy replacing s with be iJ<td.>rard d~tferenre ''{"-"!ahH i·1! --::-: 1- Inve'iligatc- the ;W.wc mapping and il~
pwpenw.... Dn;::s a S!ahk H" (:> i rc>u!t i :1 a ~nNe H {; Y' Hw.\ use ttl i.> tta~ nl<l.pping lor digi1al /iller de~ign!
E02 Chapter 7: Digital Filter Design

7.21 Let Hv.\.<:) be a rcul-coeflir.:ien! causal and stuble analog ;ran~fer function with a magnitude re~pon...e bounckd
abn\e by <>Dify. Show th<it the dig1tal tra!lsfer function G(;:j obtained hy a bilinear tmnsfnmlation of H{,(-~'_1 is a BR
lnn..:nun.

7.22 Show that !~ sccond-o.:-der anal;,g l:>anclpa~s tmns.fcr functton


s,
Ha(s) = - - · - -o {7.158)
-> 2 +Bs--..-Q;;

hdl; a magnitude re~ponseo that goc:-: to zcm -..·aiues m n = 0 and -x:, and has a vaiul! of unity :at Q = P...-.. H l.1 1 ami
Q;o_, f!;: > Q;, denote the hequen<:Jcs at whi::h the gaitl is down by -3 dB, lt can be shown that the 3-dB lmndwidth
de.'int>-ti by (Q:J - Q!) is equal to 8_ Develop the ~<e<.-"f.md-order digital transfer function G(z.) from the llbovc Ha{s)
vi~ the b:lincar transfomJJJion. Show that G(<) can be e~.presseJ in the form of Eq< ( 4. l 13) 1f the con:>t.dnL~ u and t3
are c:O.men according to Eqs. (7.36a) anC 0.30bJ.

7.23 We have shown in Section 6.7.2 that the transfer function C(z.} of a second-o-rier HR notch filter as given :n
Eq. (7 .35) can he expressed m the form G\ z) = ~ f I + A2(;:_l j. where A2 (.:) h a second-onier all pass transfer !Uw.:tion
given by Eq (6.68). Consider a uotch fiher with a JJUtcb frequer.cy at w = n ;2. Show ;::ha: a notch fiher with
mult\pk notch frequencies is obtained if z- 1 ~s repl:aceJ. w1th :-:-V fReg8lSI. 'Wl1a.t 31<' the locat1ens of the new notch
fr>;qucncies?

7.M A notch filter with N notch fr.:que:-Kie" .can he realized by nop;a<.-'ing the allpass filter A1(~) in the abo•.-e Frob!em
w1th :J -::a<;-.,ade of N secrmd-rnder allpas~ filter~ lJos99j. In this problem, we comider the des1gn of a notch filter
with two noh::h frequencies u.>r. w:z. and mcrespo-r.ding 3-dB nolch bandwidths B!, 82. We thus replace A2\-Zl with a
fourth-order aflpass transfer function A4 (;:),

obtained by ,-,a~cading two second-order allpa!is filter;s. The cons!.Jntb a! and az are chosen as
1- um(B,/2)
a·-------
' ~ ! + tan{R,/2)'
l'ht: tr.tnsfer function of the modi lied structure is 1:ow given by H (z} = { ll + A4(;)[ = Nt~)/ D(z).
(a) Show \hat NC::} i;;;:: mirror-image polyrmmia: of the fQrm a~ I + b1 :- 1 -+- h1::- 2 + b1 :::- 3 + z:-4), and e"pres~
:he crmslall!~ hl and h]_ in term~ of the coefficients c-·f .14(_;::).
(b) Stmwthll1U =(I +::t 1u:::_lj2.
(c) By sett:ng: fv'(e;w,) = 0, i = l. 2, ;-,Olve for tbe-enn~!ar.l~ hj and bz m !enns of WJ and wz From the equations
in part-o; (d) and (h), determine tlle expres.~ions foi the coeflic1c::t~ fit and IJ1.
(d) U>ing the dt!sigu equatiOns de;,vet.!ahove, design a double notch filter with the following spe.:ificati:ms: w; =
0.3n, t1}1 = 0 5n. Bt = 0.171'. and Bz = 0.15Jr. lls.ing MATI.AB plot the magnitude response of the de:.ig:ted
nntch iilter.

7.25 Lt:! HLp(.·J bt- an IIR k-.wpa.-.;s tr2nsferfunctinn with [l7.cro fpole) m;:: = Z/v Let Ho('i) denote the lowpass
tran!..fcr h11Krion obtained by applying the lowpass-to--lowpa:...., tr.H'.,formation given :n Table 7.2 which moves d:c- zero
(pole) at: = <:1;_ nf llt~P(Z) tu a ne;>.·locat~un a!::= ik- Exprcs~ i'.l( ia termc. o~· Zk- If HLp~::_) has a zero at~= -l,
show that H n \i) a:so ha..., ;;: ?CfO at _;: = - I.

7.26 Ld Hu-'l:cl be an llR lO'WJhl% transfer function with a zero (pole; at::= Zk. Let HDIFJ denote Jhe bandpass
mmsfer furrtion nbtained hy applying the \n\'\'pass-t-o-hanrlpas:;, ua.-:;~fmrnatiun given in Table 7.2 which mov<Os the
zero (pole) at::: = ;k of Ht.p{Z) tc a new lm:ation at;:= Z:1.- E'l.pres:. Z:;, in tenns of::.~· If HLp(~) ha-s a z::ro at
_ - -1. S:'low thai Hn(Z:l abo ha~ a 7em at:: = ±-!.
7.12. Problems 503

7.27 -\ sccorhl-mdcr lo•;,;pa>s HR d;!!italli~ler wit..'! a 3-dB z:uto!f frctJ_ucr.o::y at M,- = OA2:;r h<Js a transkr rlm..:tion

0.::!23(1 :: -J lz
GLp(z) = l -- fUY52z +0.1X7.c
De~ign a se<..·onJ-ord':'rlow 1"'-'-' fihr:r HLp (;;: j with u 3 dB cu!o!l frcquem..y a! W,- = 0.57;:; by lrun~formirJg tlu.; a!mv'-'
•nwp:n~ Transkr fum:twn u~ng a lowpa...,~-to-lowpa:-s s~ttral tmrbfurmation l 'sing M t;TLA3 plm the gam rcspon<e~
,,f th.:- lw<> lowp;:~~ filten: ll'l 1he ;.amc figure.

7.28 De,;ign a '><OCOfld-o~Jcr highpa~" fitte-r Hfl p\7 J wi1h a 3--dB cutoff frtcjuency at We = O.fJ.b- by tran~formmg t:H::
I<~Wp;t~S tr.m:,h:: i"nctiort nl Eq. 0.:59} u;..in.g .a ,owpr,<;<;-to-!li£'1p:<.>'> ~-:oec:ral transformatio;L Cs.ing MATLAB plilr
the g.;in ~C'-j)On~6 nf the h1gh;nss ami_ the kw.-pass lilters c>n the sa:ne figure.

i .29 A '\ewn~-on.lednwpw,.,Type l Chebyshev HR digital tilrer G L p (z) with a0.5-dl:i cutotl lrCl\UC!lt.-J' at•<>, = 0.27rr
j-.!~ a tran'i.fcr function
0.1494(1 + ::·1) 2
Gt.p(zJ= ------ 0 160}
1 - 0.7{)76;;-- l i 0.34\lh- 2 .
Ue:.ign a founh-onicr bandpass lilkr H lJ p (:) w1th a,;enter frequeJt~-y at.;:\· = G.45:;r by transkmning the ab~·ve !oW?'f%
tmn~fer fum.:tion using a \owpass-to-bandpa;;~ ;,pectral. tnm~fonn;mnn. Using MATLAB plot the gain response;; of the:
!nw;:;;~s:. ;mJ the bandpass filters on tbe same !igurc.

7.30 A ti:ird-order Type I Chchy~ht.:\ high pas:. illtcr ~ith a p.:l.s~bm.l edge at Wp = 0.6:r has a transfer fum.:l!on

0.0'1~6(1 -1.:-'-+ 3;:-2 -_--Jj


Gf1pC:_) = j +0~760-t.: I,.G702iz=-2~-0.208Sc 3 ·

Iksi;:n a highJlU'-~ filler HHr(z) with a pa'>shancl edge al "'P = 0.5Tr by tnmsfnrming the above highpa<;.<; transfe
function using: the lm1-ras~to-:owpn;,~ '>pcclral transfDrmation. l''>ing MArLAB plnt the gain rt>~pnn-~e~ of the two
highp<:~-; tiller;; on the "ame hgnre.

7.J1 De~!fH a >t:<-'Ofhi-order bandpcss Iiller with a ~-enter frrquenq: ;H w,, = 0.5.7!" by C!ansform,ng the bam:!pa~~
!r111l~!t"l lue~ctmn of Eq. (4_1 l7bl using the iowpa:>s-te-Juwpas~ spe.:trall>an-;fomlation. Using 11A 1LAR plot :he gaic
rc;pon:,c~ o! the t\~O bawJpas;. tilt('!" on tht same ~gure_

7 . 32 ·\ ,e.;,_--nld-orJer nnkh filter wl!h a notch fr.:queney at 100 HL and opc:-atmg a sampling n!e of 4-0() liz i~ to h<:
dc-s.ignt"d_ Design tlli~ liltcJ hy tran~fcrming the no:(·h tmn~fcr function of ExEmp!e 7.!{ using the- lov,paS'--Ill-lowpa~~
~pet:L-al lr->nsfmmatwn. l"-,mg NfATf AB plot ti::e gain response<, o:· the two nutch ii.lter> or..lhe ~arne figure.

7 ..JJ Dc~ign a k>»- p:ns fille> with a cutullat w? - 0.5:< b-y tr:wsforr.nng the highpass tnn~krfuncu.m of Prnhlem 7.30
<.F:i"~ ri·.c !ewpa;.,~-(v-higtpas-; ~pe,·\ral lmnsfnnn·c.ti.on. Using W1A TLAB plN the g&n r~spun.~e~ uf the highpa;.,,c and
the lowra~s IHlen. on the ;.,ame ligure

7.34 Verify t!le exprcs;;iun for the i ~lpuhe-rc'<ponsc c;>dficienb h :.t LIn 1 given in E4. (7 .65) fnr the 7Cr<.J-p..'wse
nr.!ltib<md hirer with a frequency response flfiit_ (,, 1"') defined m Eq (7.64} and showr. in Figure 7.15_

7.35 Shuw thatth.: !de<!. I HJlhert tr:Jnsfmmcr witil a frequcn~-y resp'-'nse H HTf )"') defin<Xl in Eq. (7 .M) has :m im[)Ulse
rc~pons"' h H r in I as gwen in ht. (7.6 7J Sine"' the iwpu;~e r~~prmse is duubiy infinite, the ideal tl!;;crele-time Hilbert
!ran~f1nmer is nut r>;;al:7.able. To make :l realizable. the trnpuhe re~nonse has to !:oe truncated to In t ::;: lit!. What type cf
lin•:ar-pt:<!sc F1R fi:ter is the tnmcatcd 1mpulse respon~e"" Plolihc fro:ljt:ency response Gf the tmn;:atc:d approxiomtt!Oll
lor V:lfirlt\:-. \~duo:-~ 0·: lvt _ Cumr:-telH o:-1 your re>:uhs.
5C4 Chapter 7: Digital Filter Design

7..36 Let -Hl·} denote the ideal-operation of Hillxrt tran~forma:ion de filled by


~

H\x!nH = E hHy[n- fjx\R].


t=-x

where hHrln] j<; as given in Eq. (7.67). Evaluate the follov-wg qmmlJtJeS:
X
(a) H!Ji{1t[ftlx[nll});. lb) L x[tlli{x[fj).
[c,-'X)

7.37 Show that !he ideal differentiator ·with a frequency resporu;e H Dl F{ej"'J defined in Eq. {7 .68) ha:> iln 1mpulse
respon~ hDJFinJ a.~ given in Eq. (7.69). Sin;.:e the impulse response is doubly inlinite, the- Ideal di;:-erek-time
differenttatili 1s not realizab!e. To nuke it reali:mble. the 1mpu\se response has to be truncated to !n I _:::: M. What type
dlinear-phase FIR filter is the truncated impuhe response? Plot !hefrequcnc; respong:ofthe truncated approximation
for various. values of M. Comment '-'ll your Jeo.ults.

7.38 Develop the expression for the impulse respollSe hHpfnJ of 11 <.:>msai highpass FIR filte>oflength N = 2M+ t
-nbt:rined by truncating and shifting the impulse response h H p[n J of !he ideal highpass filter given ~y Eq. (7.61 ), Show
tha1; the causal lowpass FIR filter hL p[n J of Eq (7 .60) and h. H pin J are a. delay-compl~mentill}' pair.

7.39 Determine the impulse response h LLf' lnj of a zero-phase ideal linear passband lowpa.% filter characterized by
a frequency response shown in figure P7 .2(a).

7.40 Determine the impulse response h BLD! F [nl of a .zero-phase ideal bandlimited differentia tor dmracterized by a
fre<.juency response shown ln Figure P7 2{b).

H u_y(e 1''.. J
-W c

r<i<J
" '" ' 0 w,. • "'
_,
-w
D
'" ' n "'

(a) (b)
Figure P7.2

7..41 The desireC frequency rcspons~ cf an ide~l inuxrator i:-. gi:verc by

iw I
H;m(e ) = -.-. (7 t6!}
JU.i

Deo~nnine the transfer function H R(Z) of an ITR inregratot denved via the rectangular numencal integration method 16
<.wd the transfer function H 7 (z) of an liR integrttor derived \o'ia the trapezoidal numerical imegratioo method. 17 L:sing
MATLII.B plot lbe magnitude responses of H;n1t.t)- HN\Z.L aod Hy·j) forT = I Comment on your results.

J6seeEq. (2.11\1).
11 se,-, Eq, (2.98).
7.12. Problems 50S

7.42 An improved HRdi_g:ital inieg_'1ltorcan beobtaine<l by inte.rpolating the rectangular and the trapezoidal integrator:.
lK'Cordin,g to [Ala93l
3 I
H,v(z) = HR(Z) ...,.._ Hr(z}.
4 4
U<iing MATL4.B plot the magnitude respon.'ie-s of HN(Z), HR(Z), and Hr(z) foe T = l. Comment on your re;;ults.

7.43 De'>'elop an HR digttal differemiatru by inverting the IIR d1gital integrator of Problem 7.42 [Ala93]. Is this a
stabk transfer flllCtion? If not. develop a slable equivalent. Using !I.!ATLAB plot the magnitude responses of the ideal
diiieienhator and the digital differentiatur designed here. Comment on y01.1r results.

7-44 1lte frequency respon~e of an ideal zero-phase notch filter is defined by

H
notch
(cfw) = I0.
1.
]wf = Wo,
o:herw1se.
(7.162}

where w 0 is 1he no1ch_fn:quency. De!ennine its impulse response hllOichln] [Yu90].

7.45 In thi:> problem we comider the design of an FIR digital filler approximating a fractional delay .,_- D :

where the delay D i:. a positive real rational number.


z -D~"'hJ
=L... nz
.
J-" .
~

fa) Show lhat the filter coeffidencs obtained using the Lagrange mterpolation method 18 are given by !Lak%}

h{n]= nN D-k

i=O
n-k' 0 ~n::; N.

•#

(b) Design alength-17FIR fractional de!.ay filter wilhadelay of JOOlll samples. Plot u~ngMATLAB the magnitude
re~pon..;e of the designed filter along with that of the )deal fractional delay filter. Comment on your results.

7.46 A maximally fiat group delay HR all~s filter can a1so be designed toapproximMe a fractional delay z-D:

z-D == ~:-cd"Nc~,.cdoNc;=-"'o',--'c+,_·_··c+.ccdc'c'c-_'=NT-<i'='•+C-'z_-c"-=
l+d1z 1 +£l2z 2 ...;-···+t!N-JZ (N l)+dNZ N'

By expressing the de.<iired positive delay as D = N + 5, where N is >l po..~itive integer and li a fractional number, it
can be shown that the <:oeffident {d~l of the a!Jpass fiJter is given by [Fet71I
N
dk=t-dcfn D-N+n .
D N k+n
"=0
whl~re Ci"' = N!j 1.-.!{N- k)' is a binomial coefficient. Design an aUpass fractional delay filter of order 9 Wlth a delay
of 100111 samples. Plot using MAl'LAB the magnitude respome of the designed filter along with that o: the ideal.
fractional delay filter. Comment on )ot:f results.

ll!~;ee Sttti<Jn 10.5 2.


506 Chapter 7: Di•J1tal Filter Design

7.47 An ideal zero-phase comb frlter with notches at a fundamen:al frequency w 0 and its harmomcs has a frequenL)'
n;,spoliit: given -::>y
- }[!)·,> = [ "l.'
H ,;o:nble 17.163)

If the inp-..11 to the comb l!Jter is of the furm x[nl = s[nJ + r[rtJ, where sfnl is the &>;ired signal and r[nl =
)_~
0 A;- sin(kw 0 n + ¢k! il< the harmonic interference wi[h a fuLdamental frequency w,,. the comb filter supi)fesses
the Jruerfurem:e and generates s!nJ as its output. Let D = '2rr/w0 -denote the fractional sample t.lelay.
(a) Show that r!n- Dj = r[n].
{b) i'>!ex:t, by =mputing the output v[n] of<.< filter H(~) = I - ;;-D whoE-e inputi~ x(nj.show that y!•1) does not
contain any harmonic interference.
>c) Even though the fii1er H(z) = I - z-D eliminates the harmonic incerferencecompletely, it does not have <1
uni!y magnitude at freqnendes w #= kw0 thus introducing 'agnal distortion at its output. The disto:t1011 in !be
pa>sb-,md of H(z} can be eliminated by :nodifying !he filter according to [Pei98j

where 0 < p < I. In practice. p should be dose w 1. Usmg MATLAB plot the magnitude response of H,-(;::l
fur Wv = 0.22:r and p = 0.99.
(d) Develop an efficient reali.:atlon of !he improved comb filler Hc(Z).

7.48 U!<ing lhe method of Problem 7.47 and the FIR fractional dday filter design method of Problem 7.45. design a
comb ti!ter of order 16 for w, = 0.22rr, and p = 0.99. Using MATLAB plot the magnirnde response of the designed
filter_

7.49 Usmg the method of Problem 7.47 and the all pass fractional delay filte- design method of Problem 7 .46, des-1g_n
a com~ filter ul order 9 for w 0 = 0.12n and p = 0.99. Using MAl LAB plot the magnitude response of the designed
tiltcr.

7.50 By compuong the inverse di~crete~time F::lUrier transform ofthe frequency response HLP (el"') of the zem-pha-.e
modi fred lowpa'is filter of Figure 7.26(a) with a first-order splme as the transition furn;tion, verify the expres,;;ion fill
its ;mpul~e response hLplnJ a'J given in Eq. (7.87). Show thai hLplnl ofEq. {7 ..87) car. also be derived by compudn_g
the invene di~(;rete-time Fo1.:rier traru.i'orm of the deri,.1!.tive function G(ej"') of Figure- 7.26(bi and then u~ing the
differenl~alion-in-f:equency propeny of the discrftt"Aime Fourier transfonn given in T~ble 32.

7.51 Show that the impulse response h L p[n] ohhe rem-phase modified l<W>'paS> filter with a Pth--«.der spline as the
tran~iwmfunction i~ given by Eq. (7.88).

7.5:! The f:-equcncy re,;;_pon!<e of a Zl:'W-phase lowpa.~s filter with a passbund edge at Wp, a stopband edge al w,. and a
r.ti~eC: cosine lransition function is given by [Bu:-~121, fPar87j.

0 ~!to! -< Wp,


+ COS (J"!"(w- "'p-I)-) Wp S iw: S W5 , {7.164)
ii.';Wpj,"
ws < lwl <H.

Show that 1:s impulse respon&e i'J of ihe form

(7. 165)
7 .12. Problems 507

7.53 Show t!lat the length-(2M + l) Barr/t'tlr<mdov,· ~equencc given by

In". -.'11::;; n:::; M. (7.166)"


u-fnl = 1- ~~- .
.'vi + 1
can be ohamed hv a !iru:ar convolution of two scaled ;ength-N rectangular windows. Dekmlin<O N and the o...:ak
fat::tur. Prom this ~I anon. determin.; the c:tprt~SJO!l for the frequency respome of the length-{ 2M+ l f Bartlett window.
Dt:tcrm;r..e the main lobe wtdth t:..,,.fL and the no-!ative sidelohe level li.>f of the Bartlett wind~· sequence.

7.54 The !eag1h-(2M + t) Hann, Humming. and Blackman window sequences given in Eqs. (7.74) to (7.76) are all
of th<O fonn of rai<>ed cosine windows and can be cxprer.sed as

where II'R In] is alength-(2M + l J rcctac;gular wmdow sequence. Express the Fo:~rier transform ofthe abo"I.e ,;::eneralized
cos me window ict termsorthe Pou.>iertransform of the rectangula.r window >P Rkj"') . From this expression. detennmc
tl-K- Fourier tran,;.fonn of the Hann, Hamming. and Blackman wjndow sequences.

7.55 Ma~y applications requir~ the f.:tmg of a set of2L + 1 equaily spaced data samples x[n] by a sr::-oolh polynomial
""(I) of degree N where 1V < 2L_ In the least-squa.res fitting <lpproach the polynomial cnefficicnts 01;. i = 0. l . . <
N, <tre detcnnincd sn that th.:: mean-square error

L
t'"(u,)= L (.>.[kJ-.ta(UJ
2 !7.1681
k=-1.

is a minimum [HamR9]. In smoothing a very lo:tg data sequence xjn I based on the lea.~t·:<.qlmf~ fining <>p?mach, the
<.:enlral :>Empie in a ~et of consecuti>;c 2L + I data samples is replaced by the polynomial coeffictelll miniminn;e: the
cmr~spnnding me;m-Mfjarc error.

(a} Dcvelnp the smoothing algmithm fur N = l and L = :5. and show that it is >l moving average FIR Ji!ter of
kngrh 5.
<h) Develop the smoothing algonthm for N = 2 and L = 5. What type of digital filter i~ represented by this
algnrilh!J'?
(c) By compilrir.g the frequency r;::sponses uf the previous two FIR smoolhing fitteiS, ~elect the ftlter that pmvidc:;;
beuer sm;mthing.

7.56 An improved ~mooth1ng algorilhm is Spe>r("t'r 's I 5-pnint smoothmg fom1ula given by [Ham89j

y\n~ = ~ {-3xln- 7j- 6x[n- 61 -· Sx[n- SJ +3xin- lij


-t- 21x:n- 3J + 46x[>r- 2j + 67.>./n- Jj + 74xinJ
+6h[!1 +I}+ 46-x(n +2\ + 2h:[n + 3! + 3x(n + 4}
- 5xfn + S]- 6x[fl + 6]- 3x[n + 71}. (7.169)

Ev;~ luale ib frequency response and. comparing it with that nf the two smoillhmg tiller:; of Prohtem 7 .55, show why
Spencn"s formu'a y:elds the betkr re~uh.

7.57 ln Prohlef:17 .3. we considered filtering by <J cascade of a m.:mbercf identical !ilter1>. While the cascade previde>
mor<: stapiYand attenuation than that oblained by a single filter secliun. it also inneases th.e pa;;sl;><md ripple ur in cffccl
decre~~e;, the pa>sband width for a pve:t ma.-::mum pa~~lxmd dcviatkm. lu the ca.~e of an FIR fil~cr H!d wi!h <1
508 Chapter 7: Digital Filter Design

>.ymmetric 1mpcllse re«ponse, improved passband and stopband perfOITilam:e:s can be achieved by employing_ thcjiltt:r
sharpening appmach [Kat77l in which the overall system Gtz} i.~ implemented as
L
G{~J = Lue[Hi:o:l(, (7 .J7{))
i=!

w<1ere {ae J are real constants. In this probiem, we outline the method of selecting the weighting coefficienls lut·) for
a lP.ven L. It follows from above that G(_z) is alsu an FIR filter w1th a symmetric impulse re~ponse. let.{ denote
a ·oqx:citic value of the amplitude re5ponse of H(z) at a given angular frequency w. If we denote tbe value of !he
amplimdc rcspons.c of G !z} at this value of w as P{x ). then it is related to x through

L
P(x) = Laext. (7.171)
i=l

P(x) is called the amplitude chonge function. For a BR transfer function H(_z), 0 :S x :::; l, wllere z =
0 JS in the
Stllpband and .1' = 1 is in the passband. If we further desire G{~) to be a BR transfer function, then the amplitude
d:.ange function must satisfy the two bask propenies P(O) = Oand P( i) =
I. Additional conditions on the amplitude
cbange function an: obtained by constraining !he behavior-of its slope at_:r = 0 and x =
I. To improve the perfonnance
of G(:_} in the -stopband, we need W ensure

d'<Pix)l
d t =O , k=l.2 .... ,n, (7.172)
X _t={)

aul t:J improve the perfonnance of G(z_) in the pa._-.:sband, we need to ensure

d~~x)l =0, A.=1.2, ... ,m, (7.173)


.t=l

wtrere m +n =L - L Determine the coefficients {at l fm L = 3, 4, and 5.


7.58 Consider a Type 3 linear-phase FIR filter with an amplitude re-sponse as given in Eq. (7.102). Show that if
lh~ amplitude response is symmerri~,:. i.e., i!(w) = it(rr - w), then it is possible to choose the parameters c[kl of
Bj. (7 .102) so that the even-indexed impulse response ~amples h[n] are zero.

7_!;9 In ihe frequency sampling approach of FrR filter design, the specified frequency response Hd(ei"") is tirst
ur.iformly sampled at M equally spat;:ed points Wj; = 2nk/M, 0 :;; k ,:::: M- I, providing M frequency sampf<!~
Hfkl = H4(el'»t). These M frequency samples constitute an M-point DFr H[k] whose M-point inverse-OFf thus
yidill. the impulse response coefficient-s hlnJ of lhe FIR filter of length M [Gof69a]. The ba;;K; assumption here i;,
th.1t £'\e ;;pecified frequency response is uniquely -charaeterued by the M frequeocy ;,ampies and_. hence, cen he fully
re::overed from these samples_
(e) Show that the transfer function H (z) of the FIR filter c-lUJ be expressed as

(bJ Develop a realization of the FIR filter based on the above expression.
(c} Show that the frequency response H(ei"') of_the FlR filter designed via the frequency sampling appro«ch h<~"
exactly tile specified frequency samples H(el""'-:• =
H[k] at wk = 2rrkfM, 0 :-:;: k 2 M - 1.

7.ii0 Let :Hd{~j"'): den~ the desired magmtude response of a real linear--phase AR filter of length M.
7.12. Problems 509

ror M odd \Type 1 FIR iilk-rL show that the OFf samples H!kl needed for a frequency sampling-based design
'" are glven hy
= 0, l, ... , M: 1.
I
I F!d ( e.J2nk; M )\.--}2rrkfM- 1;12M, k
Hfk' - (7.174)
., = IHd (ei2:tA.f.li)!ef2..•7\.l1-k)O-f-!)i2M, k = Mf-J.. __ .. M _ 1.

For M even (Type 2 FIR filter\-. o;how that the DFf :>am pies H!k J needed for a frequency sampling-based design
:;re given b;

IH, (ei2Jd:;M )k- J~:d{.\f-1 •i2M, k = 0, 1, .... Ef - l,

Hfkj =

7.-61 Design a lint-m·-pha>;e FIR


! 0.
lh'd ( e-jl;<k•,\.1) I e.i2rr(M • k)(M-l)/2_-l.f.

low_pa~s filter of Jcrrgth ]7 with a pas.-.;band edge at Wp


k =
k =
¥·
!f + 1, _.. , ,.,f _
=
0.51l" u:>ing the frequen..::y
l.
(7.175)

-.;;.-rnphng approach. Assume an ideal brid..wall characteri~ti<.: fo,-the desired magnitude response.

(a) l~~ing Eq. (7.174,\ de>-dop ~he exact value' for !he desired frequency samples.
(I;) L;~mg MA TLAB plot th.;, magnitude resp...">n~e ohhe Je<;igned filter.

7.62 Design a linear-phase FIR lowpass filler of length 37 with a passband edge at <L'p = 03rr u&ing the frequency
~-oan;pling appn;;J.<:h Assume an ideal brid:_wall characteristic for~ desired magnitude re.>pon~e.

(a! l '~ir.g Eq. (7 .174J de-.e!op the exact nlue;; for lh" de~ired frequen;,-y samples.
(bl C!>ing MATLAB plot the magnitude re-sponse of the designed filter.

7.63 By t.t~lving Eq. (7_11~). derive the value of E given by Eq. O.l 19),

7.64 Sho"' !hat the condition nf Eq, (7. 143) on tbc impulSt: re~rr-.e samples h HrfnJ of an ideal Hilbert transformer
,·annrn be met hy a Type 4linem--pha:o.e FIR hlter.

7.65 "f11e Wtll'"f-'<'d du.:reu Fourier Jr-a.,.;_!(u-m tWDFTl co:m he employed to determine lh<" N frequency samptes of the
.::-transform X!:::_l of a k-ngth-N sequence .tlnl at a warped frequency scale on the unit circle. TheN-point WDFT
,Yl.l. J l'f xlnJ i~ given by the¥ equally s;:ta.."ed freq~m.:y sample~ on the unit circle of the modified z-transfonn JC: {Z}
.-bw.mcd by ;:;pplying an all pass first-order sp~ctral tmn~formation 10 X(;:) [M"it98al

P("Z_)
XU}= X(.::)f __ 1 _" 1 ;:-i (7.176)
- =-·---.
1 .. ,-,

whc1c jed< !. Thus, the />'-point \-VDJ--1 X;.q ot .x!nJ is given by


X!kl = Xii.ll:z=., 1 co-.~_ .~·. 0:::; k ::= N- I. (7.177)
(al Develnp the- exprr~si<m;; fur P(} J and 0(5;.
(hi If we denote
S-1 N-1
P(f:J = L pjn}Z-" and D\~l = L dtnr:-",

snow that X!kl = Pikl_! D[k;. whe>e Pfk] and D;k] are, n;!oo:pectively. the N-point DFr.~ of the sequences pfnl
t~ml d[nj.
ki H wedenute P = (p\0} p!J! ···piN- IJIT. and X= :AjOlx[ll --.xU¥- J]j"r. showtha1 p = Q·X.
where Q = [q~." l is a real .'V x N matri,..; whose iirst row is given by qo,s = a'. firs: column ls given by
_:v-1 C~u:". and remaining eleinCnts q,.,., ca:1 be derived using the recursion relation
q,._r:, -

tt~.> =qr-l.;-1 +aqr_,-; -aq,-1,>·


510 Chapter 7: Digital Filter Design

MATLAB Exercises
)VI 7.1 De,ign <I dig1Ud llhor t'y <Hi in;pulse invariant tra;p;fcrmati<•n of <1 fourth-order JnJ!og He\s<d tnns:e,- ft,nclion
lor llw fnllt.H\m_,: values ol ~amplillf fn:X.JUerrue;,.: (a) FT = I HL, ,.mt.l \b) F1 =2Hz. Plol !he gain a.ml !h.: gmup
de illy '"~?""''.:~ ,,f tmth de.if'n~ u~in~ MA TLAB. and corr:p:trc tf:c~C" rc~pon:;e:; with th;:rt ;}f tile ur:ginal l:ks~d n:m~fer
1m' <'linn. Cmnmcm nr. yc..rr rc,ub-.

IH 7.2 !k>;gu a dig:t;t: Butler"' <lrth lowpao.> filter npcmting at a sampling mte o! ilO kH7 Wlth a 0.5-<-IB cutnfffrct;uer:.:~
,H L k H1 ;mtl a mH:anum ;,.top hand :nt.:nuallon of 45 dB at :w kHz u'ing the bilinear trruHforrrMlior. meth<'tl. Dc~<.·rn1mc
1he Hrdn of the analog fiikr prutolype o;-,.ing tftc lormula g:\-.on in J---4 (5.36 i and then dcs:gn th" analog proLJtype filteJ
\l'lllg th~ 'vl-lilc t .. L ~ df-' L•f 1»fATLAB. Tramfonn the analog !i][t'f tr:m!;fer function 10 the de. ired digira! tn:t,;fcr
!urdi,m t.:>ing I'>C .M Jj[e lJ~ _;_ in(•._,; P!O! tlk gam and pha~e re~pon;;c;; usmg MATlA.B. Show all steps u:.ctl in !he

\1 7.3 1'1.-lodify Program 7 _.1 to design J. digit;,! Butterworth lowf1<!~' :ilter using ::he h:linea:r tramformation me!hud.
Th::; iuput Jma _rttp.;,m~d hy til<': modified pt"O_gram should be the Je~tred passband anti ~-topband edges, and rr.aximum
pa•;-;band lkviati..:m ami lt•c minimum stopband ;:;,ttenuotton in dH. L'.\i:J!_! the modilh:d progr;un. design the Jigit<J!
Buttc-rw"nh lo"'P""' £iltcrof ExeRi.-.e ~17 ?..

"-1 7.4 l'c.ing the M-file ir-·::-i r.v "' ' de~ign lhe de pta! But:crworlh luwpa~s !titer of Exen·i~e. lt.F.l.L Use !Ire ,,,-,a!ng
rn•t<ll.j-flf :ihrr orJe~ Jctcnnined u~mg the for..:ml.1 given in Eq. 1:'i 36 ).

M 7.5 Desig:n a digital T,·pe l Ct:rbyshcv kwp<t~t> Iiiier operating: at a sampling rate or 80 ki!7 with a passband
t.'llg.:: tH'I.jc!t':Ih:Y al 4 kH7, a p<b.\bam.J npple of0.5 JB, and a m:::imum stopband auenuation of 45 dB at 20 kllz
us:ng the irnpul~· inva~i:em::c nk'lhcd and lt:e bilinear tmn~fmmation method. Determm"' the order of the analog ::1te1
pwt.ltypc u;.in,o the iormula given m E+ (5 41) anU then desig.c: th:; analog prototype fiher u..;mg the 1'¥1-lile , :he. h -, <'l[.
af ~iAI'L-'\B. Tr;;n~forGl !he analog ti:tcr !ran~fer fun~"1ion to :he desired digilal transfer fuw.:hon using the M-fik
•. :. Ph4 the gait and ph~.>e rc.~ponse" nf both £k;;ign~ u~m_g MA'lLAII. Com{Xlre the perform;:;,nce~ of the
hH' lillel' Sho\1. all ~tep~ u:.ed in <he- det.:gn,

M 7.6 Modtly Pmgr;1m 7_2 w de~ign a dign:;_l T_yP<" l Cheby~hcv lo'''PliS~ filter using the b;Jinear transformation
mt"lhoJ_ Tnc input data required by t:'le nJOdiiietl program should h.: the desired passband and ~tophand edge-;, and
ma umum passb;md dev1a\ion am.J the mlninwm .;topband nuenu<>tion m dB. Using the modified program, design t!Je
di:,c'1tal Typ~ l Chebyshev lowpa~o filler ofExcrci,-,e- M7.5.

M '1.7 ll,,ng the );.1-fi!e i:-.r-· r:vuc wnte a M_tdLAB program !(J design a dig1t.al Tyj)C I C"hehy~r.ev lowpa.% hiler
c~'C;Jg thz: irnpu:~~ in...·ariaw:e method. The input -data required by the modified program ;;hould be the desired pa~.~band
;me ~h>pha~JJ eJge~. and maximum passband deviation ami the minimum stopband attenuation in dB. ll~ing ym;r
pmgrwn. de>Jgn :h"" di!'it<<, Type l (.bebyslu~v low?a;;s filter ol Ex:en::~c M7.5 .

.\1 7.K l..k~ign a digiw.l elliptic ;;m'ras~ filter operating at a ;.amplir.f :me of ~•kHL wit!-.~ pa~band edge fr..:quen<:y at4
kJ-b.. a "!cp-~and edge frequency at 2\1 kHz. p;H:,hand ripple of0.5dB. and a stophand ritJrle ,_,r 45 CB using the impu:se
i~Jvari..tnc:c me;huJ <lnd the bilinear tran;;forma:mn tnethud. Determine L"le order of the analog tllter prololype using:
the fnnnub. giwon m r:q. ().51 land then design the analog pmtotypc filter u-;;;ng the M-Ille c 111;; ~~·of MATL.'\11.
Tra r:~f, 'l"rll !he ~u~.tl•)g liltcr lran:o.!"t:r tunclion to the de~ired digital !J;:;,nsfer function usm~; !he M-file D' 1 i n,oc..:r Plo/
the gaul and pha~e re~pon~~ cf hoth dec.igns using .\1ATLAB. Compare the performanc-es of the two filter!> Show all
~kf'~ tt-;cd in the Jc~:gn.

'\I 7.9 Mortif; !'rog1mn; :1 t.-.


;\t>~J~n a d;gital eEipti<: ln'A-p:t>~ filter using the bilinear l~<~n~fomMI!tm methuG. The
in?u! dala r~q11ire<.! by the nodif.cC pmgram should be lhe desired pas\b;md and !;topband edges. and the maximtom
rn~:;hcwU <h:h.Hion <~nd the nmnmt.m stopband a!tenuatioo in dB_ L:~ing the modified progn:tm, design the digital
dliptic !nwpa.<.' tilkr of F.xen:i'>!: M7.X.
1'.13. MA7LAB Exerc:ses 511

M 7.10 De~~ign u;ing the bilinear toansfonnatk:n method" digit:J.I elliptk hlghpas.o;; !iltcr opeTJtwg at a <..ampimg r<1te
o! 1 Mll.r. wit~ !h:; folkM·ing c;pr:nfi..:at;on'-: pr>~shaml edge at ."<2.1 kH7., stnpband edge at Z:.:"i kHz, pt'ak pa~sband
r ,yp'.:; nf 0.5 dB, and minimum s:ophand attenuation of 50 dB. (a) What are the specifications of the analog higbpa~s
F Iter" (h) What are the we<::ific.atiDn>' of tbe a-nalog protc:-:ype lnwpas'> filter? (c) Show al! pcrtiru~nt transfer functmns_
Ftoltht' gain re:-,p>ns.es ,,j-t~e proltJly?e analog lowpa>~ !iller, ;malng highp<J."~ filter, :end tksired dtgital highpass lil!ec
Sho·.v '-ill ~teD'-

\\!) 7.11 Design -..~sing the hiliucn• tnmsformatiun me!hud a digLtal Type 2 Cheby5hev bandpass filtt:>r uperating <Jt a
-.;unplitt)!: nne of 2500Hz with the fPiluwing sperificc.tions: j1U.\>hand edge~ at 560Hz ami 780Hz, stopband edge~
<~t -~7'1 Hz and JOOO Hz, peak pas~hand nppic of L2 dB. ~nd minimum :,topband at{eouation Df 25 dB. {a) What are
t -,e spe::!fi::ati.,ns of the :m;!lug baoclpa;,;s ft tler·; (b} What are the spcdficatlons of Ihe ana'.og prototype lcwpass filter'!
t.::l Shn\<.- ull ~rtment trau~fcr functions. Plot the gain ~e~pcll.'>¢~ of the prototype :~nalog lo•.vpass filter, the analog
ban:Jpa;,." liltcr. and des:;eJ d1gital bomdpu." filter. ShL'W a:: ;.:ep~.

M 7.12 Oestgn u\mg the bilinear tr;onsforrnation me-thod a digi!al Bunerwortl-: bandstop filler nperating !!I a sampling
rate of 5 kH.c wit~ the tOilowiog spedfkation~: pa,~band edges at :>00 Hz and 2125 HL stopband edges at 1050Hz and
140n H1. peaJ.: p.b;.baml ripple nf 2 dB. ;tnd m;nimum stoph<wd dle-nuation of 40 dB. (a) What are the specification~
c.f the 'malug bandstop filler? (bJ \\'hal are L'1e spedficati..m~ of the analog protolypec lcwpa~~ filter? (I:) Show all
pt-rt,nent tran,fer function~- Plot tt.c gain respon"-es ot the prototype analog lowpas,<; fiiter. the analng bandstop filter,
and deiired d,gital hand.~top filter. Show all ;;tep£.

!'1'1 7.13 Plor t~e rnagmtmk rt.~Jmnsc of a linear·pha'>C HR highp<~~~ filter by truncating the im:;mlse respon~ec h H p [n I
cf the ideal highpass filter of fut- (7.61) to length N = 2M 1- ! for twu ji-ffereo: values or M and show th;u the
!run..: a ted filler exhibi~s usnliatory behavior oo hoth sides of the cu:off frec_ucncy.

1\1 7.14 Plot the '""gnituo.k: re:;pon"c oF a !im;ru--ph;ose FIR handp:Js<; filter by trunc-ating the impuhe rec;ponse h B p !,n I
of th~ ;ileal bandpa,-;s filter of Eq. ;7.02) In-length N = 2M-+ l for two different value>. of M and show that the
!nJncliled filter exhibiH osnllatnry behavior .m hoth side~ of the cLtoff frequency.

t\1 7.15 Plot tlle ma,gnitudc rcspo!!S<" of 11 linear-phase FJR Htlbcn tra.'lsl'ormer by truncating the impulse respom;e
ilHT[r. I oftt:e ideal Hilbert tnm~Jurmerof ~q- (7.67) to length N =2M+ I for twc different values of M and~'
that the lmn;;a[ed filter exhibits o:«:lllatory h.::hav:or ned! w = i} and w = r..

!\11'.16 Wnte a MAfLAB progra!lllo design a !•near-phase FrR notch filter by wimktw1og tile l!Tipulse re.~pon<;e of
1 the itkal nol:ch !iller of Proh:em 7 .44. Using :his pmgr<im_ desig_r. ;m FIR :.etch !liter of order 40 operating: 3.1 a 400"Hz
s .• mp-iiug rJte with a notch freyucncy of 60 H~-

J\:17.17 fktennine aqu;;dn\lic approximati(IJJ ao~:---.- fiJX +a-zx" to the cubic function D(x) = 2.2.rJ- 3x2 + 0.5
de!ined for the r;onge - 3 :'S _t <: ':"by mioircti7ing :he pt'ak value of !he a~solure errm ID(x_l- <l(J - a 1x - ap: 1 /.

max
-3-<0x:OJ.S ·
D(>:;-ao-<•;x-a,x 2 1.
u >ing L":c RemeL aigorithc. Plot the "'rror fuflction after ronvergence of the algorithm.

M7.1H: Design usmg the windowed f-'ourier series approach a lincar-pha~e FlR lowpass filter with the following
.>pecfication<;: pa~~hund edge at 2 omlhec, stopband edge ;;t 4 I'.J.d/sec, maximum passband attenuation uf 0. I_ dB,
1r ini!llllm ~tophand atte-nuation of 40 dB, aml a sump ling frequency of 20 rad!sec. Use e:ach of the foliO\O<ing windows
for the design- Hamming. Hann, :md Blackman. Show 1he impul<;,; re.-;por.se coefficients and plot the gam respnu.~e
of the de>.ignd filten tor cr.ch Cd~- Comme;t on your ~emits. u,, nol use theM-file f irl.

lVI 7.19 Reyeat Exercise M7 .18 u~ing !he K:us.er window. Do not use 1he M-file f i r·J .
512 Chapter 7: Digital Fitter Design

M 1 ,1.0 De<,ign u.~ing the windowed f.."lurier F>eries approach a lifltom"-phase FIR low pass filter of lowest order with
the following specifications: passha~d edge at 0.3n, stopband edge at 0.5rt", and minimum stopband attenuation of 40
dB. Wh:ch window function i:. appmpriatc for this design? Show the impulse resporne coefficients and plot the gain
re.s~w.se of the designed filter Comment on your result~- Do tN1 u.;e the M-ti!e fi r1.

M 7.21 Repeat Exercise M7.20 using the Dulph-Chebys~cv- window. Du not use the !1.1-file f _i_ .::·1. Compa:re your
re.>UI!~ with that (Jblaifl<!d m E.<.crci:>c M7.20.

M 7.22 Re-peat EJ<.e~cise M7.2{} using theM-file :: _j_ r:!_. Compare your results wi!h that ohtained in fuerci;;e M7.20.

M 7.2!> DeMgn a Jinear-plta;;c- h!ghpass FIR filter oflength 30 with a passband edge at b)p =
0.5JT using the frequeocy
sampling approach, Show the impul~ response ;;oeftldenL<; -and plot the magnitude re~;ponse of ihe designed filter
using M -\TLAB

M 7.24 Design:> liw.:ar-phase ba1Kipass FJR filter of order 40 witl; passband edge.'> at WpJ =
0.4n and Wp2 = 0.6JT
uhlng the frequency sampling approach. Show 1he impu\5!:: n:~pOJI:.e <.:uefllcients and plot the magnitude response of
the designed filter usmg MATLt\JI.

M 7.25 L"~ng tile frequeocy sampling upproach redesign the length-37 linear-phase lowpass filter of Problem 7.62
by including a transition band with one frequeru.:y sample of magnitude 112. Plot the rnagmtude respom;e of the new
filter using MA TLAB and compare ir with that OC.tained in Problem 7 .62.

l\f 7.26 Repeat Exerci.~ M7.25 by mclud.ing atrnnhllionband will! tv.·o frequency samples of magnitude 213 and 113.
respectively.

M 7.27 De~ign the linear-pha-se FIR lowpas-; f,lte~ of Exerdse M7.l3 using the fuPCtion fi Tl of MArLAR. Use
ea;;h (})'the following windows li:Jr the design: Hamming, Hann. Blackman, and Kaiser. Sho-x the impulse respon~-e
coel'!lt·i.:nt-s amt plm the gam n~"pon,;e; of lhe deo.'ogned filters for each case. Compare yow results with those obtained
in ~xerr·iw~ M7.1li andM'U9.

M 7.211 Using !he M-file [ i r :_ design a linear-phase FIR h"1ghpa&~ filter with the follo-wing specifications: stopband
edge at 0.45x. -pa:-;sband edge at 0.6n. maximum pas&band attcm;ation of 0.2 dB. and minimum stopband attenuation
of 45 dB. t:~e e<t<.:h nf the- fpllowing windows for the design: Hamming. Harm, Blackman, and Kaiser. Show the
1mpul~ response coefficient~ and plcn !he gain response of the designed filters for each case. Comment on your
results

M 7..29 U ~m,g the \.1-!ile f i ,---:._ design -.. linear-pha.!>-c FIR bandpas~ filter wltb the folkw.ing specifications: stopband
o:d~>e" ai 0.45:;r and O.Kn. pa<,.~band edges at 0.55rr and 0.7;r. max:mLm passband attenuation ofO.! 5 dB, and minimum
swpband attenuatirnl of 40 dB. Use each of rhe following windows for the design: Hanuning, Hann, Blackman, and
Kaiser. Show thee impulse r-esponse coefficients and plot t~e gain re.spO£Jse of the designed filters for eJch ;.::ase.
Co1nmenl on youJ remllS.

M 7.30 Design a two-dlanncl cr-ossuver FIR lowpass and highpa5'> filter pair for digital audio appliCiltions. The
lowpa~<> and the tughpass filter~ .are of length 27 and have a crussover frequency of iO kHz operating at a sampfing
rate of SO kHz. U~c th~ funct;on ;: i r 1 with a Hamming window to deo;ign the k».Vpass filter while the b:ighpass filter
lS derived from the lowpass filtt:r w;mg !he delay-complementary property. Plot the gain response$ of both filters. on
the same fi,..:ure. What is the minimum number of delays and multiplien< needed to implement the crosso~network?

M 7.31 De;,_ign a three-channel crossover FIR filler system for digital audlo ~lications. AIJ filters are of length 3-t
and ;~mte at n S11mpliug rare of 44.1 kHz. Th~ two crossover fre-quencies arc at 3.5 kHz and 9 kH7~ respectively.
L;'iC the funct10n f i r i with a Hann w·ndow to d~tgn the low pass and the highpass filters •.vhile the bandpas~ filter is
derived from the towpass and highpal>s filters using the- delay-complementary property. Plot the gain responses of all
filler<: on the same figure. What ls the mlllimum number of de~ays and multipliers needed to implement the crossover
rn:twork'~
7.13. MAn.As Exercises 513

M 7.32 I'he M-file fir 2 is employed to design FIR filters with. arbitrarily shaped magnitude responses. Using thi3
function. de:.ign an FIR filter of order 80 with three different constant magnitude levels: 0.5 in the frequency range 0
tv 0.4. 0.3 in Ihe frequency range 0.42 to 0. 7 • .md 1.0 in the frequency range 0.72 to 1.0. Plot !he gain respon.>e of th.':
de,;.igned fil!er.

M i.33 Design tbe linear-phase FJR !owpas~ filter of Exercise M7 .18 using the renez fWICtioo of MATLAB and plot
ib magniUJde response.

M 7.34 Design t_l)e linear-phase FIR highpas.s filter of Exercise M7.28 using the rer'I€Z function of MATLAB and
ploLJts magnilude response.

M 7.35 De~ign the linear-phase FIR bandpass filter of Exercise M7.29 using tbe remez furu:tion of MATLAH and
piOLJt~ magnitude respons.c.

l\1 7.36 Design a lcngth-32 discrete-time FIR d1fferemiator using the remez function of MATLAB and plot il;.,
magnitude response.

M 7.Y7 Design a 28th-order FIR Hilbert tnmsforrn<:r using the remez function. The passband is fr{'m O.O&n tn
0 92IT _ The two st-npbands are from O.Ol;r W 0.07rr, -and from 0.94r torr. Plot its magnitudeTespons.e.

:\-[ 7.38 Repent Exercise M7.36 WJmg the M-tile -r l :r ~ s.

!a) (b)

\
o,.,ccc---~'c-----------------<>-+-w
w" _,_,_-_w_, "o~~"""''""·,_---------------------;e--w
M M M M

{c) (d)
Figure P7.3

M 7.39 ln this o::x-crnse, we- considerthe des.ign of computatiun:JJly efficient linear-phase narrowband FIR filters using
the inrerpolatedfinite impulse resJUmse {IFIR) approach [Neu84b ~- Let H (z) be a lowpass. FIR transfer fllllction with
a pas:"5band edge at wp and a stopba11d edge at w_s. ag ~hown in Figure P7.3(a) Ttte magnitude response of the tmnsfer
function H (zM) is then a~ shown in Figure P7.3(b), where M is a positive integer. If we cascade H {zM) with another
514 Chapter 7: Digital Filter Design

lowpa;s tilter F(::.i with passband edge at wp/ M and stopband edge a! (2;r ~ &.>.1 ) / M, as shown in Figure ?7.3(<.:).
the c;~.,c.tde G(;) = H(:cf.f)F\z) is a sharp cutoff lnwpa!i~ filter a~ indi..:ated in Ylgure P7.3(d). By choosinj: F{zf
appropnately, aroy one of the M passband;; of H t.:u) can be retained and ihe remaining (M -- 1) passband& attenuated
res Lilting in either a narr;1wband lowpa~»., or highpas~. or bandpa&s iihtT.
By rcve!"'iing the above proce%, ».e can tle•elnp the spee~ticatmn, nf H ( <:) al'd F(z) from the desired ;o;pccttications
of ·:he overn!l narrowband lilter G(z) obtained hy the cascade red:zadon. SitlCe ~:he gain in dB 0f a ntM·arle i!> !he
sur:: o+: the gain.~ in dB of the iruiividur.l sect~ons, the p<lO.sbam.l ripple:-. of H{;) and F(z) can be made onc·~'mlf of tlu:
de~ired passband ripple Sp of the n;UTowb:md filter G\zl_ On the other h.:~nd. the M:opband ripples of H(;:;_l and F(:;~'
car be made equal to the de~ ired passband ripple Js of be narrowhand filter G {;:J for simplicity. The U)ffipu!Ai;mal
complexity of the ca~ade G(:) i:> thu:; equal to that nf H(zj together with that of F(z_).
Usmg the above appwach. Cesign a linear-phase lowpass FIR !Iller G(z) m the tDrm of II {eM) F ( z), wher<O" H(; ::
anti F(;:_) are Hnear-ph.aseHR filters.. The specifical:ons fm G(zl arc a> follows: wp/1'.1 = RJ5:r. w.jM = 0 27!",
&p = 0.002. !5, = 0.001. Choru;c the largest p-~ible value forM_ Compare the ._:omputational-complcxitie'i of G!:: J
de~igned a-. a single st-a~ and des.igncd as a ca-;cade H (:M _,PI:!.

M 7AIJ Another approa<:h to :he Je~ign of a computatiUfla!ly efficient FlR fitter is the prefil/er-equalner nwthDd
{Ada83i. In this method :irst a compututioruU;y effident ?TR p.-efilter H\;_! with a frequency response rcasGnably
dose to the desired response is seleded_ Next an FIR equalizer FU} is designed so that the cascade of the prefilter and
the eqaalizer meel~ the desired .~pecihcations. An .attrdctive prefilter structur~ for the design of a lowpa:,..;; FIR filter i~
the recur-sive rur.ning <um (RRS) FlR filter of order N which ha.~ n tramofer function
I - ;:--(N+:l
H(;) = --~- c J

The fir!'./ -wll of ti-:e fre-quency r~ponse of the RRS tille~ is at w = 2n: /IN+ l )_Thus, if tbe desired :;to_pbami edge
i:,. 111 w,., the order of lhe RRS fitter should be cho<:en as N ~ br ;,_,, If N i~ a fraction. then hot!-. the integer value~
nearest to 2n! m, .Jre goud candidates for the nrdcr of L1e RRS hlte1. The Par-k.-McCiel!:m algorilhm can be modified
to incuTporate the fre.qutnCy response uf the RRS filter in the wcightmg ftmctivn of the error funclion W{<"J'"'-l of
Eq. (7.45). Using the prefilter-equalizer approac~. de.tign a computationally-efficient narrowband FIR lowpas.~ fill-er
with the ful!owing "-pecifications: "'P = 0.042;r, "'·' = O.l4n:, ap = 0.2 dB, and o: 5 = 35 dB.
DSP Algorithm
8 Implementation
There are hasically two (ypes of dtgital :-.ign.al pru.:e:-.sing (DSP) algorithms: llltering algorithm& and signal
analy-.is algorithms. These algorithms um be based on ei:her the difference equation, both recursive anC
nonrecursive, OJ the d:screte Fourier trdm.fonn (DFT) and can be implemented in any one of the following
forms: hardware, firmware, and software. In the hardware approach, the algorithm may be implemented
using digital ci1 cuitry, such as the shift register to provide the delaying operation, the digital multiplier, and
the digital. adder. Aitemative\y. a special-purpose VLSI ch:p may be designed and fabricated to implement
a specific fihering algorithm. ln the fir:nware approach. the algorithm i.s implemented on a read-only-
memory (ROM) chip. Additional. control circuitry, and smrage registers, are usuaHJ• needed in the final
hardware ur firmware realization. Finally, in the software approach, the algorithm is implemented as
a computer program on a general-purpose computer such as a workstation, a minicomputer, a personal
computer, or a programmable DSP chip. This chapter is wncemed with the implementation aspects of
DSP algorithms. We ti~t examine the two major issues concerning all the above types of appmaches to
implementation. We then discuss the sofLw~e implemcnta:ioo of digital filtering and DFr algorithms on a
C(tmpurer using f\.1ATLA.B to illustrate lhc main points. It is followed by a review of various schemes for tf::c
representation of numbers and signal variable."> on a digital machine. The number representation scheme is
baste to the development of methods for the analysis of finite wordlength e:tlects considered in the following
-chap\~r. Next, we review aigorithms that are employed to implement addition and muhlphcation, the two
key arithmetic operations in digital signal processing. \\ie then briefly review operations developed to
handle ovcrflm\. Finally, two general methode; fm the design and Implementation of umablc diglt:al filters
.:ire outlined, followed by .a discus::>ion of algorithms for the ;{pproximarion of certain special functions. A
d<~taikd discw;sion of hardware, firmware, and DSP chip implementations is beyond the ;;cope- of this book.
intOrma!ion on programming_ the DSP ct,ips can be found in the books and application notes published by
the manufacturers of the~ chips. Discussion on selected DSP chips can also he found in the following
books a~d cbaptcrs [Bab95], [Cha90J, [Fad93J, fHad91}, !lfe93j, [Lin87], ftv1ar92]. tp"dp90].

8.1 Basic Issues


We examine first two :,pecific problems !hal may be encountered before a digital filter is actually imple-
m~nl<.'d. The tirst problem is concerned with the computability of the equations describing the stru<:ture,
and the s.econd problem is cunccmed with the verification of the structure developed to :-ea.lize a prescribed
tmnster fundior..

R 1.1 Matrix Representation of th-e Digital Fitter Structure


A:-; indic<tted in Chapter6, a digital filter can be described in the time-domain hy a set of equations relatin_g
the: output ~uem::e to \he :nput sequence and, in some ca'>Cs. one or more internally generated sequences.
The ordering of these equation<. in computing the output ;;amp-les is important, as discussed nex!.

515
516 Chapter 8: DSP Algorithm implementation

w w2
x -~ +'r'--'----,~--~G+r'-------r~----.

'

Figure 8.1; A ca;Kadcd lattice cligiral filter structure.

Consider the digital filter structure of Flgure K I . We can describe thi.s structure by the folJowing set
of ·~uatians relating the signal variabies Wk{.z), the output Y{:;), and the input X (z);

W!(Z) = X(z) -an'5(Z). (K Ia;·


W2\z) = WJ(Z}- JW3\Z.), (8.lb)
W3(z) = z-J W 2 (z), (8.lcj
W4(.:)= WJ(Z) + FW1(Z), (8.ld}
1
W5(Z) = z- W4(Z), {8.le)
Y(z) = fl w, (z) + yWs(z). (8.1f}

In the time-domni:n. the above set of equations is equivalent to

w 1 [nj =x[nj -aws[n], (8,2a)


l.Uz!nj= w,[nJ- Ow3fn], (8.2b)
WJ!nJ = 1L'2[n- 1l, (8.2c)
w4ln] = W:J.[n] +£-u:z[n], (8.2d)
wsfn] = w4[n - ll, (R.2e)
ylnl = tfwt(n] + yrvs[nl (8,2f)

The above set of equations do not describe a valid computational algorithm since the equations cannot be
implemented in the order shown with each variable on the left side computed before !he variable below
is computed. For example, computation of U'l [n] in the first l;tep requires the knowledge of w.s[n} that is
computed in the fifth step. Likewise, fuecomputation of w2Ln] in the second step requires the knowledge
of WJ{n] that is computed in the following step. We call the ordered set of equations ofEqs. (8.2a) to (8.2t)
noncomputabk.
S~,.;ppose we reorder the above equations and write them in the form

w:o[nj = wz[n- n (83a)


W5[nj = w4[n- f]. (8,3b)
w!ln] = xfnJ- aws[n1. (8.3c)
w2in] = w:[n]- tltt•J[n}, (8.3d)
y[n] = tfwJ[Il] + yw5[n], (R.3e)
W4(r.j = W]fn] + BU..'2[n]. (H3f)
8.1 . Basic issues 517

It can be seen that the above ordere-d set of equa1ions now describes a valid computational algorithm
since <he equations can be impkmente::i in the sequential order shown, with each Yariahle on <he left side
t:umputed before the variable below is computed.
In most practical applications, the equations chamcterizmg the digitaJ filter can be put Into a computable
lJrGer by inspection. lt is, however, instructive to examine the computability of the equations describing a
digital filter in a more formal fashion, which is de&eribed next !Cro75J.
To this end, we write the equations of the digital filter in matrix form. Thus. a matrix representation of
Eq~. (8.2a} lo (H.2fJ is given hy

~l
0 0 0 -a

y;Jin l x[n] 0 0 0 u;1ln]


w2!n] 0 u:2Lnj
W3[nj 0 0 () 0 0 0
wlfn]
W4.inj 0 +: 0 0 u 0 ll'4.[nJ
ws!n] 0 u.·5!n l
y[n] 0 0 0 0 0 0 yln l

0 0 0 y 0

l
r~gg~~~l
ll'dn- lJ
wz[n - lJ ~

+ 000000
WJ[n-l]J U-·4[11 - 1J , (8.4)
[J 0 0 1 0 0 JJ.,'5[n- 1]
0 0 0 0 0 0 J vfn- lj
whic-h we can write compactly as

y[n] = x[nJ + Fy[nJ + Gy(n - 1] (8.5)

where
WtLni
w2[nj
U'J[n 1
ytnf = w.dnl
xfnJ = (8.6a)
u.-.s[n]
y[nl _
0 0 0 -a 0

0 -6 0 0 0 0 0 0 0 0 (}
0 000000
0 0 0 0
0 1 0 0 0 0
G=
0 £ 0 0 0 0 0 0 0 0 0
000100
0 0 0 0 0 0 0 0 0 0 0

0 0 0 y (Jj
5HJ Chapter 8: DSP A:gorithm Implementation

If we examine Eq. (8-.4), we observe that, for the computation of the present value of a particular SJgnal
variable, ;:he nonzero entries in the corresponding rows of :he matrice~ F and G determine the variables
whose present and previous values are needed. If the diagonal element in F is nonzero, then thei::omputatiun
of the presenl "Value of the corresponding variable requires the know !edge of its present value indicating
the [Jresence of a delay-free loop making the structLJ"e totally noncomputable. Any nonzero entries in the
same row above the diagonal of F imply that the computati-on of the present value of the corre:;ponding
variable requires the pre~ent values of other variables thm h<n·e not )'et been computed, thus making the
ser of equations noncomputabie.
[t follows therefore for cornpmability all element<; of the F matrix on the diagonal a~d above diagnna:
mm,,t be zeros.
fn cheF matrix of this exampk, the diagonal element~ ax all zeros. indicating that there are no delay-
free loops in the structure. However, there .are nonzero entrie:o. abo"Ve the diagcnal in the first and secorn.J
rows ofF, indicating that the set of equarions of Eqs. (8.2a) to US.2f} is not in proper order for computation.
On the othe:- hand. the matrili representation of Eq!.. {8.3-".) w {8.3f) results in

0 0 0 j) 0 0

11;o;[nj 0 0 0 D 0 0 0 w,[nj
wsfnj
wrfn]
w2fnl
~
xfnl
0

0 + _, 0 -a

0
() 0

0
0

0
0

0
I 1vsfnJ
wt[n]
i w2!nJ
yLnl
W..J{n]
0
0 0 y ~ 0 0 0 l y[nj
U."4fnl

L 0 0 ,- {) oJ
r
0 0 0 0 0
' w..,[n- I]
''

I l
() 0 0 0 i u:•:;)l - I]
0 0 0 "
0 0 0 wi[n- I ]
(R. 7)

+l 0
0
0
0
0
0
0
0
0
0
[)
0
()
0
0
0
0
0
w:[n- 1]
yfn - lJ
L w41n - 1]
for whi.:h the F m;;trix satisfies the compmabJli.ty condition. and thus, the equations describing the filter
are in proper order.

8.1.2 Precedence Graph


We now describe a ~imple algorilhm for te~ting the computability of <1 set digital filter structure and for
developmg the proper ordering sequence tOr a set of equa!ions describing a computaCle structure. To
this end, we lin,; form a signal flow-graph description of the digital filter ;;tructure. Jn a signal flow-
graph [Mn:>60]. the dependent and the independent :;ignal variables are repre..<;ented by nodes. whereas the
mulliplier and the dday unirs are represented by dirl"cted hrunrhe.s. In the latter case, the dire\.·ted branch
has <m attached symbol denoting the branch-gain or lhe trammi!tance, which for a mu!tiplier branch is
the mul!iplier coefficient value and for a delay branch JS. <;imply .: -I. For example, the signal How-graph
reprc>;entation of the digital filter structure of Figure 8.1 i:.; a,; shown in Figure 8.2.
As the outpul of the delay branches can alway<; be cnmputed at any instant since they are the delayed
vatm:s of <bcir respective input signals computed at the previuus instant, we remove al! delay branches
from the complete :signal How-graph of the digital filrer structure. Similarly, aU branches coming out ot
8.1 . Basic Issues 519

-U

-3

Flgure 8.2: Sigr.al llew-g:-aph representation or ~h" d1gital filter ~!rudure of Figure H. I.

-0

y[uj

-S

Figun- 83- Rcdllce~ s1gnal l'low-grapr. n~aincd by remvving the branches J:lOing out of lhe inpul node and the delay
bra~he, from !he c.lgnal How-graph of Fi_Rure 8 2.

the inpur nnd<O! are also removed since the input variables are always availa~e al each instant. For our
example, the rcsuiling reduced signal flov.-gruph is as -:hown in Figure 8.3.
We now group the remaining nodes in the reduced signal ftow-graph as follows. All nodes with oniy
outgui:1g hnmches are grouped into one set labeled {.11/1 }. Next, we form the set {,~<\/z} containing nodes
that have branches coming in from one or more of the nodes !n the set jlV; I and have outgoing brancbes
:o the other :Jodc<;_ We fhen form a ~t {A·3l containing nodes {hat have branches coming ill from one or
mo:re of the r:ode:. in the :sct'i {...t\/1J and {Af2 } and ha,.·e outgoing branches to the other nodes. This proce:.s
1s conllnued until we have a set ;. .\'1 l containing orly nodes with only incoming braoches.
Since the signal v.uiabks belonging to the set {_,Vl} do r.ut depend on the present values of the other
>-ig:iill v::riablc.s, these variables should be computed firs-t. Next. the signal variables belonging to the set
f;V?J ('<W be computed since t!:te) depend on the present values of the signa! variables contained :n the
<.,e\ lA(1 j -;hal have already been computed. This is followed by the computation of the signal Yariables
it: the seh f-'V-,}, UV4}. etc. Pinall:, in the last step, !he ~>i:;nal variables belungi.ng tn the set l:V'ti are
computed. This: process of .o;equt:nliai computation ensure~ the development of a valid computational
alglllithm. However, ifthere is no final se1 lA'}} ::ontaining oniy incoming branches. the signal flow-graph
1-.. n.)ncomput:ili!e. The r<:arranged signal flow-graph \\-ithom the deiay branches and with nodes grouped
a~ indic;Jted abuvc is called aprecedena graph fCm75].
Fm our example, the pertinent groupings of node variable~ according to their precedence relations are
:.h follm"<s:

{.-'V--1) = {w3 ~nJ. u:5:nn,


{/\(:] ={wt!nlL
!lvJ! = {w;>lnJL
520 Chapter 8: OSP Algorrthm Implementation

_,
r-----,

'w [nj:
: 4 :
'' ''
: _v[n1 :

Figure- 8.4: Precedence graph of Figure 83 redrawn with :;igr.a! variabk:o.. gwuped according to their precedence
rdation5.

The precedence graph redrawn according to the above groupings is ;ndicated in Figure 8.4. Since the final
node ;..et {...V4 } has only incoming nudes., the strucmre of Figure 8.1 has no delay-free loop.<>. Tht---refore,
fnr our example structure, we can compute first the signal variables w3[n 1 and ws[nJ in any order, then
D.lmpute the ._ignal variable w 1[ n J, followed by the compctation of the signal variable w2 [n L and finally,
compute the: ~ignal va"iables u>.t.[n] and y(n] in any order to arrive at a valid computational algorithm.

8, 1.3 Structure Verification


An important step that needs to be considered in the hardware or software implementation of a digital
transfer function is to ensure that no computational and/or l'ther erwr;; have taken place during the course
of the realization process and that the stnu::rure obtained is indeed characterized by the prescribed transfer
function H (.<;). A simple technique to verify the structure is outlined next [Mit77a].
Without any loss of genemlity, consider a causal LTI d~gital filter structure characterized by a fourlh-
order transfer functiOfl

P(z) /)(} + P1C 1 + Pl'--Z + P3Z- 3 + P4-Z- 4


H 17 1 -- ______ "_c~cc,_~"""-o-c_~o,--cccc'o- (8.8)
,~_ -- D(:_) - l + d1 Z 1 + d22 2 + d3:;; 3 + £4z 4 .

tf {hlr. ]J denot-es its unit sample response, then


oc
H(z) = Lhln]z-". (8.9)
n=fl

From Eqs. {8.8) and (8.9} it follows that

(8 .. 10)

or_ equivalently. in the time-domain by th-e convo!·,uion sum,

Pn = h[n]@d,. (fLll)

w·,lich t:Xpli<.:illY shows the relation between the numer-ator ;md the denominator coefficients of the transfer
function H (;:) of Eq .. (8.8) and its impulse response samples_ Si.nce the total number of tran-s-fer function
8. 1. Ba~c Issues 521

coefficients is nine, we need only any consecutive nine equations of the set of Eq. (8.11) to have unique re-
lations between the transfer function coefficient!> and the impulse response samples. Writing out Eq. (8.11)
explicitly for n = 0. L 2•... , 8, we obtain

Po= h[OJ,
Pl =hi I]+ h[O]dt,
P2 = h[2] + h[lld1 + lt(O}d2,
PJ = h[3] +h[2jdl-+ h[l]d2 +h[OJdJ.
P4 = hL4l + hL3]d: + h[2Jd~ + hfl]d; + h[O]d4.
0 = h[S] + h{4ldi + h[3]d2 + h[2JdJ + hfJ]£.4,
0 = h[6l + h[S]d, + h~4Jd2 T h(3JdJ + h[2]t4,
0 = h[7j +h[6]dt + hf51d2- h[4]d3 + h(3J~.
0 = h[S] +hf7]dl +h[6Jd2 + hf5]d3 +h[4Jd4.
ln matrix form the above equations can be- rewritten as

h[O] 0 0 0 0
h[IJ h[O] 0 0 0
PO h[2] h{ I] hf01 0 0
PI
h[3] h!2] h[l] h[O] 0
Pl
h[4] h!3] h[2] h[l] h[O]
p,
p, - (8.12)
0 h[5] h]4J h[3J h[2J hf!J
0
h!6] h[5] h[4] h[3] h[2]
0
0 h[7] h[6] h[S] h[4J h[3]
h[8] h[7] h[6] h[5] h[4]

In partitioned form, Eq. {8- 12} can be reexpre-ssed as

~2
H,
[
p

0 ]=[ h ][ -l J (8.13)

where

p= [El ·= n~J O= rn (8.!4a)

[h[O] 0 0 0

g~ J. • =['[h[6]
5)]
h[l] h[O] 0 0
H1 = hf2J h[ I] h[O) 0 h[7] .
h[3] h[2] h[l] h[O]
h[4] h[3] h[2] hill h[O[ h[8]
522 Chapter 8: DSP Algorithm Implementation

H _
h[4] h[3] h[2]
h[5J h[4] h[3J h[2]
h[l]] (8.14b)
z- [
hf6[ h[5] h[4) h[3] '
h[7] h[6) h[Sj h[4J

Equation (8.12) therefore can be written as two matrix equations:

(8.15}

ll=[h H2J[~]. (8.16)

Solving Eq. (8.16), we obtain the vector d composed of the denominator coefficients:

(8.17)

Substituting Eq. (8.17) in Eq. (8.!5), we arrive at the vector p containing the numerator coefficients:

p= Ht [ -H~'h l (8.18)

In the general case of an Nth-otde1 IIR transfer function, knowing the first 2N + l impulse response
samples is sufficient to determine the transfer function coefficients. Here, the vector pis. of length N + 1,
the vector dis of length N, the vector his of length N, thematrixHt is of size (N + 1} x (N +I), and
the matrix H2 is of size N x N.
We illustrate the above approach for the reconstruction af the transfer function of a causalllR filter
from its impulse response coefficients in the following example.

' 1
82" Structure Simulation and Verification Using MATLAB 523

This gives u-. a straightforward me3ns of finding the transfer fun:::non of ;my discrete-time strucmre
knowing the fir;! 2M + 1 :;ample:. of lh!•1 1}. where M i-. the L1rder of the transfer function H (::. L This ap-
pnx~~:ll to .4lucture vcriti~:.~tion '" tilustr.ateJ later in Exampko; g.4 a.r~ 8.9. It can aho bt: used to determine
the e!fe<.:t of cocfticierl.l quantizalinn by computmg the tran..,fer funG1ion r"l.'alized with the multiplier coel-
b:ieub. quantued to the de-.ircd number of bits. This w..-tual transfer function i;, then u::.ed to compute th.:-
frcqucncy rcspon~e. to determine tZle ach.wl pole location c.. etc. Another application of the above method
is in the det.:rmi.nation of noise transfer ~-unctiom for corr.puting 6e output nmse power due ~n produd
rounJ.-oif~ in fixed-point d:gital filter implementations con;;it.lered in Section 1},6, and in the determination
l'f ;.;caling. tr:m:o..fe;- functions m~dcd for dynamic range sca\;::g Ui~cu,;sed ln SectioE 9.7. In mo;..t cases, we
can <.!'.;;umc that the dctmminatot ct,t:Hi;,;itnt~ <~rc knO\vn; therefore. the solmion of Eq. (8. 17} that requires a
matnx i:rver~imT c.an be avoide<i. The numerator coettk:ienh arc e:.si!y found from the ilr;,t M + I -s.J.mp!e.'>
of {h;n J} and using Eq. (8.1 B).

8.2 Structure Simulation and Verification Using MATLAB

A: indicakd ear!ie.r, \Ve concentrate in this book. onl;· on scf:ware :r::plementation of DSP algorithms. In
th;;, ~cti.m, \\-e Ct•nsidcr only the nnple:nentation of digita~ tiitering algorithms, The following section is
devoteD to the implementation of di<;crete Fouriertr.mstOrm algorithms.
The software tmplemcntatmn of a digit'-11 filtering algorithm on r computer is uftefl carried out hefore
the algotithm is implemented in a hardw;tre form to Yerify that the ulgorithm chosen do.:-s indeed meet the
goals <1f the application on hand. Moreover, such an implementation is adequate if the application unde
consideration doe;;. not require re"l-lime signal proce.~sing.
F·H computer s.imulation, we buo.ica!l} describe the structure in the fonn of a set of equations. Thc;;e
equation~ must be ordered prupuly lo en.s.u:-e computability. For s.imphcity, !he procedure is to expres~
:he output variable of each add<:"r ctnd the filter output vari«blc in terms of all incoming signal vuri:1hles.
For exampk. !\.::J the -:trueiure of Figure 8.1. a valid computational algorithm involving the least number
uf cquanon;; is

wJ[n] = xfn]- 2tLtln - l ] ,


wzlnl = u:dn)- i5u•2in- 1},
udn- l! + FU.'l£n!,
!L'4!nj =
y[nl = flwdnJ + }"'li.'.t[n- ll-

ThO! abo\e set nf equation<> i!-' evaluated for ;ncreasing value-; Df n starting aT n = 0. At the heginnwg, !he
mitial.:ondition~ w2f- 1! and lL'..!:- l] can be set to :my de~in:d values, which are typically Lero. After the
c.:nmpll!dwn uf lhc last equatwn at time in:omm n the computed va:ue:;. of w:;[nJ anti w.:.[n 1 re~i:~cc the
•·<due~ of u;2{n ~ l i and mj:rn- lj before the set of equation~ are ev:1luated for the next Iime inst~nt n + J.
In Chapter 6. we oullined a numhcr of methods for the realization of hoth FIR anti HR digita! transfer
functions rc:;ulting in a variety of structure;... We rc~trict oar attention to some of these. stru;::tures to
demoJ:strdte the simulation of di,gital filters using M"\ rL...,B. The structure being simulated can be verified
by :::ompLting it~ tran<>fer function Llstng the method desctibed in So;;ction K l.J. To this end, !he M-file
:c:; ~ J<_""Ie t g1ven beiow can be used.
624 Chapter 8: DSP Algorithm Implementation

func~ion \_p,dj o:: strucver\ir,N)


~ ~ ze~os(2*N+l,N~l);
t-r•::,l! = ir';
fo:r n = 2:N~l;
H(:,a) = :zeros(l,n-1) .:r(l:2"';N.,..l)-n•::•:
end
i--~1 zc~ros (N+.:;_, N ~ll :
fork= "l:t-<+1;
El{r:::,:) ""'H(k,:)~
en::'l
P.3 = zeros;N,~+l};
for k = l:N;
H3(k, :I ~ H(k+N+l, :) ;
end
h2 = H3{:,2:N~l);
;~,f = H3{:,1);
% Compute the denominator coefficie:>ts
d = -':inviH2) J*l1f;
% Compur_e the ru.::mers.tor coeffici er.ts
p Hl*[l;d];
c {l; d];

8-2.1 Simulation of Direct Form IIR Filters


The M-file filter in the Signal Processing Toolbox of MATLAB basically implements the IIR digital
filter in the transposed direct fonn IJ structure shown in Figure 8.5 for a third-order fiher. 1 As indicated in
th:is figure, d ( 1 j has been assumed to be equal to I. lf d ( _:;_) -=/=- 1, the program automatically normali:zes
.all filter coefficients in p and d to make d ~ 1 l = 1.2 The basic forms of this function are as follows:

y = fllte.::-(nilrn,den,x!
:y.sf] = filter:nurr1,den,x.s1)

The numerator and the denominator coefficient~ are contained in the vee; on; numand den, respec[ive!y.
TIJese- vectors do not have to be of the same size. The input ·vector is x while the output vector generated
by the filtering algorithm is y.
As indicated in Figure 8.5, the function f i l te r ~i.mulates the filtering operation in the time-domain
in accordance with the following representation of the digital filter:
sJ,:n+l) p(4):x(_!"!) d\4/y(n),
s2(n..-ll p(3)x(rd d(3ly(n) • s3 (n),
slin+ll p(2)x(n) d\2)y(nJ s2 (nj,
y(n) p(l:.x(c)
" sl (n)

In the second fonn of the function f il te:r, the initialcooditi:on:s of the delay (state) variables, sk (n),
k =: J. 2, ...• can be specified through the argument sl. Moreover, the function filter can return the
final values of the delay (state) variables through the output vector sf. The size of the initial (final)
condition vector s i (sf) is one Jess than the maximum of the sizes of the filter eoefticient vector~ n~m
l5ee Section fi_4.J for the devdopment of llR direct form structure~-
1tt .'\l'nuld be noted tl:at in Figure 85 we ha\-e uY..-d the MA TLAB mxat<on~ for vect<J£ elements. mSlead uftbe nol!alions used elsewhere
ir: !te text for representing filler coefficteru: and signal variable;.
8.2. Structure Simulation and Verification Using MATLAS 525

Figur-e 8.5: Transpo;,ed direct form H IIR s<ructun;.

and deo. The final values of l~ ~tate Yariahles given as vector sf are useful i.f the input vector to be
processed is very long and need;, 10 be segmented into smzll b1ocks of data for processing in ~tages. In
sut:h a situation. after the ith block of input data has been processed, the final state vector sf is fed as the
initial state vector s i. for the processing of the (i + 1)th block of input data., and so on.
For simulaing a cau»dl UR filte-r realized in the di.rect f~·rm U structure, the function direct 2 giver
below can be employed.

func::ion ty,;:-,J:] - d.l.re:::t.2(p,d,x,s~);


't Y = DIREC-:::'2 (?, D, X :• f i l te:-s i::~pu t data vee t.or X wi. th
% the fi:..ter described Oy vecLors P a.c:.d D to create t!:e
% f:ltered Gaca Y. The f i:::. ~er is a · oi :-ect Form I:::·
% j_r:;plement.at.icn of che differer,ce eq'.lation:
% y(l:) = p(l)*x(n) + p(2)*x(n-l) "- ... + p\np+l)*X(n-np)
1i d(2i*yln-l) - d(nd+l)*y(n-nd)
'?, ~Y,.=:Fl = DIRECT2(P,D,x,s:::J gives ac.,:::ess to initial and
% ::inal condi:::ions, SI and SF, of the delays.
3len = length(dJ; plen - lengt~lp);
;;; = ma.x\dlen,plen); ~ =l.engo:h(x);
s[ = zeros(l,K-l); y = zeros{l,MJ;
it na:r-)"in 3,
sf , si;
c~nd
:..f dle.l < p.len,
d [d ze:::-os(J ,plen dlen) ]
e..cse
p [oze:'::·ostl, dlen plen) ]
end
p p/d(l);
=
d C./d(l};
=
for n = l;M;
t·me·w ~ [l -C.U:Ni]*fxin) s~]
K = ~wnew sf1;
y\r..) K*p'
sf= h·me1--; sffl:)J"-2);;

The! followi~g example illu.s.trates the application of theM-file filter in generating the impulse
response coeffictents of a -causal HR digital filter.
Chapter 8 DSP AJgoflthm lmJ> ems 11at1on

EXAMPLE 8,.2 D.."1L"'i''I''LDI! md plol lbr: 11M 2..1 snmplcs IJ'f lln: 111 pubc rcspon!OC :o~C~~Uel ~. fro " - 0 JD
11= ~4. of t'lc- IIR low pUS d.,gj 1 Iii es- cleiw:nbtd b)' th rnrMfeE" [un£<btlD ol £q (7_1Q I rrpmlc..J bclo IR fi'JI"T!l
s:ui~h: far di~ 1bnn impf.ef'I\CI\tntu:lft:

O.ll602It2(l + '3.:::- 1 t Ji::-2 + :--:1,


fl( } (:i!.l .,.
~ == I- 09J~C:.l4lz-1 0 ..56712r:rllt- 5 -fl t01~11)7;:-3o .
'(bc.MAfL•dJ ~111JV''I.'!II balu.ocm be ~41;1 CDmpl.llf.: lhl: in pul ~~-

t frOQE'~ 8_1
' r~ula~ Pespon~ Compu~a~ian

11 • inp·.L'L( • I!lrrp..t ... s~ response length .;ic~:tr d • 'I 1


nu::.. - ir.~n:. ~ 'loi\.Une,~.·~ tor c.oo;:.f f · c i · t~ .. ' : r
den • in~t('Denorni ~~or c~~!~~ie~~~ • ')i
~: 11 zer~911,n~lll:
oLder~ n~xClena~h~num).len9Chtco~it-J:
1!1.1 t l:oeTOl:l ( l . ord ;r;:) ~ •
y fiit rln~~,den,x,~lt;
t1tc:n Cn-l. y~
xlabel I 'Tin~a. ind.~ n'': ylabel C 'IIITipl it· sdo' ~;
td tle C''llllpu1 se r:- ~pen-; !al!inp- ·~' •
During uecation., the pmg:a,m li~ ~~ lhe J.e~L'l o l..h£ •n•tJLll!le respc:uase- 'o lx- CCillJ:XIu:..i. The pn mn1 tfl
ICQ.!XIIt& tbt: 111JI11eratnf IJlU'I ~inp.tnr vedo1'1i ln be l~d ll'l. 1tu~!le ~ C'liW be U~llg the ~
eu. Aherueculloo, il pl lbe rmptJ~~ n in Fig~ R.6 fllr e llR iilt«ffiEq. ti:U'9).

T he- digi•a.l filtcrin_g applic... li ur 0 1 Lh:' M -fill! fi 1 t · :: '" ~;nns•ll~~• 111 th~: ru..•... l ::o.c\ .......1 ..: :!~unpl~~-

IPLE u 'c Ulu~U'ab:: in ilils cxaroptc the. filtering: (If 01 '-iiJllll ~nm~ of lhc ' m or t'IIIU ,ini..AioOicb.
of eonnalimd nngubr ftc:qu ic:!. 0 hr IIDd o.Slll', a· Ulc li.)WJ;ID!.X liR d[Jtital ~•~ ~r c.q. r$.19). MA.TLAII
Prosrtllll 8_2 ji¥~ below cl:lC t-.e Cl'll.lizcd fi::Jr llus ~

11!1 Pr~ram "8_2

'\ Ill•r.:-JC:::rl'tion of P' l r.r:i~g by a r.cnorpa!'<a L 1. ~1.lter

' G911or:.r t;.e the ;J.np:.


k ... .i~'S-:
q...:.,.n<.;•l

vl - 0.S~pl~W2 ~ ~ - ~ ~pl;
A • 1.'5;B • 2.0;
xl c A .. cos(·ooti•4:k: :1)• Y.A ~ B+co~:d\42"'0. lll~
JC • xt~x~;
!i G n re~ t:ne outpll aeQ\HmcE by fi 1 tering ~he input
t::i - to o 01 ~
num ~ o.Ob~~212•[1 ~ ~ 11;
d n"' l'l -0-~J ... GH"J .;J.S~'?12-b9 u O.l01!;9lO"Ij;
V • !~1~ L(num,d n.~.~i~:
'l Plot:- t.h~ i.npu <!lnd t..ho outp'-'t . 'Q\14"'~c!'1
~uopl o t 12. 1. 1) :
r,;C: (k- ,;t!l ~ axia HO so -4 ~Jl:
x -abo t 'Tl i. ndex n'}; vlabe! t • k·o.~pllt::lcie • ~ ;
t.1.t.le (' lnpvt :Seq"J~e· I;
aubplot: ~2.1, 2. •
8.2. Structure &mulation and Verification Using MATlAB 527

0.4 'i'

~ 0.3
02

Iirr.c inde;.; n

Figure 8.6: lmpube response samples of the ITR digital filter ofEq. (8.19).

8.2.2 Simulation of Cascade Form IIA Filters


ln the following example, we illustrate the simulation of cascade form realizations of UR transfer functions.
We first verify the simulation using the method described in Section 8.1.3.
528 Chapter 8: DSP Algorithm Implementation

!i
~ "I'~"OrmT
< _, ~~~{Tit~
2f: 3ll ---:"':c-- 20 ]{) 4U 50
Time ,,oc,., Timei~n

(a) (b)
Figure&. 7: Illustration of filtering by an fiR lowpa-.s filter: (a) input sequence. md (b) output sequen..--e.

yf f \&''>:YJ/th,: A:-5 :V~"'k0 :


':<7 ";H +
"'";: sxm'\ ,:_,;: t::,:: '* ±±.n;,, s;uLp ., ; v :2"'
•) j JGit/ f r. t t" "'\Ji)L \ tfAiiL0LC/Lt L;:J&t:f>{ 2 +1\i: 2' f' f\i/!0' ~ f •l~l&lll? f£1' • f• k'

""·::.±+->}
v •
·ith!v10: r·rv.( \ .. H>< .S &£Tt
!; .. .} \1\::Jj· "A;r t<: fdo \

·-
w •. } ?d.
w'L "' t; -''"rvl; "":

0 v Y•!r?ftt
t::t t,>!GrG~ll''X*E { ";:
··+ . z·:) 7J~>x +1 ,
y} ;,:_: t.Grt (tTl <;< "'rD':
""0 j •} tr
.; i 1L
'\/\(~: ·: s v··s"';; 1 }'U:+t:-;t
"'~' t,;,;;+~+oi;,VH!: .. \tl +'i
8.2. Structure Simulation and Verification Using MATL4B 529

-·--·
0 l<J

Figure S.S: Output of the cascade realization or the lmo,·pass filter of Eq. (8.19).

8.2.3 Simulatio11 of Overlap-Add Filtering Method


A<i indicated in Section 3.6.2, a long inpu: sequence can be filtered us.ing the overlap-add method in whlch
the input sequence is segmented into a set of contiguous. short input blocks. each block is then filtered
separately, and the overlaps in the output blocks arc added appropriately ~o generate the long output
sequence. This method of filtering can be implemented on MATLAfl using theM-file ffc:. f i 1 t. It can
also be easily implemented using the second form of the M -file :E i 1 t e r. Here, the final values of the
internal variable. vectDr sf at any stage of filtering is fed back in the following stage of filtering as the
initial condition vector s i.

'%' ?·{ 0'\0: t" q;pr ::;:,


if :: {:.:L <:tr XTt\ $••h"H} X:f t 2/4\! t' { Jii!F J,ditS V' f ,'L t r r t r.r;
l
'
w'R - ;;:·. r "ri " u_l
As"" .'Sf;ts"' 2.·>;
{1! ): \ fo. *' t:tJTI
J& - +.1-.:.I;

:a-. w Q,\::AiilfZ0'?X• t' 1r ! bi:


.0 ·"" (J •• 4V"\.L' t,/ t:,'\0;T\,i'!;"f 0 "'{) :>,;;~ \;

;,. \ lh
>++ ftv., r· 1 , ;; t
530 Chapter 8: DSP Algorithm Implementation

\t ; { "

XdArHJ \0 ', ;,'} '

8.2.4 Simulation of Direct Form FfR Filters


Both f i ::_ ter and direct2 can be used also to simulate direct form FIR filter structures by setting the
denominator ve;,:tor den equal to 1. The following example illustrates the u.se of the former M-file.

l ~·r::r<n vt~t~ ill .}·


't i i ,&}It; ttl\ P/10 r;{' {r lf trd j J''X
t
'* ll+trnm:ix:uw !:hw r;;y,
'["" ;.{f fLf .. !

L, ;t; f
! 'Cif:t) \: '!&0'<:;;1LAtPiT'\&

""' .L:hP+ "' £ "»;


it:L 11 f:f';'·:untw'f*i'ft: f:i %(7'" «"'ttliL'W£¥ '\
IS " :Ji,J.vx~,;

"'"
""
,b b},j X
j8,!*J\I: <<ttG 0;71!0

0LX£@11!v{fL •• 1 " 4{ t
w?Uwl{ "1'£/liiii!R :tt"ltibcyt' ;, Gs }j(Li!\i:mtl.::"l+~}: VJ/;J!!b"A t
0;JJ:ti't4Wt ::
8.2. Structure Simulation and Verification Using MATLAB 531

~
'
! -' l
20
Trrne
30
i~de~n
-:i

-!---cc--
~ 10 40
i 50

(a) (b)
Figure 8.9: Jllustration of filtering by an. FlR low pass filter; {a) input sequence. and (b) output sequence.

_6.2.5 Zero·Phase Filtering


The Signal Processing Toolbox of MATLAB includes theM-file f il ~ :Eil t which implements the zero-
phase filtering ;;cheme discussed in Section 4.4.2. In its basic form, y = f i l <:. f i 1 t i p, d • x) im-
plement;; zero-phase filtering by performing both forward and time-reversed processing operations. The
resulting filter thus bas double the orde!" of the filter characterized by the coefficients p and d. If H(z)
denotes the transfer function of the original filter, f i l tf:_l c: then realizes a filter with a zero-phase fre-
quency respcnse given by! H(e 1 "')! 2 and therefore. has a paN>band ripple in dB and a minimum stopband
attenuation in dB that are twice those of the original filter. respectively. We illustrate below one possible
application of the function f i l t_f i l t.

tn:
~ tnif~A('l#$Ni tJ".JS £1 J1<hii!: 1\'M:;i +iiAl1ftL
it w 'hfrh

'"" .& ::;;; j "' iL++t


* 0 1 v ·;.;(f * t?"<H:n+ ;wz "'+ :ip·l 7 v 7

':if " ff .L{f':i!U.' +'!f. 1!: x V; j


)GJ::t.JS }£."; , 40 ; 0HA i d{
t'LM. ;r:t!w t +• ;;,{#4>Vtl 1vf3!p{ j L;p}p rt r
kt:MJ ;;;.;:.J&.f'h\&E:.' ";f fhv t:r;Lhi\. 04W¥1\PGI01M' tJ
532 Chapter 8: DSP Algorithm Implementation

Out;>trt sequ<:!I'OC generared by fuoctioo f ,lre..-


•,-- :~--~----_

e:) ':
e':
t Jrl'Jw,
E
< o-
'
fW"ill ~ ,~~~
I~ <Y
c i''' " C

_j -'r
-4-- ---------
,
-·~---
0 :c w u :o 2C
Time
30
in&~,-,
41) :-<l

(a) (b)

~-·--.oc---=c--
(1 \(j w

(c)

Figure IUD: E:<ample 8.8: (a) low-frequency component of the input, {b) the output generated by forward-only
filtering. and (c) the output genecated by both forward ll!ld time-reversed filtering.

8.2.6 Simulation of Cascaded LaUice Filter Structures


The function la tcf i l::: in the Signal Processing Toolbox can be used tu simulate the llR and the FIR
cascaded lattice filter slructure of Sections 6.8 and 6. 9.1, respectively. The basic forms of this function are
8.2. Structure Simulation and Verification Using MA>LAB 533

[£,9] _j_atcfilt{k,x}
a, gJ laccfilt(k,alp~a.x)
l f, g] la=cfilt(k,l,x)
In the firsl form [ f , g l -= la t c filL l k, x) simulates an FIR cascaded lattice filter structure with
lattice filter coefficients given by vector k and generates the forv.md output vector f and the backward
output vector g for an input vector x. The second form i :c., g; = ::.a t:cfil t ( k, alpha, x) :-;imulates
an IIR tapped cascaded lattice structure with lattice coefficient vector k and the ft:edforward multiplier
vector alpha. The last form [ f, g J "' la tcfil t ( k, 1., x) simulates an all-pole JIR cascaded Janice
filter structure.
We illustrate in t!le next W.to examples the simulruion of both HR and FIR cascaded lattice fihcr
structures.
534 Chapter 8: DSP Algorithm Implementation

Figure 8.11: Cascaded lattice realization of the JIR transferfuoction of Eq. {8.21).

w1 [n]

Figure 8.12: Cascaded lattice realization of the FIR transfer function of Eq. (8.22): lq = 0.5, k2 = 1.0, k3 =
0.2173913. and k4 = -{UJ8.
a3. Computation o1 the Discrete Foc.~rier Traqsform 535

8.3 Computation of the Discrete Fourier Transform


The discrete Fourier transform (OFT) is another .,.,.~dely used DSP algorithm. As indicated earlier, ir can be
employed to lmplement the linear convolution of two sequences, a key digital filtering operation. It is also
used for the spe...."tral analysis of )>ignals, discussed in Sections 11.2 to 11.4. Because of the widespread use
of the DFf, it is of .interest to investigate its efficient implementation methods which are COflSiden:d in thi~
section.
In Section 3.2 we introduced the -concept of theN-point DFT X[k] of a sequence x[nJ of length N
as the N samples of its Fourier transform. X (ei "'), evaluated uniformly on the w-axis at Wf< = 2-'r k/ N,
O~k:"SN-1:

O.:Sk:"SN-1. (8.23)

Since a finite-length sequence is always absOlutely summable. the DFf is thus. the samples of its z-transfonn
X (z) evaluated on the unit circle at N equally spaced po~nts:

(8.24)

As can be &een from Eq. (8.2.3), the computation of each sample of the DFr sequence requin::s N com-
plex multiplications and N - l complex additions. Hence, the computation of the N -point DFr sequence
requires N 2 complex multiplications and (N - l}N complex additions. In the case of a sequence oflength
N, it can be shown that the computation of its N -?Oint DFf sequence requires- 4N 2 real multiplications
and (4N- 2)N real additions {Problem 8.12). As a result, the tota) number of computations to compute an
N -po:nt DFT increases very rapidly as N increases. For large N, the number of complex muh:iplic.ations
and additions is approximately equal to N 2 . Hence, it is of practical interest to develop more efficient or
fast algorithms for computing the DFT.

8.3.1 Goertzel's Algorithm


An elegant approach to computing the DFr is to use a recursive computation scheme. To this end, the
mosr popular approach is the Goertz.el 's algorithm derived next [Goe58J. Titis algorithm makes use of the
identity
W·~i.N
N
-]
- , (8.25)

obtained using the periodicity of W,Vk"-. Csing the above identity, we can rewrite Eq. (8.23) as

N-1 N-1 N-1


X[kj = L x[£JW!,1 = WNkN L xftHv.tt = L x[liW1~k(N-tJ. (8.26)
!=0 i=O .f=O
Chapter 8: DSP Algorithm lmplementatior

x.,Jnl -~(,+)-----,--~ yk In]

x,J.Nl=O yk{-1] = 0

Figure 8.13: Reom;ive computr..tion ol the kth DFT sample.

The uhoveex.rressi.on can be expresS<!d in the form of il convo~ution, To this end, we define a nev. sequence

Yk[nl
"
= ,Lxe[EJW!/(n-1), (8.27)
(={)

w:uctt is a direct convolution of the caus<ll sequence x ... (nl defined by


0 :s:: n :-;:: N - l,
rl -(x[n].
Xe n - O n<O,n:;:N,
(8.28a)

\o,.ith a Ulm;a] infinite-length sequence

n:::: 0,
n < 0.

It fol~awg from Eqs. (8.26} and (8.27} that

By taking the z-tran;;fonn of both sides of Eq. {8.27) we arrive at

. ) x .. (::) \8.29)
Yk{Z = k t'
1-W.Nz

wbere 1/(l - W,V~C 1) i<; the z-transform of the causal sequence hk[n] and Xe(z) is the z-transform of
x,.!n]. The above equation implies that yk[ni is the output of an initially relaxed LTI digital filter with a
transfer function
(8.30)

Wi<:h an input x[nJ as indicated in Figme 8.13. W~en n = N. the output of the filter .niN] is t)recisely
XU:],
From the above figure, the DFf computation algorithm 1s given by

O.::cn~N, (8 3J)

wilh )1.! -1] = 0 and xtNJ = 0. Since a complex multiplication can be implemented using four real
muftip!ications and two real additions, computation of each new value of Y.t[n] tbus, in general, requires.
four real muitipltcations and four real additions. 3 As a resulr, the computation of X[k] = y.~[N} involves
_\-[<"""be ~1-r:>Wn :ha: a simple E~<>d!iica!iC>n of the co~r.p!ex ma!tiplica!im:: algonthm ::an reduce th.: numb.,-(>( real mu/tipl!cati•m~
!" 3 while i11::nc,a~,ng the number of real additim1~ lo 5 {,;ee Problem K 13!.
8.3. Computation of the Discrete Fourier Transform 537

~'xinl
x.Jn J ------Ji.(+i)---~--'T-----~(+!)--~ yk [n]
x!Nl~O (2:lk:) >'~;-{-i.l-v;,(-2\.=0
,.. 2cos - -
N

Figure 8.14: Modi lied app!ThlCh :o the recursive computation of the krh DFT sampie.

4:;v real multi?licatlons and 4N real additions, resulting in 4,''>(1 real multiplication.<; and 4N 2 real additions
for the cornpctation of all N DJ-<T samples.
Hence. the above algorithm, in comparison to the direct OFT computation, requires the same number of
real multiplications but 2N more real additions. As a result, i! is computationa!ly slightly more inefficient
than the direct approach. The advantage of the recursive algorithm. however. is that the N complex
coefficiems. wt" required to compute XUcl do not have to be either computed or stored in advance, but are
computed recursively as needed.
The algorithm can be made more efficient by observing that Hk(<-) of Eq. (8.30) can be rewritten a~

l Wkz-l
~----~"-~~N~--~~
- wNkz ( l - w..,'·z 1 )(1- w~z-l)

1 - W~z- 1
(8.32)

re:-;ulting in the new realization >ihown in FiguTe 8.14.


The DFT computation equations are now given by

vk[n] = x[n}-+- 2cos (---


2rrk) Vk[n- IJ- vk[n- 2], O_sn_s:N, {8.33a)
. 1'1 f

X[k] = y.<;[NJ = vk(N]- W~i'k[N- 1]. (8.33b)

Note that the computation of each sample of the intermediate variable Vk[n] involves only two real
multiplications and four real additions;>~- 1be complex multiplication by the constant WL
needs to be
performed only once at n = N. Thus. to compute one sample X[k] of the N-point DFT, we need
(2N + 4) real multiplications and (4N + 4) real addilions. As a result, the modified Goertzei's algorithm
for computing the N -rmint DFT requires 2(N + 2)N real tr.ultipllcalions and 4(N -7 l)N real additions.
Further sa..,ings in computational requirements urn be obtained by comparing the realization of HN-k {z)
wi~,h that of HI.(<.) shown in Figure 8.14. In the case of the former, the multiplier in I1Je feedback path is
2cos(2n(N- k)/N) = 2cos(?.rrk/N) which .i~ the :same as in Figure 8.14. Hence. V'N-kln] = vk[n],
iru:licating that the intermediate variables computed for de1ermlning X[k) need no looger be computed
again for detennining X[N- kj. The only difterence between these two structures ism the feedforward
path, where the multiplier is instead W{; -.t = WNk, which i.~ the complex conjugate of the coefficient W~
uu-d in Figure 8.14. Thu~, the computation of lhe two samples of theN-point DFT, XlkJ and X{N - k1.
538 ChapterS: DSP Algorithm implementation

n!quires 2(N + 4) real multiplications and 4(N + 2) real additi'?n~. C!r in other y,:ord_s, all N srurples
of the DFf can be determined using approximately N 2 real multtphcatmns and apprmumately 2N real
additions. The number of real multiplications is tlms. about one-fourth and the number of real additions is
about one-half of those needed in the direct DFf computation.
The MARAB M-fite uf f t ( x, K, :<c) given below implements the modified Goertzel's algorithm to
compute the kth DFT sample of an N-point DFT of the sequence x. The computed DFT sample is XF.
The length of the input sequence must be less than or equal to 1"- Jfthe length is less !han N, the sequence
length i.s increased toN by zew-padding.

% Funr:::::ion t.o Cc.mpute 2. DE"T Sarr:p1e


% Using Goertze L' s Algorithm
% XF = gfft(x,N,k}
% X is che :;_nput sequeLce of :._ength <= N
• N is
% k is
the DFT length
the specifi~d bin nu~ber
% XF is the d~sired OFT sample
%
f~nction XF ~ gff~(x,N,kl
if ~ength{x} < N
xe ~x zero.s(l,N-length(x}}~;
else
xe X;
er:d
xl ~ [xe 0l ;
dl ... 2*cos{2*pi*k/N) ;N"' expl-i*2*pi*k/N1;
y = :=i::._ter(l, ~1 -dl l] ,xl};
XF "'"y:N+l)- W*y(NJ;

Goert.zers algorithm is attractive in applications requiring the cornputa.tiQn of a few samples of the
DFI'. One such example is the dual-tone multifrequency {DTMF) signal detection in the TOUCH-TONE@
telephone dialing system discussed in Sec;ion J 1.1. In such an application, the inputx[n] is a real sequence
and the square magnitude of the DFT sa;nple, IX~k]?. is of interest Since x[n] is a real sequence. the
intermediate sequence Vk[n] generated in the modified Goertzel's algorithm is also a real sequence. As a
result, we obtain from Eq. {8.33b):

(8.34)

The above scheme uses only real multiplications, avoiding the complex multiplication required for the
.computation o:· yk[N! as indicated in Eq. (8.33b).
We next d<"scribe a fast algorithm for the computation of the DFT when the length N is a composite
number. As we shaU demonstrale, in the new algorithm in the case of N that is a power of 2, the total
number of computations can be made proportional to N log2 (N) and is highly preferable in applications
requiring the computation of all DFT samples.

8.3.2 Fast Fourier Transform Algorithms


The basic idea behind all fast algorithms fQr computing the di:\crete Fourier transform {DFT), commonly
called the fast Fourier tranifonn. (FFf) algorithms, is to decompose successively theN-point DFT com-
putation into computations of smaller-size DFfs and to take advantage of the periodicity and symmetry of
8.3. Computation of the Discrete Fourier Transform 539

the complex number Wk".


Such decompositions, if properly carried out, can result in a significant sa>'ing.-.
in the computational complexity. There are various versions of the FFT algorithms. We review here the
maln concept" behind the two most bas..i:: FFf algorichms \Coo65].

[)ecimation-in-Time FFT Algorithm


Consider a sequence x;nl of length N that is assumed to be a power of2. Using a two-band polyphase
decornposition-5 of x[n] we can express <1S .r-tmmform X\:-) as

X(:::) = '
XnC:"') + z- '·X 1 {z2 ), (835)

where

j-! -'t-t
Xo{z) = L xnlnJz-" =I,: x[2n]z-". (8.36a)
n=U '1=0
~-t ¥-1
X 1 (z) = L XJ[n].:-" = L xl2n ..,t.. l]z-". (8.36b)

Thus. Xu(z) is the z:-transfonn of the (N ;2)-length sequence xo[nl = x[2nj fonned from the even-indexed
Si:mples of xln], while X 1{z) isthe z-transform of the {N /2)-length sequence XJ ln] = x{2n + 1] fOrmed
from lhe odd-indexed samples of xfn].
Evaluating X (z_) on the unit circle at .V equally spal.'Cd points, z = W,Vk, "le arrive at the N-pllint DFT
of x!n] given b;fi
O~k:S:N-1, (8,37)
wftere Xolkl and X 1 [k] are the (N/2)-point DFTs of the (N /2)-len_gth sequences xo[n] and Xt[nj, respec-
ti•rely, i.e.,

'f-J
Xolkl = L xo[r]W~kj 2
r=C
i -I

L x[2r]W,~~ 2 . (8.38a}

'""'
Jf-c
Xt[kj = L xl[rlWN~2
d}

-f -· l
= L x[2r + :tJWN:·z· 0 5: k ;:s J- L (838b)

It is instructive at this point to examine a block diagram interpretation of the modified DFT computation scheme of Eq. (8.37).
⁵See Section 6.3.3 for a discussion on the polyphase decomposition.
⁶⟨k⟩ = k modulo (N/2).

Figure 8.15: (a) Generation of a subsequence containing even-indexed input samples, and (b) generation of a subsequence containing odd-indexed input samples.

Figure 8.16: Structural interpretation of the DFT decomposition scheme of Eq. (8.37).

This scheme computes an N-point DFT of the original length-N sequence x[n] by forming a weighted sum of two (N/2)-point DFTs of two (N/2)-length subsequences formed from the even-indexed samples, x_0[n] = x[2n], and the odd-indexed samples, x_1[n] = x[2n + 1]. To this end we need the down-sampler, introduced earlier in Section 2.1.2, to develop the two subsequences x_0[n] and x_1[n] from x[n]. It follows from the definition of Eq. (2.18) that if the input x[n] to a factor-of-2 down-sampler is a length-N sequence defined for 0 ≤ n ≤ N − 1, its output x_0[n] is a length-(N/2) sequence defined for 0 ≤ n ≤ (N/2) − 1 and is composed of the even-indexed samples of x[n], i.e., x_0[n] = x[2n], as shown in Figure 8.15(a). To generate the subsequence x_1[n] composed of the odd-indexed samples of x[n], i.e., x_1[n] = x[2n + 1], 0 ≤ n ≤ (N/2) − 1, we need to pass x[n + 1] through a factor-of-2 down-sampler. The sequence x[n + 1] can be developed from the sequence x[n] by means of an advance operation. The process is illustrated in Figure 8.15(b).
From the above discussion it follows that a block diagram interpretation of the DFT computation scheme of Eq. (8.37) is as indicated in Figure 8.16. Figure 8.17 shows its flow-graph representation for the case of N = 8.
Before proceeding further, let us evaluate the computational requirements for computing an N-point DFT using two (N/2)-point DFTs based on the decomposition of Eq. (8.37). Now a direct computation of an N-point DFT requires N² complex multiplications and N² − N ≈ N² complex additions. On the other hand, computation of an N-point DFT using the decomposition of Eq. (8.37) requires the computation of two (N/2)-point DFTs that need to be combined with N complex multiplications and N complex additions, resulting in a total of N + (N²/2) complex multiplications and approximately N + (N²/2) complex additions. It can easily be verified that for N ≥ 3, N + (N²/2) < N².
We can continue the above process by expressing each of the two (N/2)-point DFTs, X_0[k] and X_1[k], as a weighted combination of two (N/4)-point DFTs, since by assumption N/2 is even. For example, we can express X_0[k] as

X_0[k] = X_00[⟨k⟩_{N/4}] + W_{N/2}^k X_01[⟨k⟩_{N/4}],    (8.39)

where X_00[k] and X_01[k] are the (N/4)-point DFTs of the (N/4)-length sequences, x_00[n] and x_01[n], generated from the even and odd samples of x_0[n], respectively. Likewise, we can express X_1[k] as

X_1[k] = X_10[⟨k⟩_{N/4}] + W_{N/2}^k X_11[⟨k⟩_{N/4}],    (8.40)

where X_10[k] and X_11[k] are the (N/4)-point DFTs of the (N/4)-length sequences, x_10[n] and x_11[n], generated from the even and odd samples of x_1[n], respectively.

Figure 8.17: Flow-graph of the first stage in the decimation-in-time FFT algorithm for N = 8.
Substituting Eqs. (8.39) and (8.40) in Eq. (8.37) and making use of the identity W_{N/2}^k = W_N^{2k}, we then arrive at the two-stage decomposition of an N-point DFT computation in terms of four (N/4)-point DFTs, as indicated by the block diagram of Figure 8.18. The corresponding flow-graph representation is shown in Figure 8.19 for N = 8. In the case of the 8-point DFT computation illustrated in Figure 8.20, the (N/4)-point DFT is a 2-point DFT and no further decomposition is possible. The 2-point DFTs, X_00[k], X_01[k], X_10[k], and X_11[k], can be easily computed. For example, for the computation of X_00[k] the pertinent expressions are

X_00[k] = \sum_{n=0}^{1} x_00[n] W_2^{nk} = x[0] + W_2^k x[4],    k = 0, 1.    (8.41)

The corresponding flow-graph is indicated in Figure 8.20, where we have used the identity W_2^k = W_N^{Nk/2}.
Replacing each of the 2-point DFTs in Figure 8.19 with their respective flow-graph representations, we finally arrive at the complete flow-graph of the basic decimation-in-time DFT algorithm as shown in Figure 8.21.
If we examine the flow-graph of Figure 8.21, we note that it consists of three stages. The first stage computes the four 2-point DFTs, the second stage computes the two 4-point DFTs, and finally the last stage computes the desired 8-point DFT. Moreover, the number of complex multiplications and additions performed at each stage is equal to 8, the size of the transform. As a result, the total number of complex multiplications and additions in computing all 8 DFT samples is equal to 3 × 8 = 24.
It follows from the above observation that in the general case when N = 2^μ, the number of stages of computation of the (2^μ)-point DFT in the fast algorithm will be μ = log₂ N. Therefore, the total number of complex multiplications and additions in computing all N DFT samples is equal to N(log₂ N). In developing this count, we have for the present considered multiplications with W_N^0 = 1 and W_N^{N/2} = −1 to be complex. In addition, we have not taken advantage of the symmetry property W_N^{(N/2)+k} = −W_N^k. These properties can be made use of in reducing the computational complexity further.
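The decimation-in-time recursion is compact enough to sketch directly. The following MATLAB function is an illustrative recursive radix-2 DIT implementation (the function name ditfft and the recursive organization are choices made here, not the in-place flow-graph of Figure 8.21); it assumes the length of x is a power of 2 and can be checked against the built-in fft.

function X = ditfft(x)
% Recursive radix-2 decimation-in-time DFT; length(x) must be a power of 2.
x = x(:).';                              % work with a row vector
N = length(x);
if N == 1
    X = x;                               % a 1-point DFT is the sample itself
else
    X0 = ditfft(x(1:2:N));               % (N/2)-point DFT of even-indexed samples
    X1 = ditfft(x(2:2:N));               % (N/2)-point DFT of odd-indexed samples
    W  = exp(-2i*pi*(0:N/2-1)/N);        % twiddle factors W_N^k, k = 0,...,N/2-1
    X  = [X0 + W.*X1, X0 - W.*X1];       % butterfly combinations of Eq. (8.37)
end

For a length-8 test vector, max(abs(ditfft(x) - fft(x))) should be of the order of machine precision.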

Figure 8.18: Structural interpretation of the two-stage DFT decomposition scheme.

Figure 8.19: Flow-graph of the second stage in the decimation-in-time FFT algorithm for N = 8.

Figure 8.20: Flow-graph of the 2-point DFT.

Computational Considerations
Examination of the flow-graph of Figure 8.21 also reveals that each stage of the DFT computation process employs the same basic computational module, in which two output variables are generated from a weighted combination of two input variables. To see this aspect more clearly, let us label the N input and the N output variables in the rth stage of the DFT computation as Ψ_r[m] and Ψ_{r+1}[m], respectively, with r = 1, 2, ..., μ, and m = 0, 1, ..., N − 1. According to this labeling, in the case of Figure 8.21 for the 8-point DFT computation, Ψ_1[0] = x[0], Ψ_1[1] = x[4], and so on. Similarly, here, Ψ_4[0] = X[0], Ψ_4[1] = X[1], and so on. Based on this labeling scheme, it can be easily verified that the basic computational module is represented by the flow-graph of Figure 8.22 and described by the following input-output relations:

Ψ_{r+1}[α] = Ψ_r[α] + W_N^ℓ Ψ_r[β],    (8.42a)
Ψ_{r+1}[β] = Ψ_r[α] + W_N^{ℓ+N/2} Ψ_r[β].    (8.42b)

Figure 8.22: Flow-graph of the basic computational module in the decimation-in-time FFT method.

Because of its shape, the basic computational module of Figure 8.22 is referred to in the literature as the butterfly computation.
Substituting W_N^{ℓ+N/2} = −W_N^ℓ in Eq. (8.42b), we can rewrite it as

Ψ_{r+1}[β] = Ψ_r[α] − W_N^ℓ Ψ_r[β].    (8.42c)

The modified butterfly computation is thus as indicated in Figure 8.23 and requires only one complex multiplication. Use of this modified butterfly computational module in the FFT computation leads to a reduction in the total number of complex multiplications by 50%, as can be seen from the new flow-graph for the N = 8 case illustrated in Figure 8.24. Further savings in the computational complexity arise by taking into consideration that multiplications by W_N^0 = 1, W_N^{N/2} = −1, W_N^{N/4} = −j, and W_N^{3N/4} = j can be avoided in the DFT computation process.
Another attractive feature of the FFT algorithm described above is with regard to memory requirements. Since each stage employs the same butterfly computation to compute the two output variables Ψ_{r+1}[α] and Ψ_{r+1}[β] from the input variables Ψ_r[α] and Ψ_r[β], after Ψ_{r+1}[α] and Ψ_{r+1}[β] have been determined, they can be stored in the same memory locations where Ψ_r[α] and Ψ_r[β] were previously stored. Thus, at the end of the computation at any stage, the output variables Ψ_{r+1}[m] can be stored in the same registers

Figure 8.23: Flow-graph of the modified butterfly computational module.

Figure 8.24: Flow-graph of the modified decimation-in-time FFT algorithm.

previously occupied by the corresponding input variables Ψ_r[m]. This type of memory location sharing feature is commonly known as the in-place computation, resulting in a significant saving in the overall memory requirements.
It should be noted, however, from Figure 8.24 that while the DFT samples X[k] appear at the output in a sequential order, the input time-domain samples x[n] appear in a different order. Thus, a sequentially ordered input x[n] must be reordered appropriately before the FFT algorithm described by the above structure can begin. To understand the input reordering scheme, consider the 8-point DFT computation illustrated in Figure 8.24. If we represent the arguments of the input samples x[n] and their sequentially ordered new representations Ψ_1[m] in binary form, we arrive at the following relations between m and n:

m      n
000    000
001    100
010    010
011    110
100    001
101    101
110    011
111    111

It follows from the above that if (b₂b₁b₀) represents the index n of x[n] in binary form, then the sample x[b₂b₁b₀] appears in the location m = b₀b₁b₂ as Ψ_1[b₀b₁b₂] before the DFT computation is started; or, in other words, the location of Ψ_1[m] is in bit-reversed order from that of the original input array x[n].
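A hedged MATLAB sketch of this reordering step is shown below (the function name bitrevreorder is chosen here for illustration; MATLAB's Signal Processing Toolbox also provides a built-in bitrevorder function that performs the same permutation).

% Illustrative sketch: place a length-N sequence (N a power of 2) in
% bit-reversed order prior to an in-place DIT FFT
function y = bitrevreorder(x)
N  = length(x);
nb = log2(N);                            % number of address bits
n  = 0:N-1;                              % natural-order indices
m  = bin2dec(fliplr(dec2bin(n,nb)));     % bit-reverse each nb-bit index
y  = zeros(size(x));
y(m+1) = x;                              % x[n] goes to location m (1-based in MATLAB)

For example, bitrevreorder(0:7) returns [0 4 2 6 1 5 3 7], matching the ordering in the table above (the sample x[4] lands in location m = 1).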
Various alternative forms of the FFT computation can be easily obtained by reordering the computations, such as the input in normal order and the output in bit-reversed order (Problem 8.16), and both input and output in normal order (Problem 8.17).

The FFT algorithm outlined above assumed the length of the input sequence x[n] to be a power of 2. If it is not, we can extend the length of the sequence x[n] by zero-padding (see Section 3.2.3) to make its length N be a power of 2.⁷ Even after zero-padding, the DFT computation based on the fast algorithm derived above may be computationally more efficient than the direct DFT computation of the original shorter sequence.
Alternatively, we can develop fast algorithms that make use of polyphase decomposition with more than two subsequences. To illustrate this modification to the basic FFT algorithm, consider a sequence x[n] of a length that is a power of 3. Here, the DFT samples in the first stage are computed using a three-band polyphase decomposition of X(z),

X(z) = X_0(z³) + z^{-1} X_1(z³) + z^{-2} X_2(z³),    (8.43)

X[k] = X_0[⟨k⟩_{N/3}] + W_N^k X_1[⟨k⟩_{N/3}] + W_N^{2k} X_2[⟨k⟩_{N/3}],    0 ≤ k ≤ N − 1,    (8.44)

where X_0[k], X_1[k], and X_2[k] are now (N/3)-point DFTs. This process can be repeated to compute each of the (N/3)-point DFTs in terms of three (N/9)-point DFTs, and so on, until the smallest computational module is a 3-point DFT and no further decomposition is possible.
The FFT computation schemes described above are called decimation-in-time (DIT) FFT algorithms since here the input sequence x[n] is first decimated to form a set of subsequences before the DFT is computed. For example, the relation between the input sequence x[n] and its even and odd parts, x_0[n] and x_1[n], respectively, generated by the first stage of the DIT algorithm shown in Figure 8.16 is as follows:

x[n]:    x[0]  x[1]  x[2]  x[3]  x[4]  x[5]  x[6]  x[7]
x_0[n]:  x[0]  x[2]  x[4]  x[6]
x_1[n]:  x[1]  x[3]  x[5]  x[7]

Likewise, the relation between the input sequence x[n] and the sequences x_00[n], x_01[n], x_10[n], and x_11[n], generated by the two-stage decomposition of the DIT algorithm and illustrated in Figure 8.19, is given by

x[n]:     x[0]  x[1]  x[2]  x[3]  x[4]  x[5]  x[6]  x[7]
x_00[n]:  x[0]  x[4]
x_01[n]:  x[2]  x[6]
x_10[n]:  x[1]  x[5]
x_11[n]:  x[3]  x[7]

Or in other words, the subsequences x_00[n], x_01[n], x_10[n], and x_11[n] can be generated directly by a factor-of-4 decimation process, leading to the single-stage decomposition given in Figure 8.25.
If at each stage the decimation is by a factor of R, the resulting FFT algorithm is called a radix-R FFT algorithm. Thus, Figure 8.24 shows a radix-2 DIT FFT algorithm. Likewise, Figure 8.25 illustrates the first stage of a radix-4 DIT FFT algorithm. It also follows from the above discussion that, depending on the value of N, various combinations of decompositions of X[k] can be used to develop different types of DIT FFT algorithms. If the scheme uses a mixture of decimations by different factors, it is called a mixed-radix FFT algorithm.
For N which is a composite number expressible in the form of a product of integers,

N = r_1 · r_2 ⋯ r_ν,

⁷It should be noted that zero-padding increases the effective length of the original sequence, and hence, the longer-length DFT samples are different frequency samples of the frequency response and are more closely spaced on the unit circle than those of the original shorter-length DFT samples.

Figure 8.25: First stage in a radix-4 decimation-in-time FFT algorithm.

it can be shown that the total number of complex multiplications (additions) in a DIT FFT algorithm based on a ν-stage decomposition is given by (Problem 8.18)

No. of multiply (add) operations = ( \sum_{i=1}^{ν} r_i − ν ) N.    (8.45)
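For instance, for N = 8 = 2·2·2 (ν = 3 and r_i = 2), Eq. (8.45) gives (2 + 2 + 2 − 3)·8 = 24 complex multiplications, in agreement with the count of 3 × 8 = 24 obtained earlier for the three-stage radix-2 decomposition.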

Modified Goertzel's Algorithm Based on DIT Decomposition

In some applications, it may be convenient to apply Goertzel's algorithm in computing the smaller-size DFTs after one, two, or more stages of decimation-in-time decompositions of the input sequence x[n]. For example, the four (N/4)-point DFTs in Figure 8.18 can be computed using Goertzel's algorithm. This approach reduces the overall computational complexity of the direct Goertzel's algorithm while still permitting the computation of a few DFT samples.
It should be noted, however, that the structural interpretations of the DIT FFT algorithms given in Figures 8.16, 8.18, and 8.25 make use of the advance block with a transfer function z that is not physically realizable. If a hardware implementation of these structures is desired, we can insert an appropriate amount of delay at the input and then move the delays through the chain of advance blocks to make the overall structure physically realizable. Figure 8.26 shows the realizable version of Figure 8.25. Moreover, the input sequence x[n] should be delayed by 3 sample periods before the down-sampling operations are carried out. The relations between the subsequences x_00[n], x_01[n], x_10[n], and x_11[n] and the original sequence x[n] are now

x[n]:     x[0]  x[1]  x[2]  x[3]  x[4]  x[5]  x[6]  x[7]
x_00[n]:  x[0]  x[4]
x_01[n]:  x[2]  x[6]
x_10[n]:  x[1]  x[5]
x_11[n]:  x[3]  x[7]

Figure 8.26: Modified structure for the first stage of a radix-4 decimation-in-time FFT algorithm.

Decimation-in-Frequency FFT Algorithm


The basic idea behind the decimation-in-time FFT algorithm is to decompose sequentially the N-point sequence x[n] into sets of smaller and smaller subsequences and then form a weighted combination of the DFTs of these subsequences. The same idea can be applied to the N-point DFT sequence X[k] to decompose it sequentially into sets of smaller and smaller subsequences. This approach leads to another class of DFT computation schemes collectively known as the decimation-in-frequency (DIF) FFT algorithms.
To illustrate the basic difference between the above two decomposition schemes, we develop below the first stage of the DIF FFT algorithm for the case when N is a power of 2. We first express the z-transform X(z) of x[n] as

X(z) = X_0(z) + z^{-N/2} X_1(z),    (8.46)

where

X_0(z) = \sum_{n=0}^{(N/2)-1} x[n] z^{-n},    X_1(z) = \sum_{n=0}^{(N/2)-1} x[(N/2)+n] z^{-n}.    (8.47)

Evaluating X(z) on the unit circle at z = W_N^{-k}, we get from Eqs. (8.46) and (8.47),

X[k] = \sum_{n=0}^{(N/2)-1} x[n] W_N^{nk} + W_N^{(N/2)k} \sum_{n=0}^{(N/2)-1} x[(N/2)+n] W_N^{nk}.    (8.48)

The above equation can be rewritten as

X[k] = \sum_{n=0}^{(N/2)-1} ( x[n] + (−1)^k x[(N/2)+n] ) W_N^{nk},    (8.49)

where we have used the identity W_N^{(N/2)k} = (−1)^k. Two different forms of Eq. (8.49) are obtained, depending on whether k is even or odd:
(N/2}-l
X[U] ~ L: (xtnl +X [ ~ + n)) w;;"'
"'""n
548 Chapter B: DSP Algorithm Implementation

xJfr
-<IDI X{ OJ
;:Jl,
-'fi I
xcl2J
.!i --poim Xj2)

t)2] DFI X[4j


•c[31
•l3l Xl6l
-<,IO_
ct.:-J Xj lj

lj5j
w2 x 1!li
Jf--pomt X(3j
w~ z,[:OI DFT
.t[6l XjSJ
... ~ •PI
xpj XDJ
w.~

Figure 8.27: Flow-gr:;ph nf the first stage of t.'le decimation-in-fre(luency FFT algorithm for N = 8.

(Nr2)--l
'<;"'
L.....,. ~·X'!
[ J+ X [N. ')'""'
1 -t-nj WSJl•
O<f<~-1.
- - 2
!8.50a)
n=O
(N!::>)-1
X[2f+l!= L (xfn]-x[4+nDw.::.Pt+n
r.=O
(N/2}-1

= L {x[nl-x[.q-...;..n])U/~W~~ 2 , O<t<~-1
- - ' {X .5Gb)
n=O

The above two e.xpression;; represent the ~-point DFfs of the following two ~-point sequences:

xo[n] = (x{nJ+x[-'t +n]),


x:[nJ = (xinl-x l4 +n]} n'.~- O<n<oY.-1.
- - 2 - (8.51)

respectively. The flow-graph of the first stage of the DFT computation scheme defined by Eqs. (8.50a) and (8.50b) is shown in Figure 8.27 for N = 8. As can be seen from this figure, here the input samples appear in a sequential order, while the output DFT samples appear in a decimated form, with the even-indexed samples appearing as the outputs of one (N/2)-point DFT and the odd-indexed samples appearing as the outputs of the other (N/2)-point DFT.
We can continue the above decomposition process by expressing the even- and odd-indexed samples of each one of the two (N/2)-point DFTs as a sum of two (N/4)-point DFTs. This process can continue until the smallest DFTs are 2-point DFTs. The complete flow-graph of the decimation-in-frequency DFT computation scheme for the N = 8 case is shown in Figure 8.28.
It can be seen from Figure 8.28 that in the DIF FFT algorithm, the input x[n] appears in the normal order while the output X[k] appears in the bit-reversed order. Just as in the case of the radix-2 DIT FFT algorithm, the total number of complex multiplications per stage in the radix-2 DIF FFT algorithm is N/2. Hence, the total number of complex multiplications for computing the N-point DFT samples here is also equal to (N/2) log₂ N, ignoring the fact that multiplications by W_N^{ℓN/4}, ℓ = 0, 1, 2, 3, can be avoided. As before, various forms of the DIF FFT algorithm can be generated.
The DIT and DIF FFT algorithms described here are often referred to as the Cooley-Tukey FFT algorithms.
Figure 8.28: Complete flow-graph of the decimation-in-frequency FFT algorithm for N = 8.

An examination of the flow-graph of the first stage of the radix-2 DIF FFT algorithm given in Figure 8.27 reveals that the even- and odd-indexed samples of X[k] can be computed independently of each other.

It has been shown by Duhamel [Duh86] that a significant reduction in the computational complexity can be obtained by using a radix-2 DIF algorithm to compute the even-indexed DFT samples and a radix-4 DIF FFT algorithm to compute the odd-indexed DFT samples. This type of computational scheme has been called a split-radix FFT algorithm (Problem 8.29).

Inverse DFT Computation

An FFT algorithm for computing the DFT samples can also be used to calculate efficiently the inverse DFT (IDFT). To show this, consider an N-point sequence x[n] with an N-point DFT X[k]. The sequence x[n] is related to the samples X[k] through

x[n] = (1/N) \sum_{k=0}^{N-1} X[k] W_N^{-nk}.    (8.52)

If we multiply both sides of the above equation by N and take the complex conjugate, we arrive at

N x*[n] = \sum_{k=0}^{N-1} X*[k] W_N^{nk}.    (8.53)

The right-hand side of the above expression can be recognized as the N-point DFT of the sequence X*[k] and can be computed using any one of the FFT algorithms discussed earlier. The desired IDFT x[n] is then obtained as

x[n] = (1/N) ( \sum_{k=0}^{N-1} X*[k] W_N^{nk} )*.    (8.54)

In summary, given an N-point DFT X[k], we first form its complex conjugate sequence X*[k], then compute the N-point DFT of X*[k], form the complex conjugate of the DFT computed, and finally divide each sample by N. The inverse DFT computation process is illustrated in Figure 8.29. Two other approaches to inverse DFT computation are described in Problems 8.23 and 8.24.
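This conjugate-and-scale procedure is a one-liner in MATLAB; the following sketch (with an arbitrary test vector chosen here for illustration) can be checked against the built-in ifft.

% Illustrative sketch: inverse DFT computed with a forward FFT, per Eq. (8.54)
X = fft([1 2 3 4]);                 % any N-point DFT
x = conj(fft(conj(X)))/length(X);   % conjugate, forward FFT, conjugate, divide by N
% max(abs(x - ifft(X))) is then of the order of machine precision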

Figure 8.29: Inverse DFT computation via the DFT.

8.3.3 DFT and IDFT Computation Using MATLAB

The following functions are included in MATLAB for the computation of the DFT and the IDFT:

fft(x), fft(x,N)
ifft(X), ifft(X,N)
These programs employ efficient FFT algorithms for the computation. The M-file fft(x) computes the DFT of a vector x, with the DFT length equal to the length of x. For computing the DFT of a specific length N, the M-file fft(x,N) is used. Here, if the length of x is greater than N, it is truncated to the first N samples, whereas if the length of x is less than N, the vector x is zero-padded at the end to make it into a length-N sequence. Likewise, the M-file ifft(X) computes the IDFT of a vector X, with the IDFT length equal to the length of X, whereas the M-file ifft(X,N) can be used to compute the IDFT of a specified length N. In the latter case, the restrictions on N are the same as those in the M-file fft(x,N). It should be noted that the M-file ifft in MATLAB employs the IDFT computation scheme outlined in the previous section.
MATLAB uses a high-speed radix-2 algorithm when the sequence length of x or X is a power of 2. Moreover, the radix-2 FFT program has been optimized specifically to compute the DFT of a real input sequence faster than the DFT of a complex sequence. If the sequence length is not a power of 2, it employs a mixed-radix FFT algorithm and usually takes a much longer time to compute the DFT or the IDFT of a sequence of length N that is not a power of 2 than those of a sequence of a power-of-2 length that is closest to N.
Since vectors in MATLAB are indexed from 1 to N instead of 0 to N − 1, the DFT and the IDFT computed in the above MATLAB functions make use of the expressions

N
X[kJ= Lx[n]w_!;-l;(k-1). l:SkSN, (8.55a)
n=l

{8.55b)
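As a brief illustration (the signal parameters here mirror those of Figures 8.30 and 8.31, with the sinusoid taken as a cosine; they are otherwise arbitrary), magnitude plots like those figures can be generated along the following lines:

% Illustrative sketch: 32-point DFT magnitudes of sampled sinusoids
n  = 0:31;                        % 32 samples at a 64-Hz sampling rate
x1 = cos(2*pi*10*n/64);           % 10-Hz sinusoid (cf. Figure 8.30)
x2 = cos(2*pi*11*n/64);           % 11-Hz sinusoid (cf. Figure 8.31)
X1 = fft(x1);  X2 = fft(x2);      % 32-point DFTs
stem(n,abs(X1)); figure; stem(n,abs(X2));   % magnitudes versus bin index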


Figure 8.30: The magnitudes of a 32-point DFT of a sinusoid of frequency 10 Hz sampled at a 64-Hz rate.

"
~10 0


'
10 15 lO
~i!Kit~k

Figure 8.31: The magni.tudes of a 32-point DFr of a sinusoid of frequency I I Hz sampied .at a 64-Hz rate.

8.4 Number Representation


The binary representation is used to represent numbers (and signal variables) in most digital computers and special-purpose digital signal processors used for implementing digital filtering algorithms. In this form, the number is represented using the symbols 0 and 1, called bits,⁸ with the binary point separating the integer part from the fractional part. For example, the binary representation of the decimal number 11.625 is given by

1011Δ101

where Δ denotes the binary point. The four bits, 1011, to the left of the binary point form the integer part, and the three bits, 101, to the right of the binary point represent the fractional part. In general, the decimal equivalent of a binary number η consisting of B integer bits and b fractional bits is given by

\sum_{i=-b}^{B-1} a_i 2^i,

where each bit a_i is either a 0 or a 1. The leftmost bit, a_{B−1}, is called the most significant bit (MSB), and the rightmost bit, a_{−b}, is called the least significant bit (LSB).
To avoid confusion between a decimal number containing the digits 1 and 0 and a binary number containing the bits 1 and 0, we shall include a subscript 10 to the right of the least significant digit to indicate a decimal number and a subscript 2 to the right of the least significant bit to indicate a binary number.
⁸Bit is an abbreviated form of binary digit.

Thus, for example, 1101₁₀ represents a decimal number, whereas 1101₂ represents a binary number whose decimal equivalent is 13₁₀. If there is no ambiguity in the representation, then the subscript is dropped.
The block of bits representing a number is called a word, and the number of bits in the word is called the word length or word size. The word length is typically a positive integer power of 2, such as 8 or 16 or 32. The word size is often expressed in units of eight bits, called a byte. For example, a 4-byte word is equivalent to a 32-bit word.
Digital circuits implementing the arithmetic operations, addition and multiplication of two binary numbers, are specifically designed to develop the results, the sum and the product, respectively, in binary form with their binary points in the assumed locations. There are two basic types of binary representations of numbers, fixed-point and floating-point, as discussed below.

8.4.1 Fixed-Point Representation

In this type of representation, the binary point is assumed to be fixed at a specific location, and the hardware implementation of the arithmetic circuits takes into account the fixed location in performing the arithmetic operations. Implementation of the addition operation carried out by a digital adder circuit is independent of the location of the binary points in the two numbers being added as long as they are in the same location for both numbers. On the other hand, it is not simple to locate the binary point in the product of two binary numbers unless they are both integers or are both fractions. In the case of the multiplication of two integers by the multiplier circuit, the result is also an integer. Likewise, multiplying two fractions results in a fraction. In digital signal processing applications, therefore, fixed-point numbers are always represented as fractions.
The range of nonnegative integers η that can be represented by B bits in a fixed-point representation is given by

0 ≤ η ≤ 2^B − 1.    (8.56)

Similarly, the range of positive fractions η that can be represented by B bits in a fixed-point representation is given by

0 ≤ η ≤ 1 − 2^{−B}.    (8.57)

In either case, the range is fixed. If η_max and η_min denote, respectively, the maximum and the minimum values of the numbers that can be represented in a B-bit fixed-point representation, then the dynamic range R of the numbers that can be represented with B bits is given by R = η_max − η_min, and the resolution δ of the representation is defined by

δ = R / (2^B − 1),    (8.58)

where δ is also known as the quantization level.
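For example, with B = 8 bits representing positive fractions, η_max = 1 − 2^{−8} and η_min = 0, so that R = 1 − 2^{−8} and the resolution is δ = R/(2⁸ − 1) = 2^{−8} ≈ 0.0039.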

8.4.2 Floating-Point Representation

In the normalized floating-point representation, a positive number η is represented using two parameters, the mantissa M and the exponent (or the characteristic) E, in the form

η = M · 2^E,    (8.59)

where the mantissa M is a binary fraction restricted to lie in the range

1/2 ≤ M < 1,    (8.60)

and the exponent E is either a positive or a negative binary integer.

Figure 8.32: IEEE 32-bit floating-point format.

The floating-point system provides a variable resolution for the range of numbers being represented. The resolution increases exponentially as the magnitude of the number being represented increases. Floating-point numbers are stored in a register by assigning B_E bits of the register to the exponent and the remaining B_M bits to the mantissa. If the same number of total bits is used in both floating-point and fixed-point representations, i.e., B = B_M + B_E, then the former provides a larger dynamic range than the latter (Problem 8.44).
The most widely followed floating-point formats for 32-bit and 64-bit representations are those given by the ANSI/IEEE Standard 754-1985 [IEEE85]. In this format, a 32-bit number is divided into fields. The exponent field is of 8-bit length, the mantissa⁹ field is of 23-bit length, and 1 bit is assigned for the sign, as indicated in Figure 8.32. The exponent is coded in a biased form as E − 127. Thus, a floating-point number η under this scheme is represented as

η = (−1)^S · 2^{E−127} (1ΔM),    (8.61)

with the mantissa M in the range

0 ≤ M < 1.    (8.62)
The following conventions are followed in interpreting the representation of Eq. (8.61):

1. If E = 255 and M ≠ 0, then η is not a number (abbreviated as NaN).

2. If E = 255 and M = 0, then η = (−1)^S · ∞.

3. If 0 < E < 255, then η = (−1)^S · 2^{E−127}(1ΔM).

4. If E = 0 and M ≠ 0, then η = (−1)^S · 2^{−126}(0ΔM).

5. If E = 0 and M = 0, then η = (−1)^S · 0.

where 1ΔM is a number with one integer bit and 23 fractional bits, and 0ΔM is a fraction. The range of a 32-bit floating-point number in the above format is from 1.18 × 10^{−38} to 3.4 × 10^{38} (Problem 8.45). The IEEE 32-bit standard has been adopted for number representation in almost all commercial floating-point digital signal processor chips.
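As a hedged illustration of this encoding (built with standard MATLAB functions, not a listing from the text), the sign, exponent, and mantissa fields of a single-precision number can be inspected as follows:

% Illustrative sketch: decode the fields of an IEEE 32-bit floating-point number
v    = single(-11.625);                  % example value
bits = dec2bin(typecast(v,'uint32'),32); % 32-bit pattern as a character string
s = bits(1);                             % sign bit
E = bin2dec(bits(2:9));                  % biased 8-bit exponent
M = bits(10:32);                         % 23-bit mantissa field
% Magnitude = 2^(E-127)*(1 + 0.453125) = 11.625 here (E = 130);
% the sign bit s = '1' supplies the minus sign.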

8.4.3 Representation of Negative Numbers

To accommodate the representation of both positive and negative b-bit fractions, an additional bit, called the sign bit, is placed at the leading position of the register to indicate the sign of the number (Figure 8.33). Independent of the scheme being used to represent the negative number, the sign bit is 0 for a positive number and 1 for a negative number.
A fixed-point negative number is represented in one of three different forms. In the sign-magnitude format, if the sign bit s = 0, the b-bit fraction is a positive number with a magnitude \sum_{i=1}^{b} a_{-i} 2^{-i}, and if s = 1, the b-bit fraction is a negative number with a magnitude \sum_{i=1}^{b} a_{-i} 2^{-i}.
⁹In the IEEE floating-point standard, the mantissa is called the significand.

Figure 8.33: Representation of a general signed b-bit fixed-point fraction.

In the ones'-complement form, a positive fraction is represented as in the sign-magnitude form, while its negative is represented by complementing each bit of the binary representation of the positive fraction. In this representation, the decimal equivalent of a positive or a negative fraction is thus given by −s(1 − 2^{−b}) + \sum_{i=1}^{b} a_{-i} 2^{-i}.
Finally, in the two's-complement representation, the positive fraction is represented as in the sign-magnitude form, while its negative is represented by complementing each bit (i.e., by replacing each 0 with a 1, and vice versa) of the binary representation of the positive fraction and adding a 1 to the LSB, the bth bit. In this form, the decimal equivalent of a positive or a negative fraction is thus given by −s + \sum_{i=1}^{b} a_{-i} 2^{-i}.
Table 8.1 illustrates the above three representations for a 4-bit number (3-bit fraction and a sign bit).
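For instance, the 4-bit pattern 1Δ101 represents −5/8 in sign-magnitude form, −(1 − 2^{−3}) + 5/8 = −2/8 in ones'-complement form, and −1 + 5/8 = −3/8 in two's-complement form, in agreement with Table 8.1.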

8.4.4 Offset Binary Representation

In the offset binary representation, used primarily in bipolar digital-to-analog conversion, a b-bit fraction with an additional sign bit is considered as a (b + 1)-bit number representing 2^{b+1} decimal numbers. About half of these numbers represent the negative fractions and the remaining half represent the positive fractions, as illustrated in Table 8.1 for b = 3. It should be noted that the two's-complement representation can be converted to the offset binary representation by simply complementing the sign bit.

8.4.5 Signed Digit Representation

The radix-2 signed digit (SD) format is a 3-valued representation of a radix-2 number and employs three digit values, 0, 1, and 1̄ (with the last symbol representing −1). In many instances, the SD representation of a binary number requires fewer nonzero digits and has been exploited in developing an algorithm for a faster hardware implementation of the multiplication operation (see Section 8.5.2). A simple algorithm for the conversion of a radix-2 binary number into an equivalent SD representation is as follows [Boo51]. Let a_{−1}a_{−2}⋯a_{−b} denote a binary number with its equivalent SD representation denoted by c_{−1}c_{−2}⋯c_{−b}. The digits c_{−i} of the SD number are determined from the bits a_{−i} through a recoding relation applied for i = b, b − 1, ..., 1, with a_{−b−1} = 0.
The following example illustrates the radix-2 SD representation.
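For instance, the 4-bit fraction 0Δ0111, which equals 7/16, can also be written in SD form as 0Δ1001̄, since 1/2 − 1/16 = 7/16; the first form contains three nonzero digits, the second only two.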

As can be seen from Example 8.14, the SD representation is not unique, and a representation with the fewest number of nonzero digits is called a minimal SD representation. A minimal SD representation containing no adjacent nonzero digits is called a canonic signed-digit (CSD) representation. An algorithm to derive the CSD representation of a binary number is given in Hwang [Hwa79].
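A hedged MATLAB sketch of one such conversion is given below; it computes the canonic signed-digit (nonadjacent form) digits of a nonnegative integer by repeatedly examining the two least significant bits, which is one standard way of obtaining the CSD form (the function name csd_digits is an illustrative choice, not the algorithm referenced above).

% Illustrative sketch: canonic signed-digit (CSD) digits of a nonnegative integer,
% least significant digit first; digits take the values -1, 0, +1
function c = csd_digits(v)
c = [];
while v ~= 0
    if mod(v,2) == 1
        d = 2 - mod(v,4);      % +1 if v = 1 (mod 4), -1 if v = 3 (mod 4)
        v = v - d;
    else
        d = 0;
    end
    c(end+1) = d;              % record the digit
    v = v/2;
end

For example, csd_digits(7) returns [-1 0 0 1], i.e., 7 = 8 − 1 = 1001̄ in CSD form, consistent with the fraction example above after scaling by 2⁴.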

8.4.6 Hexadecimal Representation

In writing the codes for programmable DSP chips, we often prefer the hexadecimal representation of a binary number for compactness. In this format, each binary number is divided into groups of four bits beginning at the binary point. The decimal equivalent of each 4-bit group is then represented by one of 16 symbols formed by the 10 decimal digits 0 through 9 and the six letters of the alphabet A through F, with A representing the decimal 10, B representing decimal 11, etc. The conversion process is illustrated next.
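For instance, grouping the bits of 11010111₂ into 1101 and 0111 gives D7₁₆, while the fraction 0Δ10100110₂ becomes 0ΔA6₁₆.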

8.5 Arithmetic Operations


The two basic arithmetic operations in the implementation of digital filtering algorithms are addition (subtraction) and multiplication. We illustrate the corresponding implementation algorithms for binary numbers.

8.5.1 Fixed-Point Addition

The procedure for the addition of two binary numbers represented in the sign-magnitude representation is similar to that used in the addition of two decimal numbers. The serial addition of two positive binary fractions 0Δa_{−1}a_{−2}⋯a_{−b} and 0Δd_{−1}d_{−2}⋯d_{−b} is carried out in b steps. At the first step, we add the two LSBs a_{−b} and d_{−b}, whose sum will consist of two bits: the first bit is a carry bit c_{−b} and the second bit is the LSB s_{−b} of the final sum. At all other steps, we add three bits, a_{−i}, d_{−i}, and c_{−i−1}, where c_{−i−1} is the carry bit generated at the previous step. Their sum again will consist of a carry bit c_{−i} and a sum bit s_{−i}. If the sum of the two positive binary fractions is still a binary fraction, the above process yields a correct result. This happens if, at the bth step, the addition of a_{−1}, d_{−1}, and c_{−2} does not create a carry bit of 1.

Table 8.1: Binary number representations.

Decimal      Sign-        Ones'-       Two's-       Offset
equivalent   magnitude    complement   complement   binary
 7/8         0Δ111        0Δ111        0Δ111        1Δ111
 6/8         0Δ110        0Δ110        0Δ110        1Δ110
 5/8         0Δ101        0Δ101        0Δ101        1Δ101
 4/8         0Δ100        0Δ100        0Δ100        1Δ100
 3/8         0Δ011        0Δ011        0Δ011        1Δ011
 2/8         0Δ010        0Δ010        0Δ010        1Δ010
 1/8         0Δ001        0Δ001        0Δ001        1Δ001
 0           0Δ000        0Δ000        0Δ000        1Δ000
-0           1Δ000        1Δ111        N/A          N/A
-1/8         1Δ001        1Δ110        1Δ111        0Δ111
-2/8         1Δ010        1Δ101        1Δ110        0Δ110
-3/8         1Δ011        1Δ100        1Δ101        0Δ101
-4/8         1Δ100        1Δ011        1Δ100        0Δ100
-5/8         1Δ101        1Δ010        1Δ011        0Δ011
-6/8         1Δ110        1Δ001        1Δ010        0Δ010
-7/8         1Δ111        1Δ000        1Δ001        0Δ001
-8/8         N/A          N/A          1Δ000        0Δ000

However, if their sum is no longer a fraction, an overflow is said to have occurred, and the carry bit generated at the last step is used to indicate the overflow.
We illustrate the addition process in the following example.
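For instance, adding the two positive fractions 0Δ101 (5/8) and 0Δ010 (2/8) bit by bit from the LSB gives 0Δ111 (7/8) with no carry into the sign position, whereas adding 0Δ101 and 0Δ110 (6/8) produces a carry into the sign bit and the pattern 1Δ011, signaling an overflow since 11/8 cannot be represented.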

The subtraction of a positive binary number from another positive binary number, or, equivalently, the addition of a positive binary number to a negative binary number in sign-magnitude form, is slightly more complicated since here we may have to introduce a borrowing process. The following example illustrates the method.
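For instance, subtracting 0Δ010 (2/8) from 0Δ101 (5/8): at the LSB, 1 − 0 = 1; at the next position, 0 − 1 requires borrowing a 1 from the third position, giving 10₂ − 1 = 1 and reducing that position's minuend bit from 1 to 0; the result is 0Δ011 (3/8).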
Note that in the above example, the subtrahend is smaller in value than the minuend, resulting in the correct difference. However, in the opposite case, the arithmetic operation will cause an overflow.
As illustrated above, the algorithms for the addition and subtraction operations for binary numbers in the sign-magnitude form are different, and therefore, separate circuits are needed in their hardware implementations. On the other hand, both operations can be carried out using essentially the same algorithm for binary numbers in either the ones'-complement or the two's-complement form. The basic difference in the arithmetic operations between the two representations is in the handling of the carry bit resulting in the last step. The following two examples illustrate the respective algorithms.

+a a

I?. I '' • I

-

• I
{ +·
<)
' It

8.5.2 Fixed-Point Multiplication
The multiplication of two b-bit binary fractions, A = a_sΔa_{−1}a_{−2}⋯a_{−b} (called the multiplicand) and D = d_sΔd_{−1}d_{−2}⋯d_{−b} (called the multiplier), in sign-magnitude form is carried out by forming the product of their respective magnitudes first, and then assigning the appropriate sign to the product from the signs of the multiplier and the multiplicand. The product of the magnitudes can be implemented serially in b steps, where at the ith step the partial product p^{(i)} is determined as follows:

p^{(i)} = ( p^{(i−1)} + d_{−b+i−1} · A ) · 2^{−1},    i = 1, 2, ..., b,    (8.63)

with p^{(0)} = 0. It follows from the above equation that if d_{−b+i−1} = 0, the new partial product is obtained by simply shifting the previous partial product to the right by one bit position. On the other hand, if d_{−b+i−1} = 1, the multiplicand A is added to the previous partial product and then shifted to the right by one bit position to arrive at the new partial product. The final product is a 2b-bit fraction. The following example illustrates the algorithm.

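A hedged MATLAB sketch of this shift-and-add procedure is given below; the operand values are illustrative choices, not the worked example of the text, and the magnitudes are handled numerically rather than as bit registers.

% Illustrative sketch of the serial shift-add multiplication of Eq. (8.63)
b  = 3;
a  = [1 0 1];                    % multiplicand magnitude 0.101 (= 5/8)
d  = [0 1 1];                    % multiplier magnitude  0.011 (= 3/8)
A  = sum(a .* 2.^(-(1:b)));      % numeric value of the multiplicand
p  = 0;                          % p^(0) = 0
for i = 1:b
    p = (p + d(b-i+1)*A) * 2^(-1);   % add A if the current multiplier bit is 1, then shift right
end
% p is now the 2b-bit product: 5/8 * 3/8 = 15/64 = 0.234375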
If the multiplicand is a negative fraction in either two's-complement form or in ones'-complement form, and the multiplier is a positive fraction, the algorithm of Eq. (8.63) can be followed without any change, except that the addition of the negative multiplicand is carried out according to the method outlined in Section 8.5.1, and after shifting the sum, the sign bit of the sum is left as is. On the other hand, if the multiplier is a negative fraction, a correction step is needed at the end to arrive at the correct result [Kor93].
In Booth's multiplication algorithm, employed in most DSP chips, the serial multiplication process described above is implemented with the multiplier recoded in the SD form, resulting, in general, in a faster operation [Boo51]. For example, a multiplier 0Δ01100111₂ requires six additions, whereas its SD representation would require four add/subtract operations. Booth's algorithm generates the correct product if both the multiplicand and the multiplier are represented in the two's-complement form, provided the sign bit of the multiplicand is employed in determining whether to perform an add or a subtract operation at the last step.


8.5.3 Floating-Point Arithmetic

Addition of two floating-point binary numbers can be carried out easily if their exponents are equal. Thus, in a floating-point adder, the mantissa of the smaller number is shifted to the right by an appropriate number of bits to make its exponent equal to that of the larger number, and then the two mantissas are added. If the sum of the two mantissas is within the range of Eq. (8.60), then no additional normalization is necessary; otherwise, a normalization is carried out to bring it to the proper range, with a corresponding adjustment to the exponent.

!ft "' H
o;: :« J£1 Hi tW

1?} * W'$' fi f1\fh4

r; ;'11\;i<r tl!:lC 014$!1 rn{J


"
¢@!1\(jfk
tt \&!!{WmuV!ic tmd

PH 1· 172 ,m

'* k}

In a floating-point multiplier, on the other hand, the multiplication is carried out by multiplying the two mantissas and adding their corresponding exponents. Since the range of the product of the mantissas is now between 1/4 and 1, a normalization of the product is carried out if it is less than 1/2, along with a corresponding adjustment to the new exponent.
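For instance, multiplying 0.110₂ × 2¹ (= 1.5) by 0.101₂ × 2² (= 2.5) gives a mantissa product of 0.011110₂ = 0.46875 and an exponent of 3; since the mantissa product is less than 1/2, it is normalized to 0.1111₂ = 0.9375 and the exponent is reduced to 2, yielding 0.9375 × 2² = 3.75.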

Figure 8.34: (a) Saturation overflow, and (b) two's-complement overflow.

8.6 Handling of Overflow

In Section 8.5.1, we pointed out that the addition of two fixed-point fractions can result in a sum exceeding the dynamic range of the register storing the result of the addition, thus resulting in an overflow. Occurrence of overflow leads to severe output distortion and may often result in large-amplitude oscillations at the filter output (see Section 9.11.2). Therefore, the sum should be substituted with another number that is within the dynamic range. Two widely used schemes for handling the overflow are described next.
Let η denote the sum; then in either of the two schemes, if η exceeds the dynamic range [−1, 1), it is substituted with a number ζ which is within the range. In the saturation overflow scheme shown in Figure 8.34(a), if η ≥ 1, it is replaced with 1 − 2^{−b}, where the number is assumed to be a b-bit fraction with an additional bit for the sign, and if η ≤ −1, it is replaced with −1. On the other hand, in the two's-complement overflow scheme shown in Figure 8.34(b), whenever η is outside the range [−1, 1), it is replaced with ζ = ⟨η + 1⟩₂ − 1, where ⟨·⟩₂ denotes the modulo-2 operation. Basically, here, when η is outside the range, the bits to the left of the sign bit (the overflow bits) are ignored. The second scheme is usually implemented in nonrecursive digital filters employing two's-complement arithmetic. In most applications, the first scheme is usually preferred.
We shall discuss in Section 9.7 the dynamic range scaling of the digital filter structure to either eliminate completely or reduce the probability of overflow. The effects of the two overflow handling schemes on the performance of the digital filter are also considered in Section 9.7.
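The two schemes are easy to prototype; the following MATLAB sketch (with an illustrative choice of b and of the test value) maps an out-of-range sum η back into [−1, 1) both ways.

% Illustrative sketch: saturation and two's-complement overflow handling
b    = 15;                                  % number of fraction bits (illustrative)
sat  = @(eta) min(max(eta,-1), 1-2^(-b));   % saturation: clip to [-1, 1-2^-b]
wrap = @(eta) mod(eta+1,2) - 1;             % two's-complement wrap into [-1,1)
% Example: for eta = 1.25, sat(eta) returns 1-2^-15, wrap(eta) returns -0.75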

8.7 Tunable Digital Filters

Many applications require the use of digital filters with easily tunable characteristics. In Section 6.7, we outlined the design of tunable first-order and second-order digital filters that may provide adequate solutions in some applications. There are other applications where use of higher-order tunable digital filters may be necessary. The design of such tunable filters is the subject of this section.

8.7.1 Tunable IIR Digital Filters

The basis for the design of tunable digital filters is the spectral transformation discussed in Section 7.5, which can be used to tune a given digital filter realization with a specified cutoff frequency to another

realization with a different cutoff frequency. Thus, if G_old(z) is the transfer function of the original realization, the transfer function of the new structure is G_new(z), where

G_new(z) = G_old(z)|_{z^{-1} → F^{-1}(z)},    (8.64)

in which F^{-1}(z) is a stable allpass function of the form given in Table 7.1, with the parameters of the transformation being the tuning parameters. One straightforward way to implement this transformation would be to replace each delay block in the realization of G_old(z) with an allpass structure realizing F^{-1}(z). However, such an approach leads, in general, to a structure realizing G_new(z) with delay-free loops and cannot be implemented, as explained in Section 6.1.3.
We describe now a very simple practical modification to the above approach that does not result in a structure with delay-free loops [Mit90b]. In Section 6.10, we outlined a method of realization of a large class of stable IIR transfer functions G(z) in the form [Vai86a]

G(z) = ½ {A_0(z) + A_1(z)},    (8.65)

where A_0(z) and A_1(z) are stable allpass filters. The conditions for the realization of G(z) as a parallel connection of two allpass sections are that G(z) be a bounded real transfer function with a symmetric numerator and that it have a power-complementary transfer function H(z) with an antisymmetric numerator. These conditions are satisfied by all odd-order lowpass Butterworth, Chebyshev, and elliptic transfer functions.
The allpass filters A_0(z) and A_1(z) can be realized using any one of the approaches discussed in Section 6.6. We consider here their realizations as a cascade of first-order and second-order sections. As indicated in Section 6.6, there is a large variety of structurally lossless canonic structures realizing the first-order and second-order allpass transfer functions [Mit74a], [Szc88]. These structures use only one multiplier and one delay for the realization of a first-order allpass function, and two multipliers and two delays for the realization of a second-order allpass function.
Consider first the tuning of the cutoff frequency of a lowpass IIR filter realized by a parallel allpass structure. From Section 7.5, we note that the lowpass-to-lowpass transformation is given by

z^{-1} → F^{-1}(z^{-1}) = (z^{-1} − α) / (1 − α z^{-1}),    (8.66)

where the parameter α is related to the old and new cutoff frequencies, ω_c and ω̂_c, respectively, through

α = sin[(ω_c − ω̂_c)/2] / sin[(ω_c + ω̂_c)/2].    (8.67)

Substituting the transformation of Eq. (8.66) in a Type 1 first-order allpass transfer function

a_1(z) = (d_1 + z^{-1}) / (1 + d_1 z^{-1}),    (8.68)

we obtain the new first-order allpass transfer function

â_1(z) = [(d_1 − α) + (1 − αd_1) z^{-1}] / [(1 − αd_1) + (d_1 − α) z^{-1}].    (8.69)

Figure 8.35: Multiplier replacement schemes in the constituent allpass sections for designing a tunable IIR filter: (a) Type 1 allpass network and (b) Type 3 allpass network.

If α is very small, we can make a Taylor series expansion of the coefficient (d_1 − α)/(1 − αd_1) of the allpass function â_1(z) of Eq. (8.69) and arrive at an approximation

â_1(z) ≅ ([d_1 + α(d_1² − 1)] + z^{-1}) / (1 + [d_1 + α(d_1² − 1)] z^{-1}),    (8.70)

which is seen to be a Type 1 first-order allpass transfer function with a coefficient that is now a linear function of α. The approximated allpass section â_1(z) can be simply implemented by replacing each multiplier d_1 in Figure 6.23 with a parallel connection of two multipliers, as indicated in Figure 8.35(a). Note that for α = 0, â_1(z) of Eq. (8.70) reduces to a_1(z) of Eq. (8.68), as expected.
Applying a similar procedure to the Type 3 second-order allpass transfer function

(8.71)

we arrive at

(8.72)

Again, if we assume α to be very small, we can rewrite the above expression, by neglecting coefficients containing α², as

â_2(z) ≈ [(d_2 − αd_1) + (d_1 − 2α(1 + d_2))z^{-1} + (1 − αd_1)z^{-2}] / [(1 − αd_1) + (d_1 − 2α(1 + d_2))z^{-1} + (d_2 − αd_1)z^{-2}]

       = [(d_2 − αd_1)/(1 − αd_1) + ((d_1 − 2α(1 + d_2))/(1 − αd_1))z^{-1} + z^{-2}] / [1 + ((d_1 − 2α(1 + d_2))/(1 − αd_1))z^{-1} + ((d_2 − αd_1)/(1 − αd_1))z^{-2}].    (8.73)
8.7. Tunable Digital Filters 565

Figure 8.36: Gain responses of a fifth-order tunable elliptic lowpass filter for three values of α.

Figure 8.37: Lowpass-to-bandpass transformation.

Next, an approximation based on the Taylor series expansion of the last term in the above equation results in

(8.74)

which is seen to be a Type 3 second-order allpass transfer function with coefficients that are a linear function of α. The approximated allpass section â_2(z) can be simply implemented by replacing the multipliers d_1 and d_2 in Figure 6.15 with a parallel connection of two multipliers, as indicated in Figure 8.35(b). Note that for α = 0, â_2(z) of Eq. (8.74) reduces to a_2(z) of Eq. (8.71), as expected.
Note that in the case of both the first-order and the second-order allpass filters, the tuning rule is now a linear function of α. Even though this tuning algorithm is approximate and has been derived for small values of α, in practice, tuning ranges of several octaves have been observed to hold in the case of narrowband lowpass elliptic filters [Mit90b]. Figure 8.36 shows the gain responses of a fifth-order tunable elliptic lowpass filter for three values of the tuning parameter α. The prototype filter (α = 0) has a passband edge at ω_p = 0.4π, a passband ripple of 0.5 dB, and a minimum stopband attenuation of 40 dB.
By applying the lowpass-to-bandpass transformation
z^{-1} → F^{-1}(z^{-1}) = −z^{-1} (z^{-1} − β) / (1 − β z^{-1})    (8.75)

to a tunable lowpass IIR filter, we can design a tunable bandpass filter whose center frequency ω_0 is tuned by adjusting the parameter β = cos ω_0, and the bandwidth is tuned by changing α [Mit90b]. Unlike the lowpass-to-lowpass transformation of Eq. (8.66), the transformation of Eq. (8.75) can be directly implemented on the structure realizing the tunable lowpass filter by replacing each delay with the structure shown in Figure 8.37 to arrive at a tunable bandpass filter structure.

8.7.2 Tunable FIR Digital Filters

The spectral transformation approach of Section 7.5 can also be applied to an FIR filter to develop a filter structure with tunable characteristics. However, the resulting structure is no longer FIR since the replacement of the delays in the prototype FIR structure by allpass sections implementing the spectral transformation makes it an IIR filter. We outline below a straightforward method for designing tunable linear-phase lowpass FIR filters and later show how to modify the method for the design of other types of tunable FIR filters [Jar88]. The method discussed preserves the FIR structure and permits easy tuning of the cutoff frequency.
The basic idea behind the tuning procedure is the observation that, for an ideal lowpass FIR filter with a zero-phase response given by

H_LP(e^{jω}) = { 1, for 0 ≤ |ω| ≤ ω_c,
              { 0, for ω_c < |ω| ≤ π,    (8.76)

the impulse response coefficients are given by

h_LP[n] = sin(ω_c n) / (πn),    (8.77)

as derived in Example 3.3.

as derived in Example 3.3. We can truncate the above expression and obtain the coeffic1ents of a realizable
apJ:.rux.ima.tion given by
c[n)wc. forn =0,
hLp[n] =
0,
wh.:re w .. is the 6-dB cmoff frequency, and
I
c[n]sin(w,.n). for l :S: jn: :s: N.
otherwise,
(8.78)

c[n) = l~l~Jr, forn=O,


(8.79)
fJrn, for I:;:: jn' :S: N.

It foJiows from the above that once an FIR lowpass filter ha:o.. been designed fer a given cutoff frequency,
it can be tuned simply by changing We and recomputing the filter coefficients according to the above
expression. It can be shown that Eq. (8.78) can also be used to design a tunable FIR lowpass filter by
equating the coefficients of a prototype filter developed using any of the FIR filter design methods outlined
in Chapter 7 with those ofEq. (8.78) and solving for clnJ [Jar88]. Thus, if hL p~n] denotes the coefficients
of Ute prototype lowpass filter designed for a cutoff frequency wn from Eq. (8.78) the constants c[n] are
givt~n by

dOl= hu·{O], (8.80a)


w,.
c[n] = hLp[n] I :s_ In! :5 N. (8.80b)
sin{wrn)

Then, ~he coefficients hLF[nj of tile transfonned FIR filter with a cutoff frequency W,. are given by

h[_p[O] = c[O]{~,. = (~~) hl.f'[O], (8.81a)

l:S:Inl:s:N. (8 81b)

Figure 8.38: Tunable lowpass FIR filter of length 51.

This tuning procedure has been recommended for filters with equal passband and stopband ripples. It has also been recommended that the prototype filter be designed such that its coefficients have values not too close to zero.
We illustrate the design of a tunable FIR filter in the following example.
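A hedged MATLAB sketch of the retuning rule is given below; the prototype here is designed with fir1 purely for illustration (any of the design methods of Chapter 7 may be used), and the variable names and filter specifications are assumptions.

% Illustrative sketch: retune a length-(2N+1) linear-phase lowpass FIR filter
% from cutoff wc to a new cutoff wchat using Eqs. (8.80) and (8.81)
N     = 25;                            % h[n] defined for -N <= n <= N (length 51)
wc    = 0.4*pi;                        % prototype 6-dB cutoff
wchat = 0.6*pi;                        % desired new cutoff
h  = fir1(2*N, wc/pi);                 % prototype lowpass filter (window method)
n  = -N:N;                             % symmetric index vector
c  = zeros(1,2*N+1);
c(n==0) = h(n==0)/wc;                  % Eq. (8.80a)
c(n~=0) = h(n~=0)./sin(wc*n(n~=0));    % Eq. (8.80b)
hnew = c.*sin(wchat*n);                % Eq. (8.81b) for n ~= 0
hnew(n==0) = c(n==0)*wchat;            % Eq. (8.81a) for n = 0
% fvtool(h,1,hnew,1) compares the prototype and the retuned gain responses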

For the design of tunable FIR filters with unequal passband and stopband ripples, the following modification is used. The impulse response coefficients of the tunable filter of odd length are now given by

h_LP[n] = { c[n] ω_c + d[n],                      for n = 0,
          { c[n] sin(ω_c n) + d[n] cos(ω_c n),    for 1 ≤ |n| ≤ N.    (8.82)

The constants c[n] and d[n] are determined by designing two different optimal prototype filters, substituting them in Eq. (8.82), and then solving for these constants. For example, if the passband weight W_p is greater than the stopband weight W_s, the cutoff frequencies ω_ca and ω_cb of the two filters are chosen as

ω_ca = 0.8 − 0.25/N,    ω_cb = 0.8 + 0.25/N.

On the other hand, if the passband weight W_p is less than the stopband weight W_s, the cutoff frequencies ω_ca and ω_cb of the two filters are chosen as

ω_ca = 0.2 − 0.25/N,    ω_cb = 0.2 + 0.25/N.

The two optimal filters h_a[n] and h_b[n] are then designed using the Parks-McClellan algorithm. As before, the prototype filters should be designed with nonzero coefficients.
The above modification is not recommended for the design of tunable wideband or very narrowband filters. Figure 8.39 shows the gain responses of a lowpass FIR filter of length 41 and a transition width of 0.1π with a variable 6-dB cutoff frequency for various values of passband and stopband weights. The cutoff frequencies of the filters vary from 0.2 to 0.9, as indicated in Figure 8.39.
The above methods can be directly applied to the design of a tunable highpass FIR filter from a highpass FIR prototype filter. Alternatively, the latter can be designed as the delay-complementary of a tunable Type 1 FIR lowpass filter.¹⁰
¹⁰See Section 4.8.1.

Figure 8.39: Length 41, transition bandwidth of 0.1π, and passband/stopband weights of (a) 1/1, (b) 1/10, and (c) 1/50.

To develop the methods for designing a tunable bandpass FIR filter, we observe that a filter H_BP(z) with a symmetrical bandpass magnitude response can be derived from a lowpass prototype filter H_LP(z) by applying a frequency translation to its frequency response. This process results in

H_BP(e^{jω}) = H_LP(e^{j(ω−ω_0)}) + H_LP(e^{j(ω+ω_0)}),    (8.83)

where ω_0 is the desired center frequency of the bandpass filter. The relation between the impulse responses of the bandpass and the prototype lowpass filters is given by

h_BP[n] = ( e^{−jω_0 n} + e^{jω_0 n} ) h_LP[n] = 2 cos(ω_0 n) h_LP[n].    (8.84)

If δ_p and δ_s denote the passband and stopband ripples of the lowpass prototype, the corresponding ripples of the bandpass filter are δ_p + δ_s for the passband and 2δ_s for the stopband. Equation (8.84) forms the basis for designing a tunable bandpass FIR filter with adjustable center frequency. A similar approach can be followed for the design of a tunable bandstop FIR filter, in which case the prototype is a highpass FIR filter.
It should be noted that the tunable FIR filters designed using the methods suggested above have the same hardware requirements as their prototypes. They also have linear phase if the prototype filter has linear phase.
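Continuing the earlier sketch (again with illustrative parameter values), a bandpass filter with a tunable center frequency follows from Eq. (8.84) with a single modulation step:

% Illustrative sketch: bandpass filter with tunable center frequency w0, per Eq. (8.84)
N  = 25;  n = -N:N;
h  = fir1(2*N, 0.2);               % lowpass prototype, normalized cutoff 0.2 (illustrative)
w0 = 0.5*pi;                       % desired center frequency
hbp = 2*cos(w0*n).*h;              % h_BP[n] = 2 cos(w0 n) h_LP[n]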

8.8 Function Approximation

Often there is a need to use transcendental functions and random numbers in certain DSP applications. For example, the FFT computations require the generation of the complex exponential sequences.

Figure 8.40: (a) Plot of the sine value computed using Eq. (8.85) and (b) plot of the error between the actual sine value and the approximation.

Certain digital communication systems require sinusoidal sequences. Both of these sequences can of course be generated by the second-order digital filter structures described in Section 6.11. We outline below an alternative approach to the generation of transcendental functions based on truncated polynomial expansions. These types of expansions are often used in DSP chip implementations [Mar92].

8.8.1 Trigonometric Function Approximation

The sine of a number x can be approximated using the expansion [Abr72]

sin(x) ≅ x − 0.166667x³ + 0.008333x⁵ − 0.0001984x⁷ + 0.0000027x⁹,    (8.85)

where the argument x is in radians, and its range is restricted to the first quadrant, i.e., from 0 to π/2. If x is outside this range, its sine can be computed by making use of the identities sin(−x) = −sin(x) and sin[(π/2) + x] = sin[(π/2) − x]. Figure 8.40 shows the plots of the sine approximation computed using Eq. (8.85) and the error due to the approximation.
by 1Abr721

tan- 1(x) ~ 0.999866x - 0.330299Sx 3 + O.l8014lx 5


-G.085133x
7
+ 0.0208351x 9. (8.86)

f--Igure 8.41 shows the plot;. of the arctrcngent approximation computed using Eq, (8.86) and the error
d~e to the approximation. If x _::: I, then ils arcangent can be computed by making use of the identity
tan- 1(J:) = (Jr/2)- tan- 1
(lj.~-).
Exercises MS. 12 and M8 .13 list several poJynomial approximatlons for computing these trigonometric
fu·Ktions. Various trigonometric functions can also be computed from Eq. {8.85) using trigonometric
idt~nt;tie!'>.

8.B.2 Square-Root and Logarithm Approximation


The square root of a positive number x in the range 0.5 -::: x :::_:: I can be e~aiuated using the 1nmcated
polynom1al approximation fMar92l:

../X;: 0.2075806 + 1.454895x - L3449Ix 2 + l.I068I2..t- 3


- 0_536499x 4 + O.l12l216x 5 (8.87)
s:ro Chapter 8: DSP Algorithm Implementation

/""'--, /\
/ \ I \

\ I'
/ '
\
1

\ JI
/1
'
\

I, I
0/ \.__j v
-·~~~~~--~--~---c
0 0.2 0.4 0.6 (U!
Tangefil values Tange~L valu''"
~a) (b)
Figure 8.41: (a) Plot of lhe arctangent value computOO using Eq. {8.86) and (b} plot of the error between the actual
an,!angen! value and the ~pproximation.

Figure 8.42: Plot of the error belween the actual square-root value and the approximation given by Eq. (8.87).

Figure 8.42 shows the plot of the error due to the above approximation. If x is outside the range from 0.5
to l, :t can be multiplied by a binary constant K 2 to bring tile product x' = K 1x into the desirable range,
compute ..,Jx' using Eq. (8.83), and then detennine ../X= R,t K.
Polynomial expansions have also been advanced for the approximate computation of both the logarithm
(base 10) and natural logarithm (base e) of any number x bel ween one and two. Two such expressions are
given by lBur73J, [Mar92]:

log 10 (x) ~ 0.433t)ol42(x - 1} - 0.21278385(x - 1) 2 + 0.1240692(x ~ 1) 3


- 0.05778505(x- 1)4 +0.0!3626l(x- 1) 5. {8.88)
Jog_.(x} ~ 0.9991 15(x ~ l} - 0.4899597(x - l}2 + 0.285675l(x ~ 1) 3
-0, I33<J5665(x - 1) 4 + 0,031372071 (x - 1) 5 (8,89)

For the cakulation of the logarithm of a number x that is outside the range from I to 2, the number x rnus.t
be scaled by an appropria!e factor K with known logarithm to bring the product x' = K x to this range
and then the logarithm of the product x' must be computed. From the logarithm of the product x', the
logarithm of K is subtracted to get the logarithm of x.
571

8.8.3 Random Number Generation


Random number generation is used in a number of applications. For example. if is used as a training signal
for the adaptive equalizer in high-speed modems [Mlll92l Another application is in the elimination of
limit cycles in IIR digital filter structures using the random rounding method (Section 9.1 1.4 ). A variety
of appcoaches has been proposed for the generation of uniformly distributed random numbers [Knu69j.
One rather simple method is based on the linear congruence method and is given by the recursive equation

x[n + l] = (ax[n]- f3!M. (8.90J

where the modulus M is a positive inreger. The rules for the selection of the constants a and f5 are given
in rKnu69]. The above equation generates a periodic pseudo-random sequence having a period M with
proper choice of a and p fairly independent of the seed value x[O]. The samples of the sequence generated
by Eq, (8.90) within a period are uniformly distributed integers from Oto M- J. A family of such random
sequences can be generated by choosing different seed values for each sequence. To ensure the randomne<>;:
of each sequence generated, M should be chosen as large a~ possible.
1be M-file rand can be used to geaerate random numbers and matrices with clements unifomlly
di:;tributed in !he interval (0. 1). Various versions of these functions are available as indicated below:

rar,diN). ~and\M,Nl, ::-and(si::.e(A):.


rar.di 'seed' ,N), rand!. 'seed' l

TI:!e output of rand (Nl is anN x N matrix and that of rand {~-1, N j is an M x N matrix wjth random
entries. ranO. (size ( P.. I ) generates an output of the same size as A. If rand is useri without any argu-
rrn~t. .its output is a scalar with a value that changes each time it is called. The output of rand { ' seed' )
is the current value of the seed employed by the random number generator. Finally, rand ( 'seeC' , N!
us-~ the value N for the seed.
A similar set of M-files. for the generation of random numbers and matrices with elements that are
normally (Gaussian) dislribured with zerc mean and unity variance are given by

randn (N), randn(M,N/, randn (size lA)),


randn ['seed', N), randn ( ' seed' )

The function ra:1d wru: used in Programs 2_3 and 2_4 of Example 2.14 to generate a random sequence.

8. 9 Summary
Some of the common factors in the impiementatior. of DSP algorithms, which are .independent of the type
of implementation being canied out, are discussed in this dtapter. The implementation is usually carried
out by a sequential implementation of the set of equations describing the algorithm. These equations can be
oo;ved directly from the structure realizing the algorithm. The computability condition of these equations
is derived and an algorithm for testing the computability is described. A simple algebraic technique to
verif}' the structure from the input and output samples is also outlined.
MATLAB-based software implementation of digital filtering algorithms and the discrete Fourier trans-
form (DFT) are then considered. The basic ideas behind the fast Fourier transform (FFT) used for faster
computation of the DFT samples are explained and several commonly used FFT algorithms are derived.
Various schemes forthe binary representation of the numbers and the signal variables that are employed
in the digital computen and special-purpose DSP chips are reviewed. followed by a discussion of the
algorithms used for the implementation of the addition and the multiplication operation. In certain cases,
Chapter 8: DSP Algorithm Implementation

the result of an addition of numbers may -cause an overflow of the dynamic range of the register storing
the sum. Two methods of prevenfing physi~;aUy the overflow are outlined.
Methods for the design of JIR and FIR digital filters with tunable characteristics are introduced next.
The chapter conclude."> with a discussion on the approximation of -certain functions that are needed jn a
number of applications requiring lmplementatio:~ using DSP chips. In particular, the approximation of
trigonometrk functions, square roots, logarithms, and the generation of random numbers are considered.

8.10 Problems
8.1 Develop a set of time-domain equations describing the digital filter structure of Figure PS.l in terms of the
iilput x[r.], ouiput y{n], and the intermediate variabJes w~:l.n} in a sequential order. Does lhis set describe a vahd
c:.mputational algorithm? Justify your answl!r by devclopmg a matrix representation of the digital filter ~tructure and
by eKamining the matrix F.

"n
x!n] + _y[n]

w [n]
1
+ +
-I
u,
w3 lnl w [n]
5
+ +

"' -l

w [n] w {n)
4 2
+
a,

Figure P8.1

8.2 Develop a computable set of time~domain equation:s des..----ribing the digital filter "Structure of Figure P8.1. Verify
d:e computability condition by fonning an equ1valent matrix repreo;entation and by ex.amining the matrix F.

8.3 Develop a set of time-domain. equations describing the digital filter str'Jcture of Figure P8.2 in tenns of the input
x:n), output yfnj, and the inrennediate varit~ble.\ Wt[r:J inn sequential order. Does this set of equations de'>{;ribe a
valid computational algorithm? Justify your answer by developing a matrix representatiOn of the digital filter structure
3fKl. by examining the matrix F.

8-.4 Develop the precedellCe graph of the digital filter structure of Figure P8.1 and inve:;tigate its realizabtlity If
the scructure is found to be realizable, then from the precedence graph, determine a valid computational algorithm
dt::>t.-"Tibing the structure.

8.5 Develop lhe precedence graph of the digital filter structure of Figure P8.2 and investigate its realizabJiity. If
lh~structure is found to be realizable, then from the precedence graph, determine a valid compt~tational algoritbrn
<.kscribtng the structure.
8.10. Problems 573

"o
x[n] + yf.nJ
w 1n}
1
a,
+
w 2 In]

w [n)
3
+ +
w (nj
4
-I
w 5 [nj
+
w 6 (nl

Figure P8.2

8.6 (aJ Write down the time-domain eqmnions relating the node variables Wt[nJ, yjn] and the input xln} of the
digital filter stnn.:ture of Figure P8.3. Check: fonnally the computability of the set of equations- if the equations
are ordered seque~:tially with increasing values of the node il!dice:s.
'l_b} Develop a signal flow-graph representation of !his digital fiiter structure and !hen determine its precedence
graph. From :the pnx:eden<:e graph, develop a set of computable equations deKribing the structure and show
formally thai these equations are indeed computable.

Figure P8.3

8.7 The measured impulse response samples of a causal second--ocier llR digital filter with a transfer function H (Z) =
P\1 }/1 1 - 3z-l - 5z - 2 ) are given hy

h!OJ = 2.1. hf!] = 1.1, h[2] = -3.2, ht3} = -15.l, h[4] = -29.33,.
Dewrmine the numerator P{:d of the transferfunction H(z).

8.8 The first five impulse response samples of !I causal.second-ofder HR dig1tal filter .are given by

hi:O] = 3. h[li = -2. h[2] = -6. h[J.J = 12. hf4l = -15.


Qe,_ennine the tran~fer function Hi.::).

8.9 Determine th.= tran&fer functlon of a £bird-order cau~al HR digital filter whose first ten impulse response samples
aFe gm~n by
lh[n]l={2 - 5 6 -2 -9 18 -7 -31 65 -30}.
574 Chap1er 8: OSP Algorithm Implementation

8.1.0 The first four impulse response samples of a causal third-nnkr !IR digital filter with a tran~fer function G(:: =
Pl,;:_/;(2- 0;::-1 + 8::-2 + HL_- 3 J ;o.re given by

g[OJ = 3. fflll = 7, K[2J = i:_l. J?f31 = -3.

Determme the numnator polynnm;al P ( :) of the tnm~fer function

8.11 The first ! I impulst= respon::;.o samples of a causa! IIR transfer function

PD+Pl::.
_, +
PF - _, p;:-
__ -,. '
P.:>Z.
_, + -·
l!i-;l =
l+2z 1 +2: ?+3.:- 3 +3.: 4
are gi"en hy
{h[nj} = [2 0 -5 - iO -· 10 15 ~0 !85 125 -455 - 1830}.
Determine ;L-; numerator cocfficienh {p, I.

8.12 Show that the direct computation of the ,V -poim DFT of a lengtb-N sequence require~ 4N 1 real :-!luitiphcation~
and (4N - 2H•i real aillhtiom•.

8.13 Develop an algorithm for the .;ompiex QUitlplication of two complex numbers using only three real mulliphca-
ticm; Elld five real addi1ions..

8.14 A digital oscillator working al" sampling rate of 2500 Hz can generate any one of four sinusoid.>! sigaals tJf
frequencies ISO Hz. 375 Hz, 620 Ht:. and 850 Hz. We would like to detect the tone frequency of the s1gnal being
genem1ed by computing only four o;amples of an N-pomt DFT using Goertzers method. What is the smalle~t vaiue
uf the DFT :ength N so that the fol!r IOIX" frequern:les f<~Jl a.~ do>e a~ possible to- foLr DFT bin ind~-s to make the
leakage Co adjacent bins. w; small a~ possible" The OFT length N should be the same mall four cases.

8.J5 Let Hk (::; denote the transfer function of the kth tiller used in the Goertzel's algOOthm to c:akulme lhc ,V -point
DFT Xfkl of a length-N sequen<.'t: xjnj_ Con~1der the following input sequence applied ro Ht:Cz):

:l"[n]=jl. 0, 0,. ,l. 0. o.... !.


0 Ni2

"'here N /2 1s assume-d w bt;, <.livi~1hk by k . .,._'hat 1s the output sequeoce vir. I fork = I'! What i~ the output --;eq~ncc
v(n J for k = N !2? Give a qualitative ~ketch,

8.16 Develop the Row-graph t~:r the FFT algorithm from the radi:..-2 DlT FFT algorithm of Fig;J.re 8.24 for theN = S:
o.:a.;e in which Uk input is in nornwl order rutd the outpm is in tbc hit---re-ven;ed urder.

8.17 Developilie tlow-gmph forth~ FFT algorithm from the r.!dix-2 DIT FFT algorithm of Figure K24 fot theN = ll-
ca~ i:rl WhKh hoth the input and the umput are in norms! o.--de-~.

8.Ji8 Verify Eq (8.45).

8.1:9 D~e!op the o-;truc-tural interpretation of the first stage of the radix- 3 D!T FFT algorithm.

8.20 Dcvefop th<: flow-g!aph for lhe r<~dix-3 DIT FFT algorithm for theN = 9 case in which the input is in digit·
rever~d order ;md the outpm i~ in the normal order_

8.21 Devdop t::e flow-graph for a mixed-rarEx DlT FFT algorithm fnr the N 15 c-ase m whkh the input is in
o.lipt-oeveNetl order and the output i\ in the norrnal order.
8.10. Problems 575

8.22 Fonn the transposed graph of the flow-graph of Figure 8.25. Replace each c:omplex multiplicand Wf;; in the
rrans;msed flo~·graph with 1Wfir. Show that the final flow-graph implements an inverse DFf if the input is Xtkt
8.23 A second approach w the inver<;e DFT computation u:<;ing .a OFT algorithm h illustrated in Figure PSA. Let
X[kj be theN-point DFf of a length-N sequence x[n] Define a leogth-N time-domain sequence qfnJ as

Re{qlnll = lm X(kJik=.1 • ImlqinJl = ReX(kllt=n,


w1th Q{kj denoting its N-point DFf. Sh<lW that

'
Rep:[nll I ·lm Q[kj
= ---;
N
I
k=,.
.

l
N
Re{Xlk]} ~-n-~ Re{xin]}

Im{J({kH Jm{x{n}}

Figure P8.4

S.:W A third approach tc the mven.e OFT cQfnputation using a DFT aigonthm is described below. Let X!kJ be the
N -point OFf of a length-N sequence .(In]. Define a length-N lime-domain sequence r{n] as

r(r.] = X[k]:.~:=l-")N.

with R[k] denoting its N-point DfT. Show that

_t[n] = N · R[kjl•="'.

S.:lS We wish to determine the sequenc-e y[nJ generated by a line.ar convolution of a length-8 real sequence x[n }.and
a length'S real sequence h[n!. To this end, we can foll<:m.· one of the following medrod.s.:
M-ethod# 1. Direct computation of the linear convolution.
Method# 2. Computation of the linear convolution via a single circular convolution.
M<ethod # 3. Comporaticn of the lmear convolution using radix-2 FFT algcrithms.
Detennine the least number of real mulliplkations needed in each of the above rnelhods. For the radix-2 FFI algorithm,
do not include i:t the couct multiplications by ==:-1. ±j. and w£..
3..<:6 Repeat Problem 8..25 for the computati<m of the linear convolution of a length-S sequence x!nJ with a length-6
sequeoce h[nj.

8.27 An input sequence .r{nj Qf !el'.gth 1024 is to be filtered using a linear-pha:se FIR filter h{n] of l.ength 34. Thh
fillering process involves the linear convohnion of IV!O finite-length sequences and can be computed using the overlap-
add algorithm discussed m Section 3.6.2 where the short linear convolutions are perfonned using the l.JFf-based
approach of Figure 3.14 with the DFfs impJemented by the Cooley-Tukey FFT algorithm.
(a) De!eJ"mine the appropriate power-of-2 transfonn length tlwt would result in a minimum number of multiplica-
tions and calculate the total number of multiplications that would be required.
JJ) V."hat WOtlW
1 be the total number of multiplicatiQfls if the direct coovclution method is used?
576 Chapter 8: DSP Algorithm Implementation

8.28 We pointt:d out in Eq. {3.37) !hat the vectOJ" X of OFf samples can be expressed as the product of the DFr matrix.
DN and the vector x of iaput samples, where DN is given by Eq. (3.40).
(a) Show that the 8-point DIT FFT' algorithm shown in Figure 8.18 is equivalent wexpress:ng the OFT rnatriJt as
a product of four matrices as indicated below:
(&.91)

Determine the matrices given above and show that multiplication by each matrix Vt. k = 8. 4. 2, requires at
most eight complex multplications..
{b) Since the DFT matrix DN is its own transpose, i.e., DN = 01.
another FFf algorithm is readily obtained by
fanning the transpose of the right-hand side ofEq. (8.91 ). resulting in a factorization of DN g1ven by

DN=E TyT2v4v8.
T T {&.92}

Show th&t the flow-graph representation of the above factorization is precisely the 8-point DIF FFr algorithm
of Figure 8.27.

8.:2.9 The bask idea behind the split-radi:.; FFr {SRFFf) algorithm is explored in this probiem {Duh86]. As in tl;e
ca~ M the decimation-in-frequency FFf algorithm. in the SRFFT algorithm, the even-indexed and the odd-indexed
samples of the DFT are romputed separately. For the computation of the even-iodexed samples we write, as in tl~e
DJF FFT approach.
N-1
X[2t] =L x!n)W~tn
ndl
(N/2}-1 N-1
= L x[n]W~ln- L x[nrN~tn, i = 0, I, ... ,!}- L (8.93)
n=N/2

Show that the above can be reexpressed as an (N i2}-potnt DFT m the form

(N/2)-1

X[UI= L (x!nJ+x[n+~])w,~j 2 • l=O,l,.."'q--1. (8.94)


,.=(}
For the compumtion of the odd-indexed samples, we write two different ~pressioRS, depending on whether the
frequency index k can be ex:pressedas 4e + 1 Ql'4i + 3:
(N/4)-1 (N/2)-1
X[4£ + IJ = L ;;[n]W14l+l}n + Z:::: x[nJW14f+l)n

(3NJi9-l N-l
+ L x[n]W~4i+l)n- L: x[n]W.ijl+l)n'

£=0,1, ... ,/f-l. (S.95a)


{N/4)-t (N/2)-l
X[4l + 31 = ,L x[nJWJ:!+J)n + L x[nfW~4t+3)n
n=O. n=N/4
(3Nj4)-l N-1
+ L x[nJWt't+3}n + L x[n]W~41+3fn,
n=N!l ... =lf'l/4
i=O,I, ... ,ll-t. (8.95b)
8.10. Prob~ms
577

Show that Eqs.. (8:95<1.} and (8.95b) can be rewritten as two (N /4)-point DFTs of the fonn
(N/4)-l
X[4l~ll~ L !(xlml-x[m+~])
m~

- j (x [m + IJ-] -x[m + ~])} W::JW.~i4· (S.96a)

(N/4)-1
X[4f+31~ L l(x[m]-x[m+ ~])
m=O

+J (x[m + ~] -~ [m + ¥])} w~mwfrJ4 • {8.96b)

where f = 0. l .... , -:f - I. Sketch the flow-graph of a typical butterfly in the ab<w.e algorithm for l::omputing two
even-numbered and two odd-numbered points. and show that the iiplit-radix FFr algorithm requires only two complex
mulllplil::ati()(l.i p« butterfly.

8.3G De>ieiOp the flow-graph foe the computation of an 8-poirn. DFf based on the split-J"adix fllgorithm des<:ribed in
Problem 8.29. What is the total number of real multiplkations needed to !Illplemem thi!> algorithm? How does this
numbeJ compare with that required in a radb.-2 DIP FFT algorithm? Ignore mu\tipht.:ations by ±1 and ±j.

&.31 Develop the fiow-graph f= tbe computation of a 16-point DFT based on the split-radix; <:lgorithm described m
Problem 8.29. What is the tQtal number of real multiplications needed ro implemem this algorithm? How does this
number compare with that required in a radix-2 DJF FFT algorithm? Ignore multiplications by ±I and ±j.

8w32 An alternative approach to the development of FFf algorithms is via index mapping [Coo65j, which is studied
in this problem. Conslde:" a length-N :.eque.nce x(r~J with X{kj denoting its N-point DFT where N = N] Nz. Define
the index mappings

I
O~r.J ~Nt-l.
(S.97ai
o::-::n 2 ~N2-L

0 :<:: k! ~ N1- I,
I
(a) Using the above mappings, show !hat X[kJ can be e;o;pressed as
O~k2:5N2-I.
{8.97b}

X[ki = X[N2k1 + k2)


1 1
N£: [(Nf 1+ N,r~ziW,~n-z) w!2-"t]w~':"t.
ll[=O "~""0
xtn

O~k; ~ Nt -1, 0_:::.1'1 ~ N2 - L (8.98}


As indicated above. the computation of theN-point DFT X[kl is now carried out in three steps· (I} compute
the N:t-point Dl'"Ts. Hin1 ,kzi of the set of NJ sequences x\n 1 + Npiz] of length Nz, (2}multipl.y these DFfs
with the twiddle fa..-tors W~Y<J to fmm B[nt. kzJ, and (3) compute the N2·point DFrs X!N2 k 1 + kzJ of the
se1 of Nz sequences. HinJ. kzl of length N1.
(b) Show that for N 1 = 2 and N2 :::.:: N j2, the above decomposition .scheme Jeads fo the DIT FFT .algorithm anC
fm- Nt = N /2 and Nz = 2, the above decomposition scbemt:: teads to the DIF FFT algorithm.
(l::} If 'R (N) denotes the total number of multiplications needed to compute anN -point DFT, show that for the DFT
-computation algorithm based on the aOOve decomposition scheme ou:lined above,
R(N) = Nt R(1''2J...,.. N2RUV:) + N
~ N [A~
"'I
1
R{Nj) + - -R.(Nzl + 1] .
N2
(8.99)
578 Chapter 8: DSP Algorithm Implementation

·:d) Determine the rota! number of multiplications needed for the FFT algorithm based on the above decomposition
scheme for N = 2~.

8..33 Develop the index mapping for implementing an N-poin.t DFT X{k] of a lengrh-N seqt.~ence xlnJ usbg the
Cooley-Tukey FFf algoritllm for {a) N = 12, (b) N = 15. (c} N = 21, and (d) N = 35.

8 ..34 (a} The twiddle facto~ needed in the OFT computation scheme outlined in Problem 8.32 can be eliminated,
resulting in a more computationally efficient FIT algorithm fOT the case when the facton: N1 and N2 are
relatively prime by using the following indel!: mappings [Bur77]:

iJCSnt csN 1 - l ,
l OCSn2CSNJ-l,
0 < kt < N, - 1.
(8.100a.l

I O:::;k 2 CS:N2-L
(8.100b)

Show that the twiddle factor:; are eliminated totally if the constants A, B. C, and D m the above mappings are
chosen sati~fying the following conditions:

(AC)N = l'h \BD)N = N1, and (A DiN= (BC)N = 0.

•:h) Show that the following set of constants satisfy tl:e above condition
1
A= N2. B = Nt. C = N2\N2 lN:. and D = N, (N1- 1}N;•

where <NillNz lknotes the multiplicative inverse of N1 evaluated modulo Nz. Based Qfl this dlotce of
~;:onstants. develop the
algorithm for the computation of X[kl. FFf compuration sc:bemes~ed on this approach
are called the prime factor algorithms (Bur& 1]. [Knl77).

&.35 Developtl:.e index mapping for implementing an N·pomt DFT X[k] of alength-N sequencex[n} using theprlme
factoralgorithmfor{a)N= 12,(b)N =
l:S, (c)N =21,and(d) N = 35.

8.36 Consider the computation of a 12·point DFT X[!l:]of a length-tV sequence x[n.]. The index mapping to be used
car, be either

wh.~re A. B, C. and Dare appropriately chosen constants. Use of the mapping g{n:J, n1] = rl(An t + Bn.2)12l results
;.
2 3
G[k 1• k 1 ] = L ,L: g[nt, nzlwf'*2 Hr~lkt, (&.101)

(8.102}

If. on the other hand. the index mapping used is h[I''J. 'Q1 = x[{Cn 1 + Dn2) 111. we obtain
2 3
H[k.J,k:.d= L L hint.n2]w;2k~w~~k~. (8.103)
nt=C ...:z=O
Y{\A.lq + Bk.2J 12 ] = Hfk1, k2 ]. (8. 104)
wt,at is the relation between XIkJ and Y{k ]?
8.1 o. Problems 579

8 ..J7 De¥elop the flow-graph for the cmnput&tion of a 10-point DFf based on the prime factor algorithm,

8-:18 Develop the fl.ow-graph for the computation of a 15-poim Dl-1 based on the prime faL'lor algorithm.

s_;~ Develop a scheme to .:::ompute the 3072-point DFT of a sequence of length 3(172 using 512-point FFr modules
and camplell: multiplications and additions. Show the ~heme in block diagram fonn. How many FFf modules and
complex multiplications and addilinns are needed for the overall computation?

8.40 A 1024-poinl OFT of a length-1000 set;uence .t(nT is lobe computed. How many zero-v.alued samples should
be appended w x[n] prior to the computation of the DFI"? What are the total number of compleK multiplications and
additions needed fOC" the direet evaluation of all DFT sampies? What are the total number of c~ex multiplications
and additions needed if a Cooley-Tukey type FFT is used to compute the DFf samples?

8..41 Let H(z} = r;;:~l h[n]z~" and X(z} = r;;:~J xln l:Cn be two real polynomials of degree (N - l). Their
pmdud Yf:;:J fsthen a polyn01mal of deg~ (2N- 2) and a direct evaluation of Y(z) re(jUires N 2 mult:iplicatioro;; and
N + I additions. N~e also that the coefficir<nts y[n] of Y(z) ar-e the Sl'line a5 that obrained by a linear ronvolrnion
of the two length-N sequences h[n] and x[nj. The number of multiplications in computing the product H{z)X(<.)
cart be reduced but with an increase in the number of additions by making use of the Cook-Toom .algorithm [Aga77],
[Knu69J which is studied in this problem.
Let~.\:. k = 0. l. ... , 2N - I. be 2,V- l distinct point>. in the z-plane. Then f'fzil = H(<.J:)X(ZJ:) represent lhe
(2N ~ !}-point NDFf of y[n]_ and fmm these 2N - I sampfes we can uniquely determine Y(z) using W Lagrange
interpolation formula .as discussed in Problem 3.11 0. By -choo~ing the values of Zk .appropriately, hzt J can be
enluated with ilnly 2N - I multiplications i:f we ignore multipliG"ltions {or divisio11si by a power-of-2 integer that call
be implementeC using simple shifts in a binary representation.
ta} We first develop the Cook-Toom algorithm for N = 2. Here.

Using the Lagrange interpolation fonnula express Y(z) in tenns of its 3-point NDFT samples eva1uated .at
zo = -I. Zl = oc, and 2:2 = + l. Develop the expression tOr yfOJ, y[l], and y[2] in tem1s of the parameters
X(zkJ and H(z,t) and show that the computations of {y[nl} require only .a total of three multiplications.
Detenniue the total number of additions required by this .algorithm. Note that in many applications hjnJ
represents a fixed FIR filter and, hence, the multiplications by integers needed to evaluate y[nJ can be included
in the coriStants H (z;,J to eliminate them from future computations.
(b} Develop the Cook-Toom algorithm foe the linear convolution of two length-3 sequences.

8.~12 Let H{z} = h!OJ +h[l]z~ 1 and X(z) =:.:[OJ+ x[I]C 1. Show that Y(z) = H(z)X{z) = yfO) + y[lj.;:- 1 +
vl:~jz- 2 can be written as [Jen9l]

Y (z) = h[O]:.:[O] + [(h(OJ + h l I J)(x !OJ + xfl)}


- (h[OJ.t[OJ + h[ llt [I JJ1 z-I + h[l]xil Jz -l.

The:evaluati~m of the product of the two first-order real polynomials thus can be carried out using three mull!pli.catWn:s
insread of four as required in Ihe direcl producl Equivalently. a Jjnear convolution of two kngrh-2 sequences can thus
be implemented usi-ng only three multiplications.

8..43 Develop .a multistage algoritl'tJTt to compute the linear convolution of two length-N real sequences based on
the scJteme outlined in Problem 8.42 for the case when li is a power of 2 [Jen91J. What is the !east number of
muhiplicatiOlls required to compute the convolution using the mult:stage algorithm?
SllO Chapter 8: OSP Algorithm Implementation

BA4 Let a 32-:,it register be used to represent a ftoating-poinl number with E bits assigned for the eJ\pcaent and M
bi::s plus a sign bit f~ the mantisw._ Detennme the- approximate dynamic range of this floating-pain{ representation
for the following pain. of bit assignments fru the exJKHlem and the mantis~ by evaluating the values of the smallest
and the largest numbers that can be represented by the floating-point representatiol'.: (a) E = 6 and M =
25. (b)
E = 7 and M = 24. and (c) E = 8 and M = 23, Determine the dynamic range of a 32-bit fixed-point repre~tation
of a l>igned integer. Show l~tat the floating-point representation provides a larger dynamic range than the fillfd-point
re:~>entation.

8.:15 Show that the range of a 32-bi[ floating-point number in the IEEE standa..-d is from 1.18 x w- 38 to3.4 x to33 .
8.:16 Show that the decimal equivalent of a positive OF a negative binary fTactlon given by saa-ta-2 ···a-h in
two's-complernent fonn is-s+ Lf'=t a_i2- 1.
8.47 Show tha1 the decimal equivalent of a positive or a negative binary fraction given by s.:.n-ra-2 ·· ·a-b m
oces' -complement funn is -s(l - 2-b) + zj'= 1 <Liz-i.

SAS Determine the 9-bit sign-magnitude, ones' -complement. and two's-complementrepresentlitionsof doe following
negative decimal fractions:
(a) -0.625JO, {b} -0.71343751{). {c) -0.36328125w, (d) -0.94921875w.

S.:f9 Determine the 7-bit offset binary representations of the following decimal numbers:
(aJ 0.625w, (b) -0.625to, (c) 0.359375m, (d) -0.359375]0. (e) 0.90625to. (f) -0.906-25to-

8.50 Develop the signed-digit {SD} represem.ation of the following binary nwnbers:
{a)Ouli!OliOi. (b}Oa01111l01. {-c)Oal010111L

&.51 Develop the hexadecim.al representation of the following binary numbers::


(al l!OiOlJOl iOOOf II, {b}OlOl J 11110101001. (c) IOllOlOOOOIOlllO.

8 ..>2 Perform lhe following binary additions' of positive binary rrat:tions and comment oo your results:
(af Ot. 10101 + o...._ 01111. (b) Ot.. 01011 + Ot.t.IOOOI.

8.$3 Compute the following differences of positive bmary fractions by performing binary additions of a positive
fu,ctlun and a negative number rqnesented in two's-crnnplement fonn:
(a:'Ol!..IOlOl -O.t<.OIUI,(b)OaHXXJI-O,..,_OIOII.

!l.S4 Repeat Problem 8.53 by representing the negative number Jn ones" -complement form.

8.S5 Consider the binary addition of the following three numbers represented .in two"s-complement form with five
bits: 111 = 0.6875Jc. tJ2 = 0.8125 w. and 113 = -0.56!510· The addition is carried out in two steps: first we form the
su.n t;! + '12 and then we form the sum (q 1 + rn! + 113. Since the magnitudes of '11 and 112 are tJoth greater than 0.5,
their sum will lead to overflow indicated by a 1 in the s.ign bit of lhe sum. Ignore this overflow by keeping all bits in
tht' partial sum and add I]J _ Show that the final sum is correct in spite of the over"fio\<,.· generated by the lim addition.

8.!i6 Develop- the pwducls of the following binary fractions. where the negative numbers are 1D two's-complement
fotm:
{a)iO_a.lllOl) X (I6,J0lil),(b)(l 6 10!Ql) X (Ol!..lQllt).

8 ..5:7 Repeat Problem 8.56 with the negative numbers. considered to be in ones" -complement fonn.
8. 11 . MATLAB Exercises 581

&58 The Taylor structure of Figure P8.5(a) developed for the realization of a Type llinear-phase FIR transfer fun .."tioo
in :Prol)lem 6.J8 has been proposed for designing tunable FIR filters (Cro76b], [Opp76]. Fro.m Eq. (6.131} obx:~
that !he zero-phase frequency response. also called the amplitude response.. of a Type l FIR transfer func(!on of kngth
2M+ i is given by

i/{w) = }
. -''""
M
a[n] (co.>w)n. {8.105)

Let W denote the angular frequency variable of th~ transformed FIR filter fi(z). Show that a lowpass-to-lowpa.<;s
tranl>fonnation can be achie\>ed by substituting

cosw =a+ {JcosW (8.106)

in Eq. (8.105). Show that this tran.sfonnation can be implemented by replacing each block with a transf.er fullCtion
(I + ,;:- 2 )/2 in Figure P8.5 by a block with a tranSfer functiou
az- 1 + ~(! +z- 2 ). (8.107)

U::t we and We denote. respectively. Ole cutoff frequency of the prototype filter and the deSl.red cutuff freq•ency of the
trunsiormed filter. Show that if We < we. it is. convenient to choor.e /3 = l - a. with 0 :-:;: a < L Sketch the mapping
fmmcoswc locosW., for this case. On the other hand, if We > We. show that it is convenient to choose fi = I + Q,
Wlth - I < « -:5c 0. Sketeh the mapping from COS We to cos We for thu .secOfld case.

o[M- lj

+ +
(a)

(b)

Figure P8.5

8.11 MATLAa Exercises


M 8.1 Using Program 8 _I determine the first 30 samples of the impulse response coefficients of the fifth-order elliptic
lowpass filter developed in Example 7.21.

M 8.2 U;iog Program 8_1 determine the first 35 samples of the impulse response coefficients of the fourth-order Type
1 Chebystlev highpass filter developed in Example 722.

M n.J Using Program 8_1 determine the first 30 samples of the impulse response coefficients of the eighth-order
Butterworth bandpass filter developed in EJ~;ample 7 .23.
582 Chapter 8: DSP Algorithm Implementation

M 8.4 Modify Program 8_2 lo denwnstrate filtering of a sum of two sinusrndal sequences by an arbitrary UR causal
di~:ilalfilter. Tk input data to the modified program should be tfle angular frequencies of the sin!,lsoidal 5eqt.u;:rn::e..<:,
and tt.e numef"3tor and denominator coefficients of the transfa function of the HR digital filter. Apply the modified
program to filter a sum of ~"O ">inusoidal seq~:ences of angular frequencies 0.3.n and 0.6.1! by the filter developed in
E)( ample 7.21 and verify its low pass filtering property.

M 8.5 Apply the modified program developed in Problem MBA to filter a sum of two sinusuidal sequeoces of angular
frequencies 0.3Jt" and 0.6n by tile filter developed in Ex.ample 7.22 and verify Irs highpass filtering property.

M8.6 Modify Program 8_3 to demonstrate the filtering of a sum of two sinusoidal sequence~ by an arbitrary IIR
causal digital filter lmplemented in a ~;:a;scade form. The individual sections in the cascade are either second-order
or fiflit-onier with real coefficients. The input data to the modified program should be the angular frequencies of the
$inusoidal sequences. rfle numemtor and denominator coefficient~ of the transfer function of tile individual sections in
thf cascade. Apply the rncdified program to filter a sum of two sinusoidal sequences of angular frequencle!i 0.3n and
0.6n- lry the filter developed in Example 7.21 and verify its lo'.'.•pass filtering property. The output plot generated by
the modified Pr-ogram 8_3 should be identicallo that generated in Problem M8.4.

M 8.7 Apply !he modified prognun developed in Problem M&.6 to filter a sum of two sinusoidal sequences of angular
frequencies 0.3Jr and 0.6:r by the filter developed in Example 7.22 and verify its hlghp~'> filtering property. The
oul.put plot generated by the modified Program 8_3 should be identical to that generated in Problem MH..S.

M 8.8 Wnte a MATLAB program using the function direc:.2 of Section 8.2.ltosiroolatethedirecr form lJ strocture
and demonstrate the filtering of input sequences. Apply this program to filter a sum of two sinusoidal sequence.-. of
an1~lar frequencies. 0.3:rr and 0.6n- by the filter developed in Example 7.21 and verif)· its lowpass filtering property.
Th.~ output plot generated by your new program should be identical to that generated in Problem M8.4.

M 8.9 Using theM-file function gff.:. described in Section 8.3.1 for computing a single DFT sample by Goertzel's
algorithm, write a MATLAB program t-o compute the N-polnt DFfof an arbitrary iengeh-N sequence and compare the
OFT £amples generated with those <lbtained using the M-file function f. ft. Verify the OFf computation -of sevem1
set;',uences of lengths N = 8, 12. and 16.

M 8.10 Write a MATLAB program to verify the plots given in Figure 8.38.

M S.ll Write aMATLAB program to verify the plots given in Figure 8.39.

M l-12 A polyr.omial expanskln suggeSted fur the approximation of the .sine of a numbe< xis given by !Mar9Z]:

sin(nx) =: 3.l40625x + 0.02026367x 2 - 5.3251900: 3


+0.5446778x 4 + l.800293x 5 • (8.108)

wh~re x is normalized by 1t", and its range is. restricted Co the first quadrant given by 0 < x < 0.5. (Note: 0.5 is the
.normalized value of 7r/2.) Usl11g MATLAB compute and plot the vJ.lues of sin(;rx j given by Eq. (8.108) and the err<Jl"
dm; to the above approximation. Compare this approximation with that given by Eq. (K85).

:M ll.l3 A polynomial expan..,ion suggested for the approximation of the an;tangent of a number x in the range
-1 ::; x::; lis given by [Mar92]

tan- 1(x/rr} ~ 0.318253x + 0.003314x 1 - O.l30908x 3


(8.109)
Using MA.TLAB compute and plot the .,.·alues oftan- 1(x) given by Eq. (8.109}and the error due to the above approx-
iimtiorl. Compare ttris appmximation with that given by Eq. (8.86.1.
Analysis of Finite
Wordlength Effects

So far, we have assumed thar we are dealing with di.screle-time systems characterized by linear difference
equations whh constant coefficients, where both rhe coefficients and the sigp.al variables have infinite
precision taking any value be-lween -oo and oo. However, when implemented in either software form
ort il genentl-purpo.<;e computer or in special-puf?Ose hardware form, the system parameters along with
lhe signal variables can take only discrete ·values within a specified range since the registers of the digital
machine where they are stored are of finite length. The discretization process results in nonlinear difference
equaLions characterizing the discrete-time systems. These nonlinear equations, in principle, are almost
impossible to analyze and deal with exactly. Fortunately, if the quantization amounts are small compared
to the values of tile signal variables and filter constants, a simpler approximate theory based on a statistical
model can be applied, and it is possible to derive the effects of discretization and develop results that can
b<!- verified experimentally.
To illustrate the various sources of errors arising from the discretization process in the implementation
oF a digital filter. consider for simplicity. the first-order IIR digital filter of Figure 9.1 riefined by the linear
c.mstar.t coefficient difference equation
y{n] = ay[n- I]+ x[nJ, (9.1)

v.-here y[n] and x{nJ are the output and the input signal variables, respectively. The corresponding transfer
function describing the above digital filter is given by
1
H~z) = ----="
71 ~
z-a
(9.2)
"'
When implemented on a digiral machine, the filter coefficient a can assume only certain discrete vaJues
& and, .in general. can only approximate the original design value of a. A1; a result, rhe actual transfer
function implemented is given by
(9.3)
z-a
which may be-different from the desired transfer fu!lCtion H (z) ofEq. (9.2). Therefore, the actual frequency
response may be quite different from the desired frequency response. This coefficient quanlization problem
is similar to the sensitivity problem encountered in analog filter implementarion.

x[n] :rL....~-----<---$~-~~· )in]


a<J
FiguR 9.1: A first-order HR digital filtcL

583
584 Chapter 9: Analysis of Finite WOfdlength EffeCts

If we assume that the input sequence x!n J has been obtamed by sampling an analog signal x_.(t), it is
discretized by the AID converter being employed to convert the output of the sample-and-hold to digital
samples. If we represent the output oftbe AID convener as _ifn], then the acma1 inpm to the digital filter
of Figure 9.1 is. given by
X[n] = x[n] + e{n]. {9.4_}

where efn] ls the AJD conversion error generated by the input quantization process"
The quantization of arithmetic operations leads to another source of errors. In the case of our simple
digital filter of Eq. (9.1), the output -of the multiplier v[n] gene:rated by multiplying the signal y[n - 1]
with ct.
vl_n] =ay(n -I]. (9.5)

is quantized tc fit the register containing the product. The quan1ized signal V(n] can be represented as

V[n] = v[nl + eo:[n]. (9.6)

where eo:[nl is the error sequence generated by the product quantization process. The properties of this
type of round-off error are somewhat similar to tbose of the AID conversion error.
In addition to the above SQt.i.rces of errors, another type of error oc~urs in digital filters due to the
nonlinearity caused by the quantization of arithmetic operations. These errors manifest themselves in
th{: form of oscillations. called limit cycles, at the output of the filter, usually in the absence of input or
sometimes in tbe pre.'>ence of constant input signals or sinusoidal input signals. In this chapter, we analyze
the effects of the above sources of quantization errors and then describe structures that are less sensitive
to the~ effects.

9.1 The Quantization Process and Errors


There are two basic types of the binary representations of data, fixed-point and ftoating-point formats,
as described in Section 8.4. In each of these format~, a negative number can be represented in OR(; of
three different forms. The arithmetic operations involved in digital signa) processing are the addition
(subtraction) and the multiplication operations, discussed in Section 8.5. Various problems can arise in the
digital implementation of the arithmetic operations involving the binary data due to the finite word length
limitations of the register5 storing the numbers and the results of the arithmetic operations. For example.
in fixed-point arithmetic, as demonstrated in Example 8-20, the product of two b~hit numbers is 2b bits
long, which has to be quantized to b bits to fit the prescribed wordfength of the registers. Moreover,
in llixed-point arithmetic. the addition opera1ion can result in a sum exceeding the register wordlengtb.
causing an overflow as illustrated in Example 8.16. On the other hand. there is essentially no overflow in a
Boating-poim addition. However, the results of botb addltion and multiplication may have to be quantized
to fit the prescribed wordlength of lhe registers.
An analysis of the various quantization effects on the performance of a digital filter in practice depends
on whether the numbers are in fixed-point or floating-point fonnat, the type of representation for the
negative numbers being used. the quantization method being employed to quantize the data. and the digital
filter structure being used for tmpiementation. Since the number -of all possible combinations of the type of
arithmetic. type of quantization method, and digital filter structure {of which there are literally thousands)
is vel')' large, we consider in this chapter analysis of quantization effects in some selected practical cases.
However. the analysis presented can be easily extended to other cases.
We now describe the three different types of quantization that can he employed. As indicated in Section
8.4., it is a common praclice in digital signal processing appli~ations, to represent data in a digital machine
either as a fixed-point fraction or as a floating-point binary number with the mantissa as a binary fractiorL
9.2. Quantization of Fixed-Point Nvmbers

2 "

•••

Figure 9.2: A general (b + l)-b1l fixed-point fraction.

Figure !1.3· The quannwtion process model.

We assume the available wordlength is (b +I) t>its with the most significant bit (MSBj representing rhe
:;ign of the number. Consider first the data to be a (b + 1)-bit fixed-poim fraction wirh the binary point just
~:o the right of the sign bit, as indicated in Figure 9.2. The. l>mallest pos.ltive number that can be represented
:n this format wiU have a least significant bit (l.SB) of I, with the remaining bits being all O's.. Its decimal
equivalent is 2-b. Numbers represented with (t + 1) bits are thus quantized in steps of 2-i>. called the
quantization step or the width af qu.antiz.ari.on.
Before quantization, the wordlength ls much larger than that indicated above. Assume that the original
data x is represented as .a (jJ + I)-bit fraction with fJ > > b. To convert it mto a (b + 1}-bit fraction, to
be denoted as Q(x ). we can employ eitl":er truncation or rounding. In either case. the quantization procb;S
•:an be modeled as shown in Figure 9.3. Since the representation of a positive binary fraction is the same
independent of rhe format being used to represent the negative binary fraction, the effect of quantization
of a positive fraction remains unchanged. Howe¥er. the effect on negative fractions is nm the same for the
1hree different types of representations.

9.2 Quantization of Fixed-Point Numbers


To~runcate a fixed-point number from (/3 + l) bits to (b + i) bits. we simply discard the least s.igni:ficant
</3 - b} bits. as indicaied in Figure 9.4. Let Fr den«e the truncation error defined by
Et =Q(x)-x. {9. 7}

For a positive number x, the magnitude of the nu:nber Q(x) obtained aftertnmcation i& less than or equal
to the magnitude of x. Therefore, E 1 ::S 0 for a positive x. The error £ 1 is equal to zero if all bits being
discarded areO's and is largest if all bits beingrliS<"arded are l's. In the latter c.ase, the decimal equivalent of
the portion being discarded is equal to z-b - z-P. Hence !he range of the error £ 1 in the case of truncation
of<! positive number..:: is given by
(9.8)
For a negative number x, each one of the three differenl representations needs to be examined indi-
'liduaHy. For a negal.!ve fraction in sign-magnitude fonn. the magnitude of the truncated number Q(x)
is smaller than that of the unquantiz.ed negative number .x. Thus, i.t follows from the definition of the
quantization error Er given hy Eq. (9.7) that here

(9.9)
586 Chapter 9: Analysis of Finite Word!ength Effects

•••

To be discarded
•••

FJgUre 9.4: U:ustration of the truncation operation.

F<lr a negative fraction x in ones' -complement form 1 6 a_ 1 a-2 . .. a_p, the numerical value is -(1 -
2-'~) + Lf=l a_;l-i. The numerical va.lueofits quantized version Q(x} is thus -(l-2-.b)+ Ef=l a_,2-;
and hence, the error Er is given by

' fo
E! = Q{.r) -x = -(1- z- 0 ) + La-,2-' + (1- 2-/5)- .La-;2-i
i=l i=l

= (2-b- Tfi)- L' a-;2-i. (9.10)


i=b+l

Thl~ truncation error for this representation is always positive and. has a range
0:5 e 1 :5 z-b- 2-P. (9.11)

Now. consider a negative fraction x given in the two's-complement fonnat ltia-J0-1 .. . a-fJ. Its
numerical value is given by (-I + Lf=l a_,2_1.}. The representation of Q(x), obtained after truncating
x, is given by IMLIG-2 ... G-b. with a numerical value ( -1 7 I:f=l a_i2-'). Therefore,
p
=- L a_;2-i. (9.12)
i=h+l

Tht~ truncation error is here always negative and has a range


-(2-b- z-P) ~ Er :50. (9.13)

In the case of rounding, the number is quantized to the nearest quantization leveL We assume that
a number exactly halfway between two quantization levels :is rounded up to the nearest higher leveL
Th~refore, if the bit a-(b+ll is 0, rounding is equivalent to truncation, and if this bit is 1, then 1 is added to
tbe LSB position of the truncated number. It should be noted that the rounding errors, does not depend on
the format being used to represent the negative fraction since the operation is solely based on the magnitude
of tbe number. To determine the range of E,, we observe that the quantization step after rounding has a
value 2-b. The maximum rounding error l!r therefore has a magnitude (2-b)/2. As .a result, the range of
£.~ is given by
(9.14)
In practice, /3 >> b. For example, the wordlength of a product is typically twice that of the numbers
being multiplied. Hence, we can set 2-P;;:::: 0 in the inequalities ofEqs. (9.9). (9.11}, (9.13). and (9.14).
6'.3. Quantization of Floating-Point Numbers 587

a,;(x)
C)(x)

I
l
I'
(a) (b)

Ci(x)
I
I

d
X

(c)

Figure 9.5: Input-output relationships of the quantizer: (~) rounding, (b) two's complemeut truncation, and (c)
.:me>· -complement and sign-magnitude truncation. He:e .S = z~O.

leading to the simpler inequalities shown in Table 9.1, where we have set theqwmtizati.on step as.C = 2~;,.
I' lots of the input-output charoct-eristics of the quantizer for the three different representations and the two
different quantization methods are sho'hn in Figure 9.5.

9,3 Quantization of Floating-Point Numbers


ln the case of floating-point numbers, quantization is carried out only on the mantissa. As a result, here it
is more rele"<ant to consider the reiati·ve error caused by the quantization process. To this end, we define
the relative error c: in terms of the nume;ical values of quantized floating-point number Q(x) = 2£ Q{M)
Itnd the unquantized number x = 2£ Mas
Q(x)- x Q(M) -M
t:= = (9,15)
X M
It can be shown (Problem 9.1) that the range of the relative errors for the different representations of a
Uooting-poir.t binary number are as indicated in Table 9.2, assuming 2~P << 2~b.
5!!8 Chapter 9: Analysts of Finite Wordlength Effects

Table 9.1· Range of quantiution error.


• Range of ern.;r
jTypeof • I"< umber
Q(x)- x
~-~
uantization representation
Positive number
IT runcation
Two's-complement





-0 < Er ::5 0

I negative number

Sign-magnitude

rT run cation
neg~>tivc number
0 :::. E:l < 5 I
,~ncefgoao''o'i.ei."c"o'"o=b="'c_ ~------------1
Ones'-complement

[_ ___________ __
: All positive
LRounding i and negative
I numbers
Note: 8 = z-h.

.
"Thble 9 2· Range of relative erroc---
c - (Q(x)- x)fx

r Type of Number •
'' quantizatioo representation Range of relative error
'' • 2£ <Et 50, x>O
'''
Truncation Two's-cOrnplement •

• 0 < &; <U. x<O
' Sign-magnitude '
Truncation -'lli < s, ::50
Ones'-complement
Rounding All numbers 8< f:, <8
Note: 0 = z-b.

A discul>Siall of the analysis of quantization effects of digital filters implemented using floating-point
arithmetic is beyond the scope of this text. Interested readers are referred to the pablicatians listed at the end
of this book [Kan71], [Liu69], [Opp75), !San67J, [Wei69b], fWei69aj. We consider here the fixed-point
implementation case.

9.4 Analysis of Coefficient Quantization Effects


The: effect of multiplier coefficienl quantization on digital filters is similar to that observed in analog
filters. The transfer function Hen of the digital tifter implemented in hardware or software form with
quantized coefficients is different from the desired transfer function H {z). The main effect of the coefficient
quantiT,..ation is therefore on the poles and zeros that move to different locations from the original des1red
loc:1tionf>. As a re;;nlt, the actual frequency respo:ase if (,ei"") is different from the desired frequency
respo115e H(ej"') and may not be acceptable to t_he user. Moreover, the poles may move outside the unit
circle c-ausing the implemented digital filter to become unstable even though the original transfer function
with unquantized coefficients is stable, as i.llustra!ed earlier in Example 4.6. Howeve.-, as demonstrated
·:J.4. Analysis of Coefficient Quantization Effects 589

in Example 4.7, the same transfer funcoon when ·!mplem..:nted in a cascade fonn remained stable after
coefficient quantization.

9.4.1 Anaiys~s Using MATLA8


Before we study lhe effect of the coefficient quantization on the perfonnance of a digital filter analytically,
i11~ instructive 10 investigate the effect on" computer using MATLAB.
A"' MATLAB uHes decimal numbers and arithmetic, to study the quantization effects on the digi;al
tllter 1mptemenled using binary numbers and arithmetic. we need to develop the decimal equivaJents of the
quantized representations. of bmary num~ers and signah. The latter can be quantized using either truncation
or rounding. We provide below two M-fiJes. a2dT and a2C.R.. which develop the decimal equivalent beq
of the bimll)' representation of a vector d of decimal numbers with N hils for the magnitude part by
lruncation and rounding, rcl.pectively.

iuncc::i,:;n beq - a2dr(d.n•


't BEO = A2D?i2, til} ]tmera:::es the decimal
% -:'qui valent. bcq o::' t.he bi::10.ry l'epresenta.tion
% o!:: a dec:r-l.a::. number D w·ith N bits for the
% :J.aqr:: t.c.tde pa~-t cbca:i ned ~Y trJnca-::-.ion

'
:<t = :.;dl = abs{d);
..:~:ile fix\0.1; > 0
dl = abs:clJ/',2-m};
m m+l;
e:-Id
n.=oq :i.x (d1 ""2 ~r.J;
neq sig;,'d).*beq."'2~1m·n-l);

function beq = a2d:<. (d, n)


% BEQ ~ F-.2DR (9, Nl generates t.to.e decirr.al
% eqLi-valer.::-. beq of the binary representa':ion
% o.: a dec.irr.al n~_;nber D with N bits for t!'le
% magn:.tude part ODtC.iEed by roc;.Edi:lg
%
,.,. = :.; dl = abs!d;;
vih5le fix(dli > (1
dl ~ abs(d}i\2-m);
:T ~ E!+J.;
ercd
neq [~x\dl*?.~r.~.S;;
heq sign!d,i.*bcc;:.'*2~{r.:-:·>l);

To illustrate the effect of the coetlicient quantization on the frequency response and the pole-zero
locatwns of .a digital filter we need to evaluate these characteristiL-s with both infinite :md finite precision
fi)f the filter coefficient<;. As MATLAB t>ses double-precision decimal numbers and arithmetk the filter
c~fficients and signals generated w.ing MATL<\!l can be considered to be of infinite precision for aH
practical purposes. To evaluate the effect of quantized billlii)' representation, we can use either lhe M-file
a:2d:' or theM-file a2dR giwn above tu develop the decimal equivalent of d:e quantized binary numbers
an.d signals.
590 Chapter 9: Analysis ol ~inite Wordlength Effects

VvC 5zst illustrate tbe effect of coefficient quailtization of an llR di.gital filter implemented in direct
form usmg Program 9.£ given below. In it present form, the program evaluates the frequency response of a
fifth-order elliptic fowpass digital filter with a cutoff at 0.4rr, a passband ripple of 0.4 dB, and a minimum
stopbanC: attenuation of 50 dB. With simple modifications, the program can be used to study the effect on
other type;; nf filters with different specifications. For truncating the transfer function coefficients, this
program uses rhe M-file a2dT, In addition, it uses theM-tile plotzp, which is the w.me as !heM-file
:.cp 1a ne in the Signal Proce.vsing ToolboxofMATLAB except that here the poles are shown with the symbol
+ and the zcms are shown with* in the fOle-zero plot. 1
t ?rogr2m 9_1
~ Coeff:icie;-It. Quantization Effects or: the
% Frequency Eesponse of a Cirect Form IIE Filter
%

~;::;..,=!] ~ ellip\5,0.4.,50,0.4);
(h,w] ~ fr·eqz(b,a,S:2); g = 2G*logl0(abs{h)):
bq = a2d'l'(b,Sl; aq = a2dT\a,5);
[hq,w: = f:>::-ec;:z(bq,aq,51:2l; gq = 2C*1og10(a:0s(hq));
p!_o~- (vi/pi, g, ' b ' , w/pi, gq, 'r·; '} ;griC
ax~o-:;( 10 !:_ -80 5]);
xla.be} (' \omega/\pi' l ;yla:Oel \ 'Gair:, dB');
t:it:.let'orig::.na:'..- so:___id li..:1e, quan'c...ized- dashed l'...ne');
pause
/plar.e(b,a);
v::.otzp (bq, aq);

F;gure 9 .6(a) and (b) shows the gain response of the ide-al filter with infinite precision coefficients (shown
\vith a wlid line) and the gain response obrained when the transfer function coeffil."ients are truncated to
5-bit length {s::towo with u dashed line). It can be seen from this figure that the effect of the coefficient
qu.mhzation i1. more severe around the bandedges with a higher passband ripple and a smaller transition
band. The mi:1imum stopb-and anenuation ha.» also become smaller Moreover, the transmission zeros
cJc.sest to the stopband edge have moved closer to the passb.md edge.
Figure 9.6(c) shows the locations of the poles and zeros of the original elliptic Iowpass filter transfer
function with unquantizerl coefficients and of the transfer function of the eiliptic filteT implemented with
quantized coefficients. As can be- seen fr-om this p;ot. coefficient quantization can carn;e substantial dis-
placement of the poles and zeros !Tom their desired nominal locations. In this example, the zero dosest £O
the pole has moved the farthest from its original location and has moved closer to the n~ location of its
nearest pole, \\hich is now much doser to the unit circle.
It ; ~ of interest to compare the perfomance of the direct form realization uf an HR transfer function
with that of a cascade realization when implemented with quantized coefficients. Program 9 _2 given below
can be used to evaluate the effect of the q:.~antization of the transfer function coefficients of each section
of a ca;,cade form realization of the above elliptic lowpass filter. However, this program can be easily
modified to study the effect on other types of filters with different specifications.

~Program 9_?
Coef!:.:icient Q:.Janl::_zat-io:-J. Effects on the
f:
% Frequency Response of a Cascade Form l:IR l'ilcer
%
1The mcdirlcation to the function zp l.a.o.e ,,., with permissionfmm The Matln>.'<l£4 loc,. NaJ;.;:i. MA.
9.4. Analysis of Coefficient Quantization Effects 591

if;_---:-~-:-~--=--~~\
- iO; \

:;l-~0~
~ -YI~
40:

(~-
"
(a) (b)

. o• ~

0.5 '•
J!

"

c
·~
0

"
,. •x

"'-rul x•

-•L _, .. O•
~

-()5 0 0.5
I Real Part
(c)

Figtlft' 9.6: Coefficient quantization effects on a fifth-order HR elliptic lowpass filter impleme~~ted in direct form: {a)
fullband gain respoflSCS with unqu~ntizerl (shown with so tid hneJ and quantized coeffic-ients(shown with dashed line),
l_b) passband details, and (ct pole--zero movements; Pole and zero locations of lhe :litter with quantized coefficients
denuted by ":>:" and "o". respectively, and pole and ze:o locations of lhe filter with unquantized coefficients denoted
by"+"' and"""'. respectively.

::lf;
[z,p,kJ "'ellip(S,J.4,~:o,0.4l-;
fb,a] zp2tf(z,p,K);
[b,wl = :reqz(b,a,::-,12); g = 20*l8g_:_o \abs(h)};
sos = zp2soslz,p.k;;
sosq = u2dl(scs,Sl;
~-1 sosq\1,:); R2 = sosqt2,;); R3 = sosq(3,~);
bl = conv\Rl{1~3),R2(1:3)); bq = CO;JV(R3(l:))-,~1);
al = conv(Rl{4:6),R2(4:5-)); aq"' cor.v(R3(4:6J.al};
::<q,w"i = freqz(bq,aq,512); ;Jq = 20*logl0\abs(hq)};
plot(w/pi,g, 'b',w/pi,gq, 'r: ');grid
axis(~D 1 -7C 5]/;
xlabel!'\onega!\p1' );ylabel{'Ga:n, dB'};
title('orlglnal solid line, quan~1zed- dashed line'};
592 Chapter 9: Anaiysis of Finite WordJength Effects

----= -: -=-~"
u.s - ------- - -
---
,/"'
0'=:-::::---::

""-w·
"0 .•
;:;-N•
v

-so:
' \
\
1_-.
•a
0
-1
~. -~~ ~/ l
'
I 1

-W'----~~
0 (}_2
' .\
(a) (b)

Figun 9.7; Coe:"ficientquantirntion e!Tects on a fifth-order HR elliptcc low pass filter implemented in a cascade f;:mn:
(~) fullband gain respollSes v,.ith unquamize.d {~hown with solid line) and quantized coefficient;; (shown with cht~lk:d
line}. and (b) p<mband detaih.

Figure 9.7 shows the full band gain response and the passband gain response details of the ideal cascade
for realization with infinile precision coefricients (shown with solid line} and the gain response obtained
wh·:!n the transfer fwtction coefficients of each secti:on are truncated to 5-bit length {shown with dashed
Iirn~). It can be seen from this figu:e that tlJ.e effect ;:.f the coefficient quantization here i-~ not as severe as
in the previous case. A flat loss has been added to the passband response with an increase in the passband
ripple. Here eoch complex zero-pair i"> realized by o second-order section, and hence, the zeros remain on
the unit circle. In fact, all of the zeros n:main pretty much at their original locations. The overall effect on
the stopband response is minimal.
In general, a higher-order IIR transfer function should nev-er be realized as a single direct form structure,
but realized as a cascade of second-order and first-order sections to minimize the effect of coefficient
quantization.
The above two programs can be easily modified to study the effect of coefficient quantization on the
performance of an FIR digital filter. Program 9 _3 given belmv can be employed to study the effect of the
coefficient quantization on the frequency respome of a lowpass equiripple FIR digital :filter implemented
in direct form.

% .Prag~am 9 _j
% C:;:)e:Eficien:: Qudntizatior:: Effects on the
% Frequency Response of a C1rect. Form CIR Filter
%
fpt.s = [G Q_:, 0.55 lJ; :naq = [1 1 0 D:;
1: = remez{39,fpt_s,:r,ag)
[h,w] freqzlb,l,:,l2J; g = 20*log10iabs(h));
bq = a2dT (b. S J ;
f]:q,w; = freqz~bcj,:,512) gq = 20*lo_,l0(abs(hq});
ploc- (wrpi,g, 'b' ,w/pi,gq, 'r: '; .<Qrid
axis([O 1 -60 <;.J);
.xlahel C·\omega/ \pi') ; ylabel ~ 'Ga:.c, dD' l;
t--'-t:le('origina::.. - so~id li:1e, c;uantized- dashed l~r:.e');

Figure 9.8 shows the gaill responses of the FlR filter generated by the above prograiiL As can he seen
from this figure, the effect of the coefficient quantization on on FIR filter implemented in direct form is
·::1,4. Analysis of Coefficienl Quantization Et:'ects 593

-Hl

-50'

"
,, J
(o) (b)

Figure 9.8: Coefficient qu;tn:l!zation effeds on a 39th-xder FIR equlripple luwpass f.her implemented in direct form:
(a) fullband gain responses whh unquanti7-Cd (shown with solid tine) and quantized coefficiems (shown with dashed
line), and (b) passband detaih.

to reduce the passband width, in;::rease the passband ripple, increw.e the transition band, and reduce the
minimum stopband attenuation.

H.4.2 Estimation of Pole-Zero Displacements


A me;;sure of the coefficient quantiwtion effects on the perfonnance of a digital filter is given by the
Fole-z:ero displacements from their original positions. These displacements can be evaluated analytically
as described next [~'iit74cj.
Consider an Nth-degree JX!lynomial B(.:::) with simple roots:

B(z)
N
= Lb;z' =
i=O
nN

k=:
(z- Zx), (9.16)

v:rhen.: hN = I. The roots ;:~ of B{zj are given by

(9.17}

i\iote that B(z) can be either the denominarorpolyc.omial or the numeratorpolynomialofthe digital transfer
function. The effecr of coefficient quantization is manifested by the change of the polynomial coefficients
from b, w h, + Ab;, and as a result, the polynom:al B(z) changes to a new polynomial B(z) given by

8(z) =
N

L (b, -:- t.b, );:i = B(;:J + L


i=('
N-1

i=O
(h.b; )z' = nN

k=l
(z- !k}, (9.18}

..... ith i~ denoting the roots of the polynomial iJ(z). Note that h are the new locations to which the root!.
ZJ of 8(,":) have moved. For small changes, h will be dose to Z,i; and can be expressed as

(9.19)

where ~r" and .6.&.~: represent the changes in the radius and the angle of lhe kth root due to coefficient
q11aTillzation. Our aim is to develop simple expres;;ions for estimating !:l.rk and !!l.Bk kn:1wing the changes
594 Chapter 9: Analysis of Rnite Wordlength Effects

~b, in the coefficients of the polynomial Btz). If we assume the change D.bi tube very small, the changes
in the radius and the angle, D.r.~: and 88t, of the kth root can be also COIL'>idered to be very small, ar.d we
cail rewrite the expression for Z.i: in Eq. (9.19) as

Zk = (q + !J.r;Je 1 1!. 9~e 1 (J* ~ (rt I Ark){l- JD.Bt)ef~'h


;;::: r~;e 1 f4. + (~rk + }r.~:!J.e~r)e 1 f-'>. (9.20)

neglecting higher-order terms. The root displacement can now be expressed as

(9.20
Now CQnsider the rational function 1/ B(z). Hs partial-fraction expansion is given by

N
1 I: Pk
(9.22)
B(;;) = k=l Z - Z_t-

where Pk. i>: the residue of If B(:;:) at the pole z. = Zk, i.e.,
(;:- Zk}l .
Pi<= BCz) ~~. = Rk + JXk. (9.23)
.-,,
If we a,;sume that Z.~: is very close to Zk, then we can write

1 P>
--- (Y.24)
B(Zk) Zk- Z~·
oc
(9.25)

N-1
i3(2k) = 0 = B(h) + L {libr)(Z,d. (9.26)
<d>
Th·!refore. from Eqs. {925) and (9.26), we arrive a1

(9.27)

assuming that h is very close to Zk- Rev.T;ting Eq. (9.27) we obtain

(9.28)

Equating real and imaginary paru of the above, we arrive at, after some algebr...

D.rt.. = (- R~r_P;; + X11.Qk) · 8B = S~' · D. B. (929a.)


1
,,
!19~ = - - (X,tPt + Rt.Qk) · LJ.B = S~. LiB, (9.29b)
9.4. Analysis of Coefficient Quantization Effects 595

F~gnre 9.9: Direct form II realization of the second-order IIR transfer function of Eq. (9.31}.

where

P.~; =[coset n; r'f_cos~- · ·r(- 1 cos(N- 2)~J, (9.30a)

Qk = [ -sinlfk 0 rfsine,. · · ·rf- 1sin(N- 2WJ:], (9.30b)

"-B ~ (M>o M 1 M2 ·· · MN-1]'. (9.30c)

s:
It should be noted that the sensitivity vectors S~ and 5 depend on only B(z) and are independent
of ~B. Hence. once this vector has been calculated, pole-zero displacements for any sets of 6B can be
rapidly calculated using Eqs. (9.29a) and (9.29b). Moreover, the elements of .6B are multiplier coefficient
changes only foe the direct form reaJizarion.
We illustrate in the follrn.vin_g example the application of Eqs. {9.29a) and (9.29b) in computing the
pole displacements of a second-order direct form IIR digital filter structure due to coefficient quantization.

-
596 Chapter 9: Analysis of Finite Wordlength Effects

We now extend the results ofEqs. (9.29a) and (9.29b} to an arbitrary structure with R multipliers given
by at. k = 1, 2, ... , R. Due to coefficient quantization these coefficients change to a.~; + D.a.t. Since the
multiplier coefficients a 1 are multilinear functions of the coefficients b; of the polynomial B{z), we can
express the change Abi in the transfer function coefficient b; to the change 6.ak in the multiplier coefficient
a~; through

13.b; = ER ::;--
ab,
Aak>
k=l rl_U.k_
i=O.l •... ,N-1. (9.38)

In matrix form the above can be written as

AB = C · l\a:, (9.39)

where

c~ (9.40a)

lihN-l it!:IN-1 iibN-1


~ aa: a.....
l!.ct = [Aat Aa2 · · · L'iaRJT. . (9.40b)

Substituting Eq. {9.39) in Eqs. (9.29a) and (9.29b), we anive at the desired result

l!..r~c = s;,lr . c . .6.0!', (9.41a)

tifit = ~~ · C · D.a, (9.41b)

where the sensitivity ~tors are as given in Eqs. {9.29a) and (9.29b). It sboukl be note-d that C depends
on the structure but has to be computed only once.
The application ofEqs. (9.41a) and (9.4lb) in computing the pole displacements of a second-order DR
digital filter structure is treated in the following example.
9.4. Analysis of CoeffiCient Quantlzatlon Effects 597

Figure 9.10; A second-uder COLlpled form structure.

9.4.3 Analysis of Coefficient Quantization Effects in FIR Filters


The analysis of the displacement of the roots of a polynomial due to coefficient quantization outlined in
the previous section can of COUTse be applied to FIR transfer function to determine the :sensitivity of its
zeros to changes in the coefficients. A more meaningful analysis is obtained by examining tbe changes in
the frequency response due to coefficien~ quantization as described next.
Consider an Nth-order FIR transfer function:
N
H(z) = Lh[n]z-". (9.47)
n=O

Quantization of the filter coefficients results in a new transfer function


N N
H(z) ~ Lh[n]c-• ~ L (h[n] + e[n]) ,-•. (9.48)
n=O
Chapter 9: Analysis of Finite Wordiength Effects

H(:::)

£(:_)

Figure 9.11: Model of the FlR filter with quantized coefficients.

which can be rewritten as


H(z) ~ H(z) + E(z), (9.49)
where
N

E(z) ~ L:einlz-", (9.50)


.~

Thus, the FIR fiher with quantized coefficients can be modeled as a parallel connection of two FIR filters,
H (z) and E(z.), as Ylown in Figure 9.11, where H(z) represents the desired FIR filter with unquanrized
cmfiicients, and E(.z) is the FIR filter representing the error tn the transfer function due to coeffi.cieru
quantization.
Without any loss of generality, assume the FIR filter H{z) to be a Type I linear-phase filter of order
N with impulse response h[n]. Hence, E(z) is also a Type l linear-phase FIR transfer function. The
frequency response of H(z) can be expressed as

(N-2){2 )
H(ej"')=e-i"'NJZ h[~]+ ; 2h[n]cos[(~ -n)w] . {9.51)
(

The frequency response of the actual FIR filter with quantized coefficients b. [n] can be expressed as
fi(~m) = H(e}w) + E(ejfzJ), (9.52)

where E (eJ"') represents the eiTO£ in lhe rlesired frequency response H (ej"'):
N
E(ei"') = Le[n]e-i"-"'. (9.53)
l"J=(i

with e[n 1 = hln] - h[n]. The error in the frequency response is thus boUI1ded by
N N N
jE(eF"")! = L>·[n]e-jwn :OS L.le[n]; je-j"'"! .:<:= Lk[n]i. (9.54)
n=O ~~~ n=O
Assume each impulse response coefficient h[n] is a (b + !)-bit signed fraction. In this case, the range of
tht:· coefficient quantization error e[n] is exactly the same as that indicated in Table 9.1. Using the data
given in this table, an upper bound on IE(ej""')! can be derived from Eq. (9.54). For example, for rounding,
kfnJ! ~ .S/2, where & = 2-b is the quantization step. As a result

I E(e
jw i
)I .:S (N +2 1)5 . (9.55)
9.4. An~sls of Coefficient Quantization Eflects 599

lr·-- -~ -·----
,,11

Figure 9.12: Plot of a typical WN{w).

The above botmd is rather conservative and can be reached only if aH errors in Eq. (9.54) are of same
sign and have the max:imum value in the range. A more realistic bound can be derived assuming e[n] are
s!atistically independent random variables [Cha73]. From Eqs. (951) to (9.53), we obtain

(9.56)

From the above, we observe !hat E(eF") is a sum of independent random variable;;. If we denote the
variance of e[n) as <Tj, then the variance of E(ejw) is simply given by

,.
aj_.(w) =a;
( •..
I+ 4 ~cos
- 2
(am)
) .,
= cr"'- (v +
.s.m(N+l}w
.

sinw ) · (9.57}

lJsing the ootation


W,v(w) =[
2N+1
l (N +sin(,:+ l)w)Jii2
smw
(9.58)

the >.tandard deviation of E(eiw) can be expressed as

a_E{w) =a., ( ../2N + 1) WN(W). (9.59)

For unifonnlv distributed e[nl ae = ti,f..,/12 = z-b-l i .Jj_


A plot of a typ)cal weighting function W.v(w-) ls. sketched in Figure 9.12 In fact, it can be shown thm
WN(w) is in the range (0. I), and hence, the standard deviation O"E(W) is bounded by

(9.60)

Based on the abm·e bound, Chan and Rab-iner [Cha73] have advanced a method to estimate the
wordlength of the FIR filter coefficients to meet the prescribed filter specifications.
600 Chapter 9: Analysis of Fmite Wordlengtb Effects

Ideal xinJ= x)nT) i[n] = <;/(x[n 1)


X (() ~
5ampler
Qc~Jnti.ter Codti '-----+
"

Figun 9.13- Medel of a pradi<.:al AJD ,_-on=rsion systerr:.

9.5 AID Conversion Noise Analysis


In many applirations. digital signal proce~sing technique« are employed to pwr.:ess L'Dnlinuous-time (ana-
to_~) handlimi:ed signals that are either voltage ur current waveforms. These analog signals must be
converted intc digital form before they can be pmces.sed digitally. According to the ~ampling theorem
g:i •e:r. in Section 5.2.1. an analog bandlimited signal Xa(t) can be represented uniquely by its sampled ver-
sion x, (n T) if the sampling frequency ~h = 1/ T is greater than twice the highest frequency n.., contained
in x,. (t ). In order to ensure that the sampling frequen._-y chosen does satisfy this condition, an anti-aliasing
fil[er is used 1:0 bandlimit the an<J.log signal to half of the sampling frequency. The discrete-time &equence
x,.(nT) = x[n] is then converted into a •hgital sequence fnr digital signal processing As indicafed by
Figure 5.1, rhe conversion of an analog signal inlo a digital sequence is implemented in practice by a
ca<;eade of two de•; lees. a sample-and-hold (S/H) circuit followed by an analog-to-digital (AID) convc.rter.
The d1gital samples pruducedby the A/D converter are usually represented ina binary form. As indiLated
in Se::tion 8.4. there are several different forms of binary re:oresentations of which the two's-cornpJemenf
repre~ntation is usualty employed in digital signal processmg for com'enience in the implementation of
the arithmetic operations. Conse()uently, the A./D ccnverters used for the digital signal processing of analog
si~;nab in general are those that employ the two's-compiemen( fixed-poinl representation to represent the
digital equivalent of the inpu<. analog signal. Moreover. for the processing of bipolar analog signals., the
AID converter generates a bipolar output represented as a fixed-point signed binary fraction.

95.1 Quantization Noise Model


The digital sar::ple generated by the AID converter is the binary representation :)f the quantized version of
th~11 rrnduced by an ideal sampler with infimte precision. Because of the finite wordlength of the output
register, the digital equivalent can take a value from a finite set of discrete value~ within the dynamic range
of the register. For example, if the output word is of length {b + l} bits including the sign bit. the total
number of discrete le\•els available for representation i~ 2°..,_ 1. The dynamic runge of the output register
de;Jends on the binary number representation se-lected for the .A./D converter. The operation of a practical
analog-to-digttal convegion system consistmg of a sample-and-hold circuit followed by an AID converter
therefore can be mudded as. shown in Figure 9. 13. The quantizer maps the input analog sample x[nJ into
.r [nj, one of a set of discrete values, and the coder determines its binary equivalent .I,q[r.] based on the
btnary representation sdJ.eme adopted by !he ND c;mverter.
The quantization process emp!oyed by the quantizer in the AiD convener can be either rounding or
truncation. As-:uming rounding is used, the i:nput-output characteristic of a 3-bit AID converter with the
ow put in two's-complement form is as shown m Figure 9.14. This figure also shows the binary equivalents
of ·:he quantized samples.
As indicated earlier and shown -ex.plicitly in Figure 9.14, for a two's-complement binary representation,
(he bi:Jary equivalent Xcq[nJ of the quantized input analog sample .X[nJ is a binary fraction in the range

(9.61)
~.5. AJD Conversion Noise AnalySJs 601

xlni=Cl{x[n])

Oil

~
001

~_c9~s<--_c"".s'--:c-~-o-i'-'1""J,---c:,o&-"soo-~77&--xln;
222 222

-2S

lOCI!

----Ful]-<;cnle range (RFS) ---~:

Figure 9.14: Input-.."}Utput characteristic of a 3-hit bipolar AiD converter with two·s-complement representation.

;:tis related to the quantized sample Xfn] through

" 2i?{nj
Xeq[nJ = --, (9.6:2)
RFs
where RFS denotes the full-scale range of the AID converter. We shall assume that the input signaJ has
been scaled to be in the range of± 1 by dividing its amplitude by RFs/2, as is usually the case. Then the
decimal equ:valenl of ie<.J [nj i~ equal to .i[ n ]. and we shall not differentiate between these two numbers.
For a (h+ J )-bit bipolar AID converter. the total number of quantization levels is 2b+ 1 and the full-scale
range RPS., usually given in volts or amperes, ls given by

(9.63)

where b is the quantization step size, also called the quantization width, in volts or amperes. [f the input
signal is in the range given by Eq. (9.61), RFS = 2, and then g = 2b. For the 3-bitA/D converter depicted
m Figure 9. 14, [he total number oflevels is 2 ~ = 8 and the full-scale range. is RFS = 88, with a maximum
value of Am""" = 18j2 and a minimum value of Aillill = -93/2. If the i:nput analog sample Xa{nT} i~
within the fult-scale range,
98 73
- -2< Xa,(nT'-' -< 2 -' (9.64)

i[ is q:.mntized to one of the 8 discrete vaiues indicated in Figure 9.14. h general, for a (b + 1)-hil
\lo'ordlength AID converter employing two ·s-compleme.nt representation, the full-scale range is given by

(9.65)
602 Chapter 9: Analysis of Finite Word!ength Effects

'· I',
' ;-, :' ~~ ' "
'
' ' ' <!nl
% '' "
' ' ""' '
-1~
Rf.>
'-J
"··
Figure 9.15· Quantization error a.~ a function of the input for a 3-bit bJ.pdaz: AID converter.

If we denote the difference between the quantized value Q\x[nl) = i[nJ and the input sample x,.{nT) =
x[n l a.<; the quantization error e[n \,

e{nj = Q(x{nl) - x[n] = .l:[n]- x[nl, {9.66)

lt fotl.ows from Figure 9J4 that e[n} is within tbe range

-'
--
0
<e[n] < --.
- 0 '
~
(9.67)

assuming that a sample exactly ha.lf\11-ay between two levels is rounded up to the neare.<;t higher level and
assuming that the analog input is within the AiD converter full-scale range as given by Eq. (9.61). ln
this case, the quantization error e[n], called the granular noise, is bounded in magnitude according to
Eq. (9.67). A plot of the quantization error .f'[n J of the above 3-bit AJD converter as a function of the input
sample xlnJ i;; given in Figure 9. 15.
As can be seen from this figure, when the :input analog sample is outside the A/D converter full-scale
range. the magnitude of the error e[ ttl increases linearly with an increase in the magnitude of the input. In
the latter situation. the AID converter error e(n] is calJed the saturation error or the overload noise as the
A!fJ converter output is '"clipped.. to the maximum value (I ~ 2 _.,) if the analog input is positive or the
m;nimum value -I if the analog input is negative independent of the actual value of the input. A clipping
of the NO converter output causes signal distortion with highly undesirable effects and must be avoided by
scaling down the analog input xa(nT) to ensure that it remains within tbe AID converte' fulJ-scale range.
Jn order to develop the necessary mathematica.l model for analyzing the effect of the finite wordlengih
of the AID converter output. we assume that analog input samples are within the full-scaJe range, and a.<.
a result, there is no saturation error at the converter output. Since the input-output chariK:teristic of an
AID converter is nonlinear and the analog input signal, in most practical cases, is not known a priori, it
is reasonable to assume for analysis purposes that the quantization error e[n] is a random signal and to
use a >;tatistical model of the quantizer operation as indicated in Figure 9.16. Furthermore, for simplified
analysis, we make the following assumptions:
(ai The error sequence {e[n]! is a sample sequence of a wide-sense stationary (WSS) whl1:e noise
process, with each samp.le e[n) being unifonnly distributed over the range of the quantization error
as indicated in Figure 9.17, where Sis the quantization step.
(h) The error sequence is uncorrelated with its corresponding input sequence fx[nJJ-
(c) The input sequence is a sample sequence of a stationary random proce;;.s.
These assumptions bold in most practical situations for input signals whose sampJes are large and change
in amplitude very rapidly in time relative to the quantization step in a somew-hat random fashion, and
RS. AID Conversion Nolse Analysis 603

e[nj

Figure 9.1(i A :.t.tti:.'lil-<11 model uf the AID quantizer.

have heen verified to be valid assumptions. experimentally [Ben48J, [Wid56}, [Wid6ll and by computer
simulations [DeF88J. The statisti~al model also makes the analysis of AfDcor,vers.iotlnoise more tractable,
and results derived have heen found to be useful for most applications. It should be pointed out that Jn
the ca;:.e of an A/D convener employing ones· --'COmplement or sign-magnitude truncation, the quantization
error h correlated to the input signal since here tlte sign of each error sample c[n] is exactly opposite to the
sign of the corresponding input s.ample x[nl As a result, practical AID converters use either rounding IJr
t•vo's-cornplement truncation. The mean and the variarn::e of the uniformly distributed random variables
were computed in Example 2.42 Using the results of this example., we obsene thai the mean and variance
cf the error sample in the case of rounding are given by
(8/2)- (li/2)
m.,= 2
~o. (9.68)

((.5/2)- (-!Jj2)) 1 az
0"2 = ~ - (9.69)
' 12 12
The corre..'>ponding parameters for the two·s-complement truncation are as follows:
0-0 0
m.,=~=-2_· (9.70}

z (O- 0)2 al
a~---~- (9.71)
e 12 l2.

'::15.2 Stgna!-to-Ouantization Noise Ratio


Based on the model of Figure 9.16, we can evaluate the effect of the additive quan(ization noise e[n] on
the input sigr,al xfn] by computmg the signal-to--quantization noise ratio (SNRA;D) in dB defined by

SNRA/D = IO!og 10 (:J) dB,

where a} is the Input signal variance representing the signal power and a} is the noise variance representing
the quantization noise power. For roun<!Jng, the quantization error is uniformly distributed in the range
(·-8/2, 0/2) and fur two's-complernent truncation. the quantization error is unifonnly distributed in the
range ( -S, 0) as indicated in Figure 9.17{a) and (b). respectively. In the case of a bipolar (b + l)-bit AJD
converter. 8 = 2-·(b+ 0 Rps, and hence,
ru(Rr~J 2

(J; = --:.:,:""'--
48
(9.73)
Subslitutmg Eq. (9.73} in Eq. (9.72), we arrive at

SNRA;D = 10log 10 (
2 2~8a} .,)
(RFS)-

=6.02177 16.&1- 20log 10 (:~)dB.


604 Chapter 9: Analysis of Finite Wordlength Effects

p(ej
p(d
I
L~
.-----11~

.012 0 Jjf2 -3 0
(a) (b)
p{e)

I l'Th

-· 0
(c)
6
e

Figure9.17: Quantization error probability density functions; {a) rounding, (b} two's-compiement truncation, and (c)
ones' -complement truncation.

'Jahle 9.3: Sigtml-to-quantization noise ratio of an ND converter a~ a function of "'-ordlength and full-scale range.

b=7 b=9 b = ll b = 13 b = ]5

K =4 46.91 .58.95 70.99 83.04 95.08


K=6 43.39 55.43 67.47 7951 91.56
K = 8 40.89 52.93 64.97 77.01 89.05

The above expression is used to determine the minimum AID c-onverter 'A-'OI'dlength needed to meet a
specif.ed signar -to-quantization noise ratio. As can be seen from this expression, the SNR increases by
approximately 6 dB for each bit added to the wordlengfu. For a given wordlength. the actual SNR depends
on the last term in Eq. (9.74}. which in tum depends on the a"', the rms vahleoflhe input signal amplitude.
and the full-scale range RFS of the converter, as illustrated by the following example.

Now. the probability of a particular input analog sample with a zero-mean Gaussian dhtributi:on staying
within the full-scale range K a_,. is given by lPar60]

(9.76)

For example. for K = 4. the probability of an input analog sample staying within the fuU-scale range of
4a_. is ::>.9544. This implies that on average about 456 samples out of 10.000 samples will faH outside
the range and he clipped. lf we increase the full-scale :range to 60"..,, the probability of an input analog
9.5. AID Conversion Noise Analysis 605

::.bmp!c staying within the expanded fuii-scale range increa"es to 0.9974. in which case on ave1age about
26 samples out of J0,000 samples will now be outside Ihe mnge. In most application.<;, a full-s..:ale range
of 8a.t is more t..'tan adequate to ensure no dipping in conversion.

9.5.3 Effect of !nput Scaling on SNR


Now. con::.ider the effect of scaiing of the input on the SNR. Let the input scaling factor be A with
A > 0. Since the variance of the scaled input A:c[n j is A 2 o;. the s.ignal-to-quanti.zatiuo noise e.'i:pression
of Eq. (9.75) changes lo

(9. 77)

For a given b. the SNR can also be increased by scaling up the input analog signal by making A > I.
However, this process also increases the probability of some of the input analog samples being outside
the full-scale range RFs. and as a result. Eq. (9.77) no longer holds. Furthermore, the. output i~ clipped,
ct.u;;iog severe distortion in the digital representa:ion of the analog i:nput. On the other band, .a scaling
down of the input analog signal by making A -< l decreases the SNR. lt is therefOie necessary to ensure
that ~he analog sample range matches as do~ as :;-ossible to the full-scale range of the A/D converter to
gt~t the maximum possible SNR without any signal distortion.
It should be noted here that the aOOve analysis assumed :m ideal AID conversion. However. as pointed
out b Section 5.8.6, a practical AID converter is a nonideal device exhibiting a variety of errors, resulting
in the actual signal-to-quantization noise ratio being smaller than that predicted by Eq. (9.74 }. Hence, the
effective wordlength oftheAJD converter, in general, is less than the wordlength computed using Eq. (9.74)
by 1 to 2 bits. This factor should be taken into consideration in selec1ing an appropriare A/D co:mrerter for
a ,g:iven application.

9.5.4 Propagation of tnput Quantization Noise to Digital Filter Output


in most applications, the quantized signal .\'[n] generated by the AfD converter is processed by a linear
time-invariant discrete-time system H(z.). It is thus of interest to detemllne how the input quantization
noise propagates to the output of the digital filter. In determining the noise at the filter outpur generated
h)" the input noise, we can assume that the digital filter is implemented using infinite precision. However.
as we shall point our later, in practice, the quantization of the arithmetic operations generates errors inside
rhed·,gi;a' filter structure. which also p:rq:agate to the output and appear as noise. These noise sourc.:s are
as·mmed to be independent of the input quantization noise, and their effects can be analyzed sepa:rately
and added to that due to the input noise.
As indicated in Figure 9-16, the quantized signal ilnl can be considered as a sum of two sequences:
tht~ unquantized input x[n J and the input quantization noise e[n]. Because of the linearity property and the
assumption that :c[n] and e[nJ are uncorrelated, the output Yin! of the LTI system can thus be expressed
as a sum of two sequelll--es: y[nJ generated by the unquantized input x[n] and v[nJ generated by the error
5ei)uence e[n}, as shown in Figure 9.18. A~ a resul:, we can compute the output noise componem t;fnJ as
a linea:r convolution of e[n] with the impulse respo:1se sequence h[nj of the LTI system:

=
vlni = L efmlh[n- m]. (9.78)
m=-oo

From Eq. (4.204). the mean m~ of the output noise v[n] is given by

m., = meH(el·o ). (9.79)


606 Chapter 9: Anatys~ of Finite WorOiength Effects

H(::.) f-~ _V!_n]


""y{n!+v{n]

e{n]

Figure 9.11·. Model for 1he analys.is of the effect of pnx.:essing a .quantized input by an LTI discrete-time system

and. from Eq. (4.223i, its va-riance a,~ is given by

(9.80)

~Jne output noise power spectrum is given by

(9.8fJ

1De normalized output noise variance is given by

{9.82a)

which can be alternately expre~1>ed as

u.2 -_I_.
1rn-
· 2n1
i c
H(-)H(,-1-,_-t
" ~ _._
~.,
u~,
(9.82bJ

where C is a counterclockwise contour in the ROC of H(z)H(z.- 1 ). An equivale.nt expression for


Eq. (9.82b) obtained using Eq. (4225) i.s given by
~

!TJ:., = L jh[n]l
2
• (9.82c)
n=-oc

9.5.5 Algebraic Computation of Output Noise Variance


We now outline a simple algebraic appro;K:h for computing the normalized oufpU.t ooise variance using
Eq. (9.82b) IMit74b]. Several other algebraic approaches have been suggested for computing integrals of
the form of Eq. (9.82b)fChu95J [Dug80I. (Dug82J.
In general. H (z) 'is a causal stable real rational function with all poles inside the unit circle in the
;::-plane. It can be expressed in a partial-fraction form as
R
H(z) =L H,(.z). (9.83)
i=l

wht:re Hi ( z) is a low--order real rational cautial stable tra:nsferfunction. Sub-stituting the above inEq, 19.82b),
we arnve at

(9.84)
9.5. AID Conversion Noise Analysis 607

Since H;,.l<:J and H 1 (::) are stable transfer functions, it can he shown that

(9.85)

A>. a re~·mlt. Eq. (9.84} can be rewritten as

(9.86)

ln most practical cases, H(z) ha& Dnly simple poles with Hk(Z) being either a first-order or a second-
order transfer function. As a result. each of the above contour inl:egrations is much simpler to perfonn
usiag the Cauchy's residue theorem, and !he results are tabulated for easy reference. TJpica1 terms in the
partial-fraction expansion of H(z} are as foHows: 2

CkZ+ Dk
A.
z - a;. z2 ..;... hJ:.Z + dk ·

Let us denote a typical contour integral :n Eq_ (9.86) as

(9.88)

The expressions obtained after performing the contour integrations for different I; are listed in Table 9.4.

'
&&&A&kwk.,&&~-~~·-IMKI~UW

2It should~ n[)lt:d that HizJ i~ apre~sed as tk ratio ofpolynomlals in z before a direct partial-fraction expansiOfl is carried out
to arrive at the teiiilS given in Eq. (9 8.7)
608 Chapter 9: AnalYsis of Finite Wordlength Effects

Table- 9.4: E'pressiuns for typical contour integrals in the ourput noise variance caJculation.

Hc(z-l)

Hk(<.) ~A
''
B,
z- 1 -at ' z 2
CiZ~i
+bez
+De
1 +dc
A ;, 0 0
s,
0 r, I'
z- ak '
z
Ctz +D<
2
+b~:.z+d;,
0 h ,,
I}= A1

BkBt
12=~~~
I - agat
(C~;Ct + D~rDd(l - d!:dt)- (DkCe- CtDtdk)bt- (Cii:Dt- D~:.Ctdt)bx
h = (l - dtdt) 2 + dtt4_ + dt/if (1 + dkdt)h;.:bt
Bt(Ct + Dkat}
h = ~C:C"-'----";~
I + bkat -:- dka~
Br,;(Ct + Dtad
1~ = 2
I + btak + dtak

9.5.6 Computation of Output Noise Variance Using MATLAB


The algebraic method of calculating the ot.>tput noise variance developed in the previous section can be
carried out much more easily using MATL~B. To this end, it is more convenient to develop a partiaJ-fraction
expansion of the real coefficient transferfunction H (z) using the function residue. which results in tenns
of the form A and B;,.f('l. - a.t) only, where the residues B~; and the poles ak are either real or complex
numbers. Thus., for the variance calculation,only the terms / 1 and h ofTable 9.4 are employed. Program
9.5. AID Conversion Noise Analysis 609

9_4 given below is based on the approach of Section 9.5.5. The input data called by the program are
the numerawr and denominator coefficients in ascending p-..--mers of z- 1 entered as vectors inside square
brackets. The program first determines the partial-fraction expansion of the transfer function and then
computes the normalized output round-off noise variance using Eq. (9.86) and the tabulated integrals h
and /1 in Table 9.4. The output data is !he desired noise v<1riance.

'E ?rograrr 9_4


% CorDpu.taticrr of the Outpuc Noise Variar.ce Due
% t.o Inpc.:t Qua::t.:zation of a Digica.l F::l ::er
~ Based on a Pa.::.L_:_al-?.::.acti_on Approach

"
num ~ input('Type in the ::1.umera:::.or = ' ) ;
der: = i_npu t ( 'Type in the denominator = ' )
lr,p,K] = resi.C.:JEdnum,den);% partial fraction expans::io:1
R = sj__ze(r,l);
R2"' size(K,l);
iE R2 > 1
disp{ 'Ca:;-:;.not conLi::1ue . . . ' J ;
reLurn;
end

if R2 == 1
nvar
!"'lse
nvar 0;
end
% Compute output. noise variance
fork = l~R,
for m = ::. ::;1..
integral= r(k)*conj(r(n)}/\1-p(k)*conj(p(m):);
:1var = nvar .,. integral;
end
end
C.isp( 'Outpt:t Noic;e Variance = ') ;disp!real(nvar))

We illustrate its use in the following example.

-
f}0j, qry }'t "; ": s h :r 0&'tVU 08 1;1 "0 i\3) '}4:: 1417
fj, lli}'\i}
610 Chapter 9: Analysis of Finite Wordlength Effects

"{,G "';;\;_J,J't
'>.£it-h0: 7 "

An alternative, fairly simple and straightforward computer-ba._--ed method for the computation of the ap-
proximate value of the normalized output noise makes use of irs equivalent expression given by Eq. (9.82c).
fur a causal and stable digital filter, !he :impulse respome decays rapidly W zero values, aad hence,
Eq. (9.82c) can be approximated as a finite sum

L
aJ.n = SL :;;= L lhfnJj 2 . {9.94_}
n=O

To determine oJ,,_, we can itemtively compute the above partial sum for L = l, 2, .... and stop the
computation when the difference SL - SL-1 become£ smaller than a specified value I(, which is typically
chosen as I o-~'>.
The following example iJiustrntes this approach.

'% {·t ;0t/Y"'ihS ;$, L


o/ •• •JFIS it\ >1 ¢ l \if\

VJ1t'T'f"0 \1 J" •
4th t iv k ' V {h;;,J·:;\· L

A":;
v fJ tj l rf •· '/it: \\t/'\1\•
4rHi'rrTh Y Y""Ajf•t!<&" ' )flAir ;;cj '~'q\(}(ttV{ r

-
& V;UJ!07ni \r/\Y,,;r;h
)i t!'

14xvpa vai00?vwS!Ii¢
.,~,-

"''
9S Analysis of Anthmetic Round-Off Errors 611

~'[n J .\nlQ • 'l")


ujn]- Vo:~

(a)

Figure 9.t9· \<~J Pruducl qu;mllLalmn pnx:tSl> ·,md (b} ib stati~LK-al mvdei for the produ;;t :outld---uff error analysh.

<- f ~nj

ml v,lnl
+ '

••

-'-In! --J.
.I ...
\
~
••• '
'

[0) (b)

.,lgure 9.20: (a) RepresentatiOll of a dig1tal filter struclure w:th produce round--Dff before Sllmmation. and (b) it~
statist:cal modeL

9,6 Analysis of Anthmetic Round-Off Errors


A\ iHustratedearlicr in Section 8.5.2, in th-e fixed-poin:: implementati{ln of a digital filter the result of only the
multiplication operation is quantized. In th.is section we devdop the tools for the analy;;is of product rouOO-
!}fferrors. Figure 9. i9(a) shows the representat.io:-~ of a practical multiplier with the quantizer at its outpuc
:·ts statistical model is indicated in Figure9.19(b). which wi.H be used to develop the error analysis methods.
Here. the output dn]ofthe ideal multiph~r is quantized to a value 0[n]. whe:e V[nj = v[n/ + e,.-[n]. For
analysis purp<:»;es we again make assumi)tions similar to tho;,e made for the ND conversion enarana!ysis.
i.e ..

(a} The error sequence je,.tnH is a sample sequence of i stationary white noise process, wirh e<J:ch
~mple e"'fn] being ur,iformly distributed ever the range of the quantization error.

(b) The lJLantization errorselj_uence/e""fn ii is uncorrelatcJ with the sequence {v[n]}, the input sequence
\x[n l_l to the digital fi.ltcc and all other quantization noi"e sources.

Rccail that tbe assumplion of kulnl) bemg uncorrelated with {v[nJI holds only for rounding and two's-
complement truncation. The r;mge of the em)r sample e{'lfn] for these two cares are as given in Table- 9. L
The mean and variance of the error sample for rounoJing ar;: given by Eqs. (9.68) and (9.69), respectivdy,
whi~e those for :he two's-complement trunca110n are given hy Eqs. {9. 70} .and (9.71). respeclively.
U~mg the above model for ead multiplier. the repre.sen~ation of a digirai filter to determine the effect
of p:.-oduct quantizations at the output of the digital filter is a~ indicated in Figure 9.20{a), which explicitly
shows the -t'th adder wirh an output Vtfnj wmming the quantlzed ou:puts of the kt multipli.ers at its input.
'Thi-s ligun•, ntso -s.how:s the intemai rth bmnch node associa:ed with the signal variable ur(n] that needs 10
be scaled to prevent overflows at these nodes. The.-;e nod6 are typically the mputs to the multipliers, as
612 Chapter 9: Analysis of Finite Wordlength Effects

Figuu 9.21: A typical :nultipher branch with inpiit as a brunch node and outpm feedmg into an: adtkr.

x[nj j'[r.J

Figurt< 9.22: Digital filter stru.::ture with produn round-off after summation.

indiciiled in Figure 9.21. In digital filters employing two 's-complement arithme~ic, these nodes are outputs
of ruhlers fonning .sums of products, s:ince here the sums v.· ill still have the correct values even though
some of the products and/or the partial sums overflew (Problem 8.55} If we assume che error sources are
statistically independent of each other, then each error source develops a round-off noise at the output of
the digital filte:. An equivalent .statistical mudd is t:-.en as shown in Figure 9.20(b).
Let !he impulse response from the digital filter input to the rth branch node be denoted as _f,[n] and
the impulse response from the input of the flh adder to the digital filter output be denoted as gtln 1. with
their corresponding z-transfonns denoted by F,(z.} and G.f(::), respectively. F..-(z) is called the scaling
trmlSjer function and plays a role in the dynamic range scaling schemes employed in fixed-point digital
filter ~Ctructures to be dtscussed in Section 9. 7, Gt(z) is called the noise transfer function, which is used in
computing the noise power at the filter output due to product round-off as described next.
a{f
If denotes the variance of each individual noise source at the output of each multiplier, the variance
of ·~tfnj in Figure 9.20tb) is simply kfaJ since we have assumed each noise source to be statistically
independent of all orhers. The variance of the output noise caused by efin} is then given by

o-1f [kt (2~j i Gi(z)Ge(z- )z-


1 1
dz) J = o-J [kt ( 2 ~ I: jGt(ei(tlf da) J. (9 95)

ff tllere are L such adders in the digital filter structlln':, the total output noise power due to all product
ruund-offs is given by

a-;= ao1""'
'>
L ( j d.• Gt(z)Gdz - l )z- '- dz)
f=tkt Jrrj Jc . (9.96)

In many hardware lmplementatwn schemes such as tho-;e employing DSP chips, the multtplication
operat:on is carried out as a multiply-add operation with the result stored in a double-precision register.
In such. cases. if the signal variable being generated is by a sum of product operations, the quamization
opemtion can be carried out after all the multiply-addoperations have been completed, reducing the number
of qua!ltiz.ation error sources to one for each '>uch sum of product operations, as indicated in Figure 9.22.
ln suc:'1 a case, tbe statistical model of Figure 9.20(b) can Stlll be used except the variance of the nolse
source ee[n] ls now aJ, thus resulting in ccnsidembl~ lower noise at the digital filter output.
9.6. Analysis of Arithmetic Round-Off Errors 613

Figure 9.23: MOOei for the arithmetic round-off errm analysis of Figure 9.1.

(•) (b)

Figure 9.24: (a) A 5eeond-order digital filter slmcture, and (b) its model foe product round-off error- analysis.

We illustrate the product round-off error analysis. method for two simp]e digital filter structures.

-
614 Chapter 9; Analysis ot Finite Wordlength Effects

"f
H,, 'if"" r t; '0; J

~'l

H ;Uu1A: fA, '£'11« '~I~ \ted;; -4'7 fRill wrd"'' t tr hv 'J : fpi ' \IL\1! l ' wl\1 +z + P + "'t:TV M<"}} )' rHH"'
fr!kit tn

9.7 Dynamic Range Scaling


In a Gigital filter implemented U!i-ing fixed-point arithmetic. overflow may occur at certain internal nodes
such as inputs to multipliers and/or the adder outputs that may lead to large amplitude oscillatiom; at the
filter ;,utput causing unsatisfactory operat;on a'> discussed in Section 9.11.2. The probability of overflow
can be minimized significantly by property St--aling the internal signal levels with the aid of scaling mul-
tipliers inserted at appropriate points in the digital fiJter structure. ln many cases. most of these scaling
multipliers can be absorbed with existing multipliers in the strur.o:ure. lhus reducbg thC total number needed
ro implement t'le scaled filter_
To understand the basic concepts involved in scaling, consider agafn the digital fiU:er structure of
Figure 9.20 showing explicitly the r1h node variable u,..[n] that needs to be scaled We assume that all
fixed-poiilt nur.Jbers are represented a:s binary fractions and the input sequence of the filter is bounded by
unity, i.e.,
~xfn:JI :51, for all values. of n. (9.106)
The objective of scaling is to ensure that
fur all r and fill' aU values of n. (9.l07)

We now derive dtree different condltions to ensure that urln I satisfies the above hound.
9.7. Dynamic Range Scali11g 615

9.1.1 An Absolute Bound


The inverse .:-transform of the scaling transfer function F,-{z} is the impulse response /,-[n] from the filter
input to the rth node. Therefore, u,-[n] can be ex:pressed as the linear convolution of J,-[n J and the input
xfnJ: 00

u,[n] ~ L Mk]x{n- kj. (9.108)


k=-00

£'rom Eq. (9. i 08} lt fo\h)Ws then that

Thus, Eq. (9. f07) is satisfied if


00

L lf,[k[: :>0 I, forallr. (9.109)


t=-oo
The above.::ondirion is both necessary and sufficient to guarantee no overflow [Jac70a]. 3 If if is not satisfied
in the unsealed realization, we scale the Input signal with a multiplier K of value

(9. f 10)

It should be noted that the above scaling rule is based on a worst case bound and does not fully utihze
<he dynamic range of all adder output registers and, as a result, reduces the SNR significantly. Even though
it is difficulr tD compute analytically the value of K. an appro-ximate value can be computed on a camputer
usil'.g an approach similar to that given by Eq. (9.94).
Morepra>;:tical and easy to use scaling rules can be derived in the frequency domain if some information
ahout the input .'>ignals is known a priori fJac70aj. To this end, we define tbe .Cp-norm (p ~ 1) of a Fourier
transform F(ei"') as

IIFi," (2-~ DF(ei")IP dw t (9.lll)

From the above definition it follows that the £:2-nonn, qFb, is the root-mean-squared (rms) value of
F(e 1"'). and the £1-nonn. kFI! ,, is the mean absolute value of F(ejr·") 0\-"er w. Moreover limp-..oc I!Fiip
e~ists for a continuous F(ejw} and is given by the peak absolute value:

(9.1 12)

"The bounds to he derivedaswrne that the input x(nl to <he digital filter is a deterministic signa! with a
F<oorier transform X (el"").

9.7.2 £'~-Bound

Now from Eq. (9.108) we get


(9.Il3)
616 Chapter 9: Analysis of Rnite Wordtength Effects

where Ur{el"'! and F,..(ei'·'; arc the Fourier transforms of ~<rln j and .f,..ln J, respectively. An inverse Fourier
tr:-:nsi:"orm of Eq. 1_9.113) yields

{9.H4)

As a ;c~ult.

(9.115>

If ifXt1 1 < l, tilen the dynamic range constraint oftq. {9.1071 i~ sati~f;ed if

:!F,.(ejv)!l .::::= l. (9.1 i6)


I. ' .:>.:

Thus. if the mean absolute value of the input speclrum is bounded by unity, then there wi11 be no ildder
ov·~rflow if the peak gains from the filter input to all adder outputs are scaled satisfying the bound of
Eq. {9.116). In general. this Sl:a!ing rule is rareJy used sin.:e. with most input signal!. encountered. in
prm:tice.I(XI,II::; 1 docs nothokl!

9.7.3 C2 -Bound
Applying tho;: Schwartz inequality to Eq. (9.114). we arrive at [Jac70a]

(9 117)

(9.118)
Thus. if :he input to the filLer has finite energy bounded by umty. i.e., l:XIIz .:::;: I. then the adder overflow
car be prevented by ~nling the filter such that the rms valu\::s of the transfer function..:; from the inpu! to
all adder outputs are bounded by unity.

i:F,!h.::::: l, r = l.2 ..... R. (9.Il9}

9.7.~ A Genera! Scaling Rule


A more general scaling rule obtained using Holder's inequality IS given by [Jac70a]

• [ n,
;Ur
1 1S 1•' f.r II p · ,'IX i•'f.
· f9. 120)
for all p. q 2:: 1 .-:atisfying (1/p) + ~ !iq; = l. Note that for the .Co;,;-bound of Eq. (9.115}. p = oc
and q = l. and for the L2.·bound of Eq. (9.11 8), p = q = 2. Another useful scaling rule, £ 1-bmmd, is
obtained for p = l and q = oo.
Af•er "-Caling. the scaling lran~fer funcrions become il. i',. JJ and the scaling constants should be cho.>en
'iUC'·l that
' . "-
r = 1.2, ... , R. (9.1'21)
9.7. Dynamic Range Scaling 617

In mar:v slri.ictures, a:l -.coiling multipliers. can be absorbed into the existing feedforward multipliers
wnhout anY incrca.<>e -in the total number of mul.uphers and, hence, noise sources However. in some
cases. the scahng pro;:css may inlroduce additmna< multipliers in the system. If aU scaling multipliers are
regular b-bit unit;;, then Eq. (9. ;2l) can be satisfied with an equality sign, providing a full utilization of
lhe dyr;amir.: mnge nf c~ch adder output and, thus. yielding .a maximum SNR An attractive uptmn from
a h:tru:ware point of vie·->;, ~md preferred in cases where scaling introduces nev.' multiphers, is to make ft.'i
many ~caling n"d.lltipher coefficients as pn:;,sibk m the xcaled struc::ure take a value that is .a power of 2
1Jac7{}a.]. In wt:.ich ca-.c, these multjpliers can be impleme:1ted simply by a :.hift oper<~tion, Now, the norm
nf rhe ;,cal ing transfer function ,;a.t:s.ties
l<I;FII'
2 ' r f.' <~
~
(SI.122)

with a slight <kcrcm;c in the SNR.


It should be pointed ou! here that the output round-off n~)Jse should be ahvays compmed .after the digital
filter o:tructure ha~ been M:aied. For the scfued structure, fne cxpre;sion for the output round-off noise of
Eq. (9.95) thus cbang::-s to

(9.123)

where kf. is the Iota] r,umber of multipliers feedmg the fth adder with kt ::: Kr and Ge(z} is the modified
noise transfer bnction from the mput of the tth adder to the filter o-.ttput.
We illu.-<tr.:ttc next the applicatmn of the above method in !he scaling of a cascade realiL.ation of an IIR
transfer function (J ac70b ].

9.7.5 Scaling of a Cascade Form llR Digital Filter Structure


Cnnsider the e:1scaled slructure of Figure 9.25 wm;isting of a cascade of R se<:ond-Drder liR sections
reali.,:cd indirect form II. Its trar:sfcr function is given by

H(z) =K nR

i=l
H,.(z). (9.124)

where

(9.125)

The hranch nodes to he :;,.;;aled are marked by (*) in the figure. which are seen robe the inpul'> tu the
m ultiplicrs in each second-order :-;cction. Tht: transfer functions from the inpul lo these brunch node!> are
the .\,cdcng tr,msfcr function,., which .are ,5iven by

r = I, 2, ... , R. (9.126)

The scaled -..crsion <-:>f lhe above slructurc is shown in Figure 9.26 with new values for the feedforward
multipliers. Note tltal !h:.: scaling proces'i has introdu-ced a new multiplier i)'Jt in each second-order section.
If the zeros of the transfer functi•m arc on the unit circle. as is usually the cme :n practice, then ~i. = ±I
in which case we can chuose bo1 = b2£ = 2 -{} to reduce the tot:.~! number of a<"tual mu;trplicaticns in the
f.nal reali1.ation.
6'18 Chapter 9: Analys1s of Finite Word!engtil Effects

•····························· F {z)· ··························!>


• R
•·······F;(z}--,··-·t- •

Figure 9.25: An unsealed cascade realization of second-order JIR sections.

~--····-·-·············-,-·:· FR\z) ··-- .. ·········-·---·······?

+- - .. ·-F_j(z)-·····-'>
K
X(z)

Figure 9.26: The 'K:lled cascade realization_

It can be seen from Figure 9.26 that

(9.127<1)

H(zl ~ K n ilelzl.
R

i=l
Wl17b>

where

(9.128)

Denote

IIF.-llp ~a,, r=l,2, .... R, l9.I29al


IIHllp '
= fZR+l· (9.I29b)
9.7. Dynamic Range Scaling 619

and choose :he scaling constants as

(9.130)

Now, it follow<: fromEqs. {9.l30), (9.127a), and (9.127b} that

i,(,} ~ (fip.) F,(z), r= 1,2, ... ,R, {9.J3la)

H(z) ~ (l],P.) H(z). (9.13lb)

After scaling we require

r= 1,2, ... ,R, (9.132a)

(9.132b)

Solving the above for the S<:aJing constants, we arrive at


1
fJo ~ -. (9.I33a)
"'a,
fir=--. r = 1,2, ... ,R. (9.133b)
tl"r+l

9.7.6 Dynamic Range Scaling Using MATLAB


The dynam.i~: range scaling using the .Crnonn rule can be easily performed using MATLAB by actual
simulation of the digital filter structure. 1f we denote rhe impulse response from the input of the digital
filter to the o:.1tput of the rth branch node as Urln]i and assume that the branch nodes have been ordered
in accordance to their precedence relations with increasing i (see Section 8.1.2), then we can compute
dre .Cz-nonn l1F1IIz of if! [n]J first, and divide the input by a multiplier k1 = I!FJ il2· Next, we compute
the C:z-norm nFz h of {f2fn H and scale the multipliers feeding into the second adder by dividing with a
constant kz = JIFz h. This process is continued until the output node has been scaled to yield an .C2-nonn
of unity.
We illustrate the method by means of the following example.
62{) Ch~ph;r 9 l'-1'1alysis of F1rnte \'rlorolengtn EHects

'flv'i,. 11".allz:wllon [n lbc di~:L (Llnn U structure;,. ,. ,;h•Wil in F"1,11tr:: •.t:J7. Thr M A1 LAIJ firu 1 ntm .:ilmillatJIQ!: UlU;
~l.ll'e i~ P'en by Pl'ngram 9 _t. 1'-eJo nli n PI gram i!i finol ru~ willnlll ,.-alrn.s o;.;~n,.lam t!i ILl llfl~(y. a ,. • 'kl ..
k.2 - 'C• ... ~- In t:ht :IUJ~o= 1nific11Lins the "ll'nnim i::--nonn ('.al ali0i1. tht· llll.tJA.tL •.-;lnul* L~ ii=ha!Zil ,..,. j,'l.
"rlle pl"Q.K'fllll 'nmr~e-o Ibe..it~ of tlte £2~"nrt ar L"te •mpulll!! reip«JI\SC nr node yl ;c, 1 . 0 7210 00-.., ".: 7::! '5~.
w1Ud1 1.. used lo ...el 'It • I 072 JOi))l/512~~ 'Wllli ott.Jer scidill£ ~~~~~~ t~; st1JJ liCl rD UDII)' A~~~ 1\.tiJ tJ1 r.he
PJUi:r.1m .mov,·§ the £.~-.onn o l the u:npul,;e req.<""~ll;SC node: •.'las 1. OCOOO OOCOOOO-'r."l. ~~ri fl -Ill tlx- "iA"'llC5S
ut .liCa.llllt~ lhe i11pt11 tn Lhe second sup, in tbc hoe irufu~arillll!: the JlllfUWlwr.t: l2·nurm c;tfcul 1011 tt-.e r;:olltpUI
..viable 15 d11C!!.QJ ~ y'. Tlt~ progmm t:huJ ywrlu~ abe :;q oft h.- £;::-11orua (J( th uapul-;e ~rnn..;c at illl ode y2 11..-.
o • 0 l a 7 98 2a 7 6 2J q e. 'A'hKb is t1"Cd co set ~2 :::a Jo.O'I'i~8207c,2.Jt.t8 with k 3 '1.,,11 ,~ Lo ~ ni1',11 Tins procc:." 1s;
ll!~cd fur node yJ, reloull:illJ m k.:; = .J• l .fJ'H7 ~ •·itb Cbl! it I "Value of lbe .C:2· rm uf •mpu~
n:f.pc..iJ · 11L DOCie y.) as I). 9!:iiJ,9fi8ll3! 54(1 The pmgt 1 tiWI~a; "'hown hc:low li aJta.:l'ttU ud:JI:r autr; '" a"'e
l:ieen sc 1~

\ l't"OIJ ·arrt '9_6


% .tllu!"t:nn.:ion of Sc linq &- F.ound-Off .Noise Cd"cul.~t.~oon

f:OTlTiil t 1-DrYJJ
~l • BQLtll.01~1n002151252~~
k~ ;:. s<v, t rD.021i79821)":'b2JJ8~ r
k] z &q~~fll.96915,00 09j4~J;
xl - ll'kt;
Gil tl;
~i2 a (0 0];
bl - l!
·.•ar-new - o~ k: "' li;
whi le k ~ D.QOOOOl
yl = Q.2S932S4r~jl + vl;
x.2 ~ ~0.06t;.2272lk2J"L'l • -ill;
'Sill ~ .:,·1:
yl • 0.67628S8•e.t21:) - O • .l917·'66·~t.·.Pt - x::.J;
yJ a fy2 + 2 ~j111. ; ~i2(2J,tkJ;
fl12l2J a: 61..,11); ~•:&.ltll Y2;
vato1 d • Vilt:llm:;
•r.t~rn+>w .. varnev • ab!:.t)'"ll .. be.~y.::ol;
'K .:"'omr>u te pprox i te L2 no.nr squat e

e11d
Cfi ·p[ •L:t; r'IOrm ...qu~re • 'J ;cii, p[VlU.tt~J ~

Jk oJhc.J\e Jll"(l ram UJ1 be e;l ily modth«1 hJ c tt ~ 11-off r)C)i>e , '" ;u 1bc CJt:!('klr t~
lhr:o sc.Jlfcdl ilrLK"tt.m; , To tho,. ud, ...,.e ilil1" ~ht: o.iiB1Iul 6 1!1" illf'Ut 11..• <t.rru .and r:ply u ll puls.r OJt lhe mp111 ul 1e hnl
adder. nu i cq\lll,'aknJ 1•1 ~13iOg .c 1 • 1 b Pro 0 9_6 TI!e IIIOI'lMJIZC.J Du1J1!.11Mii.sc: Yart.UJU due Q I Mn~h!
Crrtll' o.un-~ is. ~ltd a... 1.07209663042567. Next. L: PP.I:.!. Ill impuJr.-;: 1 the ill'pll of 1ht Sl!l..~n;.;llldoJ ..r ilh
lhc: Cligital filLer i11ptn se1 [o £1!m. Thi-, l!1 aclul:'lo't!d by rrplru:•r~o& x? in ..: "llli41on !l'f y2 xl The progr au1
yrclt:Js ltlc-nurm hud ooq~w mrr v~ dlie 1n11 o;i.:Jg.lc enur )(llR"OC at cbt ,ccolld wid« 1.!61090J40711U7.
'1ne lllllnl nu.nnsl ~ OUtpUL mufld,.;l!T 110 ~- i.I.'IM~mi J: II ro.n.-..J IJ; [0 h..-- .qu·ntiL':ed i">ef(lfr ~~lf!IY.. t'ii lhO:II t>q\ld
1(1

2 X 1.07.209MJ()42j6"? + 4 )(" I "2"6100014071700 + 3 = I[] 1F;&55.3!ll719fl2,


On lhe Ol:lk:r 1\md..
outpul TtRJJHkJtl ~ hecottl!
•r . . ~ ~u.me qr,oanCi.L®on Dfl« w.:JrbLum ar II prodUl:b Ill ~~~ .Kidcr. Llle IQ JL:!it:d
9. 7. Dynamic Range Scaling 621

il0662172fk2 !!ld

xl y3

l/kJ

Figure 9.27: Cascade realization of !he transfer functions ofEqs. (9.l34a) in dire<.l form U.

]lk.l 111<2 01!662272/kJ


x2
xl

- 0.3917468

Figure 9..28: Alternate cascade realization of the transfer furn::tions of Eqs.. (9.134b) in direct form II_

9.7.7 Optimum Ordering and Pole-Zero Pairing of the Cascade Form IIR
Digital Filter
As indicated in Section6.4.2, and illustrated in Figures 6.!5 and 6.16, there are many possible cascade
realizatioos of a higher-order llR transfer function obtained by different pole~zecopairings and ordering. ln
fact, fora cascade of R second-order sections, there are ( R!) 2 different possible realizations (Problem 9.30).
Each one of these realizations wiU have different output noise power, as illusfrated in the previous two
examples, and hence. it is of interest to determine the cascade realization with the IOY!est output noise
power.
622 Chapter 9: Analysis of Finite Wordlength Effects

A faldy simple heuristic set of rules for determining an optimum pole~ zero pairing and ordering of
sections in a cascade realization has been advanced by Jackson [Jac70b]. To understand the reasoning
behind these rules, we first develop the expression for the output noise variance due to product round-off
in a cascade llR structure implemented in fixed-point arithmetic. To this end we make use of the noise
model of the scaled c~t:ude strm.:ture of Figure 9.26 as sbmvn in Figure 9.29.
It follows from Figure 9.29, and Eqs. (9.128.) and (9.130) that rbe scaled noise tra."lsfer function-; arc
given by

Gf(;:) = DR ii;(z) = (R0,6;) Gi\Z), l = 1,2, ... ,R.

GR+l(Z) = l, (9.135b}

where if,{z) is a<; given in Eq. (9.l28) and the unsealed noise transfer function is given by

G,(z) ~ n
R

i=l
Hdz). (9. 136)

Hence. the output noise power spectrum due to product round-off is given by

(9. 137)

and the output noise -...ariance is thus

~ aJ [~ kdGeii;J.
f=l
(9.l38)

where we have used_ the ~act that the quantity inside the parentheses in the middJe expression is the square
of the £1-nonn of G!(eJw) as defined in Eq. (9.111). In Eqs. (9.J37) and (9.138), kt is the total number
of multipliers connected ro the .tth adder. lf all products are rounded before swnmation, then

kl = .i:li'+l = 3, (9.139a)
kc=5, for£=2.3 .... ,R. (9.139b)

On the othcr hand. if all products are rounded after summation. then

kt = i. forl=l.2, ... ,R+L (9. 140)

From Eqs. i9.133a) and (9.133b),

(9.141)
9.7. Dynamic Range Scaling 623

-(···· ............... C1\:t------·---···--·----·

x[n l

Figore 9.29: 1be noise model of the scaled ca~ade rea;izatiOJJ of Figure 9.25.

Substituting Eq. {9.l4l) in Eq. (9.137), we obtain for the vutput noi<>e power spectrum

(9.142)

Correspondingly, the output noise variance is given by

(9.143)

Now, the scaling transfer function Ft(z) contains a product of se<:tion transfer functions, H;(z). i =
e
I, 2, .. _, -I, whereas the nolse transfer function G~:(z) contains the product of section transfer functions,
e.
H; (z}, i = f...._ 1, .. _, R. Thus, every term in the sum in Eqs. (9J42) and {9.143) includes the transfer
function of all R sections in the cascade realization. This implies that to minimize the Dutput noise power,
the norms of H,(z) should be minimized for an values of i by appropriately pairing the poles and zeros.
To this end, Jackson has proposed ttre following rule [Jac70b]:

Pole-Zero Pairfng Rule. First, the complex pole-pair closest to the unit circle should be paired with the
nearest complex. zero-pair. Next, the complex pole-pair that ls closest to tbe previous set of poles should
be matched with its nearest complex zero-pair. This process should be continued until all poles and zeros
have been paired.

The .above type of pairing of poles and zeros is likely to lower the peak gain of the section characterized
by the paired poles and zeros. Lowering of the peak gain in turn reduces the possibility of overffow and
att.erJuates the round-off noise. The pole-zero pairing rule is i:Hustrated in Figure 930 for a fifth-order
elliptic low-pass HR digital filter with passband edge at 03JT, passband ripple of 0.5 dB. and minimum
5<topband attenuation of 40 dR
Once the appropriate pole-zero pairings have been made. the next question that needs to be answered
is. how to order the sections in the cascade structure [Jac70bj. A section in the front part of the cascade
has i.ts transfer function H; (Z) appearing more frequently in the scaling lrans.fer function expressions in
Eq~. (9.142) and (9.143), whereas a section near the output end of the cascade has its transfer function
H, (z) appearing more frequently in the noise transfer function expressions. The best location for H, (zj
ob\'iously depends on the type of norms being applied to the sca1lng and noise transfer functions,
6:24 Chapter 9: Analysis of Finite Wordlength Effects

Figure 9.30: Illustration of the pole-zero pairing and ordering ru;es.

A careful examinatcon of Eqs. (9.142) and (9.143) reveal$ that if the .C2-scaling is used. then ordering
of paired sections. does not influence too much the output noi~e power since all norms in the expressions
are .£:2 -norms. 1"his fact is evident from the results of Examples 9.9 .and 9.10 where the tota1 ncnnalized
output round-off noise power of the cascade realization of Figure 9.27 is seen to be quite dose to that of
the cascade realization of Figure 9 .28. If, however, an £;:.o-scaling is being employed, the sections wifh
the poles clusest to the unit circle exhibit a peaking magnitude respm1se and should be placed doser to the
m;tput end. The ordering rule in this case is therefore to place the least-peakedsection to the most-peaked
se::tion starting at the input end. On the other hand, the ordering scheme is exactly opposite if the objective
is. to minimize the peak noise!! Pyy(w)jj 00 and an £.2-scaling is used. However, the ordering has no effect
on the peak noise if an C-00 -scaling is used.
The MATIJo.B Signal Pracersing Too!box includes the function zp2sos that can be employed to
determine the optimum pole-zero pairing and ocdering according to the above discussed rules. The ba'>ic
form of this function is
sos = z~2sos(z,p,k)

This function generates a matrix scs containing the coefficients of each ~cond-orrler section of the
eq:Jivalent tr.ar.sfer function H(z) determined from the specified zero-pole fonn. sos is an L x 6 matrix
of the fonn
b, b2l ao, an
[boo b, am
"". J.
sos = bol
"'' "" "''
"DL U!L an
boL btL

"''
whose. ith row contains the coefficients, {b,cl and {au}, of the numerator and denominator polynomials of
the ith second-order section with L denoting the total number of sections in the cascade. The form of the
ovo~rall transfer function .is thus given hy

,
H( -,-
<-,-
nL
H,
'
,
-~z·-
'-
n ar..·
L s.,_. , b
uui -r
-!
t.Z
+a,-~ 1
+b
lrZ
+a..,·z z·
-2

i=! i=l "' /"- .:.!

Thi~ rows are ordered su that the first row (i = 1) contain;; the coefficients of the pole pair fartlu:st from
the unit circle and its nearest zero pair. If a reverse ordering is desired with the first TO\\' containing the
coefficients of the pole pair closest to the unit circle and its nearest zero pair, then the function to be used
" sos = zp2sos\z,p,k, 'Juwn'i
9.8. Signal-to-Noise Ratio in Low-Qrder IIR Filters
625

S>>x "'
\<:1 1,M;;t;W I t"hi <hH)h A

, 1J 1\Hi:t , 'i'r "4 JYAn;lt::q; if ii :1 A& \ 1 '>


4 f;1J;)SiY0;1i.i0tn\F;t;nn "'' /\j}ii; J') M>Hr
f" , t!'Ot! Z>S tf\J Z'h'J fJ \Fi Utl

9.8 Signal-to-Noise Ratio in Low-Order IIR Filters


The output round-off noise variances of unsealed digital fifters do not provide a realistic picture of the
performances of these structures in practice since introduction of scaling multiphers can increase the
number of error sources and the gain for the scaling transfer functions. It is t:'letefore 1mportant to scale
the digital filter sl:n)cture first before irs round-off noise performance is analyzed. In most applications, the
round-off noise .,.-ariance by itself is not sufficient, and a more meaningfu.l result is obtained by computing
instead the expression for the signal io round-off noise ratio(SNR) forperfonnanceevaluatlon. We illustrate
such computation for the first-order and the second-order IIR structures considered earlier in Examples
9.7 and 9.8, respectively lVai87cl Most conclusions derived from the detailed analysis of these simple
slrLICttlr'f;s outlined here are also valid in the case of more complex structures. Moreover, the methods
followed here can be easily extended to the general case.
626 Chapter 9: Analysis of Finite Wordlength Effects

e{n]

x{n] -)--.--)'{It}
x[nl t-r-)1nl
t-Y[n) +)'In]

a
(a) (b)
Figure 9.31: Scaled tirst-order ~tion with quantizer, and its round-off noire model.

9 .8. 1 First-Order Section


Consider first tbe causal unsealed first-order HR filter of Figure 9.1. Its round-off noise variance was:
computed in Example 9.7 and is given by Eq. (9.98). To determine its signal-to-noise ratio, assume the
input x ln 1 to be a wide-s<mse stahonary (WSS) random signal with a uniform probability density function
and a variance tY}. The variance a:;~ of the output signal y[n] generated by this .input .is then given by

{9.144)

Taking the ratio of Eqs. (9.144) and (9.98), we arrive at the expressioo for the signal-to-noise ratio of the
unsealed section as
a ~' a2
S NR = -- 2-- ----"-
,.,, (9.145)
Oy cr(j
implying that the SNR is independent of a. However, this i<; not a valid result since the adder is likely to
overllow in an unsealed structure. It is therefore important first to scale IDe structure and then to compute
the SNR to arrive at a more meaningful result.
T.'Je scaled structure is as indicated in Egure 9.3 l along with its round-off error analysis modeJ assuming
quantization after addition of all products. With the scaling multiplier present, the output signal power
now becomes
(9.146)

modifying the signal-to-noise ratio to


K1a2
S!":R=-----T· {9.147)
«G

Since the .sculing multiplier :..:oeff.cient K depends on the pole location and !he type of scaling rule being
folrowe.d. the SNR wiU th11.<;. reBect this dependence. The Socaling parameter K can be chosen according to
the rules outlined in Section 9.7. To this end we need first to detennine the scaling transfer function F(t)
of the unsealed structure. h follows from Figure 9.31 (a) that

F(::) = H (z:l = -;--a-,-=-., , (9.14&)

with a corresponding impulse response

f[nj = z- 1 1F(z)} = a",utn]. (9. f49)


9.8. Signal-to-Noise Ratio in Low-Or-der UR Filters 627

Tabk "9S Signal-t<J-noio;e n;tio of first-order UR digital til!<m; for different inputs, Adapted from [Vai87c],

Typical SNR, dB
ScaJing rule Input type SNR
(b = 12, ia! = 0.95)
,----~~

WSS, white '' (I-!al}2


No overftow 52.24
uniform density 3cr 10

WSS, white Gaussian (I - lctD 2


No overflow 47.97
density (a}= 1/9) 9a02

Sinusoid, knovm J- a2
No overflow 69.91
~
frequency
-"
Now, as indicated in Section 9.7, there are different ~ahng rules. To guarantee no overftow, we can
use the scaling rule of Eq. (9.110) and arrive at

l
K = ~= = 1-laj. (9.150)
Ln=OU[nJ! ·
To evaluate the SNR, we need to know the typeofinput xln] being applied. If x[n] is uniformly distributed
with !x[n]j ~ 1, ir.s variance is given by
(9, 151)
Substituting Eq£. (9.150) and {9.151) in Eq. (9.147;. we anive at

( l - 'ail 2
SNR-
- '
., _2 - (9.152)
-~o

For a (b + l )-bir signed fraction with round-off or two's-complement truncation, a~ = z-2b /12. Substl-
h.lting this figure in Eq_ (9.152), we obtain the signal-to-noire ratio in dB as

SNRos = 201og 10 (i - !«IJ- 6.02 + 6.02b. (9.153)

As a result, for a given transfer function, the SNR increases by 6 dB for each additional bit adJed to the
register storing the product.
The above analysis can be earned out for other types ofinpUi and different scaling rules. We st:.mmarize
in Ta.b1e 9.5 the results for three different types of input for the scaling ensuring no overflow. Several
conclusions can be marl~ by examining th;s table. Independent of the type of input being applied. the SNR
decreases rapidly as the pole moves closer to the unit cirde. For a given transfer function, knowing the
type of input being applied, the infernal wordlength can be computed to achieve a desired SNR.

9.8.2 Second-Order Section


Consider nex.t the causal unsealed second-order IIR filler of Figure 9.24(a). Its scaled version is indicated
in Figure 9.32 along with the round-off noise anaiysis model, assuming again quantization after addition
6.28 Chapter 9: Analysis of Finite Wordlength Ettects

e[nl
K
x!n l }---y-~ v!nl
+ l'[nj

ia) (b)

Figure 9.32: Scaled second-order sectim: with quan!uer, and iL\ round-off noise model.

of all products. Now fOI" a WSS mput with a uniform probability density function and a variance o,2 , the
signal power at the output is given by

(9.154)

while the round-off noise power at the output ili given by

(9. 155)

Therefore the :>ignal-to-noi~ ratio of the sca:ed st.raclure i~ nfthe form

(9.156)

To determine the appropriate value ofdJe scaling multiplier K, we need to Ci)mpute the scaling transfer
function F(z). It can be seen frnm Figure 9.32(a) that Ftz) ;;; 1dentical to the filter trnnsfcr function H (z).
lf the poles of the transfer function are at l = re- 1 9 , then both transfer functions take the fonn

l
F{z)
-
= H(:z)
·
= l+az 1 +fh~
= I 2rco."er.. 1 +r2z2'
(9.157)

The corresponding impulse re:-;p<mse is- obtained by taking !he inven,e z-transform of the above, resulting
in Wrob~em 3.:02)
r" sin(n + 1)&
f!nl = h{nl = . · .u[n]. (9.158)
SinO
'ro eJiminate completely the overflow at the output in Figure 9.32, the scaling ·multiplier K must be ch~

K = "'""'
L.."=G :hlnll .
The summation in the denominator of Eq. {9.159) with hfnj given by Eq. (9.158) is difficult to compute
analytically. However, it is possible to esmblish some hounds on the summation to pronde a reasonable
estimate of the value_of the scaling multiplier coefficient K [Opp75]. To this end, note that the amplitude
H.9. Low-Sensitivity Digital Filters 629

of the response of the unsealed second-order section of Figure 9.24{a) to a sinusoidal input at the resonant
frequen.:y w =e. i.e.,x(n-j = cos(Bn), is given by
I l i/2
(9.160)
r)l(l 2rcos0+r 2 )j

However, IH(ei"'}l cannot be greatet ban L~o :htnJi since the latter is the largest possible value of
;:_,utpul y(n l for an input x[n] w1th lxln J < L Moreover,

~ lh[nJI = -.-
L
1
- ~ r" jsm(n ~
smtl L..-
Otil
n=O ~=0
00
I I
:::;: sinB Lr" = {1 -r}sine·
n=O
(9.161)

A tighter upper bound on L:~~ !h[n]j has been derived ln [Unv75] and is given by

I:n=O'·h(n]l <-
~ 4
~ . (9.162)
.:r(1 - r~) sin8
From Eqs. (9.159), (9-.160), and (9.162), we therefore arriw at
" rrl(l - ,.2)2 sin2 8
( I - ,.,~(1-
.
2rcose + ,.z,- -> K2 >
- 16 (9.163)

Following the technique carried out for the tina-orde-r section,. we can derive the bounds on the SNR for
the aH-pole second--ordet section from Eqs. (9.156) and (9.163) for various types of inputs. As in the
case of the first-order section, as the pole moves closer to the unit circle (r - 1), the gain of the filter
incf"'..ases. causing the input signal to be scaled down significantly to avoid the overflow while at the same
time boosting the output round-off noise. This type of interplay between round-off noise and dynamic
range is a characteristic of all fixed-point digital filters [J ac70aJ.
In certain cases, it is possible to develop digital filter structures that have inherently the least quantization
effects. In the following section we consider these structures.

~1.9 Low-Sensitivity Digital Filters


In Sec-tion 9.2, we considered the effect of multiplier -coefficient quantization on the performance of a
-digital filter. One major consequence is that the frequency response of the digital filter with quantized
multiplier coefficients is different from that of the -desired digital filter with unquantized coefficients, and this
difference may be significant enough to make the practi-c-al digital filter unsuitable in most applications. It is
thus of intere<.t to develop digital filter structures that are inherently !~sensitive to coefficient quantization.
Tu thi!'.- end, the first approach advanced is based on the conversion of an inherently low sensitivity analog
w~work composed of inductors. capacitors. and resistors to a digital filter structure by replacing each
analog network component and rheir interCOnnections to a corresponding digital filter equivalent such that
the overall structure "'"simuJates.'" the analog prototype [Fet71], [Fet86]. The resultingdigitai fiiter structure
is called a wave digital filter. which also :shares some additional properties of i.ts analog prototype. An
<Utemative approach is to determine directly the conditions for low coefficient sensitivity to be satisfied
by the dlgital filter structure and to develop realization methods that ensure that the final structure indeed
satisfies these conditions [Vai84]. In this section we examine the latter approach.
630 Chapter 9: Analysis of Finite Wordlength Effects

Figure 9.33: A typical magnitude response of a bounded real transfer functicn.

9.9.1 Requirements for Low Coefficient Sensitivity


Let the prescribed transfer function H(z) be a bounded real {BR) function as defined in Sect:on 4.4.5.
That is. H(z) is a causal stable real-coefficient function chuacterized by a magnitude respon~ \H(e1 "')j
bounded atxwe by unity, i.e.•
j H(ei"')J ::::: l. (9.164)

Assume that H (;:) is such that at a set of frequencies Wi the magnitude is exactly equal to 1:

(9.165)

Since the magnitude function is hounded above by unity, the frequencies Wk must be in the passband of
the fiJter. A typical frequency response satisfying the above conditions is shown in Figllre 9.33. Note that
any causal stable transfer function can be scaled to satisfy conditions of Eqs. {9.164) and (9.165}.
Let the digital filter structure /V realizing H (z) be characterized by a set nf R mu1tip1iers v.ith coef-
ficients m;. Moreo-ver, let the nominal values of these multiplier coefficients, assuming infinite precision
realization, be m;o. Assume that the structure N is such that, regardless of actual vaJues of the multiplier
coefficients m 1 in the immediate neighborhood of their design values m,o, its transfer function remains
bounded real, &atisfying the condition ofEq. (9.164). Now. consider IH(ef«>k)j, the value of the magnitude
~poose at w = illf<, which is equal to 1 when the multiplier values have infinite precision. Because of the
assumption on]\[ if a multiplier coefficient m; is quantized, then IH(efrut)J can only become less than 1
because of the BR condition of Eq. (9.164). Thus, a plot of jH(ei"'*)r as a function of m; will appear as
indicated in Figure 9.34 with a zero-valued slope at m; = m, 0 ,

•IH<ei~lll = o. (9.166)
am I 'm;=m.<j

implying thar: the first-order sensitiYity of lhe magnitude function !H (eiw)j with respect to each multiplier
coefficient m; i:s zero at aU frequencies Wk where jH(ef"')f assumes its maximum value of unity. Since all
frequencies 01;. where rhe magnitude function is exactly equal to unity, are in the passband of the filter,
and if these frequencies are closely spaced, we expect the sensitivity of the magnitude function to be very
!ow at other frequencies in the passband.
A digilal filter structure satisfying the above conditions for low passband sensitivity is called a struc-
turally bounded System. Since the output energy of such a structure is less than the input energy for ali
finite energy Inpu15lsee Eq. (4.102)}, it is also caJled a structurally passive system. If Eq. (9.164) holds
9.9. Low-Sensitivity Digital Fitters 631

Fi~re 9.34: Ulustration of the zero-<;ensitivity property.

G(z;
Ao(z) +

FigtU"e 9.35: Parallel a!! pass realization of a power-complementary pair of transfer functions.

with an equality sign, the transfer function H(z) is called a lossless bounded real (LBR) function. i.e.,
a stable allpass function. An allpass realization satisfying the LBR condition is therefore a structurally
!o.vsless m LBR systern implying that the structure remaim; allpass under coefficieltl quantization.
We now outline methods for the realization of low passband sensitivity digital filters.

9.9.2 low Passband Sensitivity IIR Digital Filter


In Section 6.10 we outlined a method for the realization of a large class of stable IIR transfer functions
G(:) in the form of a parallel a.llpass structure:

G(z) = ! (Ao(z) +A 1(z)i, (9.167)

where A..o(z} and A. 1 (z) are stable allpass transfer functions. Such a realization is possible if G(z) is a BR
function with a symmetric numerator and has a power-complementary BR tr.msfer function H (z) with an
antisyrnmetnc numerator [Vai86a]. In this case, H(z) can be expressed as

H(z) = zI {Ao(z) - At {z)}. (9.168)

The all pass dec-ompositions ofEqs. (9.167) and {9.168) pennit the realization of the power-complementary
pair {G{z}, H(z)} a~ .indicated in Figure 9.35.
Now. on !he unit cirde. Eq. (9.167) becomes

G(ej"') =~ {eitfo\«>) + ei9J(w) }. (9.!69)

since Ao(z) and A 1(z) are allpass functions with unity magnitude responses. Thus, .if the allpass transfer
functions are realized in structurally loss!ess forms., ;G (ei"'H will remain bounded above by unity. More·
over, at frequencies Wj_, where ;G(elwt)! = !, Oo(wt) = (&·, (pJI!)hl!"• and \ef8:l(CI.tl) + eil~J("'tll = 2. Or in
632 Chapter 9: Anatysis of Finite Worcllength Effects

y[nj

Figure 9.36: Parallel allpass realization of the lowpass fi:ter using struc(urnlly losslcs~ all pass sections with quantized
multiplier coefficients.

other words, the realization of Figure 9.35 is structurally passive implying low passband sensitivity with
respect to multiplier coefficient quantization.
Section 6.10 describes a method for deterrninillg the two aU pass transfer functions Ao(z) and A 1(z} -
for a given BR transfer function H (z) satisfying the conditions given abo\.-e. Once these allpass transfer
functions have been detennined. they can then be realized in structura11:y lossless forms using one of the
two methods discussed in Section 6.6.
The following example illustrates the low passband sensitivity characteristics of a parallel allpas-s
struct.Jre.
9.9. Low-Sensitivity Digital Fi!1ers 633

-IW
'\

' j
fj'
"
\]

(a) (b)

Figure 9.37: {a) Gain responses of the parallel all pass realizatior_ with infinite preCision coefficients (shown w1th solid
l:ine). and wi!h quantize!! coeffidencs {dashed line). and {b) passband details.

Figure 9.38: Passband re;,ponses of the direct fonn realization will infinite precision coefficien1s (shown with solid
line). and with quantized coeffidems (dashed lme).

{t shouki be noted that the power-com~le.mentary transfer function H (z) realized in the fonn of
Eq. (9.168} also remains BR if the allpass filters are stru<..1:urnlly LBR. Hence. it ex.hibi.ts low sensitiv-
ity in its pa~sband [which is the stopband of G{z) J. However. low passband sensitivity of H(z) does not
imply l.ow stopband sensitivity of G(z) (Problem 9.35).
A number of other methods have been advanced for the realization of BR UR digital transfer functions
lDep80j.tHen83], (Rao84]. [Vai84], [Vai85a].

9.9.3 Low Passband Sensitivtty FIR Digital Fitter


In many applications linear-phase FIR filters are preferre-d. Ai indicated in Section 4.4.3. there are four
types of linear-phase FIR filters of which the Type 1 filter is the most general and can realize any type
of frequency response. We therefore restrict our attention here to a BR Type 1 FIR transfer function and
outline a simpJe approach to realize it in a stnicturally passive fonn. thus ensuring low passband sensitivity
to multiplier coefficients [Vai85bj.
Now, trom Eq. (4.&0), the frequency re&ponse of a Type l FIR filter of order N can be expressed as

(9,170)

where iJ(w), a real function cf w, is its amplitude response. Since H(::) is a BR function, ff(w) ;::: 1. Its
delay-complementary filter G(z) defined by (Section 4.8.1)
G{z) = z-NJl- H(z) (9.l7l)
has a frequency response given by

G(elw} = e-JwN/2 [t- iJ{w)] =e-jwNf2ij(w), (9.172)


634 Chapter 9: Analysis of Finite Wordlength Effects

Figu~ 9.3~: Amplitude respo!lseS of typical delay-compiement;uy Type l FIR filter.-;.

x[nj

Figure 9.40: Low passband sensitlvi!y realization of a Type 1 FIR filter H(z}.

where G(w} = I -if (w) is its amplitude response. Amplltude responses af a typical delay-complementary
FIR filter pair are depicted in Mgure 9.39. It follows from lhis figure that at w = Wt, where jR(.ei"'')l = 1,
G(w) has double zeros. Thus, G(z) can be e"Xpres~ro as

L 2
G(z)=G.,(z)D(1-2cosw.H- 1 +z-2 ) =G<l{z}Gb(z). (9.173)
k=l

A delay-complementary implementation of H(z) based on Eq. (9.171) is sketched in Figure 9.40,


where the PIRfilterG(z) is reaii:zed according to&;:. (9.173) as a cascade of Ga(Z) and L fourth--order FIR
sections, with the kth section having a transfer function (I - 2 cosw,~;C 1 + z- 2 l. Now if the multiplier
coefficient 2 cos Wk of the kth section is quantized, its zeros are stiJI double and remain on the unit circle.
As a result, quantization of the coefficients in Gb(<:) does not change the sign of the amplitude response
G(w), and in the passband of H(z}, G(u.') 2: 0. Moreover. Ga(Z) has no f':eros on the unit circle, and
quantization o~ i1s coefficients alw does not affect the sign of G(w). Hence, ft(w) continues to remain
bounded above by unity, i.e .• the realization of H(:;) as indicated in Ftgure 9.39 remains structuraJiy BR
or structurally passive with regard to all multiplier coefficients, resulting in a low passband sensitivity
realization.
9.1 0. Reduction of Product Round-Off Errors Using Error Feedback 635

--~~

Figure 9.41; Magm!ude respons<!S of the o:iginal FIR lowpass filter (soltd lim:) and its lO'A' r.ensiliviiy realization
(dash:cd line)-

9.10 Reduction of Product Round-Off Errors Using Error


Feedback
We have indicated earlier in Section 9.6 that in digital filters implemented using fixed-point arithmetic,
the quantization of multiplication operations can be treated as a round-off noise at the output of the filter
structure and can be analyzed u:-.ing a statistical model of the quantization process. In many applications,
this noise may decrease the output signal-to-noise ratio to an unacceptable level. It is thus of interest to
in..-estlgate techniques that can reduce the output rotmd-off noise. In this section, we describe two po.'lsible
solutions that require additional hardware. ln critical applications. the cost of the additional hardware may
be justified.
The basic idea behind the error-feedback approach is to make use of the difference between rhe unquan-
tized and quantized signal i:n reducing the round-off noise. This difference, caJled the error, is fed back
to the digital filrer structure in such a way a.'> to not change the transfer function originally implemented
by the struc1ure while efft:t·tivdy decreasing the noise power [Cha81], [Hig84], [Mun81]. [Tho77]. We
illustrate the approach for a first-order and a second-order filter structure. The error-feedback approach is
often used in designing high-preds.ion oversampling AID converters (see Section ll.l2).
636 Chapter 9: Analysls of Finite Wordlength Effects

X(Jl j

Figure 9.42: A fir.;.t-on:Xr digital 5lrer stntcture with error feedback.

9.1 0.1 First-Order Error-Feedback Structure


Consider again the scaled first-order section of Figure 9.3l(a). We aBo assume that all multiplier coeffi.
ciencs are signed (b + l )-bit fractions.. The quantization error signal is then given by

e[nl = y!_n}- v[n]. {9.174)

We now modify the structure of Figure 9.3l(a) as shown in Figure 9.42, v.ilere now the error signat is
being fed back to the system through a delay and a multiplier with a coefficient fi. In practice, the coefficient
fJ is chosen to be a simple integer or fraction, such a.-.± l, ±2. or ±0.5, so that the multiplicatkm can be
simply performed using a shift operation and wiU not introdw.."e an additional quantization error.
Analyzing Figure 9.42. we arrive at the expression fur the transfer function of the digital filter structure
With y[nl as the output as

(9. 1 75)

The noise tran'>fer furu;lion G(z) with the error ft!edback, with y[nl as the olltput, is given by

G(z) = f(z) I
£(z) X{~}=D
(9.176}

and without the error feedback (fS = 0) is given by


l
G(z) = 1 ~a'
,. {9.! 77)

Followlog steps similar to that outlined in Example 9.7. we arrive at the output noise variance of the
error-feedback structure as
"2 ~ 1 + 2a{J + p2 2
a •. ~ , o:o, (9.178)
' 1 -a-
where u-J IS the variance of the noise sour-ce efn 1- Note from the above that the output noise variance is
a minimum when {) = -a. However, in practice lo I < 1, and hence this choice for f3 will introduce an
additional quantization nuise sour-ce, making the analysis resulting in Eq. (9.178) invalid. Thus. from a
practiCJ.t point of view, it is more attractive to reduce the output round-off noise variaoc-e by choosing f1 as
an integer with a value ci0'3e to that of -a, in which case the only noise source is due to the quantization
of uy(11 - q.
9.10. Reduction of Product Round-Off Errors Using Error Feedback 637

4 -~-_71

_/// I

--r'--

o~---~- ------
o 02n 0.4rr 0.&.11:

Figure 9.43: The- nvrm:th7ed CfTO' power spcctr~<m of the lirq-ordcr ~ction witt!. and withou~ errm feedback.

For ~0'~ <' 0 S, f.= 0. implying nn ~rror feedback. However. jn this case, tbe pole of H(?.) i~ far from
the uni~ circle, and a;.. a re:-.ull. the noi:-.e variance is not bgh. For !al ~ 0.5, we choose f)= (-l)>.gn(aj. 4
Substituting this value of fi in Eq. {4T;"~)_ we arrive at

(9.179)

Comparing Eqs.. (9.179} and !9. l?K) with fj = 0, we note that the introduction of error feedback ha<;
mcreased the S:'\IR by a factor of i01ogw!2(1 -la:)J. TZJ~ above increase in SNR i,; quite significant if
the pole 1s closer to lhc unit cin:ic. FoJ e:x.ample, if :al = 0.99, the impro-vemenl is about 17 dB, which
i& equtYalent to about J- bits r,f mcrea~d accuracy cornp,:;red to the case without error feedback. The
additional hanhvare requ1rements for the ~tructure of I-'igurc 9.42 are two new adders an:d an additional
~:torage regi~!:er.
Comparing the two expres;,\(WS of Eq:'i. (<:I. 176) and (9 .177), we conclude that the noise trnnsfer functi-on
with error feedback is given by thai without the error feedback multiplied by the expre55IOn (I --;--- fjz- 1 ).
Equivalently, the error-feedb£tck .;:ircuil is .'lhapinli t3e error spectrum by modifying the input quantization
noise£(;:) to EJ (Z) = (l + p~- 1JE(::). The olllput noise i;.. generated by passing Es(Z) through the usual
noise transfer function of Eq. (9.!77).
To examine !he effect nfnoi~c spectrum shaping. l:on~ider the case of a narr-owlmnd lowpa&S 1-!n-.t-order
filter with a -+ L In this cme_ \VC choo~ {3 = -!,and as a result, E~(z) ha>. a zero al.:: = l {i.e.,
a1 = 0). The power spedr.ll Jensily of the unshaped quantization noise E(::) is JJ-,
a constant. The power
2
spectral densit)' of the shaped noise source EJ;:) \-<..given by 4 sin (w/2)crJ -and i-.s plotted in Egure 9.43
along wftl! that of the un~haped ca~. The noise :>haping redi:-.tributes the noise so as to move it mostly imo
the ~topband of the lowpass filt.c-r. thu" reducing the noise variance. Because of the noise redistribution
cam.ed by the error-feedba£k: ..:ircuit. thi~ method of nmnd-off nobe reduction has also been called the
erro.--s,.,ectntm shaping approach in the !iterature !Hig84l

9.10.2 Second-Order Error-Feedback Structure


The error-feedback approach for round--off noise reduction has atso been applied t-o second-order liR
digital filter structUies !Hig&4), /MunS!j. One proposed structure obtained by modifying Figure 9.32\a}
is indicated in Figure 9.44. The transfer function H{z) of tltis structure is given by Eq. (9.157). Jt should
be noted that the inclusion of the error-feedback circuit to the structure of Figure 9.32(a) does not affttr
638 Chapter 9: Analysis at Finite Wordlength Effects

Figure 9.44: A ~econd--{)rder digiraJ filter with e7ror feedback..

either H(z} or the scaling transfer function F(z). Analyzing Figure 9.44. we arrive ar the expression for
the noise transfer functlon
(9.!80)

11te output round-off noise variance for C2-scaiing is given hy


ai = OIGJi2) aJ. 2 (9.181)

Note that a choice of fh = a: and fh = a2 makes ~Gjb = l, yielding aJ; = aJ, an apparent optimal solu-
tion. However. this. choice for the multiplier coefficients in the error-feedback path introduces additional
quanti?.ation noise sources that were not taken into account in the above an~sk As in the case of the
tint-order section with error feedback, a more attractive solution is to make fJ1 and lh integers with values
close to a 1 and a 2, respectively. For example, for a narrowband Jowpass trans.fer function, the poles are
close to the unit cirde and to the real axis, i,e., r ~ 1 and B «::: 0. In this case, at i:s dose to -2 and a 2 is
close to l. We therefore choose here fh = -2 and fh = l, resulting in a noise transfer function
1-2z- 1 +z- 2
G(z)~ . (9.!82)
l +«1Z l +<illZ 2
It has been shown by Vaidyanathan [Vai87c1 that for a very nliiTowband lov.113-ss filter with r = 0.995.
f) = 0.07n-, and b = 16., the second-order error-feedback structure has an SNR that i.s approximately 25 dB
higher than that without the enor feedback. A detailed comparison of the second-01der secti(m with and
without the error feedback for various lypes of inputs is left as an exercise (Problem 9.36}.
lt should be nO(ed that here also the error-feedback circuit provides a noise shaping, as in the first-
order case. In fact, for tlte parameters indicated above, the noise transfer function whh error feedback
is given by that without error-feedback multiplied by the expression (l - z- 1 ) 2 . Or in other words, the
error-feedback circuit is shaping the error spectrum by modifying the input quantization noise E{z) to
£,-{zJ = (I - .:C 1f 2E(z}. The output noise is generated by passing E~~z) through the usual noise transfer
function of Eq. [_9.180) wi'h fJ1 = f3-?_ = 0. The power spectral density of the shaped noise source E~(z) is
give.n by l6sin4 (w/2)a5, whereas that of the unshaped case is simply aJ. These pmver spectral densities
have been plotted in Figure 9.45. Observe how the error feedback lowers the noise in the passband by
pushing it into the stopband of the filter.
9.11. limit Cyctes in IIA Digital Fitters 639

Fi~ 9.45: Normalized error ~r spectrum of the second-order section Wlth and without eiTOe" feedback.

9.11 Limit Cycles in IIR Digital Filters


So far we have treated the anal] sis; of finite wordlength effect.<> using a linear model of the system. However,
a practical digital filter is a nonlinear system caused by the quantization of the arithmetic operations. Such
nonlinearitie; may cause an HR fiiter, which is stable under infinite precision, to exhibit an unslable behavior
under finite precision arithmelic for specific input signah:, such a<> zero or constant inputs. This type of
instability usually results in an oscillatory periodic output called a limit cycle, and the system remains
in this condition until an input of sufficiently Jarge amplitude is applied to move the system into a more
cor.ventional operation.
In applications where the digital filter is operative at ail ti.mes, oscillat~ output in the absence of an
input is highly undesirable. In JY.trticular, limit cycles with a frequency of oscillation in the audio frequency
range can be quite annoying to the listener in audio and musical sound proce">Sing applications.
It should be noted here that limit cycles occur in HR filters due to the presence of a feedback path.
Such oscillations are absent in FIR structures which do not have any feedback path.
There are basically two type." of limit cycles: (1) granular and (2) overflow. The fonner type of limit
cycle is usually of lO\\' amplitude. wherea;.; overflow oscillations have JaiEe amplitudes. We examine bolh
types of limit cycles in this section.

9.11.1 Granular limit Cycres


Two type<> of granular limit cycle" have been observed in IIR digital filterx: inaccessible and accessible
limit cycles fC1a73]. The former type can appear only if the initial conditio:ls of the digital filter at the
time of s;tarting pertain to that limit cycle. whereas in the ~cond case, the limit cyde condition can he
reached by starting the digital filter with initial conditions not pertaining to tl:at Jimit cycle. We illustrate
ihe gener.uion of lhe limit cycle by analyzing the nonline-ar behavior of a fint-order and a second-order
IIR digital fil:er.
640 Chapter 9: Analysis ol Finite Wordlength Effects

~n]1JL~~~---$.--l~• Ytnl
u<J
Figo:re 9.46: A first-Qrder IIR filter with a quantizer,

The limit cycle generation can be easily illustrated on a computer. MATLAB Program 9_7 given below
can be used to study the granular limit cycle process. This program uses the function a2dR of Section
9.4.1 to develop the decimal equivalent of the binary representation of the filter coefficient with N bits for
the magnitude after rounding.

% Program 9_7
% Granular Limi:: Cycles in Fi:-st-Order IIR Filter
%
elf;
alpha = input('Type in the filter coefficient =
yi = input{'Type in the initial condition= ');
.
"' .
x = input('Type in the value o~ x[O] = ' ) ;
for n = 1:21
y(n) = a2dR(alpha*yi,5) + x;
yi = y(n}; x = G;
end
9.11. Umit Cycles in IIR Digital FiHers 641

Thbl~ 9.6: Limit cycle behavior of rhe first-order llR d1gJtat filter.

a= 0 8 1011, f(-1] = 0 a= lnlOlJ,)o[-1]=0

n aj[n- 1] Yin] aj[n- 1] j[n]


'
0 0 Oa 1101 0 Oal101
I OAlOOOllll 0 4 JOOI lalOOOllll 16.1001
2 Ot..OlJOOOll 0 4 0110 OaOiltxXllJ Oa.OIIO
3 0.::,.01000010 Ot;.GlOO ] 11 01 000010 lo.OlOO
4 OaOOIOllOO 0 6 0011 Oa00101100 0,1,0011
5 0 4 00100001 OAOOIO 11;00100001 laOOlO
6 OAOOOlOllO Oll.OOOl ''' 0~00010110 OaOOOI
'
7 OAOOOOI011 ' 14 00001011 laOOOl
8 Oa.OOOOIOil 0~00001011 0 4 0001

'
''
'
rI II I 0
r
'
''
'
t'
''
'

:, j 6I 6I I '
!
I,

_j ,j,
,
,j,
,."
! 6 '

{a)
"""'"'"
(b)

Figure 9.47: Illustration of limit cycles in a first-order liR digital filter. (a) a = 0.6, and (b) a = -0.6.

k = 0:20;
stem{k,y)
ylabel{'A'T!Plitude'); xlabe:('Time index n');
title: ['\alpha = ' r.u.:n2str(alpha) ]');

Figure 9.47 shows the plots of the first 21 samples of the output response of the first-orrler IIR digital filter
of Eq. (9.1) implemented with filter coefficients rounded to 6 bits. The input is x[n} = 0.048[n] and the
filter coefficients are a= ±0.6. The initial condition yf -I] is set toO.
642 Chapter 9: Analysis of Finite Wordlength Effects

Figure 9..48: A ~nd-order JlR section w1th quar.tizers after the multipliers.

"" "'' -l
'
¥1 t¥nJ\ ali iu1 «'*"* r~w 11:0: tm!!U vB<
t~+wmlin;g,. U& ttr;iiciW!0di {f! 441. !&!r afw;m:;cr tlf0!!1nht 0!

Limit cycles can also occur in sec-ond-order IIR filters if the system with quantizers has effective poles
at z = ± 1. In !his mode the dead band is bounded by 28/( I - :cq I + a2) [Jac69].
It should be noted that the limit cycles occurring with the amplitude bound ofEq. (9. 192) are basically
inaccessible limit cycles, and it is highly unlikely, in practice, that the digital filter will start with initial
conditions penaining to tbese limit cycle~ [Cla73]. On the other hand, fOT the second-order IIR filter
structure of Figure 9 .48. with arbitrary initial conditions, accessible limit cycles are highly likely to occur
and their amplitude can only be bounded from below:

E\·en though we have considered here the zero-input limit cycle generation in very simple IIR digital
filter structures, limit cycles also occur in higher-order structures. However. their analysis is almost
impossible, except fO£ the detem1ination of the bounds on the amplitudes of the limit cycles [Lon73]. In
some :structures, periodic limit cycles have been observed with nonzero constant amplitudes and also for
constant amplitude sinusoidal inputs.
9.11. Limit Cycles in IIR Digital Fil1ers 643

.r[n]---{tJ-='--..j

Figure IJ.49: A second-order JIR section with a quantlzer after the accumulator.

9.11.2 Overflow Limit Cycles


As indicated earlier, limit-cycle-li.ke oscillations can also result fromoverllow in digital filters implemented
Y<ith finite precision arithmetic. The amplitude of the overflow oscillations. ;;an cover the whole dynnmic
range of the register experiencing the overflow and are much more serious in nature than the granular type.
We illustrate the generation of an overflow oscillation on a compute-r in the following example.

Wf,qtil& ::$111 t'&tllsuhuw; xud rjr{ :P'm;,;nnt 1XIf 111':'1'4&! (riti!~wf ,:!\tit&.; lYfl¢t A!:i 1J8ilmff01t ¥1¥
!,V!01Mm01Q Wf !t·U·{1®1jillk
ZA'1\ :0'm !i t«:r rt'Wfkt!mlx XW ,;:tvry,
w 7!

-
1£0!lliif!,VJ01M 1110' 41!0$166M11()hfo !] t l
9i xr ~~'" 111} .:::: o

h ,'~:F;:,:·~·
i
'0 Xkhu:tkfi(::iqht :r r"rvx<f'j.;tG·: ,;c,t ,"'7 .·
i
d "'
,h!.u\' 'ililli;
\n "2 <Sf±
(:.f70 }>{';;

·' w/tf>h • y
Yt;, V'' \(It;•

¥ " ¥r4tJ:
Yi.Ar!;t. • b; )( {
»1:::1101 t' :;,si!Swt UY.lrA
'\1\J " , HIt""~' %!(:¥"' , • ..~:..t ; ;
f..S L:.t t ! · 0! ;tt:;q (;t;(U::>.tt. T :0 '. i {
Chapter 9: Analysis of Finite Wordlength Effects

Figure 9.50: Overflow limit cycles of Figure 9.49.

FJgUre 9.51: Ccefficient ranges of a second-order direct fonn IIR filter to guarantee no overflow oscillations (sbown
by shaded region).

For stability we have shown in Section 4.12.1 that the filter coefficients Of the second-order direct fonn
UR structDce of Figure 9.49 must remain inside the stability triangle of Figure 9,51. However. the structure
can still get into a zero-input overftow oscillation mode for a large range of values of the filter constants,
satisfying this stability constraint when implemented using Nro's-complemem arithmetic with rounding.
It has been shown that overflow limit cycles under zero-input cannot occur if the filter coefficients lie in
the shaded region inside the stability triangle. as indicated in Figure 9.51 [Ebe69J. This region is defined
by the relation
(9.!95)
As the above condition is quite restrictive, we now examine other IIR structures lhat do not sustain
limit cycles with less severe constraint on the pole locations.

9.11.3 Limit Cycle Free Structures


A number of authors have advanced digital filter structures that are limit cycle free when implemented using
specific arithmetic schemes. The most ~raJ approach to the development of wch structures is based on
the state-space representation [Mil78). In Chapter 2 we considered the time-domain description of an LTI
discrete-time system in tenns of the convolution sum and the linear constant coefficient difference equation
relating the input and the output signals.. Another time-domain representation for a causal LTI discrete-time
system is in terms of internal variables called the state variables. which are usually the output variables of
all unit delays.. For a second-order causal crr discrete-time system, the state-space representation relating
the output sequence y[n] to the input sequence x[n] is given by
9.11. L1m1t Cycles in IIR Digital Filters 645

(9.196)

ylnJ= (c; q] [ s;[n]J


j'2[n] +dx[n]. (9.197)

Denoting5
(9.198)

B- [ b,
- "' J. (9.199)

we can rewrite Eqs. (9.196} and (9.197) ln a compact form as

s[n + 1] =A s{n] -t- Bx[n}, (9.200)


y[n] = C s{nj -t- dx[n]. (9.201)

In the above equations s[n], given by Eq. (9.198), is called the state vector with its elements s;[n] known
as the state variables. The matrix A given in Eq. (9.199) is called the state transition matrix.
Even though in an actual implementation, the right-hand sides of both Eqs. (9.200} and (9.200 are
quantized. the quantization errors caused by the quantization of the right-hand side of Eq. (9.200) go
through the feedback. loop and are responsible fur the generation of limit cycles. We assume that the
variables s 1[n + I] and s 2 [n + 11 are quantized, and the delayed versions of these quantized signals are
the state variables SJ [r.] and .Q[n], respectively.
We define a quantizer to be pa.ssiYe if

for aUx. (9.202)

If x is inside the given dynamic range of tbe system, then for magnitude truncation (Section 9.2) it is evident
that Eq. (9.202) holds, i.e .• the quantizer is passive. If x is outside the dynamic range caused, for example.
by cverflow. it must be brought back to £he range by following either the saturation arithmetic scheme
or the two's-complement overflow scheme discussed in Section 8.6. As a result, magnitude truncation
followed by either of the above two overflow handling schemes is again a passive quantizer.
A digital filter structure with a state transition matrix satisfying

ATA=AAT (9.203)

has been cal1ed a normnlfonn structure. It has been showo that such a structure with passive quantizers
does not support zero-input limit cycles of either type [Mil78). The matrix A satisfying the above condition
and ~Aii2 < l is caiieda normal matrix.

'In 1his t=l. row and «>lumn ve<;:tors.. in general, are indtcaled by boldface lower!;:ase letters. while tbe m~~trlces are epo::senled
by boldface upper.ne letten.
646 Chapter 9: Ana!ys1s of Finite Wordlength Effects

x[n) --<(!+~(+~~

-d
Figure 9.52: A second-order IIR section.

'[n]l ' I a' I


Ji
•• • I "-• Ia_._, I •••

Figure 9.53: Register stocing the signal to be quantized.

9.11.4 Random Rounding


A conceptually simpie technique to-suppress zero-input granular limit cycles in second-order llR sections is
ca1led random rounding [But77], [Law78]. To explain the operation of this method, Iet the register storing
the signal v[n] prior to the quantization be as indicated in Figure 9.53. In the nmdom rounding method. an
uncorrelated binary random sequence x [n] is generated extemaJly by a random number generator, and the
(b + l)th bit a-b-l of v[n] is replaced wil:b x[n]. The modified signal v[n] is then passed through a passive
quantizer. In addition to the increase in hardware complexity, this approach a1so results in a slightly higher
quantization noise that is, however, distributed oYer the whole frequency range.

9.12 Round-Off Errors in FFT Algorithms


Since FFf is often employed In a number of digital signal processlng applications, it is of interest to-analyze
the effects of finite v.-ordlengths in FFT computations. The most critical error in the computation is that
due to the arithmetic round-off errors. As in the earlier sections, we assume that the DFT computations
are being carried out using fixed-point arithmetic. aod we thus restrict our analysis to the effect of product
9. 12. Round-Off Errors in FFT Algorithms 647

round-off errors in the DFr computation via the FFT algorithm." and to compare thern with the errors
generated m the direct DfT computation {Opp89], fPro92}, [Wei69]. The model for the round-off error
analysis to be employed here :s the same as in the case of LTI digital filters and is shown in Figure 9.18.
Now the multiplier coefficients wt"
and the signals in the OFf computation are. in general, complex
numbers resulting in complex multiplications. Since each complex muJtiplkation usuaHy requires four
real multiplications, there are four quantization errors per multiplication. lfthe DFf computa1ion requires
K complex rnultiplicarions. there are 4K sources of quantization ermrs in the computational structure. We
make the usual assumptions about the statistical pwperties of the noise sources:
(a) A114K errors are uncorrelated with t:ach other and uncorrelated with the input sequence.
•:b) The quantization errors are random variables unifonnly distributed wiih a variance aJ == z-2h,t;2,
assuming a signed b-bit fractional fixed~point arithmetic.

9.12.1 Direct OFT Computation


Recall that the A'-pui.nt OFf X[k] of a length-N oomplex sequence x[nl is given by
N-1
X(k] = L x{nlWZk, (9.208)
"~

Thus, the computation of a single DFI' sample requires N complex multiplications, and hence, the total
number of real multipiications for the computation of a single DFf sample is4N. As a resuJt, there are 4N
quantization error sources. The variance of the error in the computation of one OFT sample is therefore 6

.., ' 2-lbN


a; = 4Noo = - - (9.209)
3
indicating that the outpm mund-off error is proportional to the DFT length..
Now, the input sequence .x[n] must be scaled to avoid overflO\l.' in the .:;:omputation of X[kl From
Bq. (9.208) it follows that
N-1
iX{kl~ :':: L jx{n}l < N, (9.210)
•="
assuming that the input samples satisfy the dynamic range constraint lxfnJI < L To prevent overflow we
need to ensure that
IX[k]! < l. {9.211)
which can be guaranteed by dividing each sample x{n] in the input sequence by N.
To analyze the effect of the above scaling. assume the input to be a white nois-e sequence with each
sample uniformly distributed in the range ( -1/ N. 1! N) [Pro92]. The input signal power is then given by

2 (2/N)2
a_t = !2
(9.212}

The corresponding output signal power i:.

2 N 2
ox= 0"-" = 3N (9.2l3)
~----------------
6Strn:tty speaking. the ol.l!pul noise ~<lriance will be less lhan the value here since multipficmions v;ith ~ and w~!l do 1\Qt
devebp any error. '
648 Chapter 9: Analysis of Ftnite Wordlengtll Ettects

x(O~

x{4]
w'N -I

xf2]

xf_6J XI3J
w' -1 w·
x(l]
N
'
x(5]
w'N -I

;rf3!

xl71
w'
_,
'
Figure 9.54: Reduced flow-graph for t!-.e computation of XHl-

As a result, the signal-to-noise ratio is


az 22b
SNR = ~ = 2 . (9.2}4)
cry N
The above expression indicates that the SNR has been reduced by a factor of N 2 due to scaling and the
round-off error. The wordlength size needed to compute a DFT of given length with a desired SNR can
be determined using Eq. (9.214} (_Problem 9.43).

9.12.2 OFT Computation via FFT Algorithm


We new consider the round-off error analy~s of the DFT computation based on an FFr algorithm. Witbout
any loss of generality, we analyze the decimation-in-time radix-2 FFT algorithm. However, the results
derived can be easily extended to other types of fast DFf algorithms.
From the flow-graph of the DIT FFf algorithm given in Figure &.24, it can be seen that Ehe DFf
samples are computed by a series of butterfly computations with a single complex multiplication per
butterfly mutlule. Some of the buuerfiy computations require multipJications by ±I or ± j that we 00 not
treat separately here to simplify the analysis"
Consider now the computation of a single DFf sample as indicated in Figure 9.54. It follows from this
figure that the computation of a single DFT sample invo1ves ,., = log 2 N stages. The numberofbutterfties in
a particular stage depends on the stage's location in the computational chain with N /2' = 2"-r butterflies
in lhe rth stage, where r = I. 2, _.. , v. The total number of butterflies involved per DFT sample is
therefore
l + 2 + 2 2 +--- + 2"- 2 + 2~-l = 2"- l = N- I. (9.215)
It also follows from Figure 9.54 that the quantization errors introduced at the rth stage appear at
the output after propagating through (r - l) stages while getting multiplied by the twiddle factors .at
each subsequent stage. Since the magnitude of the twiddle factors is always unity, the variances of the
quantization errors do nol change while propagating to the output. The total number of error sources
conlributi.ng to the outpm round-off error is 4(N - 1). Assummg that the quantization errors introduced in
each butterfly are uncorrelated with. those generated at otherbt.tterflies, the variance oft he output .round-off
error is then
9.13. Summary 649

v 2 = 4(N- 1)---
Y
')-2~>

12
="--;;---
z-2h N
3
(9.216)

It should be noted that the above expressior. for a~ is identical to that derived for the direct DFT
computation case given by Eq. (9.209). This is to be expected since the FFT algorithm does not aiter
tile total number of complex multiplicatiom; to compute a single DFT sample, rather it organizes the
computations more efficlently so that the number of multiplications to compure all N DFf samples is
reduced.
Now, tD prevent overflow at the output if we scale the input samples to satisfy the conditioo 1-t[nH <
1/ N, the SNR obtained in the FFT algorithm remains the same as given i.n Eq. {9.214). However, in the
case of the FFI' algorithm, the SNR can be improved by following a different scaling rule as described
below.
Instead of scaling the input samples by I j N, 'Ne can scale the input signals at each '>tage by 1/2. This
scaling rule also guarantees that the output DFf samples are scaled by a factor of (1/2)' = liN as desired.
However, each sca..ling by a foctor of 1/2 reduces. the round-off noise variance by a factor of 1/4. As a
result. the round-off noise varian..::es of lhe 4{2''-r) noise sources at the rth stage are :educed by a factor
t•f 1/4r-l, while the noise propagates to the output. It can be shown that the total round-off noise variance
llt the output is now given by

(9.217)

assuming N to be large enough ;,o that 1/ N << 1 (Problem 9.44). The SNR in this case reduces to
a2 221:>
__]{_= (9.218)
a2 2N
'
Hence. distribution of the scaling into each stage has increased the SNR by a factor of N. The formula
given in Eq. (9.218) can be used to derennine the wordlengtb required to achieve a specified SNR for
<:arnputing DFT of a given length N {Problem 9.45).

!1.13 Summary
This chapter is r:oncemed with the effects of the finite word!engths caused by the actual implemenmtion of
a digital signal processing algorithm that ca!L.<;.eS the results of the algonthm to be different, in general, from
tl!e desired ones obtained in the ideal case with infinite precision wordlengths. To develop the appropriate
rr10dels for the analysis of these effects prior to actual implementation, we first review the quantization
process tL'led to fit the infinite precision data into a finite wordlength register a11d the resulting errors caused
by it. The quantization of both fixed-point and floating-point numbers are considered. However. the
C.isctL-.$ion in the rest of the chapter is restricted to fixed-p<lint implementations.
The effect of the quantizatior: of the multiplier coefficients in :he implementation of a digital filter
is considered next as a coefficiem sensitivity problem. Simple formulas are derived for such sensitivity
analysis for the infinite impulse respon~ (JIR) filter and the finite impulse responore (FIR) filter.
ln the digital processing of a contirmous~time signal, rhe latter is sampled periodically by a sample-
and-hold device and then converted into a digital fonn by an analog-to-digital (A/D) converter. A statistical
mode I is developed for the analysis of the input quantization error caused by the ND conversion and is userl
to derive the expression for the signal-to-quantization ratio as a function of the ND converter wordlength.
lbeA/D quantization error propagates to the output of the digital filter processing the digitized continuous-
t:.me signal, appearing as a noise added to the desired output, and methods for the statisticaJ analysis of the
C•utput error are provided.
650 Chapter 9: Analysis of Finite Wordlength Effects

The effecr of prodUct round-off in the fixed-pomt implementation of a digital filter is then analyzed
using a statistical modeL A s!atistical analy-sis of the ouiput noise caused by the propagation of these
internally generated e:-rors also to the output of the digital filter is provided. An overflow ma;: occur at
certain :intemaJ nodes in a digital filter implemented in fixed-point arithmetic, which can result Hl a large
amplitude oscillation at the filter output. Methods of scaling the internal signal variables with the aid of
suitably placed scaling multipliers to minimize the probability of overflow are discussed. The perfonnance
of the cascade form of digital filters is particularly examined in depth to illustrate the effect of pole-zero
pairing and the ordering of the low-order sections. A detailed analysis of the output signal-to-noise ratios
of scaled first-order and second-order UR digital filter sections is then provided.
Conditions for the iow passband sensitivity realizations of IIR and FIR digital filters are next derived
and a method for such low sensitivity realization for each case is outlined. Two approaches to the reduction
of round-off errors in llR digital filter structures are then described..
The discretization process can also cause the occurrence of periodic oscitlation. called limit cycles, al
the output of an IIR digital filter under certain conditions. These limit cycles are difficult to analyze in
the general case. Their existence is demonstrated here only for the first-order and second-order HR digital
filters. Conditions for the limit cycle free operation of a state-space structure is. derived.
The chapter concludes witb an analysis of round-off errors ln the implementation of OFf and FFf
algorithms-

9.14 Problems
9.1 Denve !he range;; of the relative quantization errors of Table 9.2.

9~l Compute the pole m:nsitiviry of the first-order lowpas;; transfer function HLP (z) of Eq. (4.1 09} with respect to
me coefficient u.
9.3 Compute the pole sens.iti'oities nfthe '!eCOnd-order bandpass tr.ansfer functioo HJJp(Z) ofEq. (4.ll3) with re...pect
to the coefficiems 0' and p.

9.4 The digital filter struo:ure of Figure 9.52 i~ more commonly known as. a modified coupled-form. structure tYao82j.
Al',ot:ber modified coupled-form structure is shown in Figure P9.L Determine the transfer ful1C1ion of borh structures
and then compute their respective pole sensitivities. Compare these sensitivities with those of the structure of f\gure
9.10 as given in Eq, (9.46).

y[nJ

d d

-I
Figure P9.1

9.5 Deterntine the transfer function of the second-order digital filter structure shown in Figure P9.2 and compute ir.,;
pot.~ sensJtivities [Aga75]. Compare lhese sensitivities with those of the strnctures of Figure 9.10 and Figure P9.l.
~1.14. Problems 651

x[n]

-c

Figurt:P9.2 Figure P9.3

9.6 A third·nrder elliptic highpass transfer function

0.18681.;::- 1Hz 1 - 0.0902z + 1)


H(z) =
(z + 0. 3628)(.;:1 -c- 0.5\ J lz + 0. 7363)
i> reali.;red m (I) direct form. and (2) cascade fonn. Compute the pule :sensitivities of each structure.

!1.7 Consider the digital hlter strocture of Figure P9.3 characterized by .a transfer function H {z} = YJJ X I· Show that
th<:c sensitivity of H (z) with. respect to the multiplier coefficient u defined by UH{z)jikt i~ given by

&H(z) "' . ) G ,
---- = •·aU".: · «(Z},

'"
wherl': Faf::) IS the scaling transfer function from the mput node X 1 to the input node Y2 of the multiplier a and Ga (zj
i> the noise transfer functiGn from the output node X2 of the multiplier u to the filter output node Yj.

~~..8 Show thac the function WN(w) ddin<:d in Eq. (95S) satisfies the following properties:
(a) ;) <:.: W,v(w) :5: L
(bJ W.'I'(O) = W,v{ll") = L for all N.
'
(<:j 1'illl "V , , = 1
",V!WJ [<)• 0 < W < ll".
N_,.oo v-

9.9 Venfy !he SNR value~ giYen in Table 9.3.

9.1V An alternate approach to the alg:ebrnic ealct~lation of output round-off noise variance is outlined in this problem
[Pat80j. l.et the partial-fraction expan$ion in z of an Nth-order real rational noise transfer function H(z) with simple-
poles. ~ gn'en by
N A
H(z)= L-_'-,
k=l z-r-ak
where ,{k and a;; are. in general, complex numbers.
(aj Show that H (z} can be expressed in the fflJ"m

N
H~zi= 'C'
LCk
k=l
(a"+') +B.
1 t· ag

where C;; and B :.ne constants. De1ermine the e:xprcssions for C11. and B.
652 Chapter 9: Analysis of Finite Wordlength Effects

(b) Show 1bat th~ normal lied llUtput round-off noise vanance c:J can be expreso;ed in the form

(c) Show that the above ex.pre.<..5mP. can be further simphfied a;.

9.11 Detennine the output noise variance due to the propag<~tion of the jnput quantization noise for each ;:;.f the
fo!lo.wlng causal IIR digital rilters:
tz:+3){:-l)
( a; H, (') - ~'-o~~--;;';-;
- ' ~ - (;: 0.5)(;: -0.3)'

(b'. H-(~)-
"- - - Ot
-,
+ 1)(4::: + 71)(z2 0.5< + 0.4)'
-co3c(2='"'+~•'"'0o .s;c'c',---'"~·'co'c+~l~lc:c:
(<: - l }1
(C-' H3{:::.} = " ~ -
- {;:~ 0Az+0.J)

9.12 Determine the expre~<.JOn for the normalized output noi'ie variance due to input quantization of the following
d1git:<l filler realized in a parallel form

H{z) = C + ~~~.4 •-. +--:--"B~


1-a;-- 1-fh;

Each se.:tion in the parallel structrne is realized in direc! form ll. "What i<> the value of the variance f« a -0.7,
{3 = 0.:;-, A =
3, B = -2, C 3" =
l).i 3 Realize the following u.ansfer function

(z-2)(z+3>
H(z) = c-''-c~"'c~~
(z+03)(z-04)

~n four different cascade forr..s with each firsl-order stage imp]emer~ted in direct form ll.
round---<:~ff noise at the
fa) Sho\>o l:h-.o no\:;e model for each unsealed litrocture for the computation of the product
o:;:tput asS-uming quantization of products before addition assuming fixed-point implementation with either
rounding or £Wo's-complemcr.t truflUl.tion. Compute the normalized output rmmd-off noise Yariance for each
reali?.atio:;. \Vhich cascade realization has the lowest rouod"off noise?
{b) Repe;It pH1 (a) assuming quantization after addition of prod-llct.

9.1•& Realize the transfer function of Problem 9.13 in two different pa:aHel focms with eac-h first-vrder stage imple-
mented in direct form II.
(a) Show the noise model for each unsealed stmdure fer the <:omputati:on of the produ-;:t round-off !lOire at the
output assuming qoontization of products before addition and asw.ming fixed-point implementation with ei!her
rouniling o: <wo's-complemen! truncation. Compute the normalized output rot~nd-off noise variance for each
r-ealization. Which para!ld realization his the lowest round-eft flOtse?
(b) Repea: part {a; assuming quantu.ation afte-r additiof! of prodLK"!i.
9.14. Problems 653

9.15 Realize the following .~et..-.:md-order tran:;fer function

2 ~ 2c 1 - u c 2
H\z) =
I +0.5.;: l + 0.06.: 2'
in (1) direct form. (2) caMCade form, and (3) pnaEel form. Each section in the cascade and parallel strucrures is realized
in dir~ct fonn H. Show the amse mode-l for eoch unsealed structure frn- the computation of rhe pmduct round-off noise
a-: the ootput ussuming quantlzatlon of products before addition and .assuming fixed-point implementation with either
rnundmg or I we "s-<:omplement truncation. Compute the product rou!ld-off noise varian.ce for eaeh reabzation. Wh.ich
rt:aliLation ha~ tb_e lowest round-off noise? Note: There are four cascade and two parallel realizations.

9.16 Realize the transfer function ofPmbler.1. 9.15 in Lle Gray-Ma..<"kel form.
(a) Show the noise model for the unscaleC: ~tructure for the computation of t."!Je product round-off noise variance at
!he output as.<;urniug quantization of product'> before additwn and compute its product round-off noise vanance.
(bJ Repeat ;>art (a) assuming quantization after addi~ion of pwducts.

9.17 One pos~ble two-muitiplier realization of a second-order Type 2 allpass transfer function

is showr. in Figure P9.4. Derive L'<e e:w;pression for the nonna!ized steady-.~tate outpill noise variance due tc product
mund-off ;;;ssu;ning fixed-point implementation with either rounding or rwo's-comple::nent truncation.

x,-r"'~" +
--~~

Figure P9.4

9 . 18 Develop the noise model for lhe product round-of: noise analys1s of the second-(lrder coupled-form structure of
F1gure 9. 10. Determine the normalized outpt.::t round-off noire variances due to product round-off before summation
aud after ~WlJmation.

9'.19 Develop the noise model for the product round-eft noise analysis of the second-()njer Kingsbury structure of
f,gure P6.8 of Problem 6.9. Determine the normalizej output round-off noise variances due to product round-off
bd"t:Jre ~ummation and after 5ummation .

1).20 The allpass sectmn of Figure P9.4 is employed walter the phase response of the structure realizing the transfer
f~.:nclion H (z} of Problem 9.13 as indicated in Figure P9 5. T.le ail pass equalizer of Figure P95 has a trnn;;fer function

0.8+0.5z- 1 +z-2
A 2 i·Z) = -"C;i-i::'T--:c~=
l+0.5zl+O.Sz2"
G:mopute the normalized 5teady-&tat~ output r.oise variance due to product round-off of the phase-equalized structure
of Figure P9.5 ;f H (z) is realized in a cascade form with the lowest product round-off noise.

~A,(+
FigureP9.5
654 Chapter 9: Analysis of Finite Wordleng!h Effects

9.21 Conslder t.lJe digital filter structure of Figl.lre P9.6 that is assumed to he implemented using 9-bit signed two"s-
o..:omplernent fixed-point .arithmetic with all pmCucts quantized before additions. Draw the linear noise model of the
un»:::aled system and compute its IOta! output noise power.

I .•
j -u.s<] ~~

Figun: P9-6

9.22 S.:a!e the first-order -dig!tal fil:er SU"UCture of Figure P9.7 using C.e £z-nonn sealing rule.

yfn.l

Figure P9.7

9.23 S.::a!e the secO£Jd-order digiml filter structure of Figure P9.8 ·~.sing the £z-nonn seating rule.

!U:4 Scale the structure realized m Problem 9. ~2 using the .C2-norm seating rule and then compute its output noise
variance due to product round-off assuming qcantization of products before addition.

9.25 Scnle the structures realized in Problem 9.13 using the .C2·nonn scaling rule and then compute the output noise
variances due to product round-off assuming quanlization ofproduc~ before addition. V.'hat would be theootput noise
nl.fiam:es if quantization is carried out after addition?

9.2& &:ale the structures realized in Probiem 9.14 using the .Cz-norm scaling rule and then compute the outpm noise
vru-iance~ due to product round-off ass;uming quantization of products before addition. What would be the output uoise
¥anance~ if quamization Is carried out aftet" addition?

9.27 Scale the 'Erucwres realized in Problem 9.15 using the L:z-nonn scaling rule and then compute the outpUI noise
vanance due to prodm:t round-off a~ming quantization nf products before addition. What would be the output noise
variance if quanti7.atwn is carried out after .addition?

9.28 {J) Calculate the output noise \>ariance of Ule digital titter structure of Figure P9.9(a) due to product round-off
before addition. A..~sume all numbers are frncrions and represented in a two'~ompk:ment fixed-point rep~tation
wit!!<> wordlength of b+ l bus. Note that the mukiplier"'-l" doesm>tgenerate noise. (b) Nowconsiderthe<:onfiguration
of Hgure P9. 9(b), where the filters A 1(<:), A2(Z}. and AJ(Z) are implemented as in Figure P9.9(a) with the n:u.ltiplieJ-
nx:-ffictents d, replaced with d1. d2, and d3, Ie\>pecUvely. Calculate :he output noise 'l>ll.riaoce of thl:s new digi!al filter"
,.,rructu;e due to product round-off before addition.
H" 14. Problems 655

•J.29 Con~ider the digital !iller !.1ructure of Figure P9.10. Let o-J
represent the total output noise \'ariance due t0
product round-off m G(z 1 and A\.d. If each delay m !he realization of G(z) is rep!~ with tY."O delays, i.e., z-J
replaced with ::-1, but A(;:) is left unchanged, calculate the output noise variance due to prodJct round-off in terms
<)f co.

0 ..1 +.:: -1
x[n] ~ G(:) - A(,) f-< y[n] A(;: ) .., I
1 + 0.3;:
Figure P9.1&

!J.39 Show that there are ( R 1; 2 d1fferent po~<,ible realizations of <1 cascade of R second--order sections.

~•.31 (a) \I.'hat is the optimum pole-zero painng and ordering of each of the following transfe£ functions for obtaining
tlte smallest peak output nolM: due to product round-off urn:ler an £2-scaling rule? (b) Repeat part (a} if the objective
is to minimize the output noise power due to product round-off under an .C,.,-scaling rule.
(.::2 + 0.8;: + U.2)(z2 + 0.2z + 0.9)(z 2 + 0.3z + 0.5)
(i) HJ(zl = ., ~ ,
(z 2 + O.lz + 0.8)(;: 2 + 0.-z + OA}(zL + 0.6;: + 0.3)
(z 2 + O.!z + 0.7)(;: 2 + I Az + 0.33)(z 2 + 0.5z + 0.6}
(h) H2(~) =
(z 2 +0.4;:+0.7Hz-" 0.2;:+0.9)(:: 2 +J.lz+0.18) •

~t32 Derive the expression~ for the SNR giYen m Table 95.

!:~.33 In Section 6.7.1, -we showed that die first-urder lowpass transfer function HLp(z) ofEq. (4.1 t3-l and the first-
c·rder highpass transfer functi-on HH p(.;:) ofEq. (4.118) can be realized in the form of a parallel allpass structure 01f
Fig!lre6.32. where A 1 (zJ is a first-<Jrder aHpass tramd'erfunction given by Eq. (6.65). lhe first--order allpasssectioo can
be implemented using a single multiplier« ruing an)' one of the four single-multiplier illpass struetures of Figure 6.23.
Shov; that the reabzation of HLp (z} in the tOnn of Figure 6.32 exhibits low seosirivicy io the passband at w =
0 lVith
gsp~t lo the multiplier coefficieru u_ 1.e.,

a IHLp(e1<z>)l
=0.

'"
Likewise, show that the realization of HH p (;:j in the f:Jrm of Figure 6.32 exhibits low sensitivity in the passband a!
w = n with resJ)'e\:t to the multiplier u>efficiem u. J.e.,

=0.
656 Chaptef 9: Analysis of Finite Wordlength Effects

9.J4 l!; Section {).7.2, ..w showed :hat the l>t~:ond-urder banJpass transfer function HBp(z) of Eq. (4.107) a!td the
-;c•."<.lDd-\ -rdcr band:-.tup tran~:of~ function HBs!~) of Eq. (4.112) can be realized in the fonn of a parallel allp&~ ~'IDJ<'Iure
of Fip.Jre 6.35. wh<:re A2(;:) is a s:xond-order all pass transfer function given by Eq. (6.68). The second--o.'lfder allpass
scc!it:'il ..:an \:>e- nnpl.~mrnt~ ming two milltipli!!N a and f5 by employing a cascaded lattice structure shown in Figure
6.36. Show that the realilation of Hop(:) in the form of Figure 6 3:'i exhibits low se:J.sltivity in the pas~and at the
cente~ frequency ('-'() wi.th respect tu the muLtiplier coefficients a and B. Le.,

;
it 1 Hsp\ef<'-'J(
'I
~'-~, __!: =0,
Ja

Li'<..ev.i»e. show that the realization of Hfls(z'; in the form of Figure 6.35 exhibits low sensitivity in the passband at
=
w = () and w .T with respect lo the multiplier coefficienh ct arul fl, l.e.,

uj Hys;eJw)i: iJ IHg:;(ejw)l i
i:la

<~\l!ss(ej"".1\
ifzy
, !
.o.=O

i
!,,_,.
= 0.

=O, 0
ap
IHBs(ef"')~~
6/3
-
t,>=;r
=0,

=0.

935 In the parallel allpa:;s reall7..atlon of a bounded real (BRl tnmsfer functllin G(t}, the passband of G{z) is the
.,top!x\nd <.If its power-cumplementary transfer function H(::). Show why low pas.<~band sensltivity of G(z) does not
imply low 'i.topband se:n~itivity of H(~J.

9.36 tkvdop the a.pressions fm the SNR of the secoOO-order IIR filter structure of Figure 9.44 with and without
errur fedhw.;k for the following types of inpu!sc: (a) WSS, white vniform density, {b) WSS, white Gaussian density
(a_, = l /3), and (;;;sinusoid with known frequency. The poles are at z =
(l - e}e*i8 with .E -.. 0, B -.. 0, and
11 » "- A'l-ume a (h + ll-bit ~ignecl representations for the number repl'eseotation.

9.37 _\:1ndify the ~.:oupled-form slructure of Figure 9.10 tc include erTar feedback and de~ennine the normalized output
round-,,ff noise power of lhe modified structure. What are the appropriate values of tbe multiplier C<lefficienl§ in the
er~m·-teei.lbad loops to minim1ze the output round-off noise p(lwer? What is the e11.pression fO£ the output round-off
noise powa in this ~;_-a.se?

9.3ii MoJify !he Kingsbury struc!Ul'e of Figure P6.8 to include enur feedback and detennine the normalized output
round-C>f! noise power of the m<Jdiiied strudure. What ;ue the appropriate values of the multiplier ooetfu:ients in die
error-fee-<:!bm:k loops to minimrze the output round-off noise power! What is the expression for the output round-off
nop;e pnwe!' in this ca.<;e?

9-39 sr,'.JW that the coupled-f{)rm slructure of Figure 9.10 wlll n01 support zero-input limit cycles under magnitude
quanti;r.atlllll.

9.40 Shtiw that the moilitkd l.'Oupted-fonn structtlre of Figure P9J will not support zero-input limit cydes under
magnit~t!e quantization.

9.41 Show that Ll-te se~.:ond-order structure of Figure P9.11 will not support z.ero-input limit cycles under magnitude
'-lu;mli7.<11ion fMee76J_
U.15. MATLAB Exercises 657

-ul .
~l

Figure P9.11

Sl.42 Detemoirte the expce.;sion for the output round-off noise "ariance in the computation of a single sample of a
kngth-N DfT c.sing Goertzel's algorithm implemented in a fraclional sig~ (b + 1 }-bit fixed-point arithmetk.

g,_43 Determine the Immher of hit" h lo curnpute a single sample of a 512-point DFr o-f an input sequence of length
512 by direct wmputalion with an SNR of 25 dB.

9.44 Derive Eq. (9.217)_

9.45 Determine the number of bits needed 1o compute a single sample of a 5!2-point OFf of an input sequence nf
length 51 2 using a radix-2 de<:imation-in-time FFT algorilhm wnh an SNR of 25 dB. Assume a distributed scaling to
prevent overftow al the outptJL

9.15 MATlAB Exercises


M ?.I Write-a MATLABprogrnm to plot the pole distribmion ofa.<,econd-ordertwo-multipliecwucture with multiplier
coefii.cien~ represented in sign-magmtude form with b bits.

M ?.2 Using the progmm develnred in Exercise M9.I, plot the pole di»tributions of the dire"'t form, the coupled
fonn of Figure 9.9, and !he Kingsbury structure of Figure P6. 9 for a 4-bit wordlength, Comment: on the coefficient
senr>ilivity of each of these struclures from the pole di-stribution plou..

M9.J Modify Program 9_1 to de»ign an elli?{ic: towpa.ss JIR filter with the following !5-peCifications: passband edge
a10.5n. -~topband edge a\ 0.55n. passhand ripple ofO.Cl dB, and minimum stopband attenuation of60dB. Quantize
the ttansferfunctmn coeffic"leOt.s to 6 bits using the M-function a2dT. Plot the magnitude responses and the pole-zem
plots of the two transfer fuoctions. Comment on your results.

M 9'.'1 De-termme the factored form of a fifl-h-order ellipti<:"lwpa.'>s transfer function with the fullowing :'!petificatkms:
pa:'lsband ripple of 0 5 dB. minimum stopband attenuation of 45 d9 • .and passband edge at 0.45x. From its pole-zero
lccations, determine the optimum pole-zero pairing and their ordering to minimize the output noise power under an
C:x;-scahng role. Verify your result using MATLAB.

M 9.5 Realize the transfer function of Exercise M9.4 as. a paralle: cunmx:tion of two all pass fitter-> and detennine- its
power-complementary highpas3 transfer function. Plot on the ~e figure the gain responses of the lwo filler.~. Verjfy
!he lt.>'N passband swsilivity of the -paraUe1 allpass realhatmn by making a 1 percent change in the filter coefficren-ts.
6!3-8 Chapter 9; Analysis of Finite Wordlength Et~ects

M 9.6 Write a }\,fATLAB program ro s;mul.at<;: <: fourth-ocder tran&f.::r function nf the form

(I+ OF-J + h:!.::-::){1 + b:;~-l +b4.;:-:2)


Hi;:)= -. -~ (9 219)
(l+ap: 1 +a:1: "){l+<qz 1 +a4z "")
m a "-'<1$Cade form 10>1th each ,ccond-onier seclion realized m direct fonn IL Tile input data to yot.:r program are the
nu.:nerator coeff,c~h {b;} and the denominator eoefficier.Js ja;} of the ~>econd-offier sections. Usmg th.i;. pwgnun
simulate tv.-'<J different cascade realizations of the following transfer function

(9.220)

Using this program determine L.'le- nnpulse resPQnSe of each pertinent scaling tram, fer functions. and their approximate
£1-norms.. Based on this infonn:ali<:m. scale each realization usm~ an .C2·scaHng rule and compute the tolal pnxloct
round-off noise variance for each ~caled s.iructure as~;uming mundmg before addition.

M 9.7 Write a MA TLAB program to simulate a foun:h-orCe-r tmmfcT :·unction of the fonngiven in Eq. (9.219) m parallel
fmms I and II, The input data to your program are the numerator coefficients !h; j and tbedenomina!or coeffic-;ents {ai j.
Using thiS progntm simul<lle the twi.l different parallel realizations of the trar.sfer function of Eq. (9.220). Determin~:
the' impulse reJ.ponse of eocb pertinent ~aling mrnsfer functions and their approximate l:z-norms. Based on this
irdonnation. scale each realization using an £1-;Scaling rule and compute the total product round-off noise var-:ance
for each scaled ~trudur~ aM>ummg rounding. before addition. CoiDp<~.re the octput round-off noises of the two parallel
strocture~ with those of the cas;;ade reaJizatior~-; of Exercise M9.6.

M 9.8 Write a MATLAB pmgram to simulate a fourth-order transfer function of !he form gi~·en m Eq. (9.219) using
the: Oray-Markel method. The input data ro your program are the numerator coefficients (b; l and the denmm'!.atm
;;;o~fficients {a; i. Using th:s pmgram simulate the Gray-Markel realization of the lransfer funct.ion of Eq. (9.220).
Determine the impdse response of each pertinent s.::aling tran$fer function and its approximate L;::-nocm. Based on
lhi~ mfvnnation, scale the realizati(m using an .C;::-scaling rule and compute tbe total product round-off noise variance
for the scaled structure asiuming rounding before addition. Compare the output round-off noise of the Gray-Market
realization wlth those of the ca<;cade realizations of Ex.er::ise M9.6 a:nd the two parallel strtx::mres of Exercise M9.7.

M 9.9 Using Program 9_7 investigate the granular limit cycles of f'igure 9.46 for the fiJllowing sets of values of the
=
codfirienr. the initial condition and the scale factor of the input impulse: {-aJ a = 0.5-. yf -I] = 0.1, x[Oj 0.04: (b)
a,,_ 05. ;·f-11 =I. xJ:OJ = 0.04;:md(c)Ol =0.5. y[-l] = tO,xrOJ = 6. ComrnentonyourresulK

M 'J.IO Modify Program 9_& by replacing theM-function .aLdR 10-ith the function a2dT and then demonstrate by
mrning the modtfied program that the structure ofFigure9.49does r-.ot exhibit overflow limit cycles if sign-magnitude
tru.lCation is uM:d to truncate the sum of products of Eq. (9. i 94).
Multirate Digital
10 Signal Processing
The digital signal proces.sing structures discussed so far in this tex.t belong to the class of single-rale
systems since the sampling rates at the input and at the omput and all internal nodes are the same. There
are many applications where the &ignal of a given sampling rate needs to be converted into an equivalent
signal with <~ different sampling rate. For example. in dig1tal audio, three different sampling rates are
presently employed: 3Z kH~ in broadcasting, 44.1 kHz. in digital compact. disk. and 48kHz in digital
audio tajX' (DAT) and other appHcations [Lag82]. Conversmn of sampling rates of audio signals between
these three di..."ferent rates is often necessary in many situations. Another example .is the pitch control
of audio recordings usually performed by varying the tape recorder speed. However, such an approach
in digital audio changes the sampling frequency of the digital signal and, as a result, conversion to the
origina; <>ampiing rate is needed [Lag&2]. In the video applications, the sampling rates of NTSC {National
Television Systems Committee) and PAL (Phase Alternate Lme) composite video signals are, respectively,
14.3181818 MHz and 17.734475 MHz, whereas the sampling rates of rhe digital component video signal
are- l.J.S MHz and 6.75 MHz for the lwninance and the color-difference signals, resp-ectively [Lut9i]. 1
Thert are other applications where it is convemenf (and often judicious) to have unequal rates of sampling
at the filter input and output and at internal nodes. Examples of s~ch sampling rate aJterations are the
oversampling ND and DlA converters discussed in Sectiom 5.8.5 and 5.9.3, respectively, and analyzed in
detail in Sections 11.12 and I 1.13, re<>pectively. Additional applications of sampling rate alterations are
also given in Sections [ 1.8 through 1 1.11.
To ac-hieve different sampling rates at different stage~, multirate digital signal processing systems
employ the down-sampler and the up-sampler, the two basic sampling rate aJteration devices in addition
tc• the conventional elements such as the adder, the multipher. and the delay. Discrete-time systems with
unequal sampling mtes at various parts of the system are called muitirate systems and are the subject of
discussmn of this chapter. ,
We fin;t examine the input-output relations of :he up-sample-r and the dmvn-sampler both in the time-
domain and the transform-domain. As in many appJications, cascade connections of the basic sampling
rate alteration devices and digital filters are employed; some basic cascade equivalences are then reviewed.
For sampling rate alterations, the basic sampling rate altera:ion devices are invariably employed together
with lowpa.ss digital filters. Tbe frequency response specifications of these filters are developed next A
computationally more efficient appmachto sampling rate alteration is. based on a mul.tistage implementation
that is then illustrated by means of a specific design problem. The polyphase decomposition of a sequence
is reexamined next in the framework of rnultirate theory, and its application in developing computationally
efficient sampling rate alteration systems is illustrated.
Many multirate systems employ a bank of filters \\>ith either a common input or a summed output.
These filter balks are introduced next, and the design and computational1y effi-cient implementation of a
dass of such filter banks are discus'ied. It is followed by an i:1troduction of certain special types 0: transfer
!.CCIR Re.::ommendJtiO!l No 60!.

659
660 Chapter 1o: Multi rate Digital Signal Processing

functions that are particularly attractive in the design of computationally efficient multirate systems. The
rest of the chapter is devoted to a discussion of quadrature-mirror fiher banks that find applications in
signal compression and other areas.

10.1 The Basic Sample Rate Alteration Devices


The two basic components Jn sampling rate .alteration are the up-sampler and the down-sampler introduced
earlier in Section 2.1.2 where we examined their input-output relations in the time-domain. However,
it is also instructive to analyze their operations in the frequency-domain. This will point out why these
devices must be _used with additional filters. ]n addition. the frequency-domain analysis provides the basic
foundation for analyzing more complex multirate systems introduced in the latter parts of the chapter.

1 0.1.1 Time-Domain Characterization


We reexamine the time-domain characterizations of the two basic sampling rate alteration de\'ices here
again. An up-sampler with an up-sampling factor L, where L is a positive integer, develops an output
sequence xuln] with .a sampling rate that is L times larger than that of the input sequence x[n]. The
up-sampHng operation is implemented by inserting L - I equidistant zero-valued samples between twc
consecutive samples of the input sequence x[n] according to the relation

-jx[n/L], n =0,±L,±2L, ... ,


[ l
x,..n- ~-·~ (!0.1)
0, Oun:;.o WJSC.

The up-sampling operation is illustrated in the following example using MATLAB.


10.1. The Basic Sample Rate Alteration Devices 661

illptil <:q!kn<e
-~----- --------,
'I'
II> 9
0 ''
~ r
"
'fil"
"'
"'
9
9
''
'
r
"'"
'i":"
'
'''
'' •"-
a
!l.5J r' '
''
'' 9
'
16
''
'
o, •' ''
-ll.51 l !! 9
6
i
,,
6' '' 6
w
6'' -n ~-~---
m ~
0

"' ""
"Time tnde"-!1
~

"' ' "' "


Tinoe l:>do>x :n "'
(a) (b)
Figure 10.1; Hlustration of lhe up-sampling process.

xfn]~x.lnl
Figure 18.2; Block diagra:n representation of an up-sampler.

The block diagram representation of the up-!Wnpler, also -ca]Jed a sampiing rote expander or simply
an expander, is shown in Figure 10.2.
1n practice, the zero-valued samples inserted by the up-sampler are replaced with appropriate nonzero
values using some type of filtering process in order that the new higher-rate sequence be useful. This
process, called interpolalion is discussed later in this chapter.
On the other hand, the down-sampler with u down~sampllng factor M. where M is a positive integer.
develops an output sequence yfn1 with a sampling rate that is (1/M}th of that of the input sequence x[n}.
The down-sampling_ operation is implemented by keeping every Mth sample of the input sequence and
removing M - 1 in-between samples, to generate the output sequence according to the relation

y[n] = x[nM]. (10.2)

As a result. all input samples with indices equal to an integer multiple of M are :retained at the output and
all others are discarded. as illustrated in the following example.
662 Chapter 1 o: Multi rata Digital Signal Processing

''
' '
" 5
J1r th"' '''
t '
"
0

' "
:6-
9

'
I
''
'
I' ,A
<

I
-0
',I 0

' "
(a) (b)

Figure UU: Illustration of the down-sampling process.

x[nl~y[n]
flgure 10.4: Block diagram representation of a down-sampler.

The block diagram representation of the down-sampler or .sampling rote compressor is shown in
Figure 10.4.
The sampling periods involved have not been explicitly shown in Figml:s 10.2 and 10.4. This is in
the interest of simplicity and in view of tbe fact that the mathematical theory of multirate systems can
be understood without bringing T or the sampling frequency Fr into the picture. It is instructive in the
beginning to explicitly see the time dimensions at various stages in the sampling rate alteration process.
as indicated in Figure 10.5. In the remainder of this book. however, the explicit appearance ofT or the
sampling frequency FT is not shown unless their actual values are relevant.
The up-sampler aDd the down-sampler are linear but time-varying discrete-time systems. The time-
varying property of these devices is easy to show. Consider for example, tbe factor-of-M down-sampler
defined by Eq. {10.2). Its output Jt[n] for an ioput x 1[n] = x[n -no] is then given by

Yl [n] = x,(Mn) = x[Mn- noJ.


10.1. The Basic Sample Rate Alteration Devices 663

1{n]-x 0 (nT)~ y!n]- xa(nMT)

Input sampling Output sampling

frequency = Fy =
I
T
, F7
frequency- F7 - M =r·l
-··~ jx (nTIL}, n""O,±L,±2L,....
x[nj- x.,(nT) ~ x-,.[n]-l" O, otherwise
Input sampling Output sampling
frequeocy - FT - -
l
T
frequem;y .. r; .. LF7 - ~,
F'iaure 10.5: The sampling rate altemtion building blocks with sampling l"llleS explicitly shown.

Figure 10.6: A simple multirate sysll.':m.

But from Eq. (10.2).


y[n- no]= xrM(n- no)]= x[Mn- Mno] I= Yt[n].
Likewise, it can be shown thar the up-sampler defined by Eq. ( 10.1) is also a time-varying device (Prob-
lem 10.1). However. they are both linear systems (Problem 10.2).
The up-sampler and the down-sampler building bloch of Figures 10.2 and I 0.4 are often used together
in a number of applications involving nwUirate signal processing and are discussed in more detail in this
and the following chapter. For example, one application of using both types of sampling rate altecation
devices is to achieve a sampling rate change by a rational numbet" rather than an integer value. Example
10.3 below illustrates another application.
664 Chapter 1 0: Multirate Digital Signal Processing

!..---._

---~,~,---.?,:;----L---Jo\---~•.--/'--,,.:---"-__j~/--o
(a)

L=2

'----- m

(b)
Figun> 141.7: Effe<.:t<; of up-~>ampl_ing in the frequency domain: {a) mpu1 spectrum, and (b) output spectrum furL = 2.

10.1.2 frequency-Domain Characterization


We first derive the relatmns between the spectrums of the input and the output of a factor-of-2 up-sampler.
From the inpuH;lUtpul reiation of the factor-of-L up-samp~r given by Eq. (10.1), we arrive at the corre-
sponding relation for the factor-of-2 up-sampler:

[ J -Jx[n/2},
x..,n-0 ,
n=0,±2,±4, ... ,
0
th -
CI'WlSC.
(10.3)

to tenm, of the z-transform, the input-output relation is then given by


00 X
x .. (z) = L Xu[n]z-" = L x[n/2Jz-"
n=-00

00

= L xfmk-2m = X{z 2). (10.4)


m=-=
In a similar manner, we can show that for the factor-of-£ up-sampler.

{105)

Let us examine the implication of the above relation on the unit circle. For z = ei«> the above equation
be\:omes Xu(e·"") = X(ej"'L), Figure l0,7(a) shows the DTFT X(ef"') that bas been assumed to be a
real function for convenience. Moreover, the DTFT X (el<-~) ~own is not an even function of w, implying
that x{n] is a complex sequence. The asymmetric response has been purposely chosen to illustrate more
dearly the effect of up-sampling.
As shown in Figure 10.7(b), a factor-of-2 sampling rate expansion thus leads to a 2-fold repetition
of X(ejw}. indicating that the Fourier transform is compressed by a factor of2. This process is called
imaging because we get an additional '"'image" of the input spectrum. In the case of a factor-of-L sampling
rate expansion there will be L - l additional images of the input spectrum in the baseband. Thm., a
10.1. The Basic Sample Rate ALteration Devices 665

0.45 0.5 l.O


Nonnalized angular frequency
(a)

0.4,--~.

-ll
:[ 02f
!i ~
-"-
'. . "" -"'-
16""
'i
-0.2
20 30 40
" 0
50
Timeinde.:o: n
0

60 70

(b)

Flgun 10..8: (a) Desired magnitude re$p0nse and (b) corresponding time sequence.

spectrum X (ei"') bandllmited to the low-frequency regioo does oot look like a l<Jw-frequency spectrum
after up-sampling, because of the insertion of zero-valued samples between the nonzero samples of x,.fn ].
Lowpas.s filtering of xu {n} removes the L - 1 images and in effect "fills in"' the zero-valued samples in
x ... [n] with interpol"ated sample values.
We next dlusrrate the frequency-domain properties of the up-sampler using MATLAB. The input is a
causal finite-length sequence with a band!imited frequency response generated using the M-file f i r2.
The input to f i r2 is as follows: length of the sequence is 100, the desired magnitude response vector
mag -= [0 1 0 OJ, andthe,.·ectoroffrequencypointsfreq " fO 0.45 D.S ll. The desired
magnitude response is thus as indicated in Figure 10.8(a). A plot of the middle 61 samples of the signal
generated is shown in Figure l0.8(b). To investigate the effect of up-sampling. ~-e use Program 10_3 which
follow:;.
The input data called by the program is the up-sampling factor L. The program determines the output
of the up-~mpler and then plots the input and output spectrums, as indicated in Figure 10.9 for L = 5.
It can be seen from this figure that, as expected, the output spectrum consists of a factcr-of-5 compressed
vers1on of the input spectrum followed by L-1 = 4 i.mages.

%. Pr:;grarn 10_3
% Effect of Up-Sa!t'.pling in the Frequency Doro.ain
% Use fir2 to create a bandli~ited input sequence
fceG = {0 0.~5 O.S 11:
mag = [ 0 2_ 0 0 l ;
Chapter 10: Mult}rate Digital S\gnal Processing

OalpLR >ptctn.m

""' ' ""

1
~--Tl
I~~,
" '"
o.s;" I"'"
r,
I I
. ,I ' "'
II
I
II I'", '
II '\
. , {}6L

-"'"' - '' '' I


r{\411
,J.luII \
' '
I
I \ II
I
I
'

,: I M
v ,, vI
{)_8
' '' """
Flgllt"e l(i!J: MATLAB-geoerated input and output speetrum of a factor-of-5 up-sampler.

x = fir2i'39, freq, mag};


% Evaluate and plot the i~put spectrum
[Xz, »Il ""freqz{x, l, 512);
plotiw/pi, abs(Xzli; grid
xl abel ( '\omega/ \pi ' } ; ylabel { 'Hagnitude' } ;
~itle\'=nput spectrum'};
pause
% Generate the up-sampled sequence
:, = input ('Type in ~he up-sampling factor '};
y = zeros(l, ~*length(x));
y ( [ l : L: ler.gth(y)]} =X;
% Evaluate and plot the output spec:rum
[Yz, w] ~ freqz;y, l, 512);
plot lw/pi, abs !Yz!); grid
xlabel { • \oreega/ \pi' } ; ylabel ( 'Magni t:..de' ) ;
Litle( 'Output spect:::-utr•');

We now derive the relations between the spectrums of the input and the QUtput of a down-sampler.
Applying the ..:-transform to the input-output relation given in Eq. (10.2), we arrive at
~

Y(z) = L x[Mn]z-•_ (10,6)


,=-Xl

The expression on the right-hand side of the above equarion cannot be directly expressed m terms cf X {z).
To get around this problem, Vt-e define an intermediate sequence Xint[n] whose sample values are the same
as that of xfn] at the values of n that are ml.lltiples of M and are zeros at other values of n:

x [n]-lx[nJ. n=O,±M,±2M,... , (10.?)


1
'" - 0, otherv.rise.
Then,

>1=-0C n=-oc-
1o. 1. The Baste Sample Rate Alteration Devices 667

=
X;rnfkj<._~kjM -_ X,.,,{z
. !;M
). (10.8}
k=-=

,~ow Xmr[n] can be formally nda(ed to _x[n] through xm1 lnJ = c(n]x[nJ. where c[n] is defined by
I, n=O,±M.±2M, ..
c[n] =
I ,_ .
0 • o:.ttcrWJse.
(!0.9)

A convenient representatioo nf cin] is given by (Problem lOA}


M->
c!nJ = Ml "L...r W M'
'" (10.10)
k=O

where WM = e-j 2n/M is the quantity defined in Eq. (3.24). Substituting Xi 1u[n] = c[n]x[n] and making
use of Eq. (10.10) in the z-transform of X;n1fn], we obtain

(10.1 f}

The desired input-output relation in the transfonn--domain for a factor-of-M down-sampler is rben obtained
by substituting Eq. {10.11) in Eq. (10.8), resulting in
] M-l
Y(t) = M L X<::lf."-'w;k) (!0.12)
i=O
To understand the implication of tbe above relation, consider a factor-of-2 down-sampler with an input
[nj whose spectrum is as shown in F1gure IO.IO(a). As before, for convenience we assume again X (e-'w)
.l:
to be a real function with an asymmetric frequent.)' response. From Eq. {10.l2) we get

Y(el""J = ~ rX (e:'w/2) +X( -ejw/2) J. (10.13)

The plot of },X (ej"'f 1) is shown by the solid line in Fjgure IO.IO(b)- To determine the relation of the
fri~t:ond term in Eq. (10.13) with respect lo the firs!, we observe next that

( IQ.l4)

indicating that the second term X(e-i"'l 2 ) in Eq. (_JO.B) is obtained simply by shifting the first term
X (efw/l) to the right by an amount 2tr, as shown by the dotted lines in Figure 10.1 O(b)~ The plots of the
two terms in Eq. (i0.13) have an overlap. and hence, :in general, the original ·'shape" of X(e.lw) is Jost
when x[n]ls down-s-ampied. This overlap c-auses the aliasing that takes place due to undersampling (i.e.,
down-,.ampling). There is no overlap, i.e., no aliasing, only if X(ejw) is zero for lwl ~ rr/2. Note that
Y(e 1',} in Eq. (10.13) is indeed periodic with a period 27r, even though the stretched version X(ei"') is
periodic w-ith a period 4:r. For <he general case, the situation is ~entially the same, and the relations
between the Fourier transform of r:te output and the input of the factor-of-M down-sampler is given by
M-J
Y(efwj = __!__
ML- "X(eJ{&-2d)!M)
. {10.(5)

'""
668 Chapter 10: Multirate Otgital Signal Processing

{a)

(b)
2 Y(r'{U}
I M=2
''

----z"x;-----~;------i,-------,~-----o,~,------------------m

(c)

Figure HUO: lt!ustration of the aliasing effect in the frequency-domain caused by down-sampling.

The a:,Ove relation implies that Y\ei<») is a sum of M uniformly shifte-d and stretched versions of X (el"')
then scaled by a factor IjM. Aliasing due to a factor-of-M down-sampling Is absent if and only if the
signal x[n] is bandlimited to ±rr/M, as shown in Figure 10.11 forM= 2.
We next iUus.trate the aliasing effect caused by down--sampling using MATLAD. To inv~tigatethe effect
of down-sampling., we use Program 10_4 which follows. lbe input signal is again the signal genecated
using fir2 with a triangular magnitude response. as in Figure 10.8(a). However, here the frequency
ve<.tor has been selected to be freq = [ 0 0 . 42 0 . 4 8 1] to ensure that there are no appreciable
slgnal components above the normalized frequency of 0.5. The input data called by the program is the
-down-sampling factor M. The program generates the spectrums of the original input and the down-sampled
output
't Program 1C·_4
~- Effec: of Down-Satr.pling in ::he ?requency Dom.s..in
% Jse fir2 to create a bandlillii~ed input seq~ence
=req = [0 0.42 0.48 11;
mag R ~0 1 0 OJ;
x ~ fir2(101, freq, mag};
% Evaluate and plot the input spectrCim
tXz, vJ] = fre-qz:x, 1, 512);
plot(w/pi, abs(Xz}); grid
1 D. 1. The Baste Sam~e Rate Atteration Dev:ces 669

X(e,f<n)

[\ _,. • ~1
/1\ 0 .n
{a)

[\
2x

(b)

Figure liU 1: Effect of rlown-sampling ln the frequency-domain illustrating absence of aliaMng_

xlabel ( '\omega/\pi ·); ylabel{ 'Magnitude');


title('Ihpu~ spectrum');
pause
% Generate tr-.e down-sampled sequence
M input ('Type in the down-sarr.pling fac:.cr
= ');
y "- t-i: length{x}J);
x(~l:
% Evaluate ar:d plot the output spectrum
[Yz, w~ = freqzly, 1, :'.12);
pl~t(w/pl, abs (Yz)); grid
xlabe: { '\omega/ \pi ' ) ; ylabel i 'Hagn:::_ tude' ) ;
title{'Outpu~ spectrum'};

Tfle plots generated by the above program are shown in Figure 10.12. The input spectrum is shown in
F.:gure I 0.12(a). Since the input signal is bandlimited to rr }2, the output spectrum for a down-sampling
factor of H = 2 shown in Figure l0.12(b) is nearly of the same shape as the input spectrum, except it has
been stretched by a factor of 2 in frequency and its magnitude is reduced by one-half as predicted by the
factor 112 in Eq. { 10.13). On the other hand, the output spectrum for a down-sampling factor of X = 3
shown in Figure i0.12(c) shows a severe distortion caused by the aliasing.
Any linear discrete-time muJtirate sy&em can be analyzed in the transform-domain by using the input-
output relations of the up-sampler and the down-sampler given by Eqs. { l 0.5) and ( 10.1 2). respectively.
We mustrate rhe applications of these relations in the latter parts of this chapter.

1 0.1.3 Cascade Equivalences


A~ we shall observe 1ater. a complex multirate system is fonned by an lnterconnection of the basic sampling
Tate alteration devices and the components of an LTI digital filter. In many applications, these devices
appear in a cascade form. An interchange of the positions of the branches in a cascade often can lead
to a computationally efficient realization. We investigate certain specific cascade ronnections and their
e<tuivalences, which leaves input-output relations invariant.
870 Chapter 1o: Multirate Digitat Signal Processing

lr-----~-·· 0.51
I
()_>;- 0.4'1
~ (J]i
~
:S<n~
I
Rl·

,,.,, - - - - - - - - - u~ D_r.
o"-_:_
u

02

ia) (b)

-
-------.l
\) ~~--~- -~-------

/ - -
''
<J_.O_

~ (l i;
I
t :
~ <J.2'

i}_!!'
'
------------
J
--
.. -
''

(c)

Figure IG.ll: Ja) Input spe{:trum. (b) output spectrum for a dov,cn-o.ampling factor of M = 2, and (c) output s.pectrom
foc a down-sampling factor of M = 3_

x[~ jL IF1 [nl,!M pnj '


(a)
Figure 10.13: Two different ca.~cade arrangements of a down-sampler and an up-sampler.

1he baste sampling rate alteration devices can be used to .change the sampling rate of a signal by an
integer factor only. Therekwe, to implement a fractlOnal change in the sampling rate )t follows that a
o;;ascade of a down-sampler and an up-sampler should be used. It is of interest to determine the condition
under v.hich a cascade of a factor-of-M down-sampier witt! a factor-of-L up-sampler (Figure 10.13) is
interchangeable, wi:tb no change in the input-output relation. lt can be shown that this interchange is
possible if and only if MandL are relatively pn'me, i.e .• MandL do not have a common factor that is
an integer k > ! (Problem l0.5} [Vai90:j. We discuss fractional changes. in the sampling rate in Section
;o 2.2.
'!Wo otht..>r ~irnp}e cascade equivalence relations are depicted in Figure 10.14. The v-alidity of these
equivalences can be readily e;;(ablished using Eqs. (10.5) and (10.12) and are left as exercises. (Prob-
lem l\1.6). These rules enable u,; to move the basic sampling rate alteration devices in multirate networks
to more advantageous positions. Such rules are eKtremely useful in the ~i.gn and analysis of more
compilcated systems. as we shaH demonstrate later.
10.2. Fitters in Sampling Rate Alteration Systems 671

.tin! c•--;,;ftil r--c. y11n! x[11:


.......-+i.,l..M 1!{.:-i ~ ""
' fa)

(h)
Fi~ure 10.14. Ca~<:adc C<..juivalen<.:es: (al equll·:!lence #1, :md it>} eqt..iv-a!erK:t #2_

~--:::1_. ~~
~-· l±...:J......_. y[n]
xjn I ·~ l.~~~ ~ _\1nJ tin]

(a l (b)
Hgun> 10.15: Filrer~ :n ,ampling rme iiteranon sy~tc"Jl:>: (a) mterpolator and (b} l!ccirnator.

10.2 Filters in Sampling Rate Alteration Systems


;~rGn the sampling theorem lmroduced in Section :'::2.1, it i~ knm>.'n that the sampling rate of a critically
:;amplcd discrete-time signal with a spectrum occupying the full Nyquist range cannot he reduced any
; urther since such a rcduc!ion will introduce aliasing. Hence, the bandwidth of a critically sampled signal
0

'nust first he reduceJ by !owpa<>S filte!ing before its sampl.ir.g rate is reduced by a down-'1-ampler. Likewise,
I he zero-valued samples inLn..>riw:cd by an up-sampkr mlls! be imerpofatcd to more 3.pprupn.ate values
~'or ;m cffecnve sampling mtc increase. A:; we shal! show next. this i~tterpolation can he stmply achieved
by digital k>wpas~ filtering. We consider }n this scclion some of the i."'sues c..-mceming these lowpass
~liters_ We first develop the fn:qucncy response spe::ifications of these low""Pass filters. Next, we illustrate
1he decimation and interpolation ofscqc1ences ming MATL\B. Finally, we investigate the computational
coflplexity issues of the Jowpas;, dtgital fille.rs.

10.2.1 filter Specifications


Since up-sampling causes periodic repetition of the basic spectrum (Figure 10.7), the unv.-anted images
in the spectra uf the up-sampled signal x,[n 1 must be removed by usmg a lmvpass filter H (z), called the
interpolation filter, as ir.dicated in Figure ~ O.l5(a}. On the other hand, as indicated in Figure lO. J5(b), prior
1o dnwn-s<~mpling. the s.ignaJ u[nl should he bandJimited to jwj < n( M by means of a ~owpass filter H(z),
{ailed the deciJ•mlionfi!ter, to avoid aliasing caused by down-Nampling. The system of Figure lO.lS(a) is
often called an interpulatnr whde the system of Figure IO.IS(bJ is called a decimator. 2
The sp....->cifications for the Jowpuss filter ln Figure 10.15 can now be derived. We first develop the
spccificaliun~ for the interpolation filter. Assum<! xln I has been obtained hy sampling a bandlimited
:;:untinuous-t•mc signa! x,1 (t) .atthe .Nyquist rate. If X,AjfJ) and X(ej«>) denote the Fourier transforms of
2 h ha~ boo"" :a<;-H!y a~so.;.med here:that !h.- discrclt:-t;m" <igna: 10 b" nterpo!uted or de;;ima!ed ha~" ltw.pass fre--Juency response end.
;h arc"*· !he u~~i,-~d illtcrpola:ion m 1h"' Uedmat.oll filh<r ;, ~ !CMp-.iS~ filter. Hov."C~ec if the d;snete-tirne s1,gnalto be interpolated
Gr dec·imated ho> <~. highpca,;s (ban.:!pas~) freyuentoy response. rhen the d~.~ired interpolation or :he dedllliltiOO filter i> a tughpa~'
{'Hn.;'.pa,;,) iilter
672 Chapter 10: Multirate Digital Signal Process1ng

'" i r) :mJ _r ln j, rco;pcctivdy. fi-om Eq. (5.14a). it f<Jllows that these Fourier transforms a:-c re-lal..:j througt

(10.16)

where Tn is :he >amp!ing period. Since the s.ampLmg is being done at the Nyqaist rate, there is no OH~rlap
b..:nw.:n the ;.h:ftcd >-.pectras of Xa(jw/ To) lf we instead "<!mple x" (T) at a mu<:h higber rate T = Tu/ L
y1ddmg _v!n], Jl'> Fl•uner tran::;fonn Y(eJ'") is related to X.,\jQ.} through

. ,,
Y (c 1 J -
-T
-
' L~ X
"
(jw-jhk) L Loc
T
-
-T:
-- X (jw-)2Kk)'
''(ToiL).
{!0.\7)
k=--::x.- · - 0 k=-oc - - o,
On the other h;md. if wo: pass x!nJ through a fa<:tor-of-L up-sumpler- gcneruting x"ln). frcm Eq. (10.5) tht
relation between the P<mrier transform X~k-i""') and X(ej'"; JS given by

(!D.l8)

h :·ot:ov.;, from Eqs. (HL 16) tn ( HU8) that if x,.[n] is passed through an ideallowpass filter with a cutoff
.at JT! 1 ;md ;:, pin of L. th<': output of the filter will be predscly y{ n].
In pr~;.cti<.:c, a lraw,;iuon hand is provided to ensure the realizabliity and stability of the lowpass inter-
pol;:uon filter HI~). Hr:nc~. the desired Jowpas;, filter should have a stopband edge at w, = ;rfL and a
pao;sband edge ,.v P close tn M, tu reduce the distortion of the spectrum of the signal x [n j. 3 If We denote;
lhi~ h:ghest frequency ::hal need-. to be preserved in the signal to be interpolated, the passband edge wp of
tlw low pas;. fil!cr ;;hould h.: at wp = w,/ L. Summarizing. the specifications fer the lowpass interpulati(lfl
tilter an: thus giv.:n hy

I H{el"'>,~ =I 0,L.
·~
Jwf :'Sw.-/L.
;rJL :::= lwl ::=:: :r.
(10.19)

lr :..ho~1ld be r.oted that in many inll:-rpolation applit:atton~, one requirement is to ensure that the input .samples
arc no! changed at the ou!puL -Thi;, requirement can be satisfied by a Nyqui~! (Lth band) interpolation
filt::r, \<.hi:ch i~ discussed in Section W.7.
ln a •imilar maont.-r. we um develop the spet:ificatioos for the lmvpuss decimation filter that are giYefl
by
IHfel"')l-11, i(L)I _::: u;,!M. (W20J-
\' - 0, r:jM_:::iwi::Srr,
where eu, dcno:e-: the highest frequency that needs be preserved in the decimared signal.
The effects af decimation and inteJ1Wlation in the frequency-domain are illustrated m Figures 10.16
<>n.:_i 10.17, resr.:ctivel}", fo-rM= 2 and L = 2. Figure 10.!6(a) shows the spectrum X(el"') of the input
,i!,'Tial.t [n j ant.! the spectrum H(eP") of the decimation filter. The passband edge and the stopband edge of
rhe decimation filters are, rescpecrively, w, /2 and rr i2, where we· is the highest frequency in the input signal
that is to he pX>;erved at the outpu! of the dedmator. From Eq. (10,13), the frequency response Y(ej"') of
the dec'imator culput .vln] is now given by

wht:re \l (<'1°) i,_ the frequency response of the tiiter output v[n ]. Due to prior filtering there is no overlap
in :he;c t\vO :-,pcclr:a, resulting in no aliasing at the decimator output with the output spectrum Y (ej'"). as
l l hi-- '>W,<Hillgs of r.cp. ""·•· !ip. rmd J, in this chapter are p<"ecisely a~ in Se>:.'h<m 7 .l, I.
10.2 Filters in Sampling Rate Alteratoon Systems 673

-;; 0

Ia J

(b)

(C)

Figun- 10.1(>: Spedrum of i<H the· input .tt<J :. (b) -;he outpc~t l-'(r. ~ oi the factor-of-2 decimatur with x[nJ flhered, and
(<.:) thr oulpl.l: .>In! of !htc fa.:tor-of-2 drdmanr with no iil!ering of ~tn} :showing the effed of aJ;a;~iog. The ~pedrum
oft he dc:..:ime~t:on tlltct H!;_l i» ~huwu m (J) xitb dot1ed line,;.

.;kctchcd m Figure 10.1 fl(b). However, it ic. nor poss1ble to :ecoverthe original input ~ignal xin J from :he
d~cim~ned version yin!. The filkr H(z) i, u~ed to preserve X (ej'~) i::~ the range -w, /2 < w < ('-'c/2, and
one can reconstruc:. this portion nactly fmm y[nJ. On the other har_d, if there is no filtering of the input
x!n j p:-lor to dmv:t·»ampling, the two cnmponems -!X (ej:..•.<') and ~X ( -ejcufl) will <n-erlap, resulting
u:_ a severe ...tliasing at the decimatoruutput, as indicated in Figu-re \O.l6(c). It should be noted that t!lc
sequem:e xfn l corresponding tiJ th-e ffequency res.pon~ X(el">) nf Figure 10.16(a) can be recovered from
;ts decimated verston tflhe decinwtion factor here _i;; 4/3. We dbcuss frm:tiumd sampling rate ~iteration in
Section 10.2.2.
Lkewi;;e. in the .:a~c of L'fte fuctor-of-2 interpolation, the ~pectrum V(ej"') of the up-l>ampler output is
given by X (e/'-"-- :. Figure !O.I7(j) show;; V (ei''') for an input spectrum indicated in Figure HU 7(a). The
"-per.:trum n_, 1 ''-) of the interpuh1tor l>Utput o-btained by filtc-rmg t'[n i is thus as sketched lr: Figure 10.!7(.::).
The de... ign of H (z) i~ a standard liR or FIR low pass tilwr design problem. Any of the te--chnique>,
nui.l:i:!d in Chapter 7 ca:1 he appi:ed here f:::>r the design of the,;e lowpa-.s filters..
674 Chapter 10: Multirate Digital Signal Processing

__.__.c____o,c-----~~------,<---~--",,--
0 ro
2 2
{a}

-n -•2-2w, 0 w,
2
'l rr "'
{c)

Fi.gu.re 10.17: Spectrum of (a) the input xln], (b) the output x[n] of the up-sampler, and (c) the output yfn] uf the
fador-of-2 interpolator. The spectrum of the interpolation filter H(z) is shown in (b) with dotted lines.

10.2.2 FHters for Fractional Sampling Rate AUeration


A fractional change in the sampling rate can be achieved by cascading a faclor-of-M decimator with a
fa1::tor-of-L interpolator, where MandL are positive integers. Such a cascade is equivalent to a decimator
with a decimation factor of M j Lor, alternatively, to an interpolator with an interpolation factor nf LjM.
There are two possible such cascade connections, as sketched in Figure 10.18. Of these two, the scheme
sb.own i.n Figure lO.I&(b) is more efficient since only one of the filters. H,.(:z.) or HJ(Z), is adequate to
serve as the interpolation filter and the decimation filter, depending on which one of the two stopband
frequencies, nIL or 1f / M, is a minimum. It should be noted also that the sampling rate alteration system
of Figure 10.18(a) will, in general, preserve less of the signal's frequency content than that of Figure
10.18(b). Hence. the desired configuration for the fractional sampling rnte alteration is as indicated in
Figure 10.18(c). where the lowpass filter H (z) has a normalized sropband cutoff frequency at

w,=min(-,-
'!T ;r) (10.21)
'L M
which suppresses the imaging caused by lhe interpolator wbile at the same time ensuring the absence of
aliasing that would otherwise be caused by the decimator.
10.2. Filters in Samp~ing Rate Alteration Systems 675

(a)

=~
(b) (c)

Figure 10.18; General schemes for illCrea;,ing the sampiing rate by 1-/ M.

10.2.3 Computational Requirements


As indicated earlier, the lowpass decimation or interpolatio:-a filter can be designed either as an F1R or an IIR
digital filter. In rhe case of single-rate digital signal processing, IJR filters are, in ge.neraJ, computationally
more efficient than the HR digital filters, and are therefore preferred in applications where computational
cost needs to be minimized. This issue is not quite the same in the case of multiratedigital signal processing.
as elaborated next.
Consider the factor--of-M decimator structure of Figure 10. J 5(b}. If the decimation filter H (z) is an
FIR filler of length N implemented in a direct form. then
N-•
t.:[n] = L h[m]x[n- m). (10.22)
m=O
Now. the down-sampler keep.<. only every Mth sample of v[nj at its output. As a result it is sufficient to
compme v[nj using Eq. (10.22) only for values of n thai are multipies of M and skip the computations of
the in-between M ~ l sampl~. This leads to a factor-of-M .savings in the computational complexity. Jf
on the other hand, H (z) in Figure 10. J 5(b) is an UR filter of order K with a transter function
1/(z) P(z.)
( IQ.23)
H(z) ~ X(z) ~ D(z)'

where
K K
P{z) = Lp"z-", D(z) = 1+L d,z-", (10.24)
n=l
its direct form implementation is given by

w[nJ = ~d1 w\n - ll- d2w{n - 21 - · · ·


-dKw[n ~ KJ + x[n). {10.25a)
t:[nl = pr;,wln} + PiW{n ~ 11 + · · · + PKW[n ~ Kj. (i0.25bl
SitK:e vfnJ is being down-sampled. it is sufficient to compute v[n] using Eq. (I0.25b) unly for -....alues of n
that are integer multiples. of M. However, the intermediate signal w[n] in Eq. 00.25a} must be ;;omputerl
for all value~ of n. For example, in the computation of

v(M] = p.:~w[M] + PlW[M ~ 1] +- · · + PKw{M- Kl.


K + r successive values of wfn! are still required. As a resuh, the savings in the computations in lhe case
of an IIR filter is going to be less than a factor of M.
The following example provides a more detailed comparLwn.
676 Chapter 10: Multirate Digital Signal Processing

;0- \ld r'W\r 1r!I)J'il'itNd0111h¥1Y(,¢ tit\


T;Vt Iiili# iN t{H 1'Sf{H N¢

For the car,e of the interpo1ation filter in Figure 10.l5(a). very similar arguments hold. If H(z) is an
F1R filter, then the computational savings is by a factor of L {since u[n] has L - l zeros between its two
consecutive nonzero samples). On the other hand, computational savings is significantly less with HR
filters.

1 0.2.4 Sampling Rate Alteration Using MATLAB


The Signal Processing Toolbox of MATLAB includes three specific fwtcrions for sampling rate alteration.
The~ are decimate, int-erp, and resanple.
The function decimate can be employed to reduce the sampling rate of an input signal vector x by
an integer fdctCU" M generating the output signal vector y and is available with four options:

y decimate {x, ~4)

y deci:nate(x,t.'f,ii)
y dPci~ate(x,M, ' f i r ' )
y decimatetx""M,N, ' f i r ' l

In the first opt_on. the function employs an eighth-order Type l Chebyshev llR lowpass filter by default
and filters the input sequence in both directions to ensure zero-phase filtering. In the second option. it
utilizes an order-n Type l Chehyshev IIR lowpass filter where n should be less than 13 tv avoid numerical
instabih1y. In the third option. it employs a 30-tap FIR lowpass fi)ter which filters the input only in the
forward direction. The FIR filter i;;; designed with a stopband edge at rr/M using the function fi:::-1 of
MATLAB. Finally, in the last option, it designs and e:nploys a length-:-.! FIR lowpass filter. We illustrate the
application of the function dec i rna t e in the following example.

4 ln \his chapter t..'>e ~=mputa.lional <:mnple,;ity of an implemen:ation" !~ taken to be equal to the rwmber of mu!tiplicatmns required
per second. Also.lhe symmetry of FIR impulse respon~es. wtnch lead to about 50 percent computational savings, is ignofl'd.
10.2. Filters in Sa·Tlpling Rate Alteration Systems

(a) (b)

Figure 10.19: The input and output sequences of the factor-of-2 de>:ima:or of Example 10.5

~ f 'J hrs {; fillli 1; }\::<t tt (' })rt; £Ntii i (:&11 Jl¥ t!C '11 !i 0

'
${ "' )St;!W!; 4 \r#<Y(J+J'i
H v Ut;giri;Ct \"' t0G"'i44' A I
7 +:r»:» 7 y r:: { (''!{ Jt%
n "' ' r1:r:0 v"* ' :r " ;' ;::::;~:~~ 1;
i' ¥ WtbCf:N!btf
+t J g
'* <1+XH1XJ>£% tA'Z Utf\1 t rt;;HPL Jll$f$!,;J¥f\!:ir
Ji: 0 tHPt: V] *h) 0 VfNr{ #~~'Jp% "f J"'u:"
I. {Pft.Y}tA} @ j;>p {; (}0 tb)U~ J f!!l$ t; 'i'¥'7 1H?i{1:t?tLtf11
'( ' 1147 t: XJIIIG! fl t ?t" 1!1" • ('kIt ' 1

'
.,.,,(,>iT'&' b' }, :;
w!:,tia<.; n, xi
{ !Tjt;P'"

\'Vtf;
/\1 !Ttil 1( ]m!v

t 1:+ lw; ' ffotr;:;'!0P11t:'17 ' f

«J.41»2i<: t ::Zv: r :;1 i


n;,,{;: 11/ }(''; £
st Jldlf, ;n, ]i'}} vH/W!if f •
LJ l}G( tHIIh)~t:'%' } i
Xi: IM%0 1 : ?'Uli\ii\ X.bdt:J£ tx ~ 'I

The function i r. t.erp 6 can be employed to iocrease the sampling rate of an input signal x by an integer
factor L gener,uing the output signal vec:or y. It is available with three options:
y ~nterp\x,L}
y _:_n'Lerp(x, L,::-;r, alpb...a)
[y, -,1~ .:_nterp\x,L,~J,alpha)

~- c---c----:-:-c------:--
5The gro11p delay ot 1he defa11h dec•m .. uon filter ls. 14.5 "'<ample-s. For an mtege.--va/ued group delay. a ~ination ti!h:r of odd
lengtil -'hould be u.~ed.
6 ic ~e c p i~ ha>'ed tm the inl"fi"'la:nr d<..-slgn of Oelken eta/. !Oet75j.
678 Chapter 1o: Multirate Digital Signal Pfocessing

The lowpass filter designed by the program to fill in the zero-valued samples inserted by the up-
sampling operation is a symmetric FIR filter that alLows. the original input samples to appear as i.s at the
oofpuf and finds the missing samples by minimizing the mean-square errors between these samples and
the ideal values. The length of the AR filter is 2NL+l, where N =:; 10. The input signal is a~sumed to
be bandlimited to the frequency range 0 _::::: w ::= alpha, w-bere a:ipha ~ 0. 5. ln the first qJtion, the
defauit values of alpha and;~ are, respectively, 0. 5 and 4. In the remaining two options, the bandedge
a 1 pha of the input signal and the length N of the interpolation filter are specified. In the last option, the
output data contains in addition to the interpolated output vector y. !he filter coefficients h.
The application -of the function in te rp is considered in the following example.

'* 1"±":1/&i( +)!&


'i { i: '4:ii>Lt 41'> !\WI t'f
&
z:<U·A
14 v Lnt>rJ L ! I '

FmaUy, the ~nction resa_~::_~ in MATLAB can be utilized to inc~ the sampling rate of an input
vee: or x by a ratto of two positive mtegers, L; M, generating an output vector y. There are five options
avrulable with this function:

y resample(x,L,M)
y resample(x,L,M,?,}
y resample(x,L,M,R,beta)
y resample(x,L,M.~),
[y' h] resample (x, L.Ml
10.2. Filters in Sampltng Rate AlteratiOn Systems 679

1
1"
-< . 1 r
-2 i
" Time iOOex n
{a) (b)

Figure 10.20: Tb.e input and output sequences of the fa...."'tor-of-2 interpolator of Example 10.6.

The program employs a low pass FIR filter de!ilgned using the function fir 1 wilh a Kaiser window
to prevent a.lia.-.ing caused by the down-sampling and eliminate the i:maging caused by the up-&tmpling
operation (see Figure 10.17). The number of samples used on both sides of the present input sample x[n]
can be specified by the input data R. The default value of R is. 10. 1be design parameter fJ for selecting
the Kaiser v..indow can be specified through the input data beta (see Section 7.6.5).
Example 10.7 demonstrates the app:icarion of the function resample.

f~~~·-~,,~~~p~~~~-~,,m~ffi·~~,~~~~~~~~,,s~~~~~·f~~~
5
~!ibm m Mm WJ'lMr !:l8tt
·njllf PL

11rn1 n =-l:!Jr4.
% Tr·ryy tG:P
1 :J ;;,:yf.t
s s nor J<:; ,:; r 71'*'; u, r. t:";hv t 0
k

rnz>.it. 1 tn'"'G"' rr: Jr;ory1


:; P)Idf t "tip-. t7.4J iA<:trd
*' +DF"!Jf ! " ::«twt "f~HiW;}J H"s;p ::'wt:i;;n: L
X: "' £Hj:?<·l: :"'*,n0J\:11110iF;"f" <<f itl.%0%: sln,;,fln>+/1' t .r
7 ,J \Yt V+k{ f ' }""r+<i!d,n& tv::y N~ ,P 0· ."t"U ± 'V { f!" .; r; :; { ;:: "' ' J t
0 lf*:SH7Y·:!t.2t::' t h'<t : ;\{/;\ 1fiiHJ:(Hf'1P<>r
M "' k:H \
?& "' 11J1ti "''f:i."E' ,;; v;J:r;j t*f'&·?JL',
ii) {it?J:T£.t"JIX ( 1' ;.{ d} )1)/1\0" f ¥'!1 ;"'Tit
T T''*'*F'l«t.¥:.2."\H
ii) f {<<. LtffPA ;t;"i{) 1 !'1{ "f.Uf:t;i(ti: 4:f.'•j.y0).J'"it"+
0'J£:i}· }:.;{ ~~'!' f \ ' ; '
fi{H\(]7, .¥}hi]''
t -' 141! 1 " '}flAr t :Wf!\]>A:''-?.<>tt •• 1 ,
x ioV+> £ r >J· liNG f: >zkx r;J : : ;r
1t.i:rnl'{::.t i¥: ,:::!{
*
11\.,[" d$ j' (ff. l
Jr'\1Yt>f JF. 7 : :·-.sp ·7!:
L~ ": } £\ ' ?/0 't$iUt "+"'<{Ur.LJT 1/11 • f ,
w :s:t-f 1 "1'1:1!\t0 :r.n:Ysv. rt~}; t"tAtt4&:-. i'
680 Chapter 10: Multirate Digital Sigr.al Processing

1(]
' Time ''
ir.ae~ & Tlme inCex n

(b)

Figu~ 10.21: Illustration uf sampling rate increa;.e by a ratmnal number 5/3.

(a_) (b)

I<'igure 10.22: Two-stage Jmplementation of sampling rate alter-ation .,ystems: (a) interpolator <ll:l.d (b) decima:01:.

~
12kHz 12kHz 400Hz
(a)

h r

-L--+-------------------~~~-w
~ n!J(l 2-r. It

180Hz 200Hz 12kHz


FP F~ F,
(b)

Figure 10.23: lllustrarion of the decimation filter design. {Frequency response pl<.lts sOOwn not to s-::ale.}

10.3 Multistage Design of Decimator and Interpolator


The decimator a.<J.d the interpolator of Figure I 0.15 are single-stage sl.ructures since here the basic scheme
for the rmpiementation invoh•es a single lowp.ass filter and a single sampling rate alteration device. If the
interpolation factor L can be e~pressed as a prodoct oi two integers L 1 and L2, the factor-of-L interpolator
of Figure HJ.IS(a) can also be realized in two smges, as indicated in Figure l022(a). Likewise, the factor-
of-M decimator of Figure l0.15(b} can be implcmeiJted in two ~tages, as shown in Figure W.22(b). iftbe
decimation factor M is a pmduct of two .imegers M1 and A-fz. Of course, the design can involve more than
10,3. Multistage Design of Decimator and interpolator 681

two stages, depending on the number of fact-ors <J:sed to express Land M, respectively. It tUJns out that, in
general. the computational dfidency is improved o;;igniticantly by designing the sampling rate alteration
:<.yf.1ern as a <:ascade of several i>ta.ges. We demonstrate this feature next by means of an example which
abo iUustrate:s how to choose the specifications. of the individual filters in a multistage design.

pf fr /\\" ,) FAtfUI f "1 \ i:C 10rffincJit:$i pf jf©; Y%}0 ~,f 4 71 F0F


"iP";d;; "'' ""lhq ttx t!p·v;nMh! vr +t±!rr ft ty: ms:. wJS:.Mttn0 !itt !\q ;jt ¥;.!'0 II¥»
f ·"" ]t\1\! 't+.J\ ""''•lnt u;;sf
4, o 41!¥ "14"¥·'¥!\t iff lf #1;' "'" {o
!®Inn ttt 1~1
f't " e V ' t j TIJ'tt:Pt 1!1; 0s' rtiJJniiwn! f:r"t
40Nt!~ Tn•H• \7 a:. brh Jli n:R0'0J! {,;; L:¥1 t"or nnRtdrLntti.

:i 1rt tM~I~ ~m ~t+,,;tR


1;; Jf\!Cit


Chapter 10: Multi rate D1gital Signal Processing
682

===~~~::::::~=~~;;,","!'~::~,lllf~x~i:¥~~:1¥**~****~:;~~~~~;J::~~·~~==~~~~«ft;
w 10 +!J ;;::\!& ,Pl

y,yJ"V ·: f<t$1!JJ\1 !ZXh-

i3ti:%8t ui t ""' ::;t;


\JINL"?Y£ J£;;; r ""' Jft

·1"~:=:!i:=:.;,,:•~• il'ici;Iw~M~~ttr mi'Biiii!4


4%11=
a sw:f&;:!lf!Vif m
Uirfrtfl:t0wd
ilil::i¢®l i¥mt

":£4{4;1

-:::;.:;:;~
Jf
H " '\\' tZ< Si!Jl}

#f\QQjjKL 11i\1\&Y1- !Jillt PG)¥#"11¥!<114411<" (;( tft'! vi


tt q{ % Po/\JWI.tlm
1b; --ill-" V'f

%(#1')£.0% r J Jf')'i(l\t ""--Jfl'ift

IM!~Imillli«Po/\J pm:x uJ;:;¥1!11 Yvt 11 :?A~rh, ilH '"'* watilli!Yct uJ £Uiw¥1ii¥u"f!l"' ~~sim41il¥ !111¥ !kli!M,m!!I'~Wil!f
lm ~I<'V¥111 tM

In the general case of a k-stagc implementation. as indicated in Figure l027(a), the decimation factor
isM = M1 M2 · · · M,.,. FDI" a given MJ; there are k- 1 quantities, Mt, Mz •...• Mx-!. which can be chosen
ln various ways with OJle combination leading to an optimum multist:Jge realization of a decimato£ with
'10.3. Mutfistage Design o1 Decimator and !oteroofa;or 683

jt;r~'";J 1

.
~it
/ I..! I;;
2"ikHY -'!;H.- '1 !::kH:z
! - 151- 1 /" ;I

''-'·

(){J(I Ill
F 1 - iSF,
15

Figure 10.24: Dec:uaw..m !iller Ji'\lgu ba;..cd on lh;: JFlR <!ppn.-;:.._·h; ff"t'tjL'k:n~;.· rC.>[}<l!lM' ploh ;;u! ~hown to ~;:ale l..

~t-~i}
12kHz !.:'kHz 12kHz 41l0Hz
( :t )

J: kH; ~2 kl-!7 l2k.HL l<OO Hz


lhl

Fi.c• lis ~~
i 2 kH7 ;2 k.~;; StX! Hz 8{1() H1.. 400 H.c
(C)

Figure 10.25: fh.; ~tep;, m the IWO->ta.~~· ~alL" <limB of !be tkcimatnr strunure.

f-->j Gi:::} :--.


'
I:Z kH;. 1.4 ".,:Hz soo fl7 400Hz

Fi!!:u.rc Ht26; A t~ret:-Sta.F:~ tea!u.ation of !he de<:1mator.


684 Chapter 10: Multirate Digital S~gna; Processing

l
H(,_E_!
,----J:---:
':7_,
__-.,(,,;_-_,)--
.-:~
_ ! L___: L __-_, ;_-__:
fh_l
Figun· 1!).27: 1;t) .-\ _;;cnct a! m .ltl>kg.._' 'i-tructun: i.H lit:,·l•n.••;un and! h, ,J g~:rer;,\ mc1tiH<Jge ~lruL1u:~C f1rr in:e~;;nlalto;L

the h;,l~; ..::omputdiona: .;:;ynple:~.itv. The d::kn"l1 1Od!•.m ,)f ;~n <Jfllli:IUtn rcalizaton depend.• on lhe >l'kctioc
ol !c. ~nd the ··bc~l-- comhin;.Jtton and onlerin',!. ,Jf M 1_ ,tf ~- _ . )d, rlwr minimi7es :he 1equired number of
mubpl icatiO!h per sc..:,1nJ !Cr;_...'>;_( ]. Tlli.• ..-:-nT.;o.pnnding mu!li-;tag~.:: mterpolatm design is a ..-ery ~m:;.bpt•u~
pwhk·m IFlJ;t:rc l0_27,'hl] [Crot·U!.

\Vc ha' c -.l~r·wn m Sect:{)n I 0.2 _i ;hat a sing.k-..;tage d<Oc:rnatur ,,, lmerpu;~'<tor employing FiR h.w,.·pa>.>. ii\ters
<.':m lw '"utnput31 ~onull)" dl!dcm ,j nee th~· nece,..;ary mul l~pll.: .lli<llh r.oquireJ tc compulc the output '>ample
c:m be :.:arned nut on!j v.-hc:1 !1'-"cded_ \\"c (~ernonstr.tted i•1 the Pf''i:J:;<, sec!ion that !he :.:omrutational
r<.'qu:rt:m<.:~>h em he furth..:r dccP.~;-;_y,:J u~lnto a rnultt5tagt: de-;.ign. Additional reduction m the computational
t'<lnlplnd} 1s po..;sihk h~ re:::li?.int,· the FlR Jilccr:-, u:-.ing the poh:pha<;L: de.:.:ompnsition de~r~~ in Secl!on
6.3_::;_ in n::n.tin <-':::.H.O-~- ;: :<> ahv ::o;.;,lblc to ;·ealize l!R de:.:tmatinn and inrcrpolation flhcr~ i;";J polyphase
:·cnn'i. n:::-.u!ting in reduced L01l'-plll,t\ionaJ ~-omplcxity reali7.ation".. W.: review the polyphase decompositi-on
ag.::nn here am! iEu;,:nttc i1~ app!icntinn in the efhcic-:lt realintion nf the dctirnator ami the int:'!rpulatoL

,( i;) (!0.36)

M-1
Xt:J = L _---"x,.;;.t:,, ( !0.37)
ko •J

.<~Mn---, kj_--". O~k:::M-l.

an~ L";l]kd Ihe pol ypha~e ::uHtp..'trL:ms of th;.; parent -;cqucnce .r!nl. an:.l the
·the -'>uh,c>qu:_·tK<:s ( rz l 1< ll
function~ X~ (;:J, given hj the ,--lr:m-.fom; of !.1 z f_n I i :m: call-.:d !he pof}ldW.\t' t·omponenJs nf X (:) 1Bel761.
The rdcttion hd\\:CCn the '-Uh;;o:-qucno.:c-\ (x; In Jl <nH.l the migPJai '>t:quence {xtn j] IS given by

(j ::_ k M- J.

Etju:ttion ( 10.37) em he written in matrix form''"


10.4. The Polyphase Decomposition

F1gure HL28: A strucmral im~retation of the M -band polyphase decomposition of a sequence x[n ].

X(z)=[l :- 1 ... z -<M-1> l [ ~:;::;


. ]
. {10.40)

XM-I(Z,\1)
A multirate structural interpretation of the polyphase decomposition is given in Figure IU.28.
The polyphase decomposition of an FIR transfer fwlCtion can be carried out by inspection as illustrated
in Section 6.3.3, where we developed two-branch and three-branch polyphase decompositions. of a length-
9 FIR transfer function. Figure 6.8 illustrates the parallel realizations of an FIR transfer function based
on the polyphase decomposition. In the following section, we consider several variations of the parallel
realization of an FIR trans.fer function based on a poiyphase decQmpositiQn,
The polyphase decomposition of an IIR transfer function H{z.) = P(::.)/D(z.). on the other hand,
is not: that straightforward. One way tu arrive at an M-branch polyphase decomposition of H(z) is to
e,;press it in the form P'(::.)/ D'(z-'11) by multiplying the denominator D(z) and the numerator P{z) with
an appropriately chosen polynomial and then applying an M-braru::h polyphase decomposition to P'{z).
This approach is illustrated in the following example.

-P!W ;!11!rJ P

'
Chapter 10: Muttirate Digital Signal Processing

Note that the abo\'e approach i-ncreases the overail order and the complexity of H (z). However, w~en
u>.ed in certain multirate o:lrucrure,.. th.e approach may result in a more efficient struct:Lre. An ahernru:tve,
more attractwe appmach iS considered i:t the following example.

;; n' '"Hits.t ,;:.l<; '}Y


r /) f: +),; rMi<: i1 t4!\+T2~;

( Yi ?·,:~l0t5
• } ., ;t
*
)]
~ ( •: :·?1~Wi?-.,. .
: • J !• 0

'<"'tr thvi fb tl«r 1J!W0"f hit tlr\I!Jdf\}b··r.f.md\. tfw ,,,..;, \; trmuhn flk!Ht;;;zrp un: ;;J0ii;0p
G±c \ \h4 < d)f d,-Y."~"N\;4'4hf!!Pw 1k1u """· rht F"t:t\i\"!f ;fw '~Snr; hf ihr' · ,.qr t!f P &.wi£2/ TIA"Y0Jh fi it t

1D.4.2 FfR Flfter Structures Based on the Polyphase Decomposition


We have mustrated in Section fd.3 that <1 parallel realization of an FIR filter transfer function H(z.)
can be obta.irnxi using a polyphase decomposition. As we shaU point out later in this chapter, such a
realization often results in computationally efficient structures in certain multirate applications. We revisit
the polyphase decomposition ba<>ed FIR filter realization again here using the more commonly used notation
for the polyphase components and develop several other alternate realizations.
Consider first an M-bram:h polvpbase decompos:i;ion of H(7.} given by
M-1
H(z) = L ::.-"Ek(ZM). (10.41)
k,-{•

A dirtX·t realization of tt..e above is shown in Figure l0.29(aJ. The transpose of this realization is indicated
in Figure J0_29(b)_ An alternative represenfarion of the transr.mse structure of Figure l0.29(b) is obtained
by using the notation
Os:iSM-1, (10.42)
resulting in 1he stru...~ure of Figure 10.30. The corresponding polyphase decomposition is thus gi\'e!l by
M-•
L H(z.) = z.-iM-l-tJRt(zM)_ (1M3)

'""' ofEqs. (10.41) and (10.43). the former is usually called


To diffcrentiate between the t'WO decompositions
rhe Type f polyphase decomposition, while the latter is called Type JJ polyphase decomposition.

10.4.3 Computationally Efficient interpolator and Decimator Structures


Computationally efficient decimator and interpolatocstructures; employing linear-phase lowpass filttfl can
be derived by applying a polyphase decomposition to the Jowpass filters. We demonstrate this property
next.
10.4. The Polyphase Decomposition 667

Figure Ul.liJ: Realization of an FIR filter based Qn a l'ype I polyphase decomfK)sitlon .

Figure 11}.30: Realimtlon of an FIR filter based on a 'I)'pe II polyphase decompo~itioo

Consider first the use of the polyphase de-composition in the realization of the docimation filter of
Flgure l0.15(b}. If the lowpass filter H(z) is realized as in Figure 10.29(a), the overall decimatorstructure
takes the fonn ofFigure l0.3f(a). By invoking the cascade equivalance#l of Figure IO,l4(a). this structure
reduces to that indicated in FigJre 10.3l(b), which is computatiooa.Hy more efficient than the structure
of F1gure JO.l5(b). Tu illustrate tllis point, assume that the decimation filter H(z) of Figure fO.l5(b)
is a :Cngth-N FIR structure and the input sampling period T = l. Since the decimat:or output y[n] is
obtained by down-sampling the filter output v 1[n] by a factor of M, it ili nece~sary only to compute vl.n]
at n = .... -2M, -M, 0. M. 2M . . _ The computational requirements are therefore N multiplications
and {N - 1) additions per output sample being computed. Hov.·ever, as n increases, the stored signals in
the delay registers change. As a result, an computation.<; need to be completed in one sampling period, and
for the foJlowing (M- lj sampling periods the arithmetic units remain idle. Now comider the structure
of FigtJre l0.3l(b). If the length.<; of the subfilter E~;-{z) is N;:-, then N = L:'dJ1 Nk. The computational
requirements of the kth subfilter are Nk multiplications and h:~- I additions per output sample, and thatfrn-
Chapter 10: Multirate Digital Signal Processing

Fr F7 i M

J cc-; J J.

~ ~~ ---t__::~::-:'_j
+

.r--i

' lM
• •

['~~
] .J
l . ' •

: ~~ ) (b)

l<igun• IO.Jl: D<X'JnAtnr imple-mcntilll01l baseJ Ul a polypha.>e decompos.itinn.

the overall structure is therdore L:~uJ _f<ik = N multiplications and Et!,;Q 1{N.-- l) + iM- l) = N- I
aJJit1ons per de-cimator outpu! "ample. However. in the Iauer structure. the arithmetic units are operative
al aH instant'-> of !he- output s.ampli:1g period. which isM times that of the input sampling period.
S:mi!ar :>avings ;:,;rc ;>ho obmined in ihe -.:ase of the interpolator structure employing polyphase decom-
position in the realization of comput:c.tionat!y efficient interpolators. Figure 10.32{a} shows the interpolator
structure derived from Figure HU S(n) hy making use nf the L-hand Type I polyphase docomposirion -of
the imcrpulation filler H(':) ;md th~ cascade <:Oquivalem:e #?.of Figure l0.14{b). An alternative reaJiza-
tinn nbtained u.'iing the Type II polypha-.;e decnmpo&Jiion uf the interpolation filler H(;;) and the cascade
t:quiv<~.lence #2 b <;howo in bgure IO.J2(b).
More efficient lnlcrpnlawr and dcc:imator '>tfUL'1Ures can he realized by exp!mting rhe symmetry of the
li!ter .::odfidcnt<; of H (::-! in the Gls-e of linear-phase filter,;. Consitier for example the rea.lizauon of a
fxtm-of-~ (M = J} decimator using a length- 12 !inear-phao,;e FIR lowpass filter with a symmetric impulse
response

II\:) =hi OJ +hi 11.::: t + 1112 !.:::-- 2 + hl3 }z-' + hl4}z- 4 + h[5JC5- + h[SI:- 0
+ h[4J.c __ , - h[3].::::-'> + h[2J::- 9 + hfl !z- 10 + h[OJz-ll. (10.44)

A \.:uttventional polypha.;e dccompusition of the abm-c Hl;:} yield;; the fallowing subfilten;:

Eo(:::.)= h[O] + h!3Jz- 1 + MSJ;:- 2 +h[21-::- 3 ,


= h! 1 I--+- h!4Jz- 1 + h!4]::- 2 + h[lj;: -J.
E t C:i (l0.45)
£2(~) = hf2]-'- h15iz:- 1 ---t- hi3J=- 2 + h[OI:::.- 3 .

Ncte th.at the subfilter £1 {..::)still has a symmetric impulse response, whereas the impulse response of £ 2 (z}
is the mirror tmage of that of Eo(:). These rdailom. can be made use of in developing a computationally
efficient realization using only six multipliers and 1 t two-input adders. as depicted in Figure l0.33.
10.4. The Polyphase Decomposition 689

• •

(a; (bJ

lligure 10.32: Computationally c!lkit:nt interpolator structureH: (a) Type I polyphase decompo.>ilion. and (bJ Type H
tootyphase dccomrosition_

hl5J

Figure 19.33: A cumputalionally effiLienl re:;.lization of a factor-Df-3 dedmator ex plotting the linear-phase symmetr}'
of the length-12 decimatiOn filter.

1 OA.4 A Useful lden!1ty


Ttre cascade muhirate digital filter structure of Figure 10.34(a_) appears :in a number of app-lications. If we
e;qness the transfer function H(z) in i~ L-term Type I polyphase form .Lt.:J
~- k Et(:L ), it can be easily
shown that the structure ofFigu~ l 0.34{a; is equivalent to the time-invariant digital filter of Figure I0.34(b ).
~here Eo(z) ;s the zeroth polyphase term tprohlem 10.24) [Vai93J. This equivalence can be exploited in
~impiit)'ing complex rnultirate networks :::ontaining ca-.cadc structures of the form of Figure 10.34(a) frn-
analysis purposes.
690 Chapter 10: Multirate Digital Signal Processing

= *' ----g. ylnl

(a) (b)
Figure 10.34; [a) A c.";;Scade multirate s!ructure and (b) its equivalem form.

Figure 10.35: Sampling rate ;alteration based on conv~rsmn of ·npu! d:gital signal to analog form followed by a
resaffi?ling m the desired output mte_

10.5 Arbitrary-Rate Sampling Rate Converter


Th.ere are many applications requiring the estimation of a di-.crele-time signal value at an arbitrary time
im;tant between a consecu!ive pair of known samples. Applications include conversion between arbitrary
sampling rates, timirtg adju;.tmem in digital receivers. time-delay estimation, echo cancellation in modems,
and beam steering and direction finding in antenna arrays.
The above estimation problem can be solved by using s.tJJ:le type of interpolation which basically forms
an approximating cunlinuous-time signal from a set of known consecutive samples \lf the given discrete-
time signal and then evaluates the value of the continuous-time signal at the desired time :instant. This
interpolation pnx:~:<.!i can be direetly implemented by designing 11 digital interp;:>lation filter. An a!! --digital
de.si.gn of a sampEng rate converter with an arbitrary conversion factor is not simple. In particular, the
design is quite difficult and expensive when the conversion factor is a ratio of two very large integers or
an irrational number.

10.5.1 Idea* Sampling Rate Converter


In principle, a sampling rate conversion by an arbitrary conversion factor can be impiemented simply
by pas.<>ing the input digital signal Ihrough an ideal analog reconstruction lowpass filter whose output is
r<"samplcd at the desired output rnte, as indicated in Figure 10.35 [Ram84]. Ift:'le impulse response of the
analog lowpass filter is denoted by g.,(t), the output of !he filter is then given by
~

.i'a(tJ = _L x[£]gu(t -- tT}. (10.46)


f=-=

If the analog filter is chosen to bandlimit its output to the frequency range Fg < FJ.j2, it~ outpuLi.,(t) can
then be resampled at the rate F~ and. hence. the output y[n] of the resampleT at new time instants 1 = nT'
-is g.iven by

y[nj = X,.(nT') = L x[t]ga(nT'- en. (10.47)


t=-"'-'
Since the impulse response g,,(!) of an ideal low-pass analog filter is ofiofinite duration <md the samples
g, {nT'- fT) have to be computed at each output sampling im:tant, implementation of the ideal bandlimited
interpolation algorithm of Eq. {10.47) in exact form is. not practical. Thus, approximations to this ideal
interpolation algorithm are usually employed in praetice.
10.5. Arbitrary-Rate Sampling Rate Converter 691

to !.' t;

Figun! 10..36: Interpolation by an arbitrary factor.

The basic interpolation problem based on a finite weighted sum of input signal samples can then be
stated as follows: Given N2 + N1 + 1 input signal samples, x[k], k = -NJ, ... , N2, obtained by sampling
an analog signal x.,{l) at t = t1< =to+ kTiz, detennine the sample value x .. (to + ctTln) = y[a] at time
t' =to+ aJin. where -N1 ::::=a :S N2. Figure 10.36 illustrates the interpolation process by an arbitrary
flctnr.
We next describe a commonly employed lnterpolation algorithm based -on a finite weighted sum of
input samples.

10.5.2 Lagrange Interpolation Algorithm


In this approach, a polynomial approximation X,.(t) to Xa(t) jg defined as

No
Xa(t) = L Pkft)xin + k]. (10.48)
k=-Ni

where Pk.(t) are the Lagrange polynomials given by

P«n ~
~~
n,.,,
N:
(10.49)
!#<

Since
h(t,-) = { ~: (10.50)

it follows from Eqs. (10.48) to (10.50} that

(1051)

From Eq. { 10.48), 1he value of x,.(t} at an arbitrary value t' =to+ aJi.n is given by

N,
ia(t') =ia(l<l +a Tin}= y[nJ = L Pk(O!}X[n +kj, (10.52)
f,-,-N,
692 Chap1er 10: MuPlirate Digital Slgnar Processing

l I
" -2 "
n-1 n+3
'' - 3 "
Figun; 10..37: The input and output :>ample locations of an up-sampler with a conversion factor of 3/2.

where

(10.53)

We illustrate in the following example the application of the Lagrange interpolation algorithm in
designing a fractional-rate sampling-rate converter.

"'~*'-'' p
'lftfflltl •
l!iG
,_
10.5. Ar~trat)'-Aale Sampl ng Rate Co11v-ertei 693

.Fln;rl~y. lor £1l « .-.~ •loll r·f -..·[1'1 1 11. lhe \aiUI:! ul u . tu toe labcla.l IJI;, 1:-. •lol.i. h1~oill 11 trJ d
tll~tr eocftiCIHill ai~Jr:S.

~'-14u:) = - I I 17.:0R., P- ~4.:tt.l 0.7 17. 1.0 S&l)


,fb4a;? • .. - 1.29(\1, I'} (U I ~ 1."1':m4. f i[J<.~-·

_ n [;:::
- .rLnJ
i~J · (10.S9)
xhr + IJ
....,.e H ll'r bkli:k. tJt:dlicienL m: nK. •h:dl for the 00>.-r fa.cwf ol.lll n_krpool!lb1r ~;l!:.~iiD ~ gi.. en by

If [ 0.;. 17 ....().~%l 11.'7~U1 O.ot~11i]. ~ LO.tiO)


-O.J121i 0.7401 - 1 :!96..1 1.1284
Jt :hot.lld be !:'oldc.lllftoat.an e:tamll'}llrin" o;~( Rg1m lO.J7tlla1 dW" ~~~~ L~fli...:1.mb Ia crompu • Jl" 3], )'[II+ I•
.llllir:l ,-r.n -+ 5l on:. •pill ~ll by the cuel1tcienL, in Eq.<1 i I Cl~ J Md ( 10 .56bl. ~ ( IO~"i711J ~:~nd (I OS7b,, BOd
E:.:il- ( ltU ) nl'ld <t~J. 'f li~l.)'. Or lr. ollli.: 'il."tJid.x, lhc r;ie:o;j~ j NpOI:Wu t I crt 1~~: 11 time-.orr.IT)'ii1J fi 1CI"
v.i1b a pt.riod ol rhl'ee sam pi~. A ~ izaunn or thto ~~ fiK'I.nr or ~ intJ::lfiO] tl)l :d Gfl un ID'I(ilk."mi:"ss[lslJan
(If ( 16.59~ ~ nd.c:stctl an Fi.B~J~r: IO.~st•• a¢t' tOOl, il\ pr1l("LI..."e. 1'-!! u~~l "i)'"Skm ~~~ "'Jtl bC' thr«; nple:
pc~Mlds.. aDd a rewlt. tb! utilpul m(ile .\1111 1xru:tll) .a aJ: the 11me mdelll .II' 1.
.1\'11 t.crnlrti't111: rc&liLaiJ«t ~.:~I 1hc ozba¥e Inlet;'"'"' I n;Je qxtbto1 11'1 ~:lao JCJml nf • li ITH:•"':v}'ii'\J FIR .fi Iter Is
lll!ltictlled i n lJll!'l:
a<e
'n (1), 1'bt fillt!t' .CIJitfliL"Ie.Db at' M lifi)J-ordcr li:n:it-\'lli)'ICI£ All tit! ~a 1\;Jg 01; ~rind of J znd
lgr:itd !be n lueJo .11:'1 jnd:jc~ ~ll Fi loUt' IO...'l c 1.
~ ruli~ ion o( lfu! ubu ... c fnartionill-r.llle ,rfe~lot 15. .:.bluu1r.;.d by w~itu illllhit ~ potyou.
~ ot il!!l:p.. ( 10.S5 ) ro ( 10 :5:5d) i~:~ 1:4 ( 10.:54' ·nx:n yi ldc. ,

I
li1 l =u l ( - -I .r " -
. 6
2:1
l
- Kill- 11-
~
I
-.--(u )
.2
+- r[lf
6 n)
+ a•., ( 2~ .rlot - l] - . [11 t + lI •I" + II)

~
-;xhd I
+ J-.ll~t + L)) + .1[nj, (IQ.bJ t
~

A dl IIlli 11r.er ni:&Jiul'i of tht ulx.J .. c- tll!IWiCin lcarts fo l.lw: romJW .FJ'nfl"l'lll't" nf Fi~re 10.39 (farl!SJ whim: •
~1!1' (LIS1CbQ!n~ or tlJc. cbicc' FIR dtJ:!IW ltllep ~giver~ b}'

I -2 l -1 I I
-t;::
Holt)= ....._ z ... +
-:; ~::;
...
.
I - 1
H I rZ) = -t' - ~
2
I I
-+ -z:..
:! J
694 Chapter 10: Multirate Digilal Signal Processing

x{_3iJ

.x[3t +I]

H
.x{3l + 2]

·- FT
.x[J.e + 3]

f - ... .-2,-1.0,1.2....

(a)
-E
2 T '
r

;r(n+l}

ho[n] ht{n] .hzinl h 3 [n] h4[n] hs[n]


+ y[n]
+ + + +
(b)

Time ho{n) h 1[n] h2!nJ h 3[nl ftt[n] lis[n]

"
3t+ 1
P1(€Xo)
0
Po("o)
P,{o:J)
P_J(fXo)
Po( a,)
P_2(0fJ)
p_J(aJ) P_2(aJ)
0 0
0
3!+2 0 0 Pt(aiJ Po( a,) P_ 1(a2 ) P_z(CXV

(c)

Figure 16.38: Implementation of a fraction.a:-rate inlelpOlator with a con ...-ersion factor of 3/2. (a) Biock digital
filter implementati011, (b) implementation using a time-varying FIR interpolation filter, and (c} coefficients of the
time-varying filter as a ftmction of m~mpie index.

1 0.5.3 Practjcal Considerations


A direct design of a frnctionaJ sampling rate converter in most applications is impractical since the length
of the time-varying filter needed is usually very long and the corresponding filter coefficient caJculations
in real time are thus nearly impossible. As a result, a fractional sampling rate converter is almost always
10 5. Arbitrary-Rate Sampling Rate Converter 695

y[n]

Figure 10.39: Fractional-rare interpolator implementation w;ing the Farrow structtrre.

l">'"' Si""'""idal S"'JO<mre

c~11
-,
9 :

.
c
1 d.. l?fi '~
'
'
'
:1 b ':
,
l!Uo
·c-wc--'",'Cs :ro
Ti:m i:mUx 11. sarnpling_,rue.,.,.j ~
25
i sec.
J w

(a) (h)

- r --------,

''

!
0
~ -0-05

4>f
I
0

(c)

Figure 10.40; Plots uf the interpolawr input and output sequence. and tlie error sequence for a sinuroidal input of
frequency 0.05 Hz.

xlnJ-~ H(:.} 'Analof y{n J


res.am r
F'T

Figure IVAI: Hybrid form of a fractional-rate sampling con-.erter.

r.~alized in a hybrid form consisting of a digital sampling rate .convener with an integer-valued conversion
f:lClor followed by an "analog" fractional rate COJJVert-er as indicated in Figure 1OA l [Ram84J. The digital
sampling rate converter, of course. if need be, can be implemented in a multistage form. 8
s See Section W.3.
696 Chapter 10: Mu!tirate Digital Signal Processing

Yn!n! F0t:::l • v[nj

1; !n]
1
-.j F, 1.::) •
• •

• •
~VM_;ffl/ ,~L_ 1 1n!~
(a) {b)

Figure 10.42: {a) Analy'>l-~ filt~r bank, and {b) >.yn:h~xis filter bar..k.

10.6 Digital Filter Banks


So far, we have mostly concentrated on the design, reali7-ation. and applications. of single-input. single-
output digital filters. There are applications, as in the case of a spectrum analyzer, where it is desirable to
separate a signal into a set of subband signals occupying, usually nonoverlapping, portions of the original
frequency band. In other applications, ii may be necesr.ary to combine many such subband sig11als inlo
a '.iugle cumposite :-.ignal occupying the whole Nyquist range. To this end, digital filter banks play an
important role, and are the subject of discussion in this se1.-'tion.

1 0.6.1 Definitions
The digital filter bank is a set of digital bandpass filters with either a '.:"ommon input or a summed output.
as shown in Figure 10.42. The- s!mcture vf Figure IOA2(a) is called an M-band analysis filter bank with
the subfilters H~;(Z) known as the analy">.i.vfiltcn. I;: is used to decompose the :nput signal x[n] into a set
of M subband signals v_.,[n! with each subbaod signal occupying a portion of the original frequency band.
(The ~ignal is being '"analyzed" by being ;.epantted into a set of narrow spectral bands.i
The dual of the above operation. whereby a set of -.ubharld signals il~{nj (typically belonging to con-
tig-Llous frequency band~) is combined into one signal yfn] i;. called a .~ynthesisfilter bank. Figure 10.42(b)
shows an L-hand synthes.is bank where each filter Fk (;::) is called a .~ynthesis .filter.

10.6.2 Uniform OFT Filter Banks

We oow outline a simple technique for the design of a cla~.s of filter banks with equal passband width~.
Le1 Hu(:::) represent a causal lowpa'is digital filter y,.jtlJ an impulse responf>e hu[n j:

=
fio(z} =L ho!n Jz -~n. (10.62)
"~

wb.,ch we as~ume to De an HR fHter without any loss of generality. Let us now assume rhat Ho(z) has its
passband edge wp and stopband edge f<-'.< around Jr I M, where .~f is some arbitrary integer, as indicated in
Figure 10.43(aj. Now, consider [he tran;;fer functi-on Hk(Z) whose impulse response h,t[nl is defined to be

( 10.6}}

where w.'>!' = t'·- ;hiM, as defined in Eq. (3.24). Thus.


10.6. Oigilal Filter Banks 697

en "/I'-
Wp

"
M
W,

,.,
rn ,, 00

ID 0 1!!:
M

'
2n
•00

H2

I 0
" il>
M
•' '
2n
00

J]
HM-J

0 •
0 I
Zlt{M-1!
2n

{b) M

Figure 10.43: The llimk of M filters H~r;(:t.) 'With unifonnly stuffed frequency tespomes.

00 ~

Hk\Zl ~ Lhkin]z-' ~ :Z:::ho!n] (zwt,f'. ( 10.64)


n=G "=0
i.e.,
HkU:) = Ho(zw:,). (10.65)

with a corresponding ftequenc)' re~pons.e

H (el"') ~ H (ej(w-2 ...:kjMl) 0::-;:k::sM-L (10.66)


k. - 0 •

In other words, the frequem:y response of H~ (z) is obtained by shifting the response of Ho(z} to the
right, by an amount 2nkj M. The responses of H1 (z). H2(z), ... , HM-1 (z) are shown in Figure l0.43(b).
Note that the corresponding impulse responses hdnl are, in general, complex and hence IHJ;:{ej"')J does
not necessarily exhibit symmetry with respect to zem frequency. Figure 10.43(b) therefore represents
698 Chapter 1o: Muttirate Digital Signal Processing

M
x[nj v0 [nl

t v 1[n]
9
~
8. v 2 in]

" •


vM_ 11nl

Figure 10.44: Polyphase implementation of a unifonn DFT analysis filter bank where H~c(:.) = Vk(z:)/ X (z).

the responses of M - I filters H 1 (z). HJ.(z/, ... , Hu-t (z.). which are ur1ifo.rmly shifted versions of the
response of the basic prototype filter Ho(z) of Figure l0.43(a).
The M fillers H_,Jz.) defined by Eq. (10.65) could be used as the analysis :filters in •.he analysis filter
bank of Figure IOA2(a} or as the synthesis filters Fk(Z) in the synthesis filter bank of Figure 10.42(b).
Since the set of magnitude responses IHk{ef"')l. k = 0, L .... M- l, are uniformly shifted versions
of 1 bask prototype 1Ho(e 1 "')j, i.e.,

(10.67)

the filter bank obtained is called a ~..m.ifiJrm filter bank.

10.6.3 Polyphase Implementations of Uniform Filter Banks


lei Figure !0.42(a) represent a uniform filter bank with the M analysis filters Hx(z) related through
Eq. (H165). The impulse response sequences hk;n) of the analysis filters are accordingly related as
in :Eq. (10.63). Instead of realizing each analysis filter as a separate filter, i: is possible to develop a
computationally more efficient realization of the above uniform filter bank, which is described ne.>::t.
Let the lowpass prOlotype transfer function Ho(z) be reprerented in its M-band polyphase form:

M-'
Ho\z) = L z-i Et{?,.d), (10.68)
(d)

wb•!re Et(z) is the ith polyphase component of Ho(z):


=
Er{z) = I>tfnjz-" = L ho[t + nM)z-", O=:;l::;M-L (10.69)
n=O "=0

SubstitUiing z with zW!, in Eq. ( 10.68), we arrive a1 the M-band polyphase decomposition of Ht\z.l:

M-' M-J
Hk(Z) = L ;:-iw,;.t.t Et(zMw::r) = L z-tvr~/t Et((M),
f=O [.,.,;)
k=O.l, ... ,M-1, {10.70}
10.6. Digital Filter Banks 699

where we have use d the 'd . W''M


1 entity f•f = I.
N{)te that Eq. (l0.70! can be written in ma.triK fom:~ a~

Hk(;_) = [ l
w~~M-nl] [ ;=!1;;:; (10.71)

;::-,M-1) EM-, (:M)

fork = (( L .... M -- J. Allthec;c M equmion;,. can tx~ combined into a matrix equation as

{10.72)

l
w·-21M-·
M
which is equivalent to

I ,'i~:;:~,) ]
flu(;}
HI(<-) r.
Hz_{:) ::-2f:_,(::.M) (10.73)

HM--!l:J ;_, -i\J-l) ~M-t{lMI .

\\'here D denotes the DFf matrix:

I:
I
w,!t wft w(M-1)
M '
D~
w1M w';1-1 w~M-J• J'. (I0.74)

L1 wcM-n
.W
w2;!.t-l'
M
wu<i--1; 2
M

An efficient implementation of the M-band analy:... is tilt.;r b--J.nk ba;,ed on Eq. 00.73) is thu~ as shown
in Figure 10.44, where rhe prototype lowpa;,.s tiller HnC::.) ha:o, been implememed in a polyphasefrmn. The
struL:ture of Figure 10.44 is more commonly known as the unifr•rm DFT ana!ysis filter bank. 9
The computatiQila[ complexity ofFi:gare I0.44 is much smaller than rhatof .a direct implementation, as in
f<i.gure 10.42(a). For example, an M -band uniform DFr analysis filter bank based on an N -iap prototype
lnwpass FIR filler require~ a total of (M !2) log2 M + N multiplier;,.. whereas a direct implementation
requires NM multtplicutions.
FoUawing a development similar to thar oudinerl above, we can deri-.·e the ~tructure for a uniform DFT
symhesis filter bank. Efficient re<~lizalions based on the Types I and I! polyphase dccompositionf; of the
prototype iowpass filter Ho(:::.) are indicated in Figure 10.45.
9 An interestirg applicm:;on ;;.f the un':fo:-m DFT ar;alyslii tiller ,..;th a F!R filler is m the detecti{>ll of a code and the Slmullaneou·-
tnea\uremem oft he Doppler niisd r'-"' synchroffis<~~ion purpmes 111 ~read ~pectnlm wmmnnicatiou sysrem,; [Spa2(XI{}J_
700 Chapter 10: Multirate Digital S1gnal Processing

Y!fi! l~,!Jd

; rl nl -.j ~

'' :..
Q
= j
i'_,!ll~ ~ ,'-~~Ill
' "c
~
~
! .:0

l-L-I!nl
__.,i
(a)

Figure 10.45: CnlfOTm DfT '>Ynlh<:'\i:. tiller h;,nl<.-

1t follows fnxn Eq. { \0_73) that the polypha...e compoueot" E;(:: 11 J can tx: expres:<;eti m terms of the
prototype tran.~f~r function H 0 (::.) <!nd its modulated ver:-.iow.. II, L::f according :o

Eo(~M i

I
I
c- 1 EllCM)
::-2£z(~·w)

L ,_~-IM--!1£ J.f-1 ·-M)


~
( 10.75)

which can be used to determine the po\ypha<;e componenh of an HR tran_<;.fer function (Problems 10. t6
and 10.17).

10.7 Nyquist Filters


In thi~ ~ectinn. we inlrorluce a spedai type of Iuwpass filter With a transfer function thar, by design. has
certain am-valued coefficients. Due to the pn::sence ,)f these zero--valued .::oefficients, lhese filters are,
b)' mature, computationally more efficient than othe: lowpa~s tiller~ of the same order. In addition. when
u:-;ed a" inlerpo;a!or filter~. the} preserve the nonzero samp.ic.'> of the up-sampler OUlpUl at the interpolator
ourput. The..e fillen:, called Uh-hand filters or Nyquist filtf'rs and discussed in this section, are often used
both in single-rate and multirate signa! pro:.:e:..:..ing. For example. they are usuaUy preferred in decirnator
and interpolator de-.ign. Another application is jn the dcs1gn nf a quadrature-mirror filter bank. discussed
in Se-.::tlon HUl A third application. descrihed in Sectiun ll 7. is in the design of a Hilben transformer.
which i<.. employed for the generation of anaf}tic signah.

1 0. 7 .l f th-Band Ft!ters-

Consider the f.actm·-of-L interpolator of F1gure IO.I:'i(aj. The relation between the output and the input of
!he "ntCrp<,Jator is given by
YL::) = H(:z.)X(':".!. )_ (iD.76)

lf the ir;terpolatiun filter H (;::J is rca!i.led in the L-band pnlyphas.e form, then we have

H{::) = Eo(zl..l +::·I E:(zL) + ::~2£2(zL) + .. , + ::-(L-ltEL-1(-:.L),


10.7. Nyquis1 Filters 701

h(nj

-3 6
Figun- Hl.46: Tht impulse respoo'>e of<.~ ~ypical th!rd-band fiiter.

Assume that the kth polyphase cnmponenl nf H!:) i<; a constant, i.e., Et_{:.'.) =a:

H(::_) = Fo(:L) +,;:-I Ej{_.:L) + .. - + ==--\k- I) E~.:-! (;::L_l + az-k


+:-it+! f Eh-1 (;::L) + + ;--;L --! l EL-II.ZL). (10.7?)

Then we can eKpre-ss Y \;:) as

L-'
y i_:) = a:-1 X (zL) + L :-<" Et(-:.L \X {:::L}.
'"'"'
hd.

As a result, y! Ln + kl = ax[n J, i.e., the input samples appear at the output without any distortion at a;J
value~ of n, whereas the in-be<ween (L- 1) samples are determined by in!erpo!ation.
A ll:ter with the above property is called a Nyquist filter or an Llh-handjifter and 1ts impuls~ resJXmse
has many zero-valued samples. making it computationally very attractive. For example. the impulse
response of the Lth-band filter obtained fork = 0 satisfies the following cond\tion:

h[Lnl -ju· n=O, !10'9•


- 0. otherwise. ·' '
Figure 10.46-"hows a typical impulse re~ponse of a third-band filter (L = 3). If H (z) satisfies Eq. ( 10.77)
with k = 0, J.e._ Eu(z} = a. then it can be shown that
l.-1

L Ht:-Wf) =La= I (assuming a= 1/L). ( 10.80)


kdl

Since the frequency response of H !_::Wf) is {he shifted ver..ion H(el'lw-lrrk-/LJ) of H(eJ"'), the sum of all
of these L uniformly shifled vcrsiom. of H(ei'"') add up to a constant (see Figure IOA7}. Lth-band filte~
can be eilher FIR or IIR fi!teTh.

10.7.2 Half~Bant1 F1Hers


An Lth-band filter for L = 2 i~ calkd a halj~bu.nd filter. From Eq. (10.?7) the transfer function of a
half-band filter is thus given by
(10.81)
with its impulse response sati!>fying Eq. ( 10.79) with L = 2. The condition on t:be frequency response
given by Eq. ( 10.81} redu<.·es tu

H(.:) + H(-::) = 1 {a..<<:mming a = 1). (10.82)


702 Chapter to: Multirate Digital Signal Processing

Figure 10..47: Frequency responses of H (:.: Wi) f<X" k = 0, l, .... L - l.

········~

rr12

Hgun- Ul.48: Frequency response of a zerv-phase half-band "filter-.

1f H(::,) ha~ real coefficients, then H (-ej"") = H(ejf;r--wl), and Eq. (10.82) leads to

H(ei,..J + H(ej!;r-w)} = I. 110.83)


The above equt~lity impiies that H (eif,-Jl--.fil) and H(e;f:r/ 2+Bl) add up to unity for all e.
in other words,
H{ei"-') exhibits a symmetry with respect to the half-band frequency ;rj2, thus justifying the name '"half-
band filter." F1gure 10.48 illustrates this symmetry for a hal~-band lowpass filler for which the passband
and SlOpband ripples are equal, i.e., 8p = <'!.,, and the passband and stopband b.andedges are symmetric
wilh respect tor. /2, i.e .. wp + ;;o, = n.
An important atl:r.lctive propeny of the half-band filter i~ that about 50 percent of the coefficients of
h[n] are zero. This reduces the number of multiplications required in its implememation. making the
fillet" computationally qnile efficien1. For example, if N = l 0 l, an arbitrary Type 1 flR transfer function
requires about 50 multipliers, whereas a 1)·pe f half-band fiiter requires oniy about 25 multipliers.
An FIR half-band filter <"an be designed with linear phase. However, there is a constraint on .its
length. Consu:ler "i zero-phase half-band FIR filter foe whidl h(n] = ah*[-nj, with Jal = L Let the
highest nonzero coefficient be h[R]. Then R is odd as a result of the condition of Eq. (10.791 Therefore,
R = 2/r: + l for some integer K. Thus the length of the impulse response hfn} is restri<:ted to be of the
forrn 2R + I = 4K + 3 [unless H(::.) i~ a constant{.

10.7.3 Design of Linear-Phase l.th·Band FIR Filters


A !owpa"S-s linear-phase Nyquist Lth-band FIR filter with a cutoff at We = n(L and a good frequency
response can be readily designed via rhe -..vindowed Fourier series approach deSL'Tibed .in SectH:m. 7.6. In
10.7. Nyquist Fiflers 703

this approach tbe impulse response coefficients of the iowpass filter are chosen as

hfnl = hLp[n] · w[n], (10.1!4)

w!rere h 1.Frnl i.s the impulre response of an ideallowpass filter with a cutoff at ::r I L, and w[n] is a suitable
window function. If
forn=±L,±lL, ... , (10.85)
then Eq. {10.79) is indeed satisfied.
Now the impulse response hLp[n] of an ]deal Lth-band filter is obtained from Eq. (7.59) by substituting
We = 7< / L and is given by
sln(Jrn/L)
hLP[n] = , -oo.::;: n :5 oo. (10.86)
~"
It can be seen from the above that the impulse response coefficients do indeed satisfy the condition
of Eq. (10.85). Hence, an Lth-band filter can be designed by applying a suitable window function to
Eq. ( 10.86).
Likewire, an Lth-band filter with a frequency response as in Figure 7 .25{a) can also be designed from
Eq. {7.87} by replacing £<\· with rr 1L. resulting in an impulse Tesponse

r·, n = 0,
hu•fnl = (10.87)
( . ~1n(x:11jL}
In!> 0,
'"
wbich .again is seen to satisfy the condition of Eq. (10.85). Other candidates for Lth-band filter design are
the Jowpass filleTs ofEq. (7.8B) and the raised cosine filter of Eq. (7.165) in Problem 7.52.
We illustrate the lowpass h.alf-band filter design in the following example.

t jAy w!f 1 >.~


% : "'';:: VP <d" ; .:"."+ Ld + w y : 1+'<f us , t-u ld:m:
0, '{{ :.n2tf::ni\vi?. f' , , ; 0 , , :: <: /VH )l;;;;;;tt"iiG\0 r'i:,

'
{\ "< "£{
t: <"«H&ur:zr Lnr "r ;J:r;r:·nt 7;i
+ "<t;l;;;.±tik ,f; !/1:{
u tn :r , ;
t: ::{0:' ,;,.y ft t F hW "' " h tivww x «'l>t rnr!Ty

,l
{:" "\/ hf(::tf,t\1"" !1)/ j {

L: :i\£42 : ' " ""jtfrHjVt " { <k f }/ ~,4\}::ii;A ;" j ' ::J& t !\ Af" j
704 Chapter 10: Multirate Digital Signal Processing

»+T "Ui '""!ttl if <<I 01110 :U i410:t!Xii<UZ:-%#t


,,4 '"' hffl <I ,JJ1t2JU$.8ilHt'JiL l<:L;
"' , <L 0!1 ,':;J;
<J , t!E IW* ~futi}'$,; >:&'J :

An elegant method for the design of half-band linear-phase FIR filters is considered in Section 11. 7.3.
Also described here is a method for the design of half-band IIR filters. Several other design approaches
have been advanced ir; the literature [Ren87]. [Vai87aJ.

10.7.4 Relation between Uh-Band Fillers and Power-Complementary


Filters
Recall from Section 4.8.3, a set of filters is said to be power-complementary if their square magnitude
responses add up to a constant. Consider an Lth-band transfer function H(z} represented in the L-band
polyphase decomposition form given by
L~'

H(z) = Lz-.eEe(zL).
~~

Define a new transfer function G(::} = ii{z)H(z). 10 Then the set of transfer functions {Eo(z), E 1(z), ... ,
EL 1(z)J is power-complementary if and only if G{z) is an Lth-band filter [Vai90].
To prove the above property, define
L~'

Hr<z) = H(zWLr) =L z-twf'" Ei(z_L)


tdJ

for 0 :::; r ::s_ L - 1. We can write the above set of equations. in matrix fonn as
0
[

~
Ht (z) ]
Ho(z)

HL-l(Z)

H(z)
=D
[0
'

~
z~•

0
Aid
J_J [ E1
Eo(h
(zL} ]

EL-~(zL}
----..-.'
Ei/l
(10.90)
I
=
fi

where Dis the L x L DFTmatrixsatisfyingDtD = LI. 11 If the setoftransferfunctions {Eo(z), E 1 (z},.,,, }
EL-: (z)} is power-complementary, then E(z)Efz) = y so that 12 ).
;
ii(z}H{z) = Etz)A(z)DtDA(z)E(z) = Ly.
~,~,~"",~,,-_-cHc'~'l~,~.,~.----------_c______ 1
j
tt nt denotes the conjugate transpose of u. ;
\'ZI:i:{;:) is the conjugate transpo&e cl'H(l_h• }, i.e .• it(~) = ut OJ<:*).
1O.S. Two-Channel Quadrature-Mirror Filter Bank 705

- ~---~-- ---- ----·

c J.2

Figure 10.49: Gam respon~c of a l~nglh-23 linear-pha:se half-band FIR filter of Example 10.12.

T;1is in tum implies that the set {HQ(z), H 1(z}, ... , HL_ 1(z)} i~ power-complementary. In other words,
G(z) = il(z)H{:) ~atis.fies
L-1
L G(zWLk) = Ly (10.92)

so that it is an L th-band filter.


·~
Conversely, we can prove thatthe set {Eo(z), £;. (<:), .. , , EL~dz)j is power-cornple:nemary,assuming
that G(z) is an Lrh-band filter, simpl)' by inverting the above matrix equation and carrying out an argument
si,m:ilar to that outlined above.

10.8 Two-Channel Quadrature-Mirror Filter Bank


In many applications, a dis.."rete-time signal x[nJ is first splil into a number of subband signals lv_i<[n]}
by means of an analysis filter bank; the suhband signals are then processed and finally combined by a
synthesis filter bank resulting in an output signal y!n ]. If the subbandsignals are band limited to frequency
rat1ges much smaller than that of the ori,ginal input signal. they can be down-sampled before processing.
Because of the lower sampling rare. the processing of the down-sampled signals can be carried out more
efficiently. After processing, these signals are up-sampled before being combined by the synthesis bank
imo a bighec-rate signaL The combined structm·e employed is called a quadrature-mirror filter {QMF)
lx:nk. If the down-sampling and the up-sampling factors are equal to or greater than the number of bands
of the filter bank, then the output y[ttJ can be made to retain some or all of the characteristics of the input
x[nl by properly choosing the filters in the structure. in case of equality. the filter bank is said to be a
critically sampled filter hank. The most common application of this scheme is in the efficient coding of
a signal x[n] (see Section 11.8)_ Another possible application ls in the design of an analog voice privacy
system to provide secure telephone conversation [Cox83l In thls sec-tion, we study a two-channel QMF
bank.

10.8.1 The Filter Bank Structure


Figure i0.50 .shows the basic rna-channel QMF bank-based subband codec {coder/decoder). Here, the
input signa] x[n] is firM passed through a 1wo-band analysis filter bank containing the filters Iiu(z) and
H, (Z), wluch typically have lowpass and highpass frequency reo,ponses, respectively. with a cutoff fre-
quency at 1r /1. as indicated in Figure 1051. The subband signals tv.~: [nj} are then down-sampled by a
706 Chapter 10: Multirate Digital Signal Processing

y[ni

Figure Hi.50; The two-channel filtcr hank based coder/decoder.

Figure 10.51: l)>pkal frequency responses. of the ll!'.alysis filters.

J{nl

Hgure 1052· The two...chanocl quadlattre-mirror filter {QMF) bank.

factor of 2. Ea;._-h down-sampied subband signal is encoded by exploiting the special spectral properties of
the signal, such as energy leveb and perceptual importance (see Section ll.8). Tbe coded subband signals
are oombined into one sequence hy multiplexing and either stored for later retrieval or transmitted. At the
receiving end, the coded subband signals are first recovered by demultiplexing and decoders are used to
produce appro-ximations of the original duwn-sampted signals. The decoded signals are then up-sampled
by a factor of 2 and passed through a two-band synthesis filter ban};: composed of the filters Gc.(z) and
G 1(z) whose outputs are then added yielding yfn J. It follow; from the figure that the sampling rates of the
input signal xfn} and output signal yfnJ are the same. The .analysis and the synthesis filters in the QMF
hank are chosen so as to ensure that the reconstructed output _v[n] is a reasonable replica of the input x [n }.
Moreover. they are also designeC to provide good frequeru:y select:hcity in order to ensure that the sum of
the power of the subband ;..ignals is reasonably dose to the input signal power.
1n pmctice, various errors are generateC in this scheme. In addttion !.o the coding error and errors caused
hy transmission rhrough tlJe channel, the QMF bank itse(f introduces several errors due to the sampling rate
a!temttons and imperfe-Ct: filters. We ignore the coding and channel enors, and investiga:e only the errors
generated by the down--samplers and up-samplers in the filter bank and tt\ei.r effects on tile performance
of the system. To this: end, we consider the QMF bank structure without the coders and me decoders as
indicated in Figure 10.52 [Cro76a}.[Est77].
10.8. Two~Channe' Quadrature-Mirror Fitter Bank 707

10.8.2 Analysis of the Two-Channel QMF Bank


[tis convenient to analvze the filter bank: in the z:-domai11. To this end, we make use of the input-output
relations of the up-samPler and the down~sampler derived earlier in Section 10.1 and given by Eqs. (I 0.5)
and (10.12). The expresslOns fur the z-transfonn& of various intecrnedi:ate signals in Figure 10.52 are then
given by
V.ti.::) = Hk(Z)X(z). (10.93a)
U;,(z.) =! IVk(Z:I/2) + V.~;( -zt/1)}. (10.93b)
2 (l0.93c)
Vk(Z) = Ut(z ).

fork = 0, I. From Eqs. ( f0.93a) to (10.93c), we obtain after some algebra

Vx(l) = ~ {Vx(z) + l'.t(-z)} = 11Hk(z)X(z) + H,t(-z)X(-z)}. {10.94)

The reconstructed output of the filter bank is given by

(10.95)

Substituting Eq. (10.94) tn Eq. (10.95), we obtain after some rearrangement the expression for the output
of the filter bank as

Y(z.)= ~ {Hot:)Go(z) + H 1 (z)G:(z)! X(z.l


+ i (Hu( -z)Go(z) + H1 ( -z)G1 (z)} X ( -.:). ( 10.96)

The second term in the above equation is precisely due to the aliasing caused by sampling rate alteration.
The above e<JUation can be compactly expressed as

Y(z) = T{z)X(z) + A(z)X(-Z}, (10.97)

where
T(z) = ~ {Ho(z)Go(Z) + HI(Z)GI(Z)) (10.98)
i;:: cdled the distonion transfer function, and

A(z:) = ~ I Ho( -z)Go(z:) + H; ( -z)GJ (.:)J. ( I0.99j

1 0.8.3 Alias-Free Filter Bank


As noted in Section l0.1, the up-sampler and the down-sampler are linear time-varying components and,
as li result. in general, the QMF structure of Figure 10.52 is a linear time-varying (LTV) system. It can be
shown also that it has a period of 2 {Problem 10.35). However, it is possible to choose the analysis and
synthesis filters such that the aliasing effect is canceled, resulting in a linear time-invariant (LTI) operation.
To this. end. we need to ensure that

Lt:.,
Ho( -z)Go{z) + H 1( -z)GI (.::) = 0. (10.100)
For aliasing cancellation we can choose
Go(z} HJ(-z.)
(10.101)
GI(Z) = Ho(-z)'
708 Chapter 10: Multirate Digital Signat Processing

which yields
Go(z) = C(z)HI ( -;:-). G 1 (z) = -C(z)Ho(-z). (10.102)

where C{z) is an arbitrary rational function.


If the above relations hold, then Eq. (10.96) reduce,. to

Y(::) = T(z)Xc~). (IQ.103)

with
T{z) = ! {Ho{z)HJ( -z) - H1 (z)H{J(-z)}. (10.104)
On the unit circle, we have

Y(ei"') = T(ei"')X(ejm) = jT(e''w)lej¢!"';X(el"'). (10.105}

If T(z) is an allpass function, i.e., IT(ei"')l =d :;ofo 0, then

(10.106)

indicating that the output of the QMF bank has the same magnitude response as that of the input (scaled by
d) but exhibits phase distortion, and the filter bank ls said to be magniuule preserving. lf T (z) has linear
phase, i.e.,
¢(w) = aw + p, (10.107)
then
arg{ Y(el~<')} = arg {x(e)("}J +aw+ {J, (10.108)

and the filter bank is said to be phase preserving but exhibits magnitude distortion. Jf an alias-free QMF
bank has no amplitude and phase distortion, then it~ called a perfect reconstruction. (PR) QMF bank. In
such a case, T(z) = d z-t , resulting in

(10.109)

which in the time-domain is equivalent to

y[n] =dx[n - f ] (10.110)

for all possible inputs• .indicating that the reconstructed output y[n 1 is a scaled. delayed replica of the input.
lG.B. Two-Channel Quadrature-Mirror Filter Bank 709

1 0.8.4 An Alias-Free ReaJiz-ahon


A very simple alias-free two-band QMF bank is obtained when
Ht(z) = Ho(-z). (10.111)

The abo-.-e condition. in the case of a real coefficient filter, implies

IHt(ej"'): = 1Hn(e1 (""-m))' (10.112}

indicating that if Ho(Z) is alowpass filter. then H1lz) 1sahighpass filter, and vice versa. In fact.Eq. (10.112)
I
indicates that : H 1{el"-') is a mirror image of jHo(ej";}: with respecl to ;rf2, the quadrature frequ.eru:y.
This has given rise to the name quadrature-mirror filter bank.
Substituting Eq. (lO.l fl) in Eq. (10.102). we anive at, with C(z) = I,
(10.113)

Equations (10.111) and (10.1 13) imply that the two analysis filters and the two synthesis filters in the QMF
bank are essentially determined from one transfer function Ho(z). Moreover, Eq. (10.1 13) indicates that
if Ho(z) is a lowpass filter. then Go(z) is also a lowpass filter, and Gt (z) is a highpass filter. The distortion
transfer function T{z) of Eq. (10.104) in this case reduces to

(10.114)

A computationally efficient reaJization of the above alias-free two-channel QMF bank is obtained by
realizing the analysis and the synthesis filters in polyphase form. Let the two-band Type I polyphase
representation of Ho(z) be gi\len by
2
H:.;(z) = Eo(z 2) + z- 1 E 1(z ). (JO.ll5a)

From Eq. (10.11 i) it follows then that


H1 (z} =Eo{:?)- z- 1E; (zh. (10.1 15b)

In matrix form Eqs. (lO.J 15a) and (l0.115b) can be expressed as

(10.!16)

Likewise the synthesis filters, in matrix form, can be expressed as

[Go(t) GJ(.::)]=[z- 1 El(Z 2 ) Eo(z 2 )][: ~ 1 ]. 110.117)

L'smg Eqs. {10.116) and (10.117), we can redraw the two-channel QMF bank as shown in Figure 10.53(a),
whfch can be further simplified using tl;e cascade equivalences of Figure 10.14, resulting in the computa-
tionally efficient realization of Figure l 0.53(b ).
The expression for the distortion tra.,sfer function in this case, obtained by substituting Eqs. (10.115a)
and (l0.115b) in Eq. (10.104), is given by

(10.118.)
The following example illustrates the development of a very simple perfect reconstruction QMF bank.
710 Chapter 10: Multirate Digital Signaf Processing

e<[nj

AnaJpis filter bank Synthesi~ Illter bank


(b)

Figure 10.53: Polyphase .realization of the twn-cbannei QMF bank. (a} DireL"t polyphase realization and {b) compu-
tationally efficient realization.

'll!m4l-4J"m&4JmlllllfitAllllilli&m,lkm 'liJmk!illklJ.Rkilnk 11iJSVi$!St'Ff


,,,

10.8,5 Alias-Free FIR QMF Bank


Let the prototype analysis filter be a linear-phase FIR filter of order N with a real coefficient transfer
func1ion Hotz) ,given by
N
Ho(z) = Lho(n]z-". (10.119}
n=O
Note that Ho(z) can be either a Type I or '!ype 2 linear-phase ftmction since it has to be a lowpass filter.
As a result. its impulse-response coefficients must satisfy the condition ho[n] = ho[N- n], in which case
we can write
10.6. Two-Channel Quadrature-Mirror Filter Bank 711

\ 10.120)

where il0 (w) i<; the .amplitude function, areal fltnction of w. By making use ofEq. (10.120) in Eq. ( lO. 114)
along with the property that IHokl"")l i:s..an even iuoctioo of w. we can exp:ess !he frequency response of
the distortion Transfer function a1<>

{ 10.121;

From the above it can be seen that if N is even, then T(ejw) = 0 at w = rr/2. implying severe
amplitude distortion at the output of the filter bank. A'> a result, N must be chosen to be odd, in which case
Eq. (10.121) reduces to

(10.122)

It follows from the above expression, the FIR two-channel filter bank with linear-phase analysis and
synthesis filters will be of perfect reconstruction type if

(10.123)

Le., tbe two analysis filters are powe.r-complcmen!ary. Ext:ept for the two trivial filter banks of Examples
10.13 and 10.14, it can be shown that it is not possible to realize a perfect reconstruction two-channel filter
bank v.ith linear-pha&- powe£-complementary analys.ls filters fVai85c].
As can be &een fmm Eq. (10.121), the QMF bank has no phase distortion, but will always exhibit
amplitude distortion unless :T{elw)l is a constant for all values of w. If Ho(z) is a very good lowpass filter
wlth IHo(el"')l ~ I in the passband and JHo{eJw)l ~ Oin the stopband, then H 1 (z) is a very good highpass
filter with its passband coinciding with the stopband of Ho(.::), and vice versa. A» a resu~t. jT(ej"'')! ;:: l/2
in the passbands of Ho(z) and H1 (z}. The amplitude distortion thus occUl'fi primarily in the transilion band
of these filters, with the degree of distortion being determined by the amount of overlap between their
squared-magnitude responses. This distortion can~ minimized by controlling the overlap, which in turn
can be controlled by appropriately choosing the passband edge of He(<:).
One way to minimize the amplitude distortion is to employ- a computer-aided optimization method to
ite.ratively adjust the filter coefficients holn] of Jlo(z) such that the constraint

. •' I ,2
IHo(eY")~ +;H1(el"'):,,;:;: J (10.124)

is satisfied for all value.s of w [Joh80l. To this end, the objective function ¢ to be minimized can be
chosen as a linear combination of two functions: (l) stopband attenuation of Hu(z) and (2) the sum of
th(~ squared-magnitude response~ of Ho(zj and Ht(z) as indicated in Eq. {10.124). One such objective
function is given by
¢ = a¢1 + (1 -a)t/JJ.. (10.125)

00.126)
712 Chapter 10: Multirate Digital Signal Processing

and 0 < a < 1 and Ws = (n"/2) + e for some small & > 0. Note that since IT(e1"")~ is symmetric with
respect to ;r /2, the second integral in Eq. (10.126) can be replaced with

f 12 (l-jHo(efw)j2-jHo(ef").')'
2 dw.

After ¢ ha:> been made very small by the minimiLation procedure, both ¢1 and tfr1 will also be very
smalL This in tum will make Ho(z) have a magnitude response satisfying IHo(e>'"-'}1 ~ l in its passband
and 1Ho(e1"')1 ~ 0 in its stopband, as desired. Moreover, since the power-complementary condition of
Eq. ( 10.124) will be satisfied approximately, the magnitudercsponseofthe power-complementary highpass
filter H 1 (z) to the lowpass fiber wiil satisfy jH1(efw)j ;;: 0 in the passband of Hu(z) and IH1 (ej"')j ;: 1 in
the stopband of Ho(z).
Us.i.ng the above .approach, Johnston has. designed a large class oflinear-pha.se FIR lowpass filters Ho(z)
meeting a variety of specifications and has tabulated their impulse response coefficients [Joh80], [Cro83],
[Ans93j. The follow~g example examines the performance of one such filter.

\ /)s'\:\ll+:t+\'lt! Lt\i"' <:t'\l\!tt'\001JI/0f;<&Vy


!\ I'

%\7'41
a S::t\IIV'ii+ '1<~&
;v,, wf
"'" ill:bdi
! fVJ,'Jh Wl &Ai i?lf:I di:\& 1
!Jl 111711 { Jl!h! ; if;;" "
1f ¥"' 'Ln t '; :&'\!\! w r">:01P¢"fr&SUP ::ww :1 it:s:r:r
''*''
:pr, itT,, ' ' : ,±
X::ii!i3Pibl i ' , ~Uti r \p:: I t; :iF" r
f\IIVtwliSir
10.8. Two-Channel Quadrature-Mirror Filter Bank 713

(a) (b)
Figure 10.54: Johnston's l2B filler: (a) gain responses. and (b) re.::onstruction efroi' in dB.

10.8.6 Alias-Free IIR QMF Bank


We now comider the design of an alias-free QMF bank employing IIR analysis and synthesis filters. Under
the alias-free <-'Ollditions ofEqs. (10.102) and (10.111), ilie distortion transfer function T(z) of the two-
cha."lnel QMF bank is given by 2z- 1Eo(z:Z}E 1(z 2). as indicated in Eq. (10.118). If T(z) is an allpass
function, then its magnitude response is a constant and, as a result,. the corresponding QMF bank has no
magnitude distortion {Vai87f1. Let the polyphase components Eo(z) and Et (z) of Ho(z) be expressed as

Eo(z) = !AoCd, (10.127)

with Ao(z) and A1 (z) being slable allpass functions. Thus.

Ho(zl = !£AJ(t:h + z- 1At <..:h]. (10.128a)


H, (z) ~ ![Ao(z'J - ,- 1A, (z')]. (l0.128b)

Substituting Eq. (10.127} in Eq. (l0.116J, we obtain the expl'essions for the analysis filters as

Ho(z)]
[ H, (z) = ' [ ' ' ] [ Ao(z
2 t -1 z- At(?)
1
2
)
J
l ·
(10.129)

The corresponding synthesis filters are obtained from Eq. {10.117) and are given by

[Go(z) G1{z;)} = i [z- 1A1{22) Ao{z 2 )] [ i _: ]. (10.130)


714 Chapter 10: Muftirate Digital Signal Processing

which yields

Go(z) = ilAo(z1 ) + .z- 1.A; (z 1 )] = Ho(z). (10.131a)


Gt(z) = ![-Ao(?) + z- 1A1(z 2H = -HI(z). (l0.13lb)

The realization of the magnitude-preserving two-channel QMF bank shown in Figure 10.55 is obtained
by making use of Eq. (10.127) in Figure l0.53(b).
From Eq. (10.l28a), we observe that the lowpass transfer function Ho{z) has a polyphase-like decom-
position, except here the polyphase components are stable allpass transfer functions. The existence of this
type of decomposition bas been illustrated before in Example lO.lO. It has been shown that a bounded
real (BR) lowpass transfer function HG{Z} = Po(z)/ D(z) of odd order, with no common factors between
its numerator and dencminator, can be expressed as in Eq. (l0.128a) if it satisfies the power-&ymmetry
condition given by
(10.132)
and the numerator Po(z) of Ho{z) is a symmetric polynomial [Vai&?f]. It can be easily verified that
the transfer function H(z) of Example 10.10 satisfies these conditions. It has also been shown that any
odd-<lcderelfiptic lowpass half-band :filter Ho(z) with a frequency response specification given by

I - Op ::5 IHo(ei"')/ ::51. forO:::: w::::: wp, (10.133a)


!Ho(ei"")j:::: Os, for w~ :5 w :5 :rr, (10.133b)

andsatisfyingtheconditionswp+ws = :r ando; = 4-bp(l-lip)canalwaysbeexpressed as inEq. (l0.128a}


fVai87f]. The poles of the elliptic filter satisfying these two conditions lie on the imaginary axis. Using
the pole-interlacing property outlined in Section 6J 0, we can readily identify the expressions for the two
allpass transfer functions Ao(z} and A1 (z).
We illustrate the design of an IIR half-band filter using the above approach.

10.9 Perfect Reconstruction Two-Channel FIR Filter Banks


A pelfect reconstruction two-channel F1R filter bank with linear-phase FIR filters -can be designed if the
power-complementary requirement of Eq. (10.123) between the two analysis fiJters Ho(z) and H 1 (z) is
13 Becl!llse of numerical accuncy proble~ the real part of the pole,.: ace no1 exactly zero but are qmre sma.II in value and have been
ignrn-ed.
10.9. Pertect Reconstruction Two-Channel FIR Filter Banks 715

+;

/ +
+

Analysis filter bank Synthesis filter bank

Figur? 16.55: Magrutude-preserving rwo-channe! QMF bank.

o-.- - - ·~

F)~ \
"'-20~
~

Figure 10.56: Gain response of the half-band .filter of Example 10.16.

,r-·
'"' "'
'" ' '-"
1"
"' [
. '
. ' -fl5 0
Re..<l p&t
D.5

Figure 10.57: Pole-zero plot Qf the fifth-order ellipt1c ITR half-band lowpass tiller of Example 10.16.

not imposed. To develop the pertinent design equations, we observe from Eq. (10.96} rhat Y(z) can be
expre;;sed in matrix fDrm as

Ho(z)
Gt (z)J [ H, (z)
Ho{-z)
H;{-z) J[ {10.135}

From the above we obtain

X(z) l
J[ X(-z) J· (10.136)
716 Chapter 10: Mulflrate Digital Signal Processing

Combining the above two eglilltlons we arrive at

[
Y{z)
Y(-z)
J= 2 l-
l Go{z)
Go(-z)
G 1 (z:i
GJ(-z) J[ z~g~
HfJ(-z)
H,(-.z) ][
X(z) l
= ~G(-"'i(z) lH(m:(z)]T [ X(-~):,'
{10.137)

where
Ho(z) H1 (z) l
Ho(-z) H1i-z) J (10.J38)

a._re called the modulation matrices.


It follows from Eq. ( 10. 137) that for perfect reconstruction we mm.t have Y (z) z-r X (z) and,
correspondingly, Y ( -z} = (-d-f.X ( -z). Substituting these relations in the above equation we conclude
that the perfect reconstruction condition is sati'>fied if

(10.139)

Thus knowing the analy::ils filters Holz) and H1 (;::), rhe .synthc-:is filters Go(Z) and G1 (z) are determined
frum
G(m;(z) = 2 [ z~i
which yic!ds after some algebm
z~-z

Go(z) = det[~!"'l!z)J. HJ{-z). (1 0.140a)

2z-t
(l0.140b)

where
(10.140
and I! is an odd pos.1tive integec
Fm FfR analysis filters Hu(z) and H1 (z), tht synthesis filters Go(z) and G 1(z) will also be FIR filters
if
(lO.l42}
where{' is a real number and k is a positive integer. In which case, the two synthesis filters are given by

G o(.::l = 2-;;, -H-klH 1( -;:J. (lC.l43a)


c
Gt(z) = -~z-(t-kl Ho(-z). (l0.143b)
c
Orthogonal Filter Banks
Smilh and Barnwell [Smi8.4l and Mir.tz.er fMm85] inde-pendently showed how to >:boose the FlR analysis
hlters to satisfy the condition of Eq. ( 10. 142) un the determinant of Him!(;:). Let Ho(z) be. an FIR filter of
odd order JV sati.sfymg the power-symmetric condition of Eq .•: 10.l32). If we then cho05e

(10.144)
10.9. Pertect Reconstruction Two-Channel FIR Filter Banks 717

Eq. (10.141) reduces to

L -I-)
( )
det(Hm tzJ]=-~-
N ( " -
Ho(z.}Hot::: u u ·
,+~•ot-.:)uol-:.. J_ =-z -1•.'
· · (10.145)

Comparing the abeve equation with Eq. (10J42) we observe that c = -l and k. N. Using
Eqs. ( 10.144) and (10.145) in Eqs. (10.143a} and (10.143b) with f = k = /\' we get

00.146}

It should be noted that if Ho(z) is a cau:;al FIR filter. the other three filters are also causal FIR
filters. Moreover. from Eq. (10.144) it follows tl:at jG;(eJw)! = !H;(el""H, fori = I, 2. In addition,
!Hl (ei"')l = !Ho( -eiw)l. which for a real-coefficient transfer function implies that if Hu(::) is a Iowpass
filter, then H; (Z) is a highpass filter. A perfect reconstruction power-symmetric fiJtcr hank is also called
an orthogonal filter bank.
The filter bank design prubk:m thus reduces to the design of a pmver-symmetric lowpass filter Ho(::J.
To this end. we can design an even order F(z) = Ho(z)Ho{::- 1) whose spectmifactoriwtion yields Hot::_i.
Now. the power-symmetric condition of Eq. (10.132) implies that F(;J be a zero-phase half-band lowpass
filter with a non-negative frequency response F(ef'"-'). Such a half-hand filter can be obtained by adding a
constant term K 10 a zero-phase even-order half-band filrer Q(z) such that F(el"') = Q(efw} + K ::::_ 0
for all w. The half-band hlter Q{:) can be designed using any of the methods. outlined m Sections 10.7.3
and 11.7.3.
We summarize below the steps for the design of a real coefficient Ho(z)lSmi84] :

Step 1: Design a zero-phase real-coefficient FIR half-band Iowpass filter Q(z) = L::'=-N q[nlz-n. of
order 2N with Nan odd positive integer.

Step 2: Let 8 denote the peak stopband ripple of Q{eiw). Define F(z) = Q(z) + 8 which guarantees that
F(eiw) ::::_ 0 for all w. Note that if q(n] denotes the impulse response of Q(z). then the impulse
response ffn] of F(z) is given by

q[n]+J, forn=O.
/In]~
l q[n], forn =/:0.
\10.147)

Step 3: Determine the spectral factor Ho(z) of F(z).


718 Chapter 10: Multi rate Digital Signal Processing

~
§n.of

"
~0.4:
o.-:n

Figure tO..SS: Magnitude responses of third-ocderpower-symmetric maximally ftatrumlysls filters of Example !0.17.

Several comments are in order here. FU"Sl, as shown in Section 10.7.2, the order of the half-band
filter F(::i is of the form 4K + 2 where K is a positive integer. This implies that the order of Hn(z) is
N = 2K + 1, which is odd as required. Second, the zeros of F(z) appear with minor-image symmetry
in the ;::-plane with the zeros on the unit circle being of even muhiplicity. Any appropriate half of these
zeros can be grouped to fonn the spectral factor Ho(z). For example, a minimum-phase Ho(z) can be
formed by grouping all the zeros inside the unit circle with half of the zeros on the unit circle. Likewise,
a maximum-phase Ho(z} can be fornred by grouping all the zeros outside the unit circle with half of the
zeros on the unit circle. However, it is not possible to form a spectral factor with a linear phase. Third. the
~topband edg-e frequency is the same for F(z) and Ho(z). If the desired minimum stopband attenuation of
Ho(z) is as dB. the minimum stopband attenuation of F(z) is approximately 2a ... + 6.02 dB.
We illustrate the power-symmetric filter bank design in the following example.

~--
{ 1dliN!\1H j "f ill0f'
flt:t11'1}h 4/Ln '111 1J'iu :r:rnil• t•* I'H:tttmniWs
10 9. Perfect Reconstruction Two-Channel FIR Fitter Banks 719

I..em IOC>.th>m of H,RI


7.ern :oca&m, <>'R.4
c '. 0

0
05
< c 0

~ " 2 c .:£
~ J' 0 a ~ c l 0
~ " a 0 i
'
- -0_5 0 c
_, 0
-"'
_,. 0

_, ~5
05
" " R=l ~'lin
' ' "
keal Par. "'
(d) (b)
Figure 10.59; Zero locations of the zero-phase half-band filter F(;:_) and its minimum-phase spectral factor Ho(zJ .

;H, j j;;
h;,'*'t:X'
fine~ !!11 11: iwml•kf iw1l#Hu*HM MM!it:«~
'W ( t !iliw 1!¥&0'il!i¥!JJ!If
'" MJI.

In realizing the analysis filter bank, if the two filters Ho{z) and H 1{z) are implemented .independently,
the overall s:ructure would require 2(N + 1) multipliers and ZN two-input adders. However, a compu-
tationally efficient realization requiring N - 1 multipliers and 2N two-input adders can be developed by
exploiting the relation of Eq. (10.144) {Problem l0.42).

Paraunttary FUter Banks


A p-input. q-output LTi discrete-time system v."ith a transfer matrix T pq(Z) is called aparaunitary system
if'Ipq(z) is aparaunitary matrix, i.e .•

w-here T pq(z) is the paraconjugate of Tpq(Z) gi\·en by the transpose ofTpq(z- 1) with each coefficient
replaced by its conjugate, IP is an p x p identity matrix, andc is a real constant. A callSa}._stabJep.araunitacy
system is also a lossless system.
720 Chapter 10: Multirate Digital Signal Processing

.,,_ ..

'
Gr -

-!0>'


J ::l'
' ...

''If "'
•I
I I'
-W

-~!:1-! - - - - ~---
__j_ _j_
{) 0.2 H..J

Hgure 10.60: Gain responses of seve;Jth-urder powet"-symmetric analysis filters of Example 10.! 8.

YigUn' 10.61: The paraunitary lattice structure.

Jt can be eas.iJy shov.rn that the modulation matrix Fml (z) defined in Eq. (1 0.138) of a power-symmetric
filter bank Js a paraunitary matrix.. Hence, a power-symmetric filter bank has also been referred to as a
paraurU.tary filter bank.
Since the cascade of a para unitary system with a transfer matrix Ti!; (z) and a paraunitary system with
a transfer matrix: ~:;_;(z) is also paraunitary, it is easier to design a paraunitary filter bank without resorting
to spectral factorization by cascading simpler paraunitary blocks. Tc this end, the cascaded FIR lattice
structure introduced in Section 6.9.2 can be employed. The overall structure is Joss les-s as each lattice stage
shown in Figure 10.61 is lossless. i.e.• paraunitar)\ causal, ilnd stable lVai86bJ, [Vai88a]. The synthesis
procedure outlined in [Vai86hJ, fVai88a] realizes both the power-symmetrk transfer function Ho(z) and its
conjuagate quadratic transfer function Hl(Z}. Three importanr properties of the QMF lattice structure ure
structurnUy induced. Fust, fhe QMFiattice filter bank guarantees perfect reconstruction independent of ihe
lattice parameters. Second. it exhibits very small coefficient sensitivity to lauice parameters as each stage
remains lossless under coefficient quantization. Third, its computational complexity is about one-half that
of any other realization as jt requires (N - 0/2 total numbers of multipliers for an order-N filter.

[t +fvnuht 0w
""'"''~" 1rr
Gr'h:x! lr0v:tJ}H ,;t Gm+liJ"\ii>,aJ 4t'<M1 nt
11 t 14 Vff! avt vn;r P';"<lV:'\11¥. !\_t 0 PP+Mf+ i±w ,;t;rlfgj:J$ &( •
1±¥: 1
nt JJa'
·· f 0f f!:wt .>;!
~::~~':'::e:ww
• J: tJJ {!rm: HftYI\f<t ; R1K:tittlt l {L 1{1; ¥nh \ \VIMl: w$t ~ '771\1 fP: WttftY ilf
lbrtd} dW!Jt!'771\l! fY\Jjt£;;(\)Cy tdi>t;; {f!J!\h- SWE Cki 1£\tf\ i{"{;illi i!lwj ibt?rt ""'"''"""
y;rlja f"{ifM!. t]w; tllflth 2{ t1!w %BR:t<r :"DPf!'A.k:nt\ «{Ui\u&n bwNntt:ki ~ ;rlt·0ii!L t1!w t¥31G':is: tsfmtt
mw!o$01<A' torltlkjt'}il< j); ,l Cki<rh&v:r 11Hth bqnH:HDJi"
10.9. Perfect Reconstruction Two-C1annel FIR Filter Banks 721

The QMFlaUice structure can be used directly :o design the power-symmetric analysis filter Hu\::} using
an iterative computer-aided optimization technique. The goal here is. to determine the lattice parameters
k, by minimizing the energy in the stopband of Ho(z). To this end, the objective function i-; given by

(10.151)

(t should be noted thal the power-symmetric property ensures good passband response.

Biorthogonal Filter Banks


h the desigi'. of an orthogonal two-channel filter bank. the zero-phase even-order half-b-.md filter F(::.)
is expressed in the form F(z) = Ho(z)Ho(z-J) by spectral factorization and the analysis filter Ho(z) is
chosen as a spectral factor ofF(:;). Hence, the two spxtraJ factor'> Hn(::.) and H{t(Z-t) of F(z) have the
same magnitude response. As a result, i: is not possible 10 design perfect reconstruction filter banks with
linear-phase analysis and synthesis filters. However, it is possible to maintain the perfect reconstruction
condition with !inear-phase filters by choosing a diffe:-ent factorization scheme. To this end, we factori.ze
:he causal half-band filter z-N F(z) of order 2N as

::;-N F(z} = Ho(z)HJ (-z), (10.152}

where Ho(z) and H1 (:::) are linear-phase filters. The determinant of the modulation matrix H(m!(z) is now
given by
detH'm!(z) = Ho(z)HJ(-:;)- Ho(-z.)H;(<:) = z~N[F(z) + F(-z)l = z-N.
which satisfies the condition for perfect reconstruction given by Eq. (10.142). The filter bank designed.
using !he factorization scheme of Eq. { l 0.152) is called a biorthogonal filter bank.
The two synthesis filters are given by

GJ(;:) = -Ho(-z). (l0.I53.J

0 '% 4MifU" 14?"2±1


{jjhi(iij

11tznrs; llMe ' ' ' " ' '


hUWw81Vi

!l!w mlfuw ~M;~ tKPmMU}j

>
#
> +J." ~
'
;
; ;"t
j ;
~{""""t ~lt
;
# tv 1 ;,
')
/\ FH+ Hi' {jtt frblt ('!tjYril!11dn\ t<~l Eilr 1\i:j;vv; \0 \11 fJ;>U'{lr}
<1:1\<t dw'*w"' {\q Tf+t +MitrMv <<ttm~t 177<" :W"'~'~'"tMt 1V"fi' "'101: 11t1! rtl%1/l!!h¥ 1n hmv4o>;01wrv fhitYs
7:22 Chapter 10: Muftirate Digital Signal Processing

(a) (b)
Figun- 10.62: (a) Magnitude responses of Daubechie& fS/3) analysis filter pair, and (b) magJJitude responses of
Daubechies (4/4) analysis filter pair.

vo{n) V0 [n}
H o(z) Gu(z_)

"t[n] V1[n]
>(n] H 1(z} Gl(t) y[<l-]


Figure 10...63: The basic L-channel QMF filter bank structu:-e.

10.10 L-Channel QMF Banks


We: now generalize the discussion of the previous section to the case of a QMF bank with more than two
channels. The basic structure of the L-channd QMF bank is shown in Figure 10.63.

10.10.1 Analysis of the L·Channe! Filter Bank


We analyze the operation of the L-channel QMF hank of Figure 10.63 in the z-domain. The expressions
for the z-transforms of various intermediate slgnals in Figure 10.63 are given by

v,(z) ~ H,(zJX(z), (lO.l54a)

U~r(z:) = ~I: Hk{zYL Wf)X (z 11 l. Wf), (l0.154b)

'""
Vk(Z) = UK(Z:L), (lO.l54c)

wbereO:::; k _:::: L- J.
Define the vector of down-sampled subband signals U~r.(z) as

u(z} = [Uo(z) Ut{z) (10.155)


10.1 0. L-Channet QMF Banks 723

the modulation vector of the input signals as

(10.156)

and the analysis filter bank modulation matrix as

H~~wj_)
HL-!(Z) ]
. [ HL--i (~Wl)
H(m•(z) = (10.157)

Ho(zwf- 1 ) H L-1 ( Z WLL-l)

'Then, Eq. (l0.154b) can be compactly expressed in the form

(10.158)

The output of the QMF bank is given by

L-1
Y(z) = L G.~;(zrVt(z}, (10.159)
~=0

which can be expressed ln a matrix form as

(10.160)

wbere
g(z) = [Go(Z) GI(Z) (10.161)

10.10.2 Alias-Free L·Channel Filter Bank


We now develop the condition for alias-free operation of the L-channel filter bank of Figure 10.63 [Ai.94].
From Eq. (10.160), the modulated versions of the output signal are given by

(10.162)

wluch can be expressed in a matrix form as

ylm)(z) = ( Y(z) Y(zWd -· · Y(zwf- 1 ) ]T. (10.163)

Using Eqs_ (lO.l60) and (10.161) in the above equation, we therefore obtain

y\m)(z) = G(m)(z)u(zL), (10.164)

where

,
G m 1(z) = l Go{z)
Go(zWl)
.
Go(zw£- 1 )
G,{z)
Gt(zW))
GL-llZ)
GL-t(zWl>

GL-I(zwt-')
]
,
(10.165)

is the synthesis filter bank modulation matrix.


724 Chapter 10: Multirate Digital Signa~ Processing

Combining Eqs. (10.15H) and ( 10.164), we arive at the input--<Xltput relationship of the L-channel
filler bank as

Yid(<:) = _!__G~m};.::)[Him!\.(IJT X\m}(.:)


L .
'
-= T(.::Jxi'"'C:L i 10.160)

whn: TL:J = iGr"n(.::l{H 1"'li:.1]T is call.xi the tramfi"r matrix relating the input ;;igmd XL;:\ and it;;
frequency-modulated version.; X{:: wf },
L :-:: k. -"S: L · ; . rvilh the output signal Y t:::) and 'b- frequency-
modulated ver~ion!> Y(z~t-·~;. I :::: k :s· L -- I.
The filter l-ank is alia.:.fre:.- if the trnmfer matri:~: T\ ::_) i-. ;i diagonal matrix of the fom1

no. 167)
T'lc hrst e~ement T(::.) of the above diagonal matri.>.. is called the distortion transfer jun<tion of the
L-channel tiJter bank. Sutv;.tiuning Eqs.. ( IO.I54a) to 110.154<.:) in Eq. (10.159) we amve at

/.-1

Y(:) = L O-f{?) X (:Wf). dO.l6:R)


r=!)
where
I L -I
-Ur(:::) = L L, H;,:(~Wf 'G~1Z). tl!l.16'J)
k.Jl

On the unit circle the :em1 X\,~ WJ.) become-:

(10.170)

Th'l~, from Ey. ! I 0.168). the- output spe<-trum Y(ejw_) i:.. a weighted sum of X (elw) and its uniformly
shiftcJ ver-:ion~ X (e·1 '"-'- 2" 1 fl.)} for F = 1. 2 ....• L- 1, which are caused by the sampling rate alteration
np<:~rat:ons. The term X(:- W Jj is called the £1h aliasing term, with -ac (z) representing ils ;;ain. at the output.
lrl gent:raL the QMF bank of Figure 10.61 i& a linear, time-varying system with a period of L.
It fot;ow.'". r~om Eq. ( 10.168) that the aliasing effect c<Jn be completely eliminated at the output if and
only i'
(10.171)

f()f all possible inputs x[nJ. JfEq. (l 0.171) holds, then the L-channel QMF bank of Figure 10.63 becomes
a lim::~r 'im<:-invariant sy.;;tem with a~ Input-output relation gi\'ell by

(10.171)

·.~h,~r-e Tl.-) is t~e distortion trun:-;fel function given by

I I.-I
T(z) = -a0 (z) = f
L
L: Ht\z.)G~c.(z.). (10.173)
4 - -:1

If T(z) has a com.tam mJ.gr:itudc, then the system o:- Figure 10.63 is a magnitude-presen·ing QMF hank
If T(::_) has a lin-ear phase. then tht: QMF bank has no phase d:istonion. Finally, ifT(z) is a pure delay, it
is a pt·rfa·! rcrom>truction QMF hank
10.i0. L-Channel OMF Banks 725

Using Eq:;. ( l0.157l and (10.161), we can ex.pres~ Eq. ( 10.169) in a matrix form a1>

L · Aizl = H(m)\z)g(z), (10.174)

where
A<.z) = [.an(z) a; (z) c']T .
4L-I ( -J (10.175)
The aliasing cancellation condition can now he rewriiten as

(l0.l76)

where
t(;::) = !Lao(z} 0 ·· OlT = lL · T(z) 0 ··· 0{. (10.177)
From Eq. (l0.176l. it foHow~ that l:>y knowing the set of analyl>il> filters (Hk(z)}. we can determine the
deoired set ofsynthe~is filters {G~:(z)} as

(10.17!:1;

pmvided of course fdet Hi."''(z)J i= 0 Moreover, a per~et:t reconstruction QMF hunk results if we set
T{;::) = z · "0 in the expression for t(z) in Eq. {10.! 77). ln practice, the above approach is difficult to .:arry
out for a number of reasons.. A more pr.tctical solution tu the design of perfect reconstruction QMF bank
ill obtained via the polyphase representation outlined next IVai87d].

10.10.3 Polyphase Representation


Consider the Type I polyphase representation ufthe kth :malysis filter Hk(z):
~-1

H•(~) = ~ . - l F:·ti·L) O::::::k.:::L-1. (!0.179)


" -· L.... ~" ,, .

A. m.·urix representation of the al:>ove set of equation~ is given by

h(z) = E(::L)e\z), {10.180)

where

00.181a)

(IO.l8lb)
;md
J
Ef:::) = r
''
'
:._ ,t·i.- ._.,L:·)
£01 (Z)
E: 1(z)
Eo.L-<(z)
£1 L-t(z)
(l0.18lc;

EL-' L-J(Z)
The malrix E(..::) defined abO\·C i"- called the T_vpe I polJ7!hase component rrwtrix. Figure lft64(a) 5hows
the Type I polyphase representarion of tile analysis filter ba."'lk.
Likewise, we can represent the t synthesis fi:ters in a Type II polyphase fonn:
L-<
G;y:) = Lz .. o.-t-t:oRt;,(zL), (10.182}
f'-'0
726 Chapler 1o: Multirale Digital Signal Processing


Figure 10.64: (a) T'ype I polypha3e representuti0!1 of the analy~it. filter bank and (h-) Type H polyphase representation
of the synthesis filter bank.

In matrix funn, the above set of L equation:. can be rewritten a:>

(JO.HU)

where

g(;:) = [Ga(z) Gl(z.) {10.!&4a)


(iG.l84b)
and

~~~~;
Ro;{z)

l
Ro.L-;(zi ]
R11\Z) RJ,L-l (z)
R(<.) = 00.184c)

RL-l,Q(<) RL-U-I(Z) .

The matrix R{z) defined above ifi called the T.rpe /J polyphase rampownt matrix. Figure W.64{b} shows
the Type H polyphase representation of the synthesis filter bank.
Making u-se nf the polyphase repre:rent:n-ion,; of Figure 1{!.64 in Figure 10.62 and tile cm.cade equiv-
alences of Figure l0.l4, we arrive at an cquivaten! realization of the L-channel QMF bank shown in
Figure 10.65.
The relation between the modulation matrix HV"l(z) of Eq. ( 10.157) and the Type J polyphase compo-
nent Jru.trix E(t) ofEq. (10.18lc) can be ea~ily established. From Eqs. 00.157), (10.180). and (10.18ta),
we observe that

(H("'\zH 7 = [h{z) h{zW}) ht:;wf-')]


= E(;::Ll [e(;:) e(zW}_t ··· e(zwf-J)J. (10 185)

Now, from Eq. {10.l8lb}, it follow.> that

00.186)
10.10. L-Channel QMF Banks '127

E(z) R{z)


Figure 10.65: L-channel QMF bank structure based 011 the polyphase representations of the analy~>~ and s.ynthesis
filler banks.

where
A(z)=diag[l z- 1 ••• z-{L-1})- (10.187)

Making use of Eq, (10.187) in Eq. {10.185), we arrive at the desired result after some algebra;

H(z} = Dt A(z)ET (ZL), (10.188)

where D is tlte L x L DFT matrix.

10.10.4 Condition for Perfect Reconstruction


Hthe polyphase component matrices of Figure 10.65 satisfy the relation

R(z)E(z) = cl, (10.189)

where I is an L x L identity matrix. and c is a constant, the structure of Figure 10.65 reduces to the
one shown i.n Figure 10.66. Comparing Figure 10.66 with Figure 10.63, we note that the former can be
considered as a special. case of an L-channel QMF bank if we set

Gk(Z) = Z-\L-1-k}' O::;:k::;: L-1. (10.190)

Substituting the above in Eq. (10.169}, we arriv-e at

( l _
aez.-L~z
L-1
_!_'""' -kw-.ek -<L-1-kJ
L z
_
-z
-<L-t>
(
L~
£-1
_!_""' w-"
L
)
· (10.191)
k=O k=O

From.Eqs.(Hl9)and{IO.IO),itfollows.thatao(z) =I and.a..e(z) = Ofori ::f:. 0. Hence,fromEq.(IO.l73),


we note that T(z) = z- (L-l), or in other words, the structure of Figure 10.65 is a perfect reconstruction
L-channel QMF bank if the condition of Eq. (1 O.t89) is satisfied.
The anal~•sis and synthesis filters of a perfect reconstruction filter bank of the fonn of Figure 10.65 can
be easily determined as: illustrated in the following ex.ample.
728 Chapter 10: Multirate Digital Slgnal Processing

Figure 10.66: A simple perfect reconstruction mullirate system.

Flgore 10.67: A duee-d:.:mn.el anal;r·sisisyntheslS filter bank.

,, '
'l
:r+d' 't t
I
{, I
wr•tti:JJtis:t 'W&tmw.i\lt&
"
{qf
"" [: 1
11 Jt'•
+

Now. for a given L-channel analysis filter bank, the polyphase matrixE(z) is known. From Eq. 0 0.189)
it therefore follows that a perfect reconstruction L-channel Q.MF bank can be simply designed by con-
structing a synthesis filter bank with a polyphase matrix R(z) = [E(z)}- 1 • [n general, it is not easy to
10.10. L-Channel QMF Banks 729

Figure 10.68: Gain responses of the three-chan!lel FIR QMF bank of Example .10.22..

compute the inverse of a rational L x L matrix. An alternative elegant approach is to design the analysis
filter bank with an invertible polypha.._.;e matrix. For example, E(zj can be chosen to be aparaunitary matrix
satisfying the condition
E(z)E(z) = cl. for all z. (10.192)
where E(z) is the paraoonjugate of E{z) given by the transpose of E(z-t ), with eoch coefficient replaced
by it" conjugate and cboosing R(z) = E(z).
For the design of a perfect reconstruction FIR L-channel QMF bank., the matrix E(z) can be expressed
in a product form [Vai89}
(10.193)
where Eo is a constant unitary matrix, and

(!0.!94)

in which vc is a column vector of order L with unit norm, i.e., [vf] 7 ve = l. With this constraint on E(z:),
one can set up an appropriate objective function that can then be minimized fo arrive at a set of L analysis
filters meeting the desired passband and stopband specifications. To this end. a suitable c~ective function
i<; given by

¢
L-'
~L
k=l)
1 ktb~
IH,(e'"li
I
2
dw. (!0.195)

The optimization parameters are the elements of Vi and Eo.

t'I!Ll± 'Th\w 4 \W!Hfli''f tfttt :;f Ill


t!f 11Wtd0frr t; 'Viwr t;f 10v:
'0111'\&m '< ,u~ rt;:t
fv Vm• Wii 111 t'tf!f(;t\tar M f»e
730 Chapier 10: Multirate Digital Signal Processing

10.11 Cosine-Modulated /.-Channel Filter Banks


The cosine-modulated filter banks were originaJly developed to provide nearly perfect reconstruction with
aliasing canceliation between adjacent channels and asswne no aliasing bet..,.een nonadjacent channels
due to infinite stopband attenuation of the analysis filter in all nonadjacent bands [Rol83], [Chu85]. Even
though, the laaer a<:sumption do not hold in practice, these filter banks. calledpseudo-QMF banks, provide
quite satisfactory performance if the stopband attenuation IS sufficiently high.

1 0.11_1 Derivation of the Fmer Bank


The pseudo-QMF banks are derived from a modified form of !he uniform DFf filter banks. Let
N
Po(z) = ,L polnl.C" (HU%)
fi=O

<M~note the prototype lowpass filter with real coefficients and a cutoff frequency at rr/2L. We generate a
se.t of filters Qk{Z) from Po{z) by complex modulation at frequencies (2k + I )rr j2L = (k + 0.5)rr/ L as
faUov.;s:
0::::: k ~ 2L- 1. OO.I97)
wtlere W2L = e- J:rr/L. Becau&e of the complex modulation, these filters have complex-valued impulse
responses. Note from rt1e .above, the response of Qo(z) is a right-shifted version of the response of Polz)
shifted by rrj2L. Because of this shift, JQ,t(ei<»)l = IQn-I-k(e-1"")1. and the impulse response of
Qn-1-k (z) is complex conjugate of the impulse response of Qk(Z), for 0 ~ k ~ 1" - l. The pair Qk(Z)
and Q2L-J-.t(z) are combined to generate a filter wirh a real impulse response.
Define !he intermediate transfer functions

(10.198)

The L analysis filters are then formed according to


N
Hk.(Z) = ,L h,[n]z-" = akUk(z) +akVk(Z), (10.199)
fl=0

Likewise, the L <>ynthesis filters arc fonned according to


.v
G.dz) = Lgk[n]z-n =bkUk(Z)+bkVk{Z), O::;:k::::;L-1. (10.200)
ll=C

In che above equations, ak, bk. and q are unit-magnitude constants. They are chosen ro provide alias
cancellation between adjacent channels and to ensure that the distortion transfer function T(z) has a linear
phlSe.
Consider the second channel. The output of the filter G:;:(z) has the components H 2 (.zWf}X(zWf.),
0 :: f. :::::: L- I. However. as can be seen from Figure 10.69, the responses of U2(zWL) and U2(zWL 1) do
not overlap with thar of Uz(zJ. On the other hand, response~ of U2 (zWL 2} and U2 (zWL 3) overlap with
that of V2(z). Likewise, the responses of V2{zWf) and V2(zWl) overlap with that of U2 (z). In general,
"sig.nifi<::ant" alias components of X {z:Wj) at the output of GK(Z) correspond to- values cf

£. = \-k + lfL, \-k}L.k. k + L


·10.11. Cosine-Modulated L-Channel Filter Banks 731

'
V2\tWM)

,.- - I- -._ r-y:·--.


, '
,
,, ,
,
'

L___i,W_'---·~-W
__;;_
_ll!:
M M 'M "'
M

Figure 10.69: Illustratior. of alias componem~ overlapping with IG k(« i"')j_

Similarly, significam alias components of X (z Wi) at the output of Gk-1 (z) co-rrespond to vaJne<; of

i = (-k}L. (-k + lf,._, k- l.k.

An additional requirement in the design of the filter bank is to have no phase distortion. To this end,
the di!>tortion transfer functi-on T(z) as given in Eq. ( l 0.173) should have linear phase. This is achieved if
t:-.e synthesis filters are re-lated to the analysis filters according lo

(10.201)

[( can be shown £hat the constants, ak. b,.;, and q, can be cho-sen to cancel the common aliasing
components X(;;: Wj'k} at the outputs of G~c (z:) and Gk-1 (;::),and to make the distortion function T(z) have
!:.near phase resulting in closed-form expressions for the analysis and the synthesis filters which are related
to the prototype filte-r p0[n I b}' wsine modulation [Chu851. fRot83], [Vai93]:

h.~:[n}=lpo[n]:::os((k+~)(n-4)~ +(-1).1:%),
v) -~L -(-1)-;_; .
gdn1=2po[n]cos\I( k+ 1'.}ln-T
. *") (10.202)

11 should be r.oted that if the prototYJX filter Po(z) has linear phase. the distortion function T(zj has linear
phase. However, in general, the analysis and the syntbesif. filters do not have- linear phase.

10.11.2 Prototype Lowpass Filter Design


1-iem:e. the desigr:. of the cosine-modulated filter bank reduces to the design ofihe Lth-band prototype filter
po[n] such that the magnitude response jT(e1"")1 of the di~1:mtion transf« function T(::) is approximately
fiat for all values of w. To this end, the prototype lowpass filter Po(z:} should satisfy us much as pos..<oible
tte following !wo conditions:

n
O<W< ( 10.203)
l.
and
(10.204)

Tite QMF bank does not exhibit any amplitude distortion ifEq. (10.203) is satisfied exactly. whereas there
is no aiiasing between nonadjacent channels if Eq. (10.204) holds. As noted earlier. aliasing between
ru:~acem channels is canceled structurally,
7~2 Chapter 10: Multirate Digital Signal Processing

T:,~ d:.:-;;ign of the Lti-banJ FIR prototype tiller ";~li<,fJI'.lg: both •.r_,. condition-. •A- E.q;,. ( !{L203) ami
1 i0.2fL\.) j,; not tJOSsible. A re;:tll\.ely straightforward U<:'>J!l'' appm11ch maKL"> uso.:: of !he popular Purks-
McClelb:-~ meiltod 14 t0 design the prototype lOWfXlS" !Jlter : C r~·'J5:. T1c two rond1tior.c. o-f F..qs. ( l 0.20:\ l
and ( 10.204) are ~;:tis tied appru.\utl.th.·ly b_y .rdit•sting ikrJ.tl\ ,-j :· the pa-;<,haod edge IP nlilllrllizc rht: uhjec:ive
funclitm
--.<r{L; !'
' l
-- l . ! i0.2fJ5)

The t:lkr ~ength. -..tnpb:wd ,·J;;c <•! if I r.


and the rda!J\T dmr weighting are kept fi:u;d durmg the oph-
J uit.;ltl, >lljl'HlCedun.·.
!; !!;,,. hct'll found thn;agh e\tensive de-.ign examples. u'"ic: ubjeU1ve function of Eq.! !0.21l5i 1:<. convex
1-, uh '"~pect to the pa;;sb<::-td e•lgc-. A~ -<J. result, the algorithm converges to ;he same global mlnima for
,tf(v init:ai '>laJ1tllf' valm: of the pa~<-band eJge The :iller lt:ngth i:> Jc:-ermlnetl by Jixirg the inirial value
!lr;he passband edge at wj2L ;,uLb that the Ccsired stqpbanJ attenuation is ohlained. At each step of the
optum:r.1twn process, Ihe passbomd ct.lge rnuves clo~er to :t.cn: which in turn increases slightly the s:>:~pband
<tUc:nuation. It is also recommended that the passband crn>r be weighted more heavily than the "tophand
attenuation.
Th.e follo~ing MAn.<>._R M-fi.le-. can be us~ to de-.lgn l~'le pseudo-~ bank ba;,ed on the abO\T
method. 15

delta= Cl.oc-::.;

' '_) Co.OOCG01;


: t cp * ;:>aSS(':'Cg'C;

F.[.'V 1;
-.!..
b--CJS( J ::;-;
,-';(~ ~ 0:
' ·.~.:-Ltc' ~ <';::JJ.'::l:n~:

_:':lop:::; - flop.s;

;,,;:ilEc flAy==:;
1-:o::::t ;:·eT.=oz.\N,: C,possedg2, st-ope,i,;;c, J], [1, 1, 0, C], /5, "-J);
;_1, = :-;t o--,:::;-)1 ,t~cL-Sl;

rlf-l CfVl)">_r )]}}'::_"' ICf,l·:C-.:H:_,·io_;);

";t_ccf_;-';~;

14See Scni,m 7 .7 l
1~The~ MA Tl ..>\ H f.m<.·nom we,-., wrilte-u hy C !)_ Ctt"~""e
10.11. Coslne-Modula1ed L-Channel Filter Banks 733

f\ag = 1;

pcost = t:·o:;::;
tkLs:.~:oJdge = t>:lssedg? 1 way*s::e:;J;
Pnd
=~na.l tune = cpc.ti~ne s_::ime;
t.ota!_f"1ops = flo-ps - s_: :Cops;
c::ave hopt:.mat hop:.-asci.:._;
?a.V\-'> pao;o; passedqe -ascl_;

funct_ic:l H = cow;._;_f.::at:{~--!~:I,Q)
1i Cd:i.culates cosL functio:-t (ovsr1ap):t=>d r.:.pple in passband)
1; be_~ ng mirici zed.
G max(siz~(!Ln)l;

fi8ot·(~043/Ql; H
for k~.l:M
H\k:• = abs(HiL(M
r~nd

func tJ.o.::-, [E,G: = nake_ta.nk{:t:,nDandS!


'15 : t-_, G] mak-e_ba11 k ( h, r.bar.ds;
=
% "T"b.'..s f'.Jnct.i-::.n c:ceatcs tt-.e fille-rs for a pseuCc-QMF filter banks
'15 Hi :..Jc :L~mheL ot La::',ds = r..bans.s
f l er1 r;-;ax uo.l z,~ (hi ;
sqr::: !2},"L;
J or k = l. :n::.Ca:1ds
ai2 .. k-1) t i*t;

Pn~.l
for k= ~: Qbar:.t'l~,

ml = ccs{p_;_*(2*k-1J*;2*.:i_-::.)/(.:l*rba:-.ds;};
m2 0~n(pi* C?*k-l;"" (2-"1-il; 1,4"nhcor;.j_s));
Hlk,il = ).O*ireal{a(ki )*T]_ - imag(a(kJ l*m2)*h(l)

.--nl ~ ccst:;:::i~(2*k-l)""i:C*l<..l "(4*ubands));


:n2 s-:n\}::l*:2*k-1)"";2.,..l--1J '(4"-!l!:ands;);
=
G(k, 1; ~ 2.0* (.n:::;al. (a(<!) "'"ml i~;ctg(a(kll*m2)*hil)
end

We illuslra!e their use in the fo!lowing example.


734 Chapter 10: Multirate Digital S~na! Processing

()_(!)-

''H
'
/ I
'

i "'
"' i

(a) (b)

Figure 10..70-: (a) Gam respomes of the & analysis filters and (b) reconstruction error_

sis!
!::d}k/t i}: X ¢iij:k¥Jii 0i!G At

10.12 Multilevel Filter Banks


In Section 10.10, we analyzed the general L-channei QMF bank and developed the conditions to be
satisfied by the analysis and synthesis filters for perfect reconstruction. It is also possible to develop a
multiband analysislsynthesi'i filter bank by iterating a two-channel QMFbank, Moreover, if the two-band
QMF bank is of the perfecl reconstruction type, the generated multiband structure also exhibits the perfect
reconstruction property (Problems 10.64 and 10.65). In this section we consider this approach.

10.12.1 Filter Banks with Equal Passband Widths


By inserting a two-channel maximaHy decimated QMF bank in each channel of anolher two-channel
maximally decimated QMF bank between the down-sampler and the up-sampler, we can generate a four-
channel maximally decimated QMF bank, as shown in Figure I0.63. Since the analysis and the synthesis
filter hanks are formed like a tree, the overall system is often called a tree-structured filter bank. It should
be noted thal in the four-channel tree-structured filter bank of Figure 10.71, the 2 two-channel QMF
banks in the second Jevel do not have to be identical. However, if they are different QMF banks with
different analysis and synthesis filters, to compensate for the unequal gains and unequal delays of the 2
two-channel systems, additional delays of appropriate value~ need to be inserted at the middle to ensure
perfect reconstruction of the O'>"erall four-channel s:ystem.
An equivalent representation of the four-channel QMFsystem of Figure 10.71 is sbown in Figure 10.72.
The analysis and synthesis filters in the equivalent representation are related to those of the parent two-level
tree-structured filter bank as follm\'s:

Ho(z) = HL{z)Hw(z 1 ), H1 (z) = HL(z)H11 (z 2 ). (10.206a)


H2(z) = HH(z)Hw(z:2). HJ{Z) = HH(Z}Hu(z ),
2
(10.206b)
10.12. Muftilevel Fdter Banks 735

x{nj y[n]

Ftgul't' 10.71; A two-level four-channel maximally decimated QMF structure.

x[n] y[n)

_ Figure 10.72:; An equivalent representation of the fm~r-channel. QMF structure of Figure 10.71.

Go(z) = GL(z)GIO(<:?), Gt(Z) = GL(Z)GJt(:z 2}. (10.206c)


G2(z) = GH(z)GtO(z 2), G3{z) = Gn(z)Gu(z 2). (10.206d)
736 Chapter 10: Multirate Digital Signal Processing

Figure 10.73: Gain res.ponses of the four analysi5c filte.n of Example 10.24.

'11:::1!+ r
,, tS::k' I,
'
I '
'" ¥

From Eqs. (l 0.206a) to (I 0.206d) it can be seen lhat each analysis filter Ht(Z} is a cascade of two filters,
one with a single passband and a single stopband and the other with two passhands and two stopbands.
The passband of the cascade is the frequency range where t_lte passbands of the two filters overlap. On
the other hand, the stopband of the cascade is fO£med from three different frequency ranges. In two of
the frequency ranges, the passband of one coincides with the stopband of the other. while in the third
range, the two stopbands overlap. As a result,. the gain responses of the cascade in the three regions of the
stopba."ld are net equal, resulting in an uneven stopband attenuation characteristic. This type of behavior
of the gain response can also be seen in Figure 10.73 and should be taken into account in the design of the
tree-structured filter bank.
By continuing the process described above, QMF banks with more than four channels can be easily
constructed lt should be noted that tbe number of channels resulting from this approach is restricted to a
power of 2, i.e., L = 2". In addition, as illustrated by Figure 10.73, tbe filters in the analysis (synthesis.)
branch have passbands of equal width given by JT i L. However, by a simple modification to the approach
we can design QMF banks with analyYs (synthesis) filters having pass bands of unequal width as described
next.
10.12. Multilevel Filter Banks 737

(a)

(b)

(c)
Figure 10.74: \a} A tw-o-channel QMF bank, (b) a three -channel QMF bank derh-'ed from the tv.u-channe1 QMFbani:,
and (cJ a tOur-channel QMF bank derived from the three-channel Ql\.fF bank.

10.12.2 Filter Banks with Unequal Passband Widths


Consider the nvo-c-hannel maxlmally decimated QMF bank of Figure 10.74(a). By inserting another two-
channel maximally decimated QMF bank in the top subband channel between the down-sampler and the
up-sampler at the position marked by a *• we arrive at a three-channel maximally decimated QMF bank.
as shown in Figure l0.74(b). The equivalent representation of the generated three-channel fi1ter bank is
indicated in Figure 10.75(a). where the analysis and synthesis fillers are given by

Ho(z) = HL(z}HL(z 2 ), H1 (z) = HL(z)HH(::,Z), H2(z) = HH(Z},


2
Go(z) = G L(z)G L(z ), G: (z) = G L(z)GH{Z 2), G2{z) = G H(Z). (10.207)

Typical magnitude responses of the analysis filters of the two-channel Q11F bank of Figure 10.74(a)
and that of the derived three-dunnel filter of Figure 10.74{b) are sketched in Figure 10.76(a) and (b),
respectively.
We can continue this process and generate a four-channel QMF bank from the three-channel QMF
bank of Figure 10.74(b) by inserting a two-channel QMF bank in the top subband channel at the position
marked by a *• resulting in the structure of Figure 10.74(c). Its equivalent representation is indicated in
Ftgure 10.75(b). where

Hot.<.)= HL~z)HL(:_2)HL(z 4 }, HJ(Z) = HL{z)Ht(z2)HH(Z 4),


1
H2(:) =
HL(Z)HH{Z ), H)(Z.)= Hn(z),
Go CO= G L £z)G L(z 2)GL (z 4 ), G1(zl = GL{z)GL{z 2)Gn(:: 4 },
G2(z) = GL•:z)GH(Z 2 ), G3(z) = G H(Z). (10.208)

Ftgure 10.76(;;) shows typi<::al magnitude responses of the analysis (synthesis) filters of the four-channel
QMF bank of Figure 10.74(c) derived from a parent two-channel QMF bank: v.ith magnitude responses as
indicated in Figure 10.76(a}.
738 Chapter 10: Multirate Digital Signal Proce&Sng

_r:n 1 yjn]

_y{nl

Figure 10.75: Maximally decimated QMF banks with unequal passband wldth analysis (synthesis) filters.

Because of the unequal passband widths of the analysis and synL"Jesis filters, these structures belong
to the class of nonuniform QMF banks. The tree-structured filter banks of Figure 10.74 are also referred
to as actave band QMF banks [Fli94]. Various other types of nonuniform fiber banks can be generated by
iterating branches of a parent uniform tw(K:hannel QMF in different forms. Nonuniform filter banks are
often used in speech and image coding applications_ The former application is discussed in Section 11.8.

10.13 Summary
The basic theory of rnultirate digital signal processing is introduced in this chapter along with the design
of some useful multirate systems. The two basic sampling mte alteration devices are the up-sampler and
the down-sampler. We first discuss the input-output relations of these devices, both in the time-domain
and in the frequency-domain. We then describe severn] cascade equivalences that are used to develop
computationally efficient implementation of a multirnte system by permitting the movement of the up-
sampler and down-sampler from one part of the sy~m to another part.
The sampling rate converter is implemented using either the up-sampler -or the down-sampler or both,
and a law-pass digital filter. The design issues for the samp:ing rate conversion are discussed next. We
demonstrate that a computationally efficient sampling rate converter can often be designed as a cascade
of such converters. We also outline another approach to the implementation of a computationally efficient
sampling rate converter that is. based on the use of the polyphase decomposition of the lowpass digital
filtt~-
The concept of analysis and synthesis filter banks is ilien introduced and a method for designing such
fi1ter banks from a prototype lowpass transfer function is described. A vecy special type of digitallowpass
filter. called a Nyquist filter, which is particularly attractive for computationally efficient sampling r-ate
convener implementation. is considered next.
The remainJng part of the chapter is devoted to the analysis and design of the so-called quadrature-
mirror filter (Q;_\1F} bank that is fonned by a combination of an analysis filter bank with down-sampled
outputs followed by a synthesis f.ltec bank with up-sampled inputs. Conditions satisfied by the analysis
and synthesis filters for an alias-free operation ofiheQMF bank are derived. Severnl types of QMFbanks
10.14. Problems 739

Ho HI

m
0 ,.~
2 ~ •
(<)

Ho H\ H,

0L---~----~--------~---ro

2
(b)

(c)

Figure Ui.76: Magnitude responses of the analysis fikers of a (a) two-channel QMF bank, (bi three-channel QMF
bank derived from a tWQ-channel QMF bank, and {c} four-channel QMF bank derived from a three-<:hannel QMF
bonk.

are defined and their design equations are developed.


Several other applications of multirate discrete-time systems are outline-d in Chapter l I. These include
subband coding of speech and audio signals (Section 11.8), transmultiplexen; for signal conversion between
frequency-division multiplex (FDM) to time-division multip-lex (IDM) communication systerm (Section
11.9), discrete multitone transr.1ission (Section 11.10), digital audio sampling rate conversion (Section
11.11 ), oversampling analog-tn-digital conversion (Sect ton I I .12), and oversampling digital-to-analog
conversion (Section 1 Ll3).
For additicmal details on multirate digital signal processing, we refer the reader to the texts by Akansu
and Haddad [Aka92], Crochiere and Rabiner [Cro83J, Fiiege [Fli94]. Vaidyanathan [Vai93], and Vetterii
and Kovacevic {Vet95).

10.14 Problems
10.1 Show that the up-&ampie defined by Eq. (10.1) is a time-varying system..

10.2 Show that the up-sampler defined by Eq. (10. i) and the down, sampler defined by Eq. (10.2) are linear systems.
74J Chapter 10: Multirate Digital Signal Processing

103 Express the output y(nJ of Ftgun:: 10.6 as a hmction of the inpur x[nj. By simplifymg the- expression derived
;how that y[n I= x[n - IJ_

lOA Prove theidentityofEq. (10.10).

IO.i'i Show Ulat the two possible cascade configurations of a fat-'tur-Df-L up-;;ampler and a factor-of-M down-sampler
shown in Figure IO.l3 are equivalent if and only if L and M ;rre mutually prime.

10..1) Verify the cascade equivalences. of Figure 10.14.

10.'1 Develop an expression for the output ylnJ a'! a function of the input xlnl for the multirate 4mcture of Figure
P W. L [Hint: Replace the factor-of-10 down-sampler with a cascade of two down-samplers, and then make use of the
ideutityofFigme 10.13.1

x{n] y[nJ

Figure PIO.t

IO.U Consider the rnultirat:e structure of Figure Pl0.2{a) where Ho(z), H1 ({!, and H 1 (z) are, respectively, ideal
zero-phase real-coefficient Jowpass, bandpass. and high~ass filter;; with frequency responses as 1ndK:ated m Figure
PJC.2tbt If the illput is a real sequence with a discrete--time Foune-r rransfonn as shown in Figure P10.2{c), skett.:h
the discrete-time Fourier transfo.rms of the ouqruts )-o[n]. y 1[n ), and Y21n }.

x[n) y 1[.11]

~y2[n]
(a)

H (e.f<J.') H (ejJ;)
1 2

n
w
j D j
0 •13 2n/3 • 0
I
I
2::ti3 R
w

(b)
X(e''"')

j 0
/~
n./3 h/3 •
(c)
Figure PlD.2

10.9 Show that the tr:rn!lp()<reof a faclor-of-M decimator is a factor-<:>f-M interpolatGrifthe transpose of afactor-of-M
dow wsampler is a factor-of-M up-sampler.
10.14. Problems 741

10.10 Show that the transpose of an j..f -channel ana:y~is filter h;mk is an M -channel synthe&is bank.

IO.J 1 Develoo an alternate two-~tagc dc.-jgn l'f the decimator oi Example 10.8 by dnigning the dedmation filter in
the form H \::). = G ri'J Ff ,~). Cnmpare its computational requirement!; with that of the design in Example 10.8.

10.12 Rept!al Problem 10.11 foe a filter uf !he form H (7) = G{~ 5 )F(z}. Compare tts computational requirements
with that of1ht> de~igm in Example lOJJ and Problem lO.ll.

10.13 Jcter.nin<' the ..:omputational complexity of a f>ingle-<;tage decirnator designed to reduce t.'le sampling rate from
6[; k!Iz to 3 k!Iz. The decimation filter is lobe designed as an equiripple FIR filter with a passband edge at 1.25 kHz,
a passbnd ripple of 0.02. and a stop::.a."ld ripple vfO.O'.. Use the total multlp'.icatiorn; per second a;; a mea;;ur.e of the
C•.JmpuLational complexily.

10.14 The dccimator ofPro~lem 10.13 is to be designed as a two-stage structure. Develop an optimum design with
!he ~nntlest cumputatiorcal comp!ex.ily.

10.15 (a) Determine the cr>mputational complexity of a single-stage interpolatm to be designed to increaM: the
~amplir.g T<itefrom 600 Hz w 9 kHr.. The interpolato-r is to be designe-d as an equiripple HR filter with a
pas,.;band edge at :ZOO Hz, J passband npple of 0.002, and .a stopband ripple of 0.004. Use Kajs,er's formula
given in Eq. (10.26) ;:o e:U:irnate the order of the AR filler. The measure of computational complexity is given
by the total number of muhiplication!< per second.
(b) Devdop a two-stage design of the above mterpolator and contpare !ts computational complexity with that of
the single-~tage des1gn.

11l.16 Using the meth<YJ uf Eq. ( J0.75) develop a two-band polyphase decomposition of each of the foii(W;ing Iffi
tiansfcr function,-
(a) H:(<:)= l'O+P!Z. ,id1i <I,
-'
I +dp: '

(b;
2 + 3 L:- 1 + 1.sc1
1 -<- n_9~ 1 + o.s::- 1 •
2+ 3.1.::- 1 + J.sz- 2 +4.z>'
0.5.:: ; }(I + 0.9:: l + 0.8:: 2).

1•:).17 U&ing the method of Eq. (If;. 75) devel.op a three-band polyphase dewmposition of each of the following IIR
transfn function!':
PO+ f!F
-'
(a) HJ(Z_! = . , ld;! < ~-
1 + t!,.:: •
~'_+~3c·1c'c-,'~+_lc·c5c'_-"'
(h) ff2\z) =
I +0.9< 1 +0.8z 2·

JC).UI Develop a computationally cffidenl realization of a factor-of-4 interpolator employing a latgth~l6linear-phase


FR !liter.

10.1'): Dc•dvr a n>mputationally efficient rcrlizatiun cf a factor-of-3 decimmor empl_oymg a length-15 linear~phm;e
F: R filter_

W.W Show lffilt the runmng-:..umjilrer. also .:ailed the hox('ar filter. H(z) = L;'~(i 1 z'. can be expressed in the form

wh;;:rc i'/ = 2K. Develop a computatwnally efficient realization of a factor-of-16 decirnator using a Iength-17 boxcar
filter.
742 Chapter 10: Multlrate Digital Signal Pr.ooesslng

10.21 The multirate system of Figure Pl0.3 is usually employed in exchanging discrete-time signals between two
dis:rete-!ime sy~tems with incommensurate sampling rates [Cro83J. Samples at the input of the digital sample-and-
hold cin:uJt may often be repeated OT totally dropped resuhing in an error in the overall sampling rote conversion
precess. Let £ denote the ratio of the energy :n the ~le-to-sample difference signal to that ia the original signal
y[r,] ac the outpct Qf the fuctm-of-l. interpolator, and let C denote the sample-to-sample correlaOOn of the signal y[n l
Express£ as a function of C. and show that as. L becomes large,£ be=mes small i.e., the error in the overall sampling
m~: <::onversion process becomes small.

Factor-of-£ Factor-of-£
Interpolator Decimat.or

x[n] Digital Sampi y{nJ


and Hold
t
L(Fr +E.l) L(Fy +t2)

Figure Plo.J

Hl.:Zl The mu1tirate system of Figure Pl0.4 implements a fued delay of L/ M samples where L and M are relatively
prime integers {Cro8-3]. Let H(:;} be li Type I length-N linear-phase FIR lowpass filter-with a cutoff atJrf Manda
pm:shand magnitude approximately equal to M. Develop the relaOOn bet>h'eell the DTFrs Y(eiw) and X (ej.w) of the
oUlput y[!l] and the input :.:[n], respectively. assuming N =
2K M + I. where K is a positive integer.

>in]
x[n] H(z) y[,!]

Figure P10.4

10.23 Let an ideallov.rpass filter H (;:) with .a cutoff at Jr/ M be exPfessed as

M-l
H(;:.) = L z.-kHt(zM).
,..,
Show that each polyphase subfiltec H;, ( ::) is an allpass filter.

10..24 Prove Llx: Identity s.lrown in Figure 10.34.

IO.lS A generalization of the polyphase decomposition is considered in this problem. Let H(z} be a causal. FIR
transfer function of degree N - ] with N even:

N-•
H(z)

(a) Shm>.· that H(::) can be expressed in the form


=
.L
~
h(n]z-" .

H(z) =0 ..;.. ::- 1)HoU: 2) + (1 - z- 1 )H 1(::h. {10.209)

{b} Expres." Ho(z) and H~ (::) in terms of the coeffiCients of tbe polyphase components Eo{z) and E 1(;;.). The
decomposition of Eq. (10.209) is an example of a generalized polyphase decoU~pQSition. ca:Ued the structural
.wbband der:ompositWn [Mit93].
10.14. Problems 743

(c) Show that the decomposition of Eq. (10.209) can be-expressed in tile form

H(z) = [1 : -1 [' '][Ho(z2)J


] t -1 Hl(Z2l . (10.2101

The 2 x 2 matrill in Eq. {10.210) is called a Hadamard matrix of order 2 and denoted by R2.
(rl) Show that if N = 2L, then H(z) can be expressed in lhe form

Ho(tL)
HJ(l"L)
J {1021 \)
/l(_L) =11

HL-J(ZL)

where RL is an L x L Hadamard m.alrix. Express the structural subband components {fl;(t)j in terms of the
polyphase components {E, (z)l of H(z).

10,26 Develop a comput.ationaHy effident realization of a factor-of-4interpolatoremploying a length-l61inear-phase


flR filter H(z) and using a four-band stmctural subband decomposition of H(~) in the fonnofEq. (10211).

10.27 Design a fr~iorul-rate interpolator with an interpolation factor of 415 using the Lagrange interpolation al-
gomhm. Use a fourth-order polynomial ap]'H'oXirnation. Develop realizations of the interpolator based on a btocli:
filtering approach and the Farrow ~tructun:.

10.28 Lel hfnj denote the impuhe res-ponse of a lowpass half-band filter with a zero at~ = -I. S~· that

..=-=
-~

10.29 Show ttw! !he following FIR linear-phase transfer functioru; are lowpass half-band fillers [Goo77{. Plot the.ir
magrutude responses ooing MATLAB.
(a) HJ(Z-) = 1 +2z- 1 +z-2_
(b) H2tzJ = - I + 9z- 2 + 16z- 3 + 9z- 4 - z-6,
(c) H3(::,) = -3 + 19::- 2 + 32.;::-J + l9z- 4 - Jz-6,
(d) H4(z) = 3- 25z- 1 + 150z- 4 + 256.;::- 5 + 15Gz-ti- 25;::-a + 3z-w.
(e) Hs(z) = 9- 44:;:<0 + 208z-~ + 346:- 5 + zogr6- 44z-B + 9z- 10.
11).30 Consider tbe two-channel analy;;is filter bank structure of Figure PI0.5 where Ho(z) is a length-6 FIR filter
with a transfer function given by

{!0.212)

Jf H; (z) = Ho\ -z_)-, dev~lop a realiz-ation of this filter bank with five delays and six multipliers.
744 Chapter i 0: Multira1e Digital Signal Processing

10.31 Consider the two-.;hanne\ analysis fiLter bank structure of Figure P10.5 whef-e Ho{Z) is an FlR filter of the form
of Eq.. (10.212). If H1 (z:i ha~ a tran5Jef function which is the rn~rror >mage of Ho{z!, i.e .. H1 \::} = z-_'i Ho(;;- 1 },
develop a realization of th1s filter bank with only s-Jx multipliers.

10.32 The four-channel analysis filter bank of Figux P10.6(a), where Dis a 4 x 4 DIT matrix. is characterized by
tnu~sfer functions: H,(<-) = Y; (z)l X(z), i = IJ, i, 2. 3. Let the transfer functions of the fOUI subfilters
tl:-e set of four
he given by

Eo(z} = ! +0 3z-l- 0.8;:- 2 , Ej{z) = 2- L5z- 1 +3.tz- 2 ,


E2(.;:) = <1- 0.9z- 1 + 2.3;:- 2 . E;(z) = l + 3.7z- 1 + l.?z- 2 .

(a) Determine the e.~preS>;iorn;


for the four transfer functions. !10 /_':), H 1{d, Hz (z), and HJ(Zl-
(h) As;;u:ne that the analysis filter H2(z) has a ;nagnirude re~ronse as mdkated in Figure P10.6(b). Sketch the
magnituC:e responses of the other three analysis filters.

X(L} fo\z)

Y (zl
1

Y2_ (z}

Y1(z)

1Hz{e1m:JI

't0
L ' ,,)
3rr
T
'
T
,,' w

(b)
Figun! P10.6

10..33 Consider the analysis-synthesis filter bank Qf Figure Pl0.7. Develop the inpul-output relation of this structure
in dle :-domain. Let Ho(z) = {I + z- 1)!2 and HI(Z) = (I - z- 1}/2. Determine the synthesh filters Go(z) and
G 1(z) so that the structure of Figure Pl0.7 is a perfect reconsc:-uctioo filter bank.

X(;:)

Figun P10.7

10.34 Let the am1ly~is filteP..: Ho(z) and H1 (Z) of the structure of Fig:ae PJ0.7 be power-compleruentary FIR filters
of order N each. ·
10.14. Problems 745

{ai Show that thil> strudure be;:omes a pertect recon,.trunion tilt.:;; bank if the s~nthesis fitters Gu(z) and G J (.:)
are chosen a'<
, (
( roz!=.: -Ntr( -: c' ) -NH 1 (Z -I }. ( !0.213)
noz ). vI \.Z = Z

(b) Show that the synthesis filters are cau.~al FIR filters if the analyt;is filters are causaL
{c) ShO'flo· that !he analysi~ :.~nd .<yr.thesis filtCP.!' sati:.fying the perfect reconstruction condition cannot all be of linear
pha;;e.

]0.35 Show that the twu-ci'..annel QMF bank• .in genera:, is a linear, tune-.,arying system with a period of 2.

10.36 Cons.«:kr the two-channel QMF :ruucture of Figure 10.55 where A;(.:) are stable allpa~ transfer functions. Let
E(z} and R(.:~' denote the polypha~e matrices of the analysis and synthesis filter bank$. (a) Dete:mine the expressions
for 1:{ z) and R (.:) in: terms of A; (;J. {b} is E(:::) lossiess? {c) Express R(z:) in 1enns of E(z). (d) What is the product
R(z)E(;:f:>

10.37 {<~) Decumpuse the third-order transfer functioo

in !he form
G(d = j {_Aot:) +At{;::)},
where Aoiz) and A 1(z) are s.tabie allpas5 tnm.sfer functions.
(b) Realize G(::J as a paraHei connection of allpass filters with A:J{<:) and A1 {z) realized with the fewest number
,,f mulliphers.
{<.:) Determine the transfer function H (z:) which i,; power...:umplementar~' to G(z).
(d) Sketc-h the magnitude resp<Jnses of G(z:) and H (zj.

10.38 Let G., ls) denote the Nth-order analog lowpass Butterworth transfer function with a 3-dB cutoff frequency at
! rod/sec with N odd. Show that the corresponding digi!.al Buuerworth tr.:msfer function Ho(z) obtained by a bilinear
tmnsformation i.~ a half-band loWpa!-..5 filter expressible in the fonn

where ..J.o(z l and A, (z) ru:e stable all pass tran5fer funct.ons.

10.39 Using tire method of Probl~m 10.38, develop the tramfer function of a third-order 1owpass half-band filter
H{)(Z ~ ant.! then determine its power...::omp!emenra...')· transfer functiof'. Ht (z). Develop a realization of a rr,agnitude-
prcserving twu-chc.nne! QMF bank whose analysis filters are Ho(r; and H1 (z). and using no more than one multiplier
fur the analysis stage.

10.40 Csmg the method cfProblem !0.38. devdopthe tnmsfe!"function of a fifth-order lowpass half-band filter Ho(z)
and then detennine its power-complementary lra..'lsfer fuD<..--tion H1 (z). ~velopareal.ization of a magnitude-presetving
1\\<n-channel QMF bank whose analysis filters are Ho(z) and Ht (z), .and using no more than two multipliexs for tbe
am!ly~i:> >tage.

10.41 Let ilic •:rder of the half-band lowpa~ HR filler Ho(z) of !he QMF bank of Figure 1052 be N where N
l"-odd. (a) What is !he total number of mu!tipher.; needed to implement the QMF bank of Figure 10.52'? How
many muhiplications per second are needed tc implement this stmrture'? {h) If the QMF bank i.~ of the magnitude-
preserving type. the analysis and synthesis fitter"- can be realized as~ s.um of fiR aUpass filters resulting in the structure
of ?igure I 0.55. What is the totalnumher of multiplien; needed in the impl~lentation of Figure 10.55? How many
multiplications per secon.d are needed to implement this structure?
/4£ Chapter 10: Multirate Digital Signal Processing

t0.4l Con&der a two-channel ort..'mgonal filter bank structure of Figure 10.5-2 where the ana!y1>is filter flq(z} is a
p<;>wer-symnu:tric FIR tmnsfer function of odd order N. If the ~ond analysis filter H1 {z:) is chosen according to
Eq. (10.144), show that both ana}y5is filters can be realized using only N + 1 multipliers and 2N rwu-input adders.

Ui.43 Omsider the QMF bank structure of Figure 10.65 wirh L = 4. Let the l}'pe I polyphase component matrix be
giver: by

j~ J.
2 3
13 9
9 ll
7 lO
"
Detennine the Type H polyphase component matrix R(,::) such :hat the four-charme! Ql\.tF structure h a perfect
nx:onstruction system with an input-output relation y[nJ 3xin - 3]. =
10.44 Design a three-channel perfect reconstruction QMF bank whose analysis filters are given by

r z~i:; J~ [ ;
;,__ Hz(z) l
i ; J[ ~=: J
2 ._

Deve!op a computationally efficient realization of the filter bank.

H1:.45 Design a four-channel perfect reconstru~;:tion QMF bank whose analysis filters are giveo by

[
Ho(')
H1 (z) ]
H2(z) =
r3I
2
2
2
I
H3(z) 4 2

DeNelop a computationally efficient realization af the filteT bank.

10.46 Design a three-channel perfect recunstruction QMF bank whose synthesis filter.; are given by

Develop a compu!.atwnaily efficient realization of the filter bank.

10.47 Show that, in general, the £-channel QMF bank is a linear, time-varying system with a period of L.

18.48 Consider an alias-free L-channei maximally decimated QMF bank with Ht(z) and G.~;{z), 0 :;:: k ~ L - I.
denoting, respectively. rhe analysis .and the synthe~is filters. Let T(z} denote its di!>tortion transfer function. S.how
that if !he analysis and the synthesis filters of each braneh are interchanged, i.e., Gk{Z) are now the analysis filters,
and Ht(z) are the synthesis filters. the resulting system is still alias-free and has the same distortion transfer function.

10.49 Com.iderthe multi rate system of Figure 10:65 where P(z) = R(z}E(z). Show that this structure is time-iw;ariant

[
with no aliasing if and only if P(z) is a pseudo-circulant matri,;. of t.fre fonn [Vai88bj:

~,,,
z-'p~_,(z)
P1 (I} PL-2(Z) PL-l(Z)

l
Jb(.z) PL-J(Z) PL-z(z)
P= . (10.214)
z-11>2(.z:) c 1 ~(z) P{)(Z) P1 (z)
z- 1 Pt\z} z-1 ~(z:) Z-l PL-J(Z) Po(z)
747

10.50 ConsKler a third-order causal UR filter de~cribed by the difference eqnation

boy{nj + b;yln- l] + bzy[n - 2] + b3yfn - 3] = ao.r!n} + ap:[n - lJ + a2.xln - 2] + a~xin - 31.

where y[n} and x[nl are, respectively, the output and input sequences. By iterating the above difference equation f<Jr
n, n + 1, and n + 2, we arrive at a set of three equatk>m; which can be wr:t:ten .as a block difference equation of the
fom;

where
J
l
y'3k] x[3k]
Y.;,= y[3~+1] , X.t = x(3k + I]
[ [ x[3k + ZJ
y!3k + 2}
Dete011ine the matrices Bo. Bt, -4v- and At in tenruJ of the coefficients {b;} and {a;). Develop a muitir.a!e st:rucrure
to generate '4, fmm lhe tnpul: sequence x[n] and a multirate sttucture to generate the output sequence )'[I'!] from Y*.
[)etermine the expression fm the block transferfuru:.:tionH(z) where Y,1; = H{OXk- Show thatH(z) is pseudo-circulant
as defined in l!q. ( 10.214 ).

IO.Sl The multirate system of Figure P10.8 is called an N-pathfilter and has been proposed for high-speed imple-
mentation of narrowband digital filters [Mit87j. Consider the N ·path filter for N = 3. Show that the three-path filter
i~ time-invariant and derennine its transfer function. Vvnat is the liansfer function for the general N -path filter?

Figure PIO.S

10.52 What is the transfer function of the cascade of a three-pat!: filter and a follf-pa'l'l .filter designed using the same
filtec H{z) in each single path of lxrth structures? Let H (Z) of the single path of an N-palh filter of Figure PIO.S.
be an ideal towp.ass filte:- with a magnitude response as~ in Figure P:0.9. Sketch as accurately ru; possible the
magnitude response of the cascade of a three-path filter and a four-path fiher designed using H(z).

ro
0 E. 7n 2x
4 4
Figure PI0.9
748 Chapter 10: Mulfirate Digital Signal Processing

UI.5J The structure of Figure PlO.lO has been proposed for the computationally efikier.t implementation of FIR
digital filters jVet88].
(a! Show that the siructure is aHas-free arui determine the overall trnnsfer P.mction T{Z) = Y{?)j X(:;;) in terms of
Ho(z) and H1 (;:).
(b! Determine the expression fm T {;:)if

{c~. If H (;)is a ienglh-2K FIR filter. what are the 1<::-~gths of th<: liiters Ho(d and Hr {::.)?
(di Determine 1he computational efficiency oft!.:is structure.

Y{;::)

L__ _ __;!2. H (cl ,


1
l<'igure PIO.JO

1(1.54 (a) Show that the !>lruclure of Figure PIO.ll is !ime-in~.rnam with oo aliasing if and nnly if ihe following
matrix j~ pseudo-c:rcuhmt as defined hy Eq. ( 10.214) fLin%j:

0 l
l
0
~J
fb) Develop an equivalent realization of Figure PI 0.11 based on a critical down-sampl~ng and critical up-sampling.

10.55 Analyze !he Stn.k."'ttL"e of Figure Pl 0.12 anrl deterrrune it~ input-<Jutput relation~. Comment uu yvur re~ult~.

Figure Pl0.12

10Ji6 An efikient implementation of tv.o separate single-mput. .1.ingle--output LT[ discrete-time sy~tems with an
idc:mical transfer function H (z) by a sing;e fW<l-input, two--output mult:rate discrele-tirne system hGbw.ined using the
pipElining/imerlt:!aving (PI) technique as shown in Ftgure PI0.13 ~Jia97J. Show thal the system oJ Figure P 10.13 is
lime-invariant and determine the transfer functions from each input to each output.
10.14. Problems 749

Figu:re Pl0.1J

1057 Show !hal the multirate system of Flgure Pl 0.14 i'> timt:-invariant and determine its U""ansfer ftmctmn !Jia97).

Figure Pl0.14

10.58 Problem 7.57 descnbe:. the filter sharpemng approa;;.·h [Km77j wh.Jd; is used to improve the magnitude response
of a filler H f. z) in both the passbmd aud the s<opband by employing multiple copie.; of the fii,er. For e1.ample, the
thri,-in.g merfwd of n:tet s.ha..,.enirg "implements \he transfer function

(!0.215}

where H (;:) is the prototype Lero-pbse FIR filter. Show that the multi rate :.tructure of Figure PlO. }5 implements the
ahm-e equation using the PI technique for an appropriatt: value o( !he constant C lJia9TI.

Figure Pt0.15

10.59 Show tllat each one ot the ~<Jitowing HR tTam;.fe"i function~ is a power-symmetric functicn.
(at Hu!.z) ==-! -z-t + ¥z<- ¥z- 1 - 5.:- 4 - Jz-5
(b) Ho!z.) = I + 3z-l + l4z- 2 ..,.... 22z- 3 - 12z-4 + 4::- 5 .
li~ing Ho(z) as one of the analyiiC> filter. determine the remaimng three filters of the corresponding (Wo-dwrme!
orthogonal ti!tcr bank. In each case, show t~al lhe filter bank n al.iru.-free and satisfies the ptrl"ecl reconstru<:tion
condnion.

10.60 TI:.e analysis filter; of a biorthogcmU o;wo-channd filter bank are given by Hc(z) = i +act..;.. .c-2, a~d
1
HJ (zi = l -+· a.:- + fn.- 2 + az- 3 + z- 4 LVet891. Determine ttle twQ synthesis filter~ Go(z) and G!(Z) us5:ng
Eq. {10.! 53). Show that the two-channel filtCJ" bank is alias-free at:d satisfies the perlea reconstruction pcoperty with
a'f.Oandb"f.2.
750 Chapter 10: Multi rate Digital Signal Processing

J0.61 Design~ two-chaLnel perfect r-econstruction filter bank suclJ that the lowpass analys1s filter Ho(z) is of length
4 and has two zeros at.;;: = -1, Is 1t possible to design tbe analysi~ filter.> so that they have linear phase in addition to
having two zeros at z = -l?
10.62 We have deinonstrated in Example 10.3 that the m.ul!irate structure ofF1gure Pl 0.6 is a p::rfeCl reconstruction
system with the output y[n} being a replica of the input x(nl but delayed by one sample. Figure Pf0.16(a)shows the
structure obtained from Figure 10.6 using the lifting ~·heme [Swe96]. Show that it is also a perfect reconstruction
syslem.

P(z)

-l
+ y[nl

(b)
Figure Pl0.16

10.63 The lifting ~heme can be repeatedly applred to develop perfect reconstruction systems with mor-e desired
featur~. Figure PIO.l6(b) shows a structure derived from Figure PI 0.16(a) by applying the lifting scheme a >recond
time. Show that this structure is alw a perfect reconstruction system.

10.64 Show that the four-channei QMF bank of Figure 10.71 is a perfect reconstruction type if the two parent two-
channel QMF banks are of the perfect reconstruction type.

10.65 Show that the three-channel and the foor-ebannel QMF bar.ks of F1gure J0.74(b) and 10.74(c), respectively,
are of perlect reconstruction type ii the two parent two-channel QMF banks of Figure 10.74{a) are of the perfect
re.:omtruction lype.

10.15 MATLAB Exercises


M 10.1 (a) Modify Progmm 10) to study the operation of a factor-of-4 up-sampler on the fOllowing input se-
quences: (i) sum of two rcinusoidal sequences of normalized frequencies 0.2 and 0.35 rad/sec, {•i) ramp sequence.
and (iii) square wave sequence w!th various duty cycles. Choose the input length to be 50. Plot the input and
me output sequences.
(b) Re-peat part {a} for a factor-of-5 up-sampler.

M 16.2 (a) Modify Program 10_2 to study the operation of a faccor-of-4 down-s!llllpler on the following input
sequences: (i) $Urn of lwo sinusoidal sequences of normalized frequencies 0.2 and 035 rad/sec, (ii) ramp
sequence, and (iii) square wave sequence with various dnty cycles. Choose the input length to be 50. Plot the
input and the output: sequem:e~.
(b) Repeat pan (a) for a facl.or-of-5 down-sampler.
10.15. MATLAB Exercises 751

Mlll.3 (;,; U..cuvectordlro:qucm:ypoint:.freq- ,r -::·.95 r 98 l] inPrograml0_3andrunttwithan


up-~ampling factm of l. =
5. Comment on your result~.
(b) Rep;;:l!l pt~rt (a) for L =.: (,_ Comrnem on yuur Ti'Sults.

M 10.4 (z;; U>o<.· a •.:c:tCJr uf ln:quem:y puint> fr.::q LJ <). 95 0. 93 l; and a magnitude response vector
wu.q - J.- 0 v' ;: 1 :n Prugmm 1!}_3. and nm it Wllh<ill up-~ampling factor of L = 5. Comment on your
result,,
(b_; Re.pcat par1 (a) tor L = 3. C<Hmnem on yuur resulb.

MlO.S (;:} u~eavectoroffrequ<m<:ypoinh [ r e q - [C U.27 Cl.1S l_i and:amagnitudcn:.sponsc-vecwr


nag ~ , j 0 0 C. in Pmgr.tm Hi_4, and run it with a down-sampling factor of M = 4. Commem on your
result•
HH Repeat jY.lr! la) forM= 5. C0mmeni on your results.
I
M !0.6 Run Progr.!m !0_5 for lhe :·::Uowing ;nput data: :at N = :20, M = 3, /I = 0J)45, _{J_ = 0.029: (b) N = 120.
M =- ~- f1 = iJ.C~5. h = O.tl19_ Comment on your resulls.

i\--1 Ht7 Run Program Hi 0 fqr tb: following ir.pu! data: (a) N = 40. L = 3, ft = [1.045. h = 0.029: (h) N = 30,
L = 4. f; = 0.04:". h_ = 0.029. Comment Oil your results.

M !0.8 Run Prognun 10 7 fur the foUowwg iopul dala: (a) N = 30, L 3, M = = 2, fl = fl.U45, h = 0.029; (b)
.'V = 40, L = :'L M = 5, f1 = O.iJ45. h = 0.029. Comment on your results

1\f 10.9 De-.ign a !t-ongtl:-6 I linea• -phase HR lowpa»s tiJ(er wi(h a :::muff frequency atH/4 using the wmdo-Ned Fourier
;;eri.:-s <~ppru;u:h. Expre.-;s :he tra!l~fer function in a four-band polyphase deeomposlllon. Using A-fATLAB compute
and plot ~he frequency re~p<.mSel. d each pdyphase component. Show thl!t all polyphase components. have constant
magnilurk re~pcn,.,e~.

M 10.10 De~i~n J. tif[h-urder JJR h.!if-band Bullerworth 1-.>wpas.> filter and reali1.e it using Oflly two multiplien:..

M: H).11 Des1gn J. .;even~h-order HR half-band Butten>ronh lo\\eoass filter and realize it using (IDly three multipliers_

M 10.12 Dc~ign using Y1A TI.A.B .a n::al-coeffident elliptic half-band IIR filter Hu(z.) of odd rn-dcr with the following
~peoticatioJlS. "-'s = 0.55;r. and h,- = tLOOI. Note 1hat the half-band filter .::onstraint is satisfied if wp ;- w~ = 11' and
( i - i5p) 2 +a_; = I_ Expre,_,s Hol:J in the form

where .AoC.:_! 1nd A 1(;:) ar~ Hable dllpas~ transfer fun;;!ions. Pint the magnitude responses of Ho(z) and its power-
c;.mpk':nentary tral'.~kr lunl'110n H! (::) in the same figure_

M 10.!3 De:-.1gn a linear-phase snth-l:>and lowpa;;~ FiR filter of order 42 with Wp = 3JT/24 and Ws = 5:rj24 using
thewu1dnwed 1-'uwier :;n:c~ tipprow,:h try mndify:ng Program 10 X_ Use the Hann window.

M 10.14 D<:sl.gn using MATLAB a r~al-l-oefficicnt power-~ymmetric FIR lowpass filter Ho(:.) with a stopband edge
at 0.65}r and a n:inimum ~Cupband an:eount:on ot 25 dB. Design next a perfect nxonstmction two-channel QMF bani::
ba.~cd <>11 Ho(z)_ Shuw the tmnsTer fum::tions o( iill four filters.. Plot the magnitude responses of the tW<"J analysis filter~
in the ,;arne figure
752 Chapter 10: MuHifate Digital Signal Processing

M LO.IS Write a MATLAIJ program to design a two-channel QMFparaunitary Janice filter bank.. Us.i"lg this program
design a lattice structun: with lUten; of order 23 and a stopband edge at Ws = 055:rr. Plot the sum of the magnitude
squares of the tv.o analysis filters. What is the minimum stopband attenuation in dB of your filters? Plot the amplitude
diSlurtion in dB. Quantize the lattice coefficients to 6 decimal digitals and plot the gain responses of the two analysis
fi1Wrs along with those of the original filters on the same figure. Comment on your results.

M 10.16 Design and realize a four-channel uniform DFT analysis filter bank using a prototype-linear-phase AR filter
of length 2L Design the prototype filter using the function remez of MATLAB. Assume a transition band of width
0.1 rr. Plot the magnitude responses of each fil~ on the same figure.

M 10.17 Design a three-channel QMF bank in the form of Figure l0.74{b) by iterating the two-channel QMF bank
based on Filter 16A of Johnston [Ans93], (Cro83], [Joh80). Plot the gain responses of the three analysis fit~, Ho{z).
H1 (ZJ, and ~{z) on the same figure. Comnx:nt on your results.
Applications of Digital
11 Signal Processing

As mentioned in Chapter l. digital signal processing techniques are increasingly replacing coaventional
analog signal processing methods in many fields such as speech analysis and processing. radar .and sonar
slgna1 processing, biomedical signal analysis and ?f"'Xessing, telecommunications, and geophysical signal
processing. Some typical applications of DSP c-hips are summarized in Table 11.1. A complete overview
of these and other applications is beyond the scope of this book. Moreover, an understanding of many
applications requires a knowledge of the field where they are being used. In this chapter we include a few
simple appllcations to provide a glimpse of the potential of DSP.
We first describe several applications of the discrete Fourier transform (DFf) .introduced in Section
3.2. The first application considered is the detection of the frequencies of a pair of sinusoidal signals.
called tones, employed in telephone signaling. Nex1 we discuss the use of the DFT in the determination
of the spectral contents of a finite-Ieng11 sequettce. The effect of the DFT length and the windowing
of the sequence are examined in detail here. In the following section, we introduce the concept of the
s:ilort-time Fourier transform (STFT) and di:c.cuss its application in the spectral analy:;is of nonstatiornny
s·!gnals. We then consider the spectral analysis of random signals using both nonpararnetric and parametric
methods. Application of digital filtering method.E to musical sound processing is considered next, and a
variety of practical digital filter structures. useful fer the generation of certain audio effects, such as artificial
n~erberation, flanging, phasing, filtering, and equalization, are introduced. The digital stereo generation
for FM stereo transmission is trealed in the following section. Generation of discrete-tim¢ ~ytic signals
by means of a discrete-time Hilbert transformer is then considered, and several methods of designing these
circuits are outlined along with an application. The basic scheme of the subband coding of speech and audio
si,gnals is reviewed next. The theory and design oftransmultiplexers are discussed in the following section.
One method of digital data transmission empioyi:ng digital signal processlng methods is introduced then.
A method for audio sampling rate conversion is then outlined. The basic c;:onccpts behind the design of
the oversampling AiD and DfA converters are reviewed in the following two sections. Finally, we review
the sparse antenna array design for ultrasound scanners.

11.1 Dual-Tone Multifrequency Signal Detection


Dual-tone multifrequency (DTMF) signalfng, increasingly being employed worldwide with push-button
u:~Iephone sets, offers a high dialing speed over the rlial-pulse signaling used in conventional rotary telephone
Sl~ts. In recent years, DTMF signaling ha" also found applications requiring interactive control such as in
voice main, electronic mail (e-mail), telephone banking, and ATM machines.
A DTMF signal consists of a sum of two tones with freql.leDCies taken from two mutually exclusive
g'Oups of preassigned frequencies.. Each pair of such tones represents a ut'lique number or a symbol.
Decoding of a DTMF signal thus involves identifying the two tones in that signal and determining their
corresponding number or symbol. The frequencies allocated to the various digits and symbols of a push-

753
754 Chapter 11: Applications of O.gital Signal Processing

'f'.tblc 11.1. Typi..:;t\ applicatwm of DSP chi~-


c----------~--------,

'
Gcnera!-purpu:;~ DSP ! Gmphlcs/imagi:-~g --c-c~~ln_,_t_ru_m~ecnctacuc·o_n
_ _---i
Digi~al fi!tcrlnl' ------+-3-D rotatlOfl Spectrum analysis
Convolution Rnbot vision Function generation
Correlmif'll lmagt' transJ ni~ioni Pattern matching ''
Hither. transtornh compresslo n Seismtc processing '''
Fa><t Fourier Lran-.forms Pattern reco gnition Transient analysis
Adap!ive filtering Image enhan..-ement Digital filtering
\Vindowmg Homomorph tc processing Phase-locked loops
Waveform generation Workt.tation
'
Animution/d tgital map
~-----
VOit.:eispce-ch Co mrul Military
-------
Voin: main D1sk cO.llr:-o J Secure communications I
Speoxh v.;~coJ.iag Servo contro J Radar processing
Speech re-cugnition Robot control Sonar processing
Speaker verification
Speech enhancement
Spe.:ch -.ynthesis , Motor contr ol
I
Laser printe r control
Engine com tul
Image processing
Navigation
Missile guidance
_}'ext to ~ech ________ L ____ _ Radio frequency modems

Tclec\Jmmunications Automotive
Echo cam.:dla-ct,-ocn-- FAX Engine control
ADP(~M tmnseuJcr;, Cclblar tde phone Vibiation analysis
Digital PH:Xt. Speakerphonc~ Anriskid brakes
Line repeaters Digital speecb Adaptive ride control
Channel multiplexing Interpolation (DSIJ Global positioning
1200- to l 9.200-bps modem-; X.25 p.a..:ket switching Navigation
Ad:~ptive equalizers Video confe rencmg Voice commands
DThiF encoding/decoding Spr-ead spec tmm Digital radio '
'
Data_ en_:~yption

Consume-r
--ff<ido:tr dete-:.=ton ---
Pnwer wob
·=r---
--
communicatJons

R.oi.X,t:ic!>
Numeric co ntrol
--·-
Indu:>trial
Cellular telephones

Medical
Hearing aids
Patient monitoring
i
Dignal ;mdto/TV
!>.·1usic -.ymhe-;tler J
Security .ace·e~s
Power line m ,mitor.;
' Ultrasound equipment
Diagnostic tools
Educational toys Prosthetics
_____ ___I___ Fe-tal monltors
Reprin~eC from K. S_ Ijn, .od .. Digiwf Signal Pruce:s.>ing ApplieaJim;s with the TMS320 Fam-
if:-·. vul_ L PI-entice HuJ! and Texas lnsrnn:lt':lltl!.. 1937. Reprinted by Permiss.Jon of Texas
I nstnm:~ent::._
11 _1. Dual-Tone Multitrequency Signal Detection 755

bu!!on keypad are interu<~lirmnlly accepted standard_, and :;r~ .,hown in Figure l _\c; In t ;;44 i. n,.: f"ur kev'-
in rhe last column of the keypad, as !;hown in this figurt' ..tre not yet availahk <m :.UnJanJ haJii.!~h anJ
are re~ed for future use. Since the signaling frequencie-, arc ali lo::ated in the freqt:ency band used for
speech transmission, this is :m in-bami s~·stem. lmerfat:ing with the analog input and output devices i-,
provided by cwiec (coder/decoder) chips. or .AID antl D/A conveners {Sections 5.8 and 5_9).
Although a number of chip-s with analog circuitry are av:tilahle for the generation and decoding Jf
DTMF signals in a single channel, these functions. -can a]~~ be implememcd digitally ("IJl DSP chips. Such
a digital implementation surpasses .analog equivalents in performance, sim.-c it provides hetter precision,
stability, versatility, and reprogrammahility to meet olher tone standards, and the s<..·npe for multi::hannel
operation by time-sharing leading to a lower .:hip count.
The digital implementation of a DTMF signal involvt:s adding two finite-length digital sinu<;oidal
sequences with rhe Jatrer simply generated by usmg look -up rabies or by computing a polynomi.al expansion.
"The digital tone deteetion can be easily performed by computing Lhc DFI' o-!:' the DTMF signal and then
measuring the energy present at the eight DTMF frequencies. The minimum duration of u DTMF signal
is 40 ms. Thus, with a sampling rate of 8kHz. there are at most 0.04 x 8000 = 320 samples avmlahle for
decoding each DTMF digit. The actuaJ number of samples u~d for the DFT comput2tion is le:-,s than th1,;
number and is chosen so as to minimize the difference between the actual location of the smu~n:d and the
nearest integer value r>FT index k.
The DTMF decoder compt:.tes the DFf samples closest in frequency to the eight DTMF fundamcnlal
iones and their respecth·e second harmonics. tn addition, a pmctical DTMF de-~oder <ltso computes H:e
DFf samples closest in frequency to the second hannoni.;;s corresponding to each of the fundamental
tone frequencies. This latter computation is employed to di<otingui'>h between human \ooi<..:cs and the pure
sinusoids generated by the DTMF signaL In gener.ll, the ".peclrum of a human voice con:alo,.; componems
at afl frequencies including the above second harmonic frequenci-es. On the other hand, the DTYJF :signal
generated by the handset has negligible se.::ond harnmnics. The DFI computation scheme employed is a
~lightly modified vers.ion of Goertzet's algorithm, a-s descnbed in Section 8.3.! for the computation of the
squared magnitudes of the OFf samples that are needed for the energy computation.
The DFT length N detennines the frequency spacing between the locations of the OFT samples and the
time it takes toc{l.mpute the Dl-<l sample. A larg~: N makes the spacing smaller, providmg higher resolution
in the frequeocy-dmm;in but increases the computation time. The frequency fk in Hz corresponding to the
DFr index {bin number) k is given by
kFT
!~.:=-. k=O,l, ... ,N-1, (1l.l j
N
where Fr is the sampling frequency. lfthe input Signal contains a sinusoid of frequency };11 different fmm
that given above, its DFT wili contain oot only large~ valued samples at values of k closest to N fin/ FT
but also nonzero values at other values of k due to a phenomenon called leakage (see Ex:!rnple 8.1 J ). Tn
minimize the leakage it Js desirable to choo:--;e N appropriAtely so that the tone frequencies fall as close
as possible to a DFT bin, thus proviDing a very strong DFf >.ample at this index '>:tlue relative to ail
other v.alues. For an 8-kHz sampling frequency, the best \'alue of the DFT length N to detect the eight
fundamental D'P..iF tones has been fouJJd to be 205 and Ihat for detecting the eight second hamtonics is
201 fMar92]. Tab!e IL2 shows the DfT index •·a]m~s dt-\est to each of the tone frequencies and their
second harmonics for thc;;e two \oalue5 of N, respectively Figure J ( .l shows 16 .<:elected DFr :.ampJes
computed using a 205-point DFT of a length-205 sinusoidal sequent·e for each of the fundamental tone
frequencies.
The following MATLAB program demonstnites the OFT-based DTMF detection algorithm. Lt employ~
the function gfft (x,N, k) of Section 83.l to calculate a single DFr sample using G-oertzel':> method.
The input data is the teiephone handset key symbol. The program generates a length-205 sequence
cons-isting of two sinusoids of frequencies according to the convention shown in Figure ( .39. It then
75'5 Chapter 1 t: Applications of Digital Signal Processing

770Hz
100,

- l
~ 50i

~1lo~~~~5~~~20~~~
k k

852Hz 941Hz
100 100

;;

,.
:;;;
50 50
"" ""
,co f . ~· 19,
15 20 25 3{) 15 2{) 25 30
k k

1209Hz 1336Hz
100
100!
-
1 ~
~

50 50
""
25 30 35
k k

1447Hz 1633Hz
100

~
5<J
"" ,Q 9o,
35 40 45
k k

Figure 11.1: Selected DfT samples for each one oftlle DTMF tone signals for- N = 205.

computes the eight DFf samples corresponding to the bin numbers of the fundamental tone frequencies
giwn in Table 11.2 and displays these DFf samples and the decoded symboL As only pure tones are
generated, the program does not employ the test involving tfte second harmoruc detection to distinguish
bet'Neen humar. voice and touch-tone digit The outputs generated by this program for the input symbol#
are displayed i.n Figure 11.2.
11.1 . Dual-Tone Muitifrequency Signal Detectlon 757

"Table 11.2: DFf index values for DTMf tones for N = 205 and theiT second harmonics for N = 201 {Mar92].

Basic Neare.st
tune Exact k integer Absolute
in Hz value k value error ink
697 17.861 18 0.139
770 19.731 20 0.269
852 21.&33 22 <H67
941 24.ll3 24 0.113
1209 30.98} 31 0.019
1336 34235 34 0.235
1477 37.848 38 0.152
1633 4L846 42 0.154
second Nearest
harmonic Exactk integer Absolute
i.nHz. value k value error ink
1394 35.024 35 0.024
1540 38.692 39 0.308
1704 42.813 43 0.187
1882 47.285 47 0.285
2418 60.752 61 0.248
2672 67.134 67 0.134
2954 74.219 74 0.219
3266 82.058 82 0.058

'!- Program 11_1


% Dual-Tone Multif~equency Tcne Detectior.
% us:. ng the LIFT
%
elf;
d"' inpuc('Type in tt-.e telephone digit 's');
symbol "" abs (d) ;
tm ~ [49 50 51 65;52 53 54 66;55 56 57 67;42 48 35 68);
!::or p""' 1:4;
:':or q"' 1:4;
if trn(p,q) abs(d);break,end
end
if tm!p,q) abs(di ;break, end
end
fl ~ [697 770 852 941];
f2 ~ {1209 1336 1477 1633];
n 0:2G~;
x = sir.(2*pi•n*fl(p)/800Cl + sin(2~pi*n*f2(q)/BOOOJ;
k ~ [18 20 22 24 31 34 38 42j;
val ~ ~eros(l,8);
for rn"' 1:8;
75B Chapter 11: Applications of Digital Signal Processing

-,-~ --l
'i

"oJ·5 _---i;; ~ J>_ ~;---~:;;f'-~}~c--"--"<oC~s

Figure 11.2·. A lypica; output of Program 1 1_1.

b'x(wj = g£fttx.7J5,k(mll;
end
val = abs(Fx};
s~em(k,val);grid; xlabel;'k';;ylabeJ·:'IXfi<:]i');
:irnit = 8D;

er.d
~cr: r = ~:4;
iJ· val(r) > iioit,break,end
end
d.: sp( [ 'Touch-~'one Syrrbol = ', setstr(t:r..•:r-, s-4))];

11 .2 Spectral Analysis of Sinusoidai Signals


An important application of digJtal signal processing methods i;. in determining in the dtscrete-timedomain
the frequency -~.-ontent-. of a continuous-time signal, more commonly known a,; spectral analysis. More
specilically, it involves the detennination of either the energy spectrum or the power spectrum of the signa!.
Applications of digital spectral analysis can be found in many fields and are widespread. The spectr-al
analr.;is method~ are based on the following observation, If the continuous-time signal g 6 (t) is rea<>onably
bandiimi!ed, the spectral characteris:ics of :ts discrete-time eqmvaient g fn! should provide a good estimate
of the spectral properties of ga(t). However, in most cases, g"(l) is defined for -oo < ! < =, and as a
re<Mtit. Kin] is of infinite extent ami riefined for -ex:> < r. < =· Since it is diffic.Jlt to evaluate the spectral
parameters of an infinite- length signal, a more pmctical approach is as follows. first. the continuous-time
signal Ka (r) is passed through an analog anti-aliasing filter before it is sampled to eliminate the effect of
aEasing. The output of the filter is lhen sampled to generate a discrete-time sequence equivalent gfn j. It is
a"surned that the anti-aliasing filier has been de.iigned appro!)riately and hence, the effect of aliasing can
be ignored. Moreover, it is further asswned that the AJD convt:-rter wordlength is. large enough so thar the
AiD conversion noise can be neglected_
This and the following two 1.ecti-ons provide a .rev~ew of some sprt-"1ral analy<>is mt!thnd~. In this section.
we (;Of!Sider the Fourier analysis of a stationary &~_gnal composed of ~u;usoida! components. In Section
ll.J v.e discuss the Fourier analysis of nonst.ationary sigr>als with time-varying parameters. Section 1!.4
consi&rs the spectral analysis of random ;;Ignals. For a detailed exposition o.f spectral analysis and a
concise review of the history ofth1s area, see Kumare..--an lKum93 J,
11.2. Spectral Analysis of Sinusoidal Signals 759

For the spectr-al analysis of sinusoidal signal~ we assume that the parameters characterizing the sinu-
soidal' components, such as amp-litudes, frequencies, and phase, do not cha~ge with time. For such a signal
gfnJ, the Fourier analy,;is can be carried out by computing .its DTFI' G(e-1"-'):

G(ef"") = L g!nle-1'"". (11.2)


n=-oc

In practice. the infinite-lengtb sequence g[nl i:s first windowed by multiplying :it with a !ength-N
\Vindow w[n] to make it into a finite-length sequence y[nJ = g[ni · w[n] of length N. The spectral
cha~acteristics of the windowed finlte-length ~uence y[n] obtained from its D1Ff f(eiw) then provide
a reasonable estimate of the DTII of the original continuous-time signal ga(t}. The DTFT r{ci"') oftbe
windowed finite-length segment y[n] is next evaluated at a set of R(R 2': N) discrete angular frequencies
equally spoced in the range 0 ::::; tL' :::: 2:r by computing its R-point discrete Fourier transform (OFf)
l:'lki. To provide sufficient resolution, the DFf length R is chosen to be greater than the window N by
.zero-padding the windowed sequence wirh R - N zero-valued samples. The DFr is. usually computed
Dsing an FFT algorithm.
We exanline the above approach in more detail to understand ils limitations so !:bat we can properly
make use of the results obtained. In particular, we analyze here the effects of windowing and the evaluation
of the frequency samples of the DTFf via the DFT.
Before we can interpret the spectral content of r(ej"'), i.e., G(e 1"'), from r[kJ, we need to reexamine
tle relations between these transforms and their corresponding frequencies. Now, the relation between the
H-point DFf r[k] of y[n] and its DTFT r(el"') is given by

l[k] = r(ej"')l
w=2"'<kjR
, O::;:k:SR-1. (113)

The normalized discrete-time angular frequency Wk corresponding to the DFT bin number k (DFT fre-
quency) is given by
2nk
Wk=--, (IL4)
R
Likewise, the continuous-time angular frequency Qk corresponding to the DFT bin number k (DFf fre-
quency) is given by
(I l.5j

To interpret the results of the DFT-ba~ spectra! analysis correctly, we first consider the frequency-
d·:mmin analysis of a sinusoidal sequence. Now an infinite-length sinusoidal sequence g[nJ of normalized
angular frequency w,-, is given by
g{nj = cos(w..,n ..... ¢). {lL6)
By expressing the above ~uence a;;.

;:;[nJ = i (e.i(wulh·.Pj +e-j\0%>"#))' (!L7)

and making use cfTable 3.1, we arrive at the expression for it.s D1Ff as
X

G{ef">}=rr L (e-'<ftO(w-w0 +2n£)+e-i¢O(w+w<>+2nt)). [I L8)


£----=
7{10 Chapter 11 : Applications of Digital Signal Processing

Figure 11.3: DTFT of a sinusoidal sequence windowed by a rectangular wi.ndow_

Thus., the DTFJ is a periodic function of w with a period :2Jr containing two impulses in each period. Jn
the frequency range, -:r ::;:: w ::;: 7f. there is an impulse at(<) = w 0 of complex amplitude Juj<P and an
impulse at w = -w0 of complex amplitude :r e~ j?.
To analyze g[n] in the spectral domainuslngtbeDFf, we employ a finite-length version of the sequence
given by
y[n] = cos(w.,n + 1/J), (IL9)
The computation of the DFI of a finite-length sinusoid has been considered in Example 8.11. In tlris
example. using MATLAB Program 8_10. we computed the DFI of a length-32 sintlSOid of frequency JO Hz
sampled at 64Hz. as shown in Figure 8.30. As can be seen from this figure. there are only tv.Xl nonzero
DFT samples, one at bin k = 5 and the other at bin k = 27. From Eq. (J 1.5). bin k = 5 corresponds to
frequency I 0 Hz. while bin k = 27 corresponds to frequency 54 Hz, 01" equivalently. -10 Hz.. Thus, the
DFT has correctly identified the frequency of the sinusoid.
Next, using the same program, we computed the 32-point DFT of a length-32 sinusoid of frequency 11
Hz sampled at 64Hz, as shown in Figure 8.31. This figure shows two strong peaks at bin Jocatfons k = 5
and k = 6 with DOnzero DFf samples at other bin locations in the positive half of the frequency range.
Note that the bin locations 5 and 6 torrespond to frequencie:; lO fu and 12 Hz, respectively, according to
Eq. (11.5). Thus the frequency of the sinusoid being analyzed is exactly half\lf"BY between these two bin
l()(~ations.
The phenomenon of the spread of energy from a single frequency to many DFT frequency locations as
demonstrated by this figure is caHed lealwge. To understand the cause of this effect, we re<:all that the DFT
r[_k] of a length-N sequence y[n] is given hy the samples cf its discrete-time Fourier transfonn (DTFT}
r(eiw) evaluated at w = 2n kf l·l. k = 0. l, _ .. , N - 1. Figure J 1.3 shows the DTFT of the length-32
sinusoidal sequence of frequency 11 Hz sampled at 64 Hz. It can be seen that the DFr samples shown in
Figure 8_31 are indeed obtained by the frequency samples of the plot of Figure 11.3.
Toundentand theshapeoftheDTFf shown i.n Figure 11.3 we observe that the sequence ofEq. (11.9) is
a windowed version of the infinite-length sequenceg[nJ ofEq. (11.6) obtained using a rectangular window
w[n]:
w(nJ={l, O_::::n::sN-1, 0 1.10)
0, otherwtse.
Hence, the IYIFT r'(ef"') of y[nJ i:; given by the frequency-domain convolution of the DTFf G(e"'"") of
g[nJ with the DTfT \IIR(eJw} of the rectangular window w(n]:

(II.! I}
where
11.2. Spectral Analysis of Stnusoidal Signals 7£1

'" ( '") _
~R e - e -J<dN-dll~in(wN/2) . (11.12)
sin(w/2)

Substituting G (ej"") from Eq. { 11.8) into Eq. ~ 11.1 I), we arrive at

{l !.13)

As indicated by the above equation, the DTFf r{eiw) of the windowed sequence r!nJ ls a sum of the
frequency shifted and amplitude scaled DTFf 'l'R(ei'-") of the window w[n] with the amounl offrequency
shifts gi.ven by ±w0 • Now. for the length-32 o;;inusoid offrequency 11 Hz sampled at 64Hz.. the nonnalized
frequency of the sinusoid is l1 ;64 = 0.172. HeiKe, its DTFf is obtained by frequency shifting the DTFT
'<~~R(eiw) ofalength~32 rectangular window to the right and to the left by theamount0.172 x 2.Jr = 0344rr,
adding both shifted versions, and then amplitude scaling by a factor 1/2. In the normalized angular
frequency range 0 to 2.n, which is one period of the DTFf, there are two peaks. one at 0.344n and the
other at 2.:tr(l - 0.172) = 1.656."'<, as verified by Figure 1!.3. A 32-poinl DFf of this DTFf i~ precisely
the DfT shown in Figure 8.31. The two peaks of :he DFf at bin locati-ons k = 5 and k. = 6 are frequency
samples of the main lobe located at the normalized frequency 0.172 on both sides of the peak. Likewise,
the rwo peaks of the OFT at bin locations k = 26 and k = 27 are frequeocy samples of the main lobe
located at the normalized freqnency 0.828 on both sides uf the peak. All other DFT samples are given by
the samples of the ~idelobes of the DlFf of the w:ndow causing the leakage of the frequency components
at ±w0 to other bin locations with the ll!r.ount of leakage determined by the relative amplitude of the main
lobe and the sidelobes. Since the relative side lobe level A,.,e, defined by the ratio in dB of the amplitude of
the main lobe to that of the largest sidelobe, of the rectangular window is very high, there is a considerable-
amount of leakage to the bin localions adjacent to the bins '>bowing the peaks in Figure 8.3 J.
The above problem gets more complicated if the signal being analyzed has more than one sinusoid, as
is typically the case. We illustrate the DFf-based SiJectral analysis approach by memts of several examples.
Through these examples we examine the effects of the length R of the DFT, the rype of window being
used, and its iength Non the results of spectral analysis.
762 Chapter 11 : Ap~ications of Digrtal Signal Processing

L
1G y '4t:~t'J 'X",pt"'X'"'XT l/ f
!':R ,, Tt''\, s, t}
-4" ;;,,r;~~
tiLLWA {4 0 (+h(lt{lf1}1r <7
;v:i/t±:wYf ;• »1:YIP \ \ t ~{\ id>tt: t", 7 /
t U \0<+; 1 01 f Ff j ,

As this example point& out, in general, an increase in the DFf length improves the sampling accuracy
of the DTFr by reducing the spectral separation of adjacent DFf samples.

As indicated by Eq. (ll.ll), the DTFT of a length--N sinusoid of normalized angular frequency w 1 is
obtained by frequency translating the DTFT \II R(ei(.;) of a length- N rectangular window ta the frequencies
±w1 and scaling their amplitudes appropriately. In the case of a sum of tv.'<> length-N sinusoids of
normalized angular frequeru..'ies WI and w;;, the DTFr is obtained by summing the DTFfs of the individual
sinusoids. As the difference between the two frequencies becomes smaller, the main lobes of the DTFTs
of rhe individual sinusoids get closer and eventuaJly overlap. If there is a significant ovedap, it \>\-ill be
difficult to resolve the peaks. It folfows therefore tbat the frequency resolution is e..sentiaUy determined
by the w.i_!,ith t..uL of the main lobe of the DTfTofthe window.
Now from Table 7 .2, the main lobe width ~ML of a length-N rectangular window is given by 4rr 1N.
In terms of normalized frequency. the main lobe width of a length-16 rectangular v.indow is 0.125. Hence,
two closely spaced sinusoids windowed with a rectangular window of length 16 can be clearly resolved if
the difference in their frequencies i~> about half of th:! main lobe wi.dth, i.e., 0.0625.
1 1.2. Spectral Analysis of Sinusoidal Signals 763

11 I
' '
'
I f !'

WILl
(a) (b)

,---,
'

"'
(q (d)

(e)

Figure 11.4: DFT-basedspectral anaJ~i5 of a sum of two finite-length .sinusoidal sequences ofnormabzed frequencies
022 and 0.34.respectivcly.of length 16 each for various values ofDFf lengths.

Even though the rectangular window has the sma!Jest main lobe width, it has the largest relative sidelobe
amplitude and, as a consequence. causes considerable leakage. As seen from Examples 11.1 and 11.2. the
large amount of leakage results in minor peaks that may be falsely identified as s:inusoids. We now study
the effect of windowing the stgnaJ with a Hamming window. 1

!<ra 1f

,f.'. !11€ f.0e-tm!IH4h\I'Wk--·-

-c,Fn::-,-,-,-,-,~i~ afsrnnecommonly '"'~ windaws, see Secrion~ 7.1'>.4 and 7.6.5.


764 Chapter 1 1: Applications of Digital Signal Processing

----1

(a_l (b)
w----·
'"
'
~ ·i
, 'I,,
w ro 100 IJO
" "
(ci (d)
'
Figure ll.S: rllustration of the frequency n_.-.,.,_>lution property. (a) fi = 0.2&, f1 = 0.34; (b) /1 = 0.29, .h = 0.34;
(c)fl = 0.3. h = 0.34; and (d) fl = 031, h.= 0.34.

f "ff"''" i' 'Ad •J'VY$/) }f\f j~


fp,;"l! :S:JH ;G;v <tv liwr

l j !dq, ·~ Jt{!\ij'
!G&f& h-
):Jd'Y4u+' h;i/ffP
tiV
4 \il(«"m{0¥W(ta:! TJJii:AG'!!fll}

I !fT ,'\lid& t1f K f"',


0 GEt i"' (i H. 1Nn""'¥t 11ttit 4\M/Iu: %111Gii:hpwr, 10W
t&
Gil#} ?!Ltdih'i&
tl!!u
~§~§~~~~~~~~~~~§~~~
tlw, Ht&Jit l;<&w w±tllt· "'v ps ,v ''"" h- - -
Htn d(!TJJ

It is clear from the above examples that perlonnancc af the DIT-based spectral analysis depends
on several factors, the type of wirulow being u-.ed and its length, and the size of the DFI'. To improve
the frequency remfution, one must use a window with a very small main Jobe width, and to reduce the
leakage, the window must have a very small relative sidelobe leveL The main lobe width can be reduced
by increasing the lenglh of the window. Furthermore, an increase in the accuracy of locating the peaks is
achiv.•ed by increasing the size of me DFr. To this end, it is preferable to use a DFT length that is a power
of 2 so that very efficient FFI algorithms can be employed to compute the OFf. Of cmme, an increase in
the DFr size also increases the computational complexity of the specttal a..'1111ysis procedure.

11.3 Spectral Analysis of Nonstationary Signals


The dU.Crete Fourier tramfonn can be employed for the '>pectral anaJy:ois of a finite-length signal composed
of sinusoidal components a.'> long as the frequency, amplitude, and phase of each sinusoidal -component
11 .3. Spectral Analysis of Nonstationary Signals 765

"
" I
J] I 'i
I i ! : .
..
:1

' I '
- ____;_ - - ___;;__+--4,------,-,-Q_-~ _
'

_l__1 __.,.
:0 1~

(a} (b)

~------~

(c) (d!

" _-.
i\
l
'' ." ',
' I
•oo
(e) (f)

Figure 11.6: Spectral anaiysis using a Hamming windo-v.·.

are time~ invariant and independent of the signal length. There are practical situations where the signal to
be analyzed V:; i.nMead nons.tationary, for which these signal parameters are time-varyiag. An example of
such a rime-varying signal is the chirp signal give:& by
(1 Ll5)

<md :;huwn in Figure 11.? for w" = IOn x 10-:.. Note from Eq. (11.15) that the instantaneous frequency
ofx[nj is g:iven by 2uv1. which is not a constant but ir:.creascs linearly with time. Speech, radaT, and sonar
signals are other examples of such nonstationary signah;. A d~criptioo of such signals in che frequency-
domain using a simple DFf of the complete signaJ will provide misleading results. To get around !he
time-vaf)·ing uature of the sigll<ll parameteni, an alternative approach would be to segment the ;;equence
into a :,et of subsequences of shon length, with each subsequence centered at unifonn intervals of time
dlld its DFT computed >eparately. If the subsequence length is reasonabJy small,. it can be safely assumed
to be stationary for practical purposes. As a result, the frequency-domain description of a long ~quence
is given by a set of short-length DFTs, i.e.. a time-dependent DFT.
To represent a non stationary signal ;:o;[ n j in terms of a sel of short-length subsequences, we can multiply
it \vith a window w{n }that is stationary with respect to time and move the signal thr-ough the window. For
766 Chapter 11 : Applications of Digital Signal Processing

------,--- -- --
] 0.5
\'
~ u
-< -0.5 \ • I
\_!
.
-I _ _ _ ....__L_____ ___ _ _ . _ _ _ _ _ _ ~- --·-

200 300 500


0
'""
Figure 11.7: First 800 •amples <1f a casual ehi;p signal cos(w0 n 2 } with wn = I Orr x w--S.

'i r, /\ (\
:g
1.l 0..5
~ 051 ,/ \ I' '\, ,I I ! I, '
- ! \ I \i
l-0.:_, -~ -!J.')0•1 II
3. \
\
·,
/1 \
j
.' I !.I
'1
I'
-~ -·
I/
/ \\}!
"J ~I
l
1J
\ I

-1 [__ __ ·-----~~-~
0 50 J(J() !50 200 WG i50 200 250 300
Time index n Time index n

l ~\ .'\
., os1, / \ '(r.
~ : \ I \ I. '
0.. 0\ \ \
E ' -~ , , 1
<-O.s!,'
l I
',I
• ,

I L\1__~ _l
200 250 300 350 400 .mo 351} 400 450 500
Tune index n Time index n

Figure 1 LS: E-.;ample:s of sub~uen<:es of t!le chirp signal of Figure J 1. 7 genenueC by a !ength-200 r«tangular
window.

exampJe, Figure l L8 shows four segments uf the chirp signa! of Figure 11.7 as seen through a stationary
rectangular window of length 200. As iilustrated in this figure, tbe segments coolrl be overlapping in
time. A discrete-time Fourier transform of the short sequence obtained by windowing is called the short-
tenn Fourier transform wh1Ch i<> thus a function of the locauon of the wiru:kw.t relative to the original
long sequence and the frequency In this section \I.e review the basic concepts associated with rais type
of trans.fonn, study some of its properties, and point out one of its important appllcations. A detailed
exfosition of tlris subje..."'l: can be found in fA.li77}. INaw88}, [Opp89], and fRab78).
11.3. Spectral Analysis of Nonstationary Signals 767

11 .3.1 Short-Time Fourier Transform


The- shori-limF Fourier tmnsjonn (STFT). also knovm as the time-depmdent Fourier tran.iform, of a
,.;cqucncc _x In I i;;. defined by
=
XsTFT{ei 0 ,n}= L x[n-miwfm]e-f«--m_ {11.16)
m=-:><,.>

where m[n] is a ;;.uitably chosen window sequence. It should be noted that !he function of the window is
·:o extract a fioite-length portion of the sequcn...·e x{n} ;;.uch that the spectral charucteristics of the section
-~xu-acted are approximately stationary over the dumlion of Lite window for practical purposes.
Note that if w[nj = I. the definition of STFT given in Eq. (11.16) reduces to the conventional discrete-
lime Fourier transform (DTFT; of x[nj. Howe·.rer. even though the DTFT of x[n} exists under certain
..,·ell-defino:d cnnditioos, the wmdowed sequence in Eq. (J 1.16) being finite in length ensures the existence
•)f the .STFT for any sequencl! x[n 1. It should be noted :;!so that, unlike the conventional DTFf, the STFf
is a function of two variables; the integer variable t:r:te index n and the continuom. frequency variable w.
At also follows from the defmition of Eq. (I 1.16), th:;t X.HFT(ej«>, n} is a periodic function of w with a
period 2Jr.
Jn muM applications. the magnitude of the STFT is of interest. The- display of the magnitude of the
S.TI--<1 is usually referred to a.s the spectrogram. However. ,;ince the STFf is a function of two 'ltar:iables. the
display of its magnitude would normally require three dimensions. Often, it is plotted in two dimensions,
with the magnitude represented by the darknes.<; of the plot. Here. the white areas represent zero-valued
magnitudes while the gray areas represent nonzero magnitudes, will the largest magnitudes being shown
in black. In the STFf magnitude display. lhe vertical axi;; represents the frequency variable (w} and the
horizontal axis represents the lime mdex (n). The STFT Ciln atc.o be visualized as a rnesb plot in a three-
dimensional coordinate frame in which the STFT magnitude is a point in the :--direction above the x-v
plane.
Figure 1! .9 shows the STI-<'T of the chirp sequence of Ey. ( 11.15), with w'-' = !Orr x w- 5 for a length
of 10,000 samples computed using a Hamming window of lenglh 200 in the above two forms. In Figure
l 1.9, the STFT for a given value of the lime index n is essentially the DFT of a segment of a sinusoidal
;,equence. Recall from our discm>sion in Section 11.2 that the shape of the OFf of such a segment is
;;imilar to that shown in Figure 11.3, with large nonzero-valued DFT samples around the frequency of the
:.inusoid and smaller nonzero-valued DFf samples at other frequency points. In the spectrogram plot, the
hrgc-valued DFf samples show up as narrmv nearly black very short vertical lines while the olher DFT
samples show up as gray points. As the imtantaneous frequency of the chirp signal increases linearly, the
short hlack line moves up in lh~ vertical direction and eventually, because of aliasing, the black line .starts
~1o•·ing down ln the vertical dnection. As a resuH. the spectrogram of the chirp signal essentially appear~
m; a thick line in the fom1 of a tri:mgular ~hape.

11.3.2 Sampling in the Time and frequency Dimensions


L1 practice. the STFf is computed at a :fi3.ite set of discrete values of w. Moreover, due to the finite length
of rhe windowed sequence, the STFT is accurately represented by its frequency samples as long as the
number of frequency samples is greater than the window len_glh. Moreover, the portion of the sequence
x'n i inside the window ~an be fully recovered from the frequency samples of the STFf.
Tb be mnre precise. let the -..,-indow be cf length R defined in the range 0 ::; m ::: R - I. We sample
X STI-"T k 1 "'. n) at N equally spaced frequencies w1 = 2Ir k / N. wi[h N _::: R as ind.icated below:
768 Chapter 11: Applications of Digital Signal Processing

,_,
= L;xln-m]wlm]e-i 2""km/N. 0 S k S N- 1. {11.17)
,.,.,=0

It follows from Eq. (ll.l7}, assuming w[m] ::j:: 0, that XsTFI[k, n] is simply the DFf of xfn - m]w[ml-
Note that XsTFT[k, n] is a two-dimensional sequence and js periodic ink with a period N. Applying the
IDFT, we thus arrive at
.\'-1
, ] = NI "L.., XsTn(k , n]e-
x[n -m]wvn J''rrli:l.,•N
' • (1L18)
k=O

or in other words,
N-1
xfn -m} = I " X (k n ieJ?.-,Jmt.v.
Nw[m] L.., STFT '
{1 L19)
k=O

verifying that the sequence values inside the window can be fully recovered from XsTFJ[k, n] as long as
the number N of frequency samples is greater than or eyual to the window length R. It shouJd be evident
by now that x[n] for -oo < n < :Xi can be fu.Jty recovered if XsTFT(ej"'-'. n) or XSTI"T~k. n] is sampled
also in the time dimension. More precisely. if we set n = n., in Eq. (11. J 9), we recover the signal in the
interval no ::5 n S n 0 + R - 1 from X SlFf(k. n 0 ]. Likewise, by setting n = no + R in Eq. (1 J .19). we
recover the signal in the interval n, + R ::::: n ::S n" + 2R - I from X sTFrlk, n 0 + R], and so on.
The sampled STFf fO£ a window defined in the :-egion 0 :;; m :;; R - 1 is given by
R-1
XsTFTfk . .fLJ = XsTFT{ejhk/N. CL) = L xlfL - m]w[mje-JZ;o;~mjN. ( !1.20)
m=O

e
where and k are integers such that ->x < i < oo and 0 ::: k _::: N - 1. Figure 11. J 0 shows Jines in the
(w, n)-plaoe corresponding to Xsm{elw, n) and the grid of sampling points in the (w, n)-plane for the
case N = 9 and L = 4. As. we have shown, it is possible to uniquely- reconstruct the original signal from
~uch a 2-D discrete representation prO\•ided N ~ R ~ L.

11 ,3,3 Window Selection


As in the case of the DIT-based spectral analyo.is of detem1in.istic signals discussed in Section 11.2, in
the STFT analy;ois of nonstationary signals, the v.i:indaw also plays an important role. Both the length and
shape of the window are critical issues that need to be examined carefuiJy.
11.3. Spectra! Analysis of Nonstationary Signals 769

"' \"''lk.:t.;

(a) {b)

Fi~ure 11.10: Sampling gnd in the{"-', n)·plane for the sampled STf<T XsTFTl!.. iLl fur N = 9 and l. = 4.

The function of the window w{nj is to ex.tra-.:1 a portion of the signal for analysis and ensure that the
extracted section of xl_n) is approximately stationary. Tn thi~ end, the window length R should be small, m
p.arucular for signals with widely varying spectral parameters. A decrease in the window length increase~
the t:me-reso~ution property of:he STFI, whereas the frequency-resOlutiOn property of the STFT increa'>es
'Nith an int.:rea;.;e in the window length. A shorter wintlow thus provides a widt>band .VJeclrogram while a
longer window re:>ulL'\ in::. narrowb£~nd spectrogram.
The two frequency-domain parameters charactenzing the DTFT uf a window are its main lobe width
.6._rqL and the relative side lobe amplitude A,.r. The former parameter determines lhe ability of the window
to rewlve tw;:> ~ignal component& in the vicinity of each other. while the llitter controls the degree of leakage
of on,; component into a nearby <>ignal component. lt thui-i follows that in order to obw.iro a reasonably good
e~aimace of the frequency <;pect.rum of a time-varying sign;;L the window should be cho;;en to have a very
~maH rdatwc siddobe amplitude with a length cho:.en ba~ed on the acceptable accumcy of the frequency
and ume r~.:;olutions.

11.3,4 STFT Computation Using MATLAB


The Signal Processing Toolbox of MATLAB includes the function specgra:n for rhe compuralion of the
STFf of a s.ignaL There are <t nurr..ber of versions of this function, as given below:

B specgramlx),
B s~ecgram (X, nfi:t l
~i3, [ i specgram(x,nfft.Fs)
]B,f,::] spec>;_~rar:l ~x, nfft, Fs)
specgrarn ~x, nff c. Fs, w:..nCow)
E specgram~x,nf:tc.,Fs,window,noverlap)

3 specgra.m (x) computes the STFT of the signal x ~pedfied as a vel:tor using u DFT nf length
~~ f f L "" s.:..r:. ( 2 56,
ler.g t h (x; 1 and a Hann window of length nf ft.. The sampling frequency
used i~ fs = 2, and !he number of sampl€s by which the consecutive segments overlap is gi'>en by
!lover ::a.p ~ .Ce:1gth :window/ 2}. For a real signal x, the DFf is computed at positive frequencies
only. whereas. tOr a complex x, the DFr is computed at both pos',tive and negative frequen<.'ies. The
column indlce1' of B refer to the time position of the window with time increasing acros.' the cnlumns, and
the TQ\.\." indices of B correspond to the frequency with the first row corresponding tow = 0, a~ mdicated
in _Figure 11. i O(b-l.
770 Chapter 11: Applications of Digital Signal Processing

The paramCter5 nf f L, Fs, a.Jd r:ove-:: l apcan be specified along with the type ofwindow,depending
on the ven;ion of f'Pecgrrun being u!>ed, If a scalar integer is used fondndow, the function uses a Hann
window of length given by this integer. The specified window length must be less thaa or equal ton ff L.
The vector of frequencies i at which the DFf is computed is returned when [ B, f l = specg:;:a.n
(x, nff l, Fs) is employed, whereas both frequency and time vedors, f and t, respel..'tively, are returned
in the versJon [ R, ::, t.] = specgru_T, ( x, n fft, C"s :•. In the latter case, tis a column vector of scaled
times llild of length equal to the number of columns of E. The first element of :: Js always 0. specgr a.c
with no output arguments generates and piots the scaled logarithm of the spectrogram.
We illustrate the application of spec gram in the fOllowing section.

11.3.5 Analysis of Speech Signals Using STFT


The short-tenn Fouriertransfonn is often used in the anaJy,;ls ohpeech, since speech slgnafs are generally
non~ationary. As indicated in Section 1.1. the speech .">ignul, generated by the excitation of the ..,.-ocal tract.
is composed of two types of busic waveforms: voiced and unvoiced sounds. A typical s~-cl.i £ignal lS
shown in Figure l .17, As can be seen fmm this figure. a ._p~ech segment over a small time interval can be
considered as a stationary :-;ignal, and as a result, the DFr of the speech segment can provide a reasonable
representation of the frequency-domain :::haracteri~tic of the speech in this time intervaL However, in the
STFT analysis, the size of the wifldow is criticaJ since a shorter window developing a wideband spectrogram
provides a bcttcr time resolution, whereas a longer window developi11g a narrowband spectrogram results
in an improved frequency resolution. In order to provide a reasonably good estimate of the changes in
the vocal tract and the exdtHtion, a wideband spectrogram is preferable. To this end, the window size
is selected to be approximately close ro one pitch period, which is adequate for resolving the formants
though not adequate to resolve the harmonics of the pltch frequencies, On tbe other hand. to resolve the
harmonics of the pitcb frequencieS, a narrowband spectrogram with a window size of several pilch periods
is desirable.
The following example illustrates the STFT analysis of a speech ~ignal.

: ~::~;:~~;''\TT~';' ,,
$
"~

}UJ+tt Wit H \
tc" L«tWtt

nf:f::L tntHrtt":' ;' > r:rn3thw •~~,,,~\,


r::f:V%)1q;p 1':Vl'' ,: :;; ?,k,'\1\ ,'JsHGl TJJ'L'\ \"0':
0'1t£,7il:X£ru$$lf" "kl" , L f : : 4 }, :F , ' ; 0IWfJ\ ; ?itf XWf f t: i ~~"!' f 4 ,;, ;

Pf 18Si' 'P"''*!" .,,


w;wc1tii01if;n;ctn +4
ttu ! 0t' Gf\PH 'HJ(tia:dlt! 'V' 71,0,,!0
771
11 A. Spectral Analysis of Random Signals

Figure 11.11: (a) A ,-;peech signal and its (b) narrowband spectrogram, and (cj wideband spectrogram.

11.4 Spectral Analysis of Random Signals


As discussed in Section 11.2, in the case of a deterministic signal composed of sinusoidaJ components, a
Fourier analysts of the signal can be carried out by taking the discrete Fourier transform (OFf) of a finite-
length segment of the signal obtained by appropriate windowing provided the parameters characterizing
the components are time-invariant and independent of the window length. On the other hand, the Fourier
analysis of nonstationary signals with time-varying parameters is best carried out using the short-time
Fourier transform (STFT) described in Section l 1.3.
Neither tle DFT nor the STFT is applicable for the spectral analysis of naturally occurring random
signals as here the spectral parameters ace also nrndom. These type of signals are usually classified as
noise-like random signals like the unvoiced speech signal generated when letter such as Qf'' or ~s" is
spoken. and signal -plus-noise random signals such as seismic signals and nuclear magnetic resonance
signals {Rob82]. Spectral analysis of a noise-Ii;ce random signal is usually carried out by estimating
the power density spectrum using Fourier-analysi~-based nonparametric methods, whereas a signal-plus-
noise random signal is best analyzed using parametric-model-based methods m which 1le autocovariance
772 Chapter 11: Applications of Digital Signal Processing

sequence- is first es.timated from the model and then the Fourier transform of the esrimate is t>V"aJuated. In
this section we consider both of these approaches.

11.4.1 Nonparametric Spectra! Analysis


Con~idera w;de-~nse stmionary (\VSS) random -;ignal g[n] with zero mean. According to the Wiener-
Khinkhine thoorcm of Eq. (3.14.)), the power :;pe;:tmm of f{{n l is given by
X
~
,..-Rii. (nJJ' = ' r
L...., A.
'f'gg
r··r e -<w'
f __ • ' (1 L2J I
t~-:>...-

where 1/.lgK{fl i-,; ib autocorrelation sequence, which from Eq. (2.160) is given by

<f;i{glfl = E(g[n + tlR"'[nJj. (I L22l

Itl Eq. ( 11.22), £(-)denotes the expection operawr as defined in Eq. (2.122}.

Periodogram Analysis
Assume that the iufinite-Jength random discrete-lime signal gfnJ is windowed by a length-A' window
sequence wl_n), 0 :S n :=:: N - 1, resulting in the length-N sequence y[n J = g1n] · w[n }. The DTFT f'{ew':l
of y [n J is given by
N-l h'-.
J(el'") = L y[n]e-i"'-" = L gfn}· w[nle-J',m. fl L2J)

The e-stimate PK..:(N) of the power "pectrum Pu(m) is then obtained using

CN 'r',{- "'ll ,
= --,
-
~ ' 1. 2 (11.24)
r~_, 1
_,_, _r:.)),

wtere 13e {;OilS!ant C IS a normalization factor given by

(11.25)

al\tl included iD Eq. (11.24) to eliminate any bias in the estimate occurring due to the u~ of the window
w!nJ. The quanti:y ·P;u;kiw) define-d in Eq.-{11.24) is called Ll:te periodogrum when w[n] is a re-ctangular
window a:1d called a modified periodngram for other types of wind01NS.
In practice. the periodogrum P:;:g(w) is. eva!_uated at a discrete set of equally spaced R frequendes,
"-'k = 2rrkj R, 0 ::=: k :s_ R- I, hy replacing the DTFT r{ei'-') with an R-point DFf r[k] of the Iength-N
.<.eq~en1..-c yrnl:
{11.26-)

A'- in the case of the Fourier analysis of :s-inusoidal signals discussed earlier, R is usually chosen to be
greater than N to provide a finer grid of the sample.-> of the ~riodogram.
It can be 5.hovm that the mean value of the periodognun P;c?.(w) is given by

(I L27)
11.4. Spectral Analysis of Random Signals 773

Fignre 11.12: Power ;.pectrum estimate ol a ;;lgna\ C'Dfltaining two sinusoidal components cJrrupted with a white
noise sequence of zero mean and unit variance Gaus-,;ian distribntion. (a} Periodoyam with .a tectangular wmduw of
length N = 128, and (b) periOOogram with a rectangular window of length ,";f = 1024

where Pgg(w) is the desired power spectrum and \l.<(ejw) i;,; the DTFf of the window sequence wlnj. The
mean value being nonzero for any finite-length window sequence, the power spectrum estimate given by
the periodogram i~ said to be biased. By increasing the window length N. the bia." can be reduced.
We illustrate the power spectrum computation of a whtte noise sequence in the following example.

To undcn.tand the cause behind the rapid amplitude variations of the computed power spectmm en-
cout'\_tered. in the previou;:.example we assume wlnJ to be a rectangular window and rewrite the expression
foe the periodogram given in Eq. (11,24) using Ec~:. (11.23) as

• 1 ['i-J .\I-I
Pu(w) = N LL ,!?[m]g*lnk-;w\m-"1
"=0 m=O

-'·=-1\-t I

(11.28)

Now ¢ulkj i~ the periodic correlation of g[nl and is an estimate of the true correlation ¢xxlk]. Hence.
Pf'.f!(w) is actually the DTFT of .P~gfk}. A few samples of g[nJ are used in the computation of¢gg[k] when
k is t:.ear N yielding a poor estimate of the true correlation. This in tum results in rapid amplitude variations
774 Chapter 11 . Applications of Digital Signal Processing

in the periudogmm estimate. A smoothe: pt•wer -:pec!rum esnmatc ;:an he o'-tained b)' the periodogram
av•!raging method di8cussed next.

Pe·riodogram Averaging
The power spectrum estimation method, originally proposed by Bm1lett [Bar48J and later modified b}'
Wdch l We167}, i<> based on the c-omputa:ion of 1hc modified periodogram of R overh:ppmg portions of
length-N input c.ample~ and rlli:n .averaging thcS<_ R ;:oeriodvgrams. Let the overlap between adjacent
<;egmcnts be K sar.1ples. Consider the windowed r!h ,cgment of the input data.

(11.29}

with a DTFT given by r(r; (ejw;. J:s periodogram is given b;.

( 11.30)

The Wekh estimme is then given by the .averuge of :til R periodograms P~~J (w), 0 _:::: r <

{1 1.31)

Jt (:an be shown that the variance of the above estimate .i._,_ reduced approximately by a factor R if the R
petiodogrnm estimates are assumed to be independent of each other. For a fi~ed-length input sequence. R
can be increased by decreasing the window length N which in turn decreases the DFr resolution. On the
other hand, an increa__~ in the resolution is obtained by increasing N. Thus, there is a trade-off between
resolution and rhe bias.
lt should be noted that if the data sequence is segmented by a rectangular window into contiguous
segments with no overlap, the periodiogram e;:timate given by Eq. (11.31} reduces to Barlett estimate
[Bar48].

Periodogram Estimate Computation Using MATLAB

Thto Signal Prm:e.uing Toolbox of MATLAB includes the M-file ;::.:.;d for modified pe-riodogram estimate
cor.1putation using :he Welch and Bartlett methods. Some forms of this function arc

pst.(x)
psd{x,:J.tf l)
t?xx,[] psd ( x, .'l [ ~ t, FT)
Pxx p,:;d ( x, r; f fc: , F'T, •,~o:indow;
?xx psd ~ x, r;f i t , FT, >si::-,doN, no',ie_- ; ap)
py,.x ps:1 i x, r: f ft., F':', ·.-;i:-:tdn~,>;, ::10':c1 _:__ap, 'Gf lag' :,

wht x Px:o.; is the power spectrum estimate of the real-va:ued ~equence x evaluated at positiveo frequencies.
n f t t_ is !he desired FFT length. fT is ihe sampling frcquen::y whose default vdue is two. wir:doY.' is the
vc;;lor of the de;;ired window se4uc-nce which need~ to be generated prior to power spectrum estimation.
The sire of Tw i.:1C.r-,w is med hy psd in sectioning the data vector x. The default wir.dow is the Hann
win.kY><~ of lenglh 25-6. If a scalar number is provided for window, then a Hann window of thxt length
Js u.o.ed. The length of the window must be les~ than the size of the FFI given by n f ft. The parameter
~;::Jver lap i;;; the number of samples by which the data segments overlap. The string 'df lag' i-s uMXI
11.4. Spectral Analysis of Random Signals 775

wL--cc--~-~co-
o o; 0.2 o3

(a) (h)

Figure 11.13: Pov,:erspectrum e"timatn;; (a) Bartlett's meffiorl, and (b) Welch's raethod.

to indicate the desired detrending option, where it is ' l i r:ear' to remo-ve tire best straight-line fit from
the segments of the data segment prior to windowing, 'mean' 1o remove the mean from the segments of
the data segment prior to windowing, and 'none' if no detrending is desired. Note that if nover lap is
set tu 0 and a rectangular window is used. psd evaluates tht Bartlett petiodogram.

"'nj "" &±4:


tiL :S:J\0\r
kll*l~&+ff{
,,
y

fJ 1
776 Cilapter 11: Applications of Digital Signal Processing

11.4.2 Parametric Model-Based Spectral Ana!ys1s


In the model-based method, .a causal LTI discrete-time syste:n wit.!._ a tr;:msfer function
0G

H\zJ = Lh\nj::-"

P(,)
{_ l I .32)
Dfz) ,-·M d k
1 + LA:=l F.

is firsr developed whose output, ....+.en excited by a white nmsc seque-nce e[n] with zero mean ar.d variunce
a'}, matches the specified data '>equence gin l 1Kum931. An advantage of ;:he model-based approach is
that it can extrapolate a short-length dam sequence lo create a longer data sequeoce for improved power
spe<:trum estimation. On the other hand. in nonp2ramctric methods. spectral le-akages limit the frequency
resolution if the data length is short.
The model ofEq. {I t.32j is called anauton--gres.sive monng-average (ARMA) process of order (L. M)
if P(z) -f= 1, an all-pole or autoregressive ( AR)process of orde-r M if P(z) = I, and an all-zer-o or moving-
average (MA) process of order L if D(z) = l. ForanARMA or an.4.Rmodel, for stability, the denominator
D(z) mru,."t have all its. zeros inside the unit circle. In the time-d()main, tbe input-output .relation of the
model is given by
M f.
gin]~- Ld>K(n- k] + L Pke[n- k]. (11.33)
k=l k,o(]

As indicated in Section 4.13.1, the outpuc g[n] of the model is a WSS random 1>ignal. From Eq. (4.214) it
foHows that the power spectrum PgR{w) of g[nl can be expressed as

where H(el"') = P(el"')l D(el"') is the frequency response of the mode-l, and
L M
P{ej"') = Lp~ce-jwk, D(eJ"'I = 1 + Ldke-J"-'k.
b{) k=l

In the case of an AR or an MA model, the power spec1rum i~ thus given by

o}-!P(el"')(", foranMAmodel.
= ""-z (1 !.35)
"Pgg(w)
-
l ID(-ef.._:(i' foranARmodel.

The spectral analysis is thus carried out by first determimng the model and then computing tire power
sp«trum using either Eq. (11.34) for an ARMA model or u.o;,ing Eq. {1 I 35) for an MA or an AR model.
To de:ermine lhe model we need to decide the type of tin: model (i.e., pole-zero IIR structure, all-pole
HR -structure, or all-zero F1R ~tructure} to be used, determine an appropriate order of its transfer function
H(z) (i.e .. boL'l L and M for an ARMA model or M f()f :mAR model or L for an MA model.}, and
then fcom the spe..:-ified tength-N data ,>?[nj e;.timate the coefficients of H(z.). \Ve restnet our discussion
here to t..~e development of the AR model, as it is simpler and often used. Apphcahom: of !heAR !nOdel
include spectral analysis, system identification, s.peech aualysis aruJ compres:-.:ion, and filter design. For a
discussion on the development of the MA model and the ARMA mo..-iel, see !Kum93].
11.4. Spectral Analysis of Ranoom S:gnals 777

Relation between Model Parameters and the Autocorrelation Sequence


The model fil:er coefficiems {p1 } and {d~o} are related to the autocorrelation seqtience ¢n, lfl of the random
;,igna! g[n). To es.tabli,;h this relation, we ohtain fmn: F....q. (11.33},

Af L
¢>;.o:lil = - L dk4l',:;rif- k] +L Pk.Peg[t- kj. -00 < {_ < 00, (11.36)

hy multiplying both sides of the equation v.-ith g*{n- t] and taking the expected values. In the above
e:~preS>.ion, the c:nss-corr-elatior. ¢..-g[i] between g[n] and e!nJ can be written as

¢,.g[tj = E(g''hldn + i])


=
= Lh*!kl E(e*~n -k]e!n +£)) =o-}h*[-fl. ( 11.37}
1.:=0

where h{n] is the causal impulse response of the LTI model as defined in Eq. O L32) and a] is the variance
of 1he white noise .-.equence efn 1 applied to the input of the model.
For an AR model. L = 0, and hence Eq. (ll.J6) reduces to

- L~ 1 dk<PuiF- kl. fort > 0,


rPe;;[f] ={- L!~ 1 d~.;dlgg{f- k~ + o}, fore =0, (l 1.38)
.P,;,.!-n. fore < o.
F.·om Eq. ( 11.38) we obtain for 1 :::=: .[ :;:: /l,.f, a set of M cquat~ons.

M
Ldup,li- kl ~ -¢,g[t]. 12_€::CM.

-
k·c I

wh.ich .:an be written in matrix form as


¢,,oJ ¢gg-L-M +II

l .Pu·i l I

.P~:xlM- 1] ¢.-:s[M- 2]
¢RK~-,\.f + 21
lJ (I 1.39)

h.Jr i = 0 v.·e also get from Eq. ( 1138)

¢"n~OJ + Ld~:¢'g,.l-kl =a}.


k=f

ComDin:ing !he abm·e wilh 1:-',q. ( 11.39) we arrive at

J[ ~~ l~
u''
I
¢'g>;i01 1'n·~-l] ¢g"I-M]
¢g,_,[ I] fOI 0
¢gg ¢K.d-M -t· IJ
0 {11.40)
<Ps,dMl ¢~:x!M- 1] d>g~:IO!
du ..1 0
778 Chapter 11: Appl:cations ol Dig1tal Signal Pre-cessing

·r;1c aboYe matrix equanon is more commonly known as the Yule-R'Cdker equation. ~t can be seen from
Eq" (11.40) lhat knowing theM+ 1 autocorrelatmn sample~ <Pu[£] for 0 ~ f :::= M, we can detenninctlle
modd pnrameten; dk for I :;:: k :::= M by solv:ing the matrix equal ion. The (M + l) x {M + l) matrix in
Eq. 11: .40) is a Toepli:z miltrix. 2 Became of the structure of the Toeplitz matrix, the matrix equation of
Eq. (11.40} can be solved usin~ the fast Leviuson-Durbin aigorithm [Lev47]. [Dur59j. Th~ cau~al all-pole
LTI.system H{::) = 1/D(::.) rcsul~ir:g from the application ofthe Levinson-Durbin recucsions is guar:mteed
to he B!BO stable. Moreover, the recur:.ion automati<-·dly leads to a reaEzari.on in the form of a cascad.;:d
FIR !:mice structure as shown m Figure 6..19_

fl.[)Wer Spectrum Est1mation Using an AR Model


The AR mod.o-1 parameters can be Jetennir:.eJ m.ing the Yule- J.Valker method. whid, makes !.he of th~
estimates of the autocorrelation sequence samples as the1r actual values are not knCJwn a prion. The
mttocorrelation at lag -i' is determi:Ied from the specified data s.amples g[n] for 0::: n. .::: N- l using
N-l--lfl
¢' 88 [£J = N1 '
L..... g'"[nJgrn + t], o:-::t:::N-1. ilL41}
1!=0

The above estimates are used in Eq. { 11.39) i~ place uf the true autocorrelation samples with the AR model
pzrameterH d~:. replaced with their c~timate,.; d~:.. The resulting equation i" next solved using the Lcvimmn-
D<Jrbin algorithm to dete~ine the estimate-. of the AR model parameters d~:.. The power spectrum estimate
is the:~ evaluated using
? ,">f L•
p#IK (w) = c-----""'---- ll 1.42~
j I + L;~; ;A e-J"-->t f".
where EM is the prediction error for the Mth-order AR mode!:

(11.43)

The Yule- Walker mefhod is rt'l<tt.:=d to ~he linear prediction problem. Here !he probl-em is to predict the
N-th s.ample g(N] from the previous M data samples s[n J, 0 ~ n :-::= M- I, with the assumption !hat data
sample~ outside this range are zeros. The predicted value .§[nl of the data sample g[nj can be found by a
lmeac <:<lmbiru:tion of the previous A1 data samples as
M
.~fnl = - L'Jgfn- kl = g£n]- c[nJ, (! 1.44)
k=!

where f'[n J is the prediction error. For the spc:::lfied duta sequence, Eq. {1 1.44) leads to _IV- M predic!ion
t."quatiom; given by

gin]+ L g:n- kJd;_ '--- e/n]. U_<n:::::N+M-1. (1 i .45)


~=·
The oprirnum lmear predictor coeffi{:ients J,. are ob:ained. by minimizing ;:he error *'
L:~jjW-l ldn ]! 2 . It
can be shown that the- solutwn of thL"- minimization problem io. given by Eq. ( 11.39). Thus, the be.'it all-pole
line;1r predictor :liher is also the AR model resultiug from the solution of Eq. (J 1.39).
2
A T<>eriitz m,nnx l:a> the "'""" elemer.t '"--'=~ aloog each diagonal
11.4. Spectral Analysis of Random Signals 779

><~--

.~... - --
0.,

Figure 11.14: Magnitude reiipum.tc of th.o FIR filter (shov.m with "olid line) and the ~11-pole llR model (sbw.vn with
dao.hd !i=).

It should be noted that the AR model is guaranteed sta.bie. But the aU-pole filter developed may nat
model an AR proc~ exactly of the same order due to the windowing of the data sequence to a finite length
with 'illmples outside the window range assumed to be zeros.
The function ::. pc in MATL.AB finds the AR model using the above method. Its basic form is

[d,K] = lpc{g,M)

where C. and K are the denominator coefficients and the gain of the AR model of order M, respectively. and
g is the specified data sequence. We illu5trnte its application in the foUowing example.

f: ¥'r:t>:77A'90i l :t~L
'0 t:¥/10!\ ':!" 1<< FJi: !ts~J

j<: tt, ' rj; t 5£41,


t,; rk. \};
;r. , iljl} 4 f¥£n;;(?P¥"-if;"t}, [i~;(j f
:::n:: dtt!:xAU.Itt
Y &kst· \ +r ' 1t'iwm$0 • ' 11? i · f

In order to apply the .above method tc power .<.-pectrum estimation, it is necessary to estimate first the
model order M. A number of formulae has been advanced for m"der estimatiQn [Kum93}. Unfortunately,
none of the the,o;e formulae yields a really good estmate of the true model order in many .applK:ations.
i'BD Chapter 11: Applications of Digital Signal Processi'lg

11.5 Musical Sound Processing


Rec;;J~ from our discussion In Section i .4.1 that almost :.rH music:Ji prog.r&ms are p:uduced ·m basically
two ".tages. First. snund from each indiv\dual m;;trurn::nt j, rccmded in an acoustically inert studio Ofl a
single track cf a multitrack tape rcC~)I der. Then. the o.ig.nal~ f rum each track <>re manipulated by the :o;ound
engineer to add special audio effects and me combined in;:; mlx-down system to finally g:enemte the stereo
recording on a two-track tape re.t:order 1B1e7SI. ! EarX6]. The audio effects are artificially generated using
various signa! processing circuits and devices. ami they Hl' mt·r~asingly being performed u~ing digital
signal proccs~ing tech!!ique;., [Ble7&]. f0rf90}.
Some of the ,;~dal audm effec-ts \hat :;an he implcmcnt.:d d.igitall y <tre 1 ~v;ewed ;n this se;,:tiun.

11.5.1 Time-Domain Operations


As indicated in Section 1.4.1. the <>ound :-ea:::hing [he ;i1'>t..:ner in a closed space. such as a cpncert hall.
consi,.,ts of ;:.cveml components: direct -.ound, early rcflcc{ion~. and reverberatton. The early reflections me
com;JO!;ed of several closely sp~ced e<.:hoc~ that are basically delayed and attenuated copiel> of the direct
sound, whereas the reverberation i:-. composed of densely packed echoes. The sound recorded in an inert
stJdio is different from that recorded inside a dosed space. and, a:;. a result, the fonner does not sound
··natural" to a listener. However, digital filtering c~n be emrloyed to cunvert the sound recorded in an inert
studio imo a catuml sounding one by artihcia11y creuting the cchucs and adding it to the original signal.
E-choes are simply generated by delay un;ts. t'orexampk, tbe direct suund and a single echo appearing
R <>amplingperiods later can be simply generated by the FIR filter of Figure ll. i S(a). which is chamcterl/...ed
by !he difference equation
yfn1 = _t~n] + O'Y[II - Rl- 0 1.46)
or, equivalently. by the transfer function

(11.471

It~ impulse respohse il'l ,.keoched in Figure l I .l5{b). The magnitude respon;.,e of a single echo FIR filter
for a = 0.8 and R = 8 is shown in Figure 1Ll5{cj. The magnitude response exhibits R peaks and
R dips in the range 0 2 w < 2n, with the peak~ occurring at w = 2nkfR and the dips occurring a<
w = (2k + l)n I R, k = 0, l. .... R - I. Because of the cmnh-Jike shape of the magnitude response. su.:h
a filter is abo known as a comb filter. The maximum and m,ninmm values of the magnitude response are
given by 1 ...__a = 1.8 illl.d I - u = 0.2. respectively.
To generale multiple echoes spaced R sampling periods <>pan with ex.ponentially decaying amplitudes,
one can u"e an FIR filter with a tmmfer fvnction of the fomc
l -aN::.--"'R
Hi 7.) = 1 + a::-R + a2z-'2R + ... +aN -! ::.-' .Y-i;R = '-cc-='-''-c,- t! L48)
l - az k

An HR realization of t.lJ.is filter i~ &ketched in Figure ll.l6(a). The impuJse response of a multiple echo
filte-r with a= 0.8 for N = 6aml R = 4 is ;,;hown in Figure ll.l6(b).
An infinite numher of echoes .'--paced R sampling period~ apart with exponentially decaying amplitude"
can be neated by an HR filler with ;; transfer function of the fonn
__ ,
H (z.) = --'---,,
l -a:_-·
:a: < !. (ll .49}

Figure 11. !7(a) ,;hm'>'S one possib1c realization of the ahnv" !lR filter whose ilf<;t 6l impulse response
samples for R = 4 are indicated in Figure I L I 7(b). The magmtude rcs.ponse of this IIR fi!kr for R = 7 is
115. Musical Sound Processing 781

.t!nj-0----~+ yj_n ;

+~~~~~-!;-~~- r.
I) R
(b)

2 ----

i\

' ' I
'~<' "' '' ''
'
'',, "'I'
',, '

' '• ' ' ,

' '"
ic}

Figuf'l" 11.15. SJDg.le echo !ilte:r_ (a) l-'ilte:r st:ruclure, {b) typical impulse respon:o;e, and (c} magmtude response for
R =A .md or = l}.8.

.t!n I .>-lnl

H; !5 ?.1

(a) (b}

FiJ;::on- 11.16; Mul~iple e<.:ho liltcl- generating .V - I echoes. (a! The filler structure, and (b) impulse re-sponse with
ce ~= O.S fo;- N = 6 ami R = 4.

sketchdin Figure I 1.J7(c}. The magnituceresponseexhibih R peak~ and R dips in tberangeO _:::: w < :br,
with the peaks tK-curring at rv = L"'r k/ R actd the dips ou::urr_ng at VJ = (2k + l}n I R, k = 0, I, ...• R - 1.
The maximum and minimum value" oft he magnitude reo;;ponse are given by 1/(l-a) = 5, and lj( l +a) =
0.555fi, respectively.
Tte HR comb filter of hgure ll.l7(a) by it~lf does not provide natural sounding reverberations for
two rcawn..;; !S~h62l Fir"t. as can be seen from Figure- I L 17(c). its magnitude response is not constant for
all frequcncit:>. resulting ir. a ''ctJlorarion" of many musical sounds that are often unpleasant for listening
purpmes. Second. the- oc!put <.."<:ho den<.;ity, given by the number of echoes per second, generated by a unit
impulse at the mput. is much lower than that observed in :l real room, thus causing a ''fluttering" of the
composite sound. ft has been observed tha! approximately l 000 echoes per second are necessary to- create
a reverberation that sounds free nf flutter ISch62J. To develop u more realistic reverberation, a reverberator
with an allpas.s s!ructu.re, as indicJ.ted m Figux ll.L8(a). has been proposed 1Sch62j. 3 lis transfer function
--:-------·
~·1 he ~tru<.·:u:-.;~ shew" here are lite ~anomc ~mglc mulnplier reahlalmn <Jf J tirst-OlXk:r all pass tran\ferfuru:tioo f!l.1il74aj_ See also
Sccti~Hl tU:> l.
782 Chapter i 1: Applications of Digitaf Signal Processing

1
II
''
II
'I II

,,_
• I 5.:
'l''"'"'bc-J !=.""'"''
(b) (c)

Fi~w 1 L !7: HR !dter gcneratin~ u1 Infinite number of t..'Chve~- j a) The f1her struoure. (b) 1mpuise re~pom.e with
u = {)8 fu£ R = 4. ~nd (.::)magnitude resp.m~c with U' =OJ~ for R = 7.

L-----.•i.±i8+o..------+ A"I
(a)

(bJ
lii~:ure 11.18: Allp<l\.'> re"<c~benltm. (<~.:Rio<.:;; d:agnun :cp:·c,entC<Uon. and fhJ impuhc rc~pon,:e with a= 0.8 fox
R '--- 4.

is given by
a --I- _-N
H(z) = 1 +a: R'
let: < J. (I L50)

ln lhc steady :-.tate, the spectral hahmce of the :-.ounJ signal remains unchanged due In the unity magnitude
n:sy.."Jn~;:-vf the a!JjX>SS rcverbemior
TI..: !fR comb filler of Figure J !.17{a) and :he allpass revcrheratorofFigure II. J X{ a} are basic reverhcr-
ator unit:> that are suitahly int~rconne:cted to develop a natuml sounding reverbera£ion. Figure J 1 .19 shows
~)flt~ ;;uch intcrconne<:tion \.:Ompo~ed of a pa.rallel conncctim1 uf four HR echo generator~ in (_'ascade with
11.5. Musical Sound Processing 783

Figure 11.19· A pn>po;,cd natural ';Ouncimg reve:rbcmtor sc£eme.

two aUpa.ss reverberamrs [Sch62]. By choo~ing different val:Jes f& the delays in each seeiion (obtained by
adjusting R;} and the multiplier constants u:, it i:; ;rossihle to anive at a pleasant sounding revet"berarion,
duplicating that occurring m a specific dosed space, suet. a" a concert halL
An inte-re:;ting modification of the basic UR comh filter of Figure 1 Ll7ia} is obtaine--d b-y replacing
the multiplier a ~ith a lowpas.;;. FIR or ITR filter G{::), as indicated in Figure 11.20(a). lt has a tnmsfe:
function given by
(i 1.51)

obtained by replacing a in Eq. ( 11.49) with G(z). This structure has been referred to as the teeth filter
anj has been introduced to provide a natural tonal character to the artificial reverberation generated by it
I E:~84J. This rype of reverberator should be carefulJy desigr::ed to avoid the stability problem. To provide
a reverberati-on with a higher echo density, the teeth filter has been used as a basic unit in a more complex
structure such as that indicated in Figure ! 1 .JS(b).
Addirional details concerning these and other such composite reverberator structures can be found in
[So.:-h62!, [Moo79j.
T:'lere arc a number of special s.ound effects that are ofteR used in the mix-down process. One such
effect is calledfianging_ Originally. it was created by feeding the same musical piece to lwo tape recorders
and then combining lheiT delayed outputs while varying rhc difference Ll.r between their delay times. One
way of varying t>..; is to slow dov.-n one of the tape recorders by placing the operator's thumb on the
flange of the feed reel, which led to the name flanging 1Ear86]. "lbe FJR comb filter of Figure [ l.l5{a}
can be modified to create the flanging effect. In this case, the unit generating the delay of R samples, or
equivalently, a delay of RT ~ond">, where Tis the sampling period. is made a time-varying delay fJ(n),
a.s indicated in Figure 11.21 _ The cnrresponding input-outpL.t relation is then given by

YlnJ = x[n 1+ axfn - pl(n)]. (l L52)

Periodically varying the delay fj{n) between 0 and R with a low frequency w., such as
R
~(n) =2 (I - cos(w-,,n)) (11.53}
784 Chapter 11; Applications of Digital Slgnal Processing

xlnl

(a)

_>:f,"lj +
I
I
·a,"(
"' {l}

'
+ + _y(nJ
(b)
Figure 11.20: (<~ i Lo.,..pass reverberato:r. and (b) a multi tap reverberator structure.

x[n] -.,--.. --~r y[n]


"-.-,

Figure 11.21: Generation of 11 flanging effect.

generates a Hanging effect on the sound. 11 ;;hou1d be noted that, as the value of fi(n) at an instant n
in general has a noninteger value, in an actual implementation, the output sample value y[n] should be
computed using rome type ofinterpolati£•n method such as thaf outlined in Section 10.5.
The chorus effect is achieved when several musicians are playing rhe saree musical piece at the same
time but with small change.<> in the amplitudes and small timing differences between their sounds. Such
an effect -can also be created synthetically by a choru.~ generator from the music of a single musician.
A simple modification of the digitaJ filter of Figure 1 1.21 leads to a structure thar can be employed to
simulate this sound effecL For example, the structure of Figure 11.22 can effectively create a chorus of
four mu;.;ician!' fwm the music of a single musician. To ach1eve this effect, the delays ih (n) are randomly
vruied with very slow variations.
The phusing effect is. produced by processing the signal through a narrowband notch filter with variable
notch characteristics and adding a scaled portion of the notd, filter output to the original signat as indicated
in Figure ll.23 {Orf96]. The phase of the signal at the notch fHtcr output can ckamaticaHy alter the phase
of the combined s.\gnat, particularly around the notch frequency when it is varied slowly_ The tunable notch
filter can be implemented using the technique described in Section 6.7.2. The notch filter in Figure 1 L23
caL be replaced with a cascade of tunable notch filters to provide an effed similar to flanging. However,
in tlanging the swept notch frequencies are aiways equally spaced, whereas in phasing the locations of !.he
notch frequencies and their corresponding 3-dB bandwidths are varied independently.

11.5.2 Frequency-Domain Operations


The frequency responses of individuaUy recorded instrument::; or musical sounds of performers are fre-
quently modified by the sound engineer during the mix-down process. These effects are achieved by
1 LE. Musical Sound Processing 785

+ +
1
II, !Jl (l'i ~
a,
v
+ + y[n]
1
II, "''"I "-a~
v
+

1--'''"1
. f'-,{13

Figure 11.22: Generation of a chorus effe<.:t.

<[n]
-;;:;::::;::1::::::::::;;;--~~;-. y{ n j
Notch filter wit
vanahle notch
encv

Figure 11.23: Generation of the phasing effect.

p,:tSsing the original signals through an equalizer, briefly reviewed in Section 1.4.1. The purpose of tl1t'
e<JUUJizer is to provide "presenre'' by peaking the midfrequency components in the range of 1.5 to 3kHz,
and to modify the bass-treble relodionships by pro·dding "boost" or '"cut" to components outside this range.
1t is usually formed from a cascade of first-order and second-order filters with adjul>table frequency re-
spor.res. The simple digital filters described in Section 4.5.2 can be employed for implementing these
i"i.mctions. We review these filters here to pomt out their specific properties that are suitable tor musical
snuf'd processing. In addition, we describe some new structures wilh more flexible frequency responses.

First-Order FilteJ"s and Shelving Filters


The transfer functions of a fir&t-order lowpass filter and a highpass. filter with tunable cutoff frequencie~
are gi-.,;en by F..qs. (4.109) and (4.112), respectively. and repeated below for convenienc-e:

1-a 1+ z- 1
Hu·(Z) = ~- · 1' { 11.54n)
2 l- a:
1- .. -I
(ll.54b)
1 az 1 ·
T:1e 3-tiB cut-off frequency We for both ttansfer functions i:- related to the constant a through

_, ( 2a ) {11.55)
We =cos i +a:! .
786 Chapter 1 t: Applications of Digital Signal Processing

r -------<!:<:)--•
:}
Highpass
output
'

Lowpass
output

1
2
+

<;!+}-~yin]

K
1

Figure 11.25· Low-frequency shelving filter.

The two- transfer functions of Eqs. ( l1.54a} and ( l1.54b) can be alternatively expressed as
1
Hu(z) =l {I- A,(z)l. (ll.56a)

l
HHp{Z) = 2 {1 + AJ(Z)}, ([ J.56b)

wh:o:re A 1(z) i!; a first-order aHpass transfer function given by

(1 !.57)

A composite realization of the above two transfer functions is sketched in Figure 11.24, where the first-order
allpass transfer function A 1 (z) can be realized using any one of the single multiplier structures of Figure
6.23. Note that in this stnK:ture the 3-dB cutoff frequency w, afboth filters is independently controlled by
the multrplier constant a of the allpass section.
Combming the outputs of the structure of Figure 11.24 as indicated in Figure 11.25, we arrive at a
low-frequency shelving filter that is characterized by a transfer function G L(Z) given by
K I
G !(Z) =
2 [l - AJ(Z)J + 2 (I + A.1 (z)J. 0 1.58)

where K is a positive constant fReg87bl Figures 11.26 and II .27 show the gain responses cf G 1(z)
obtained by varying the multiplier consrants K and a. Note that the parameter K controls the amount of
boo 'it or cut at low frequencies, while the parameter a controls the boost or cut bandwidth.
A high-frequency she[ving filter is obtained by c-ombining the outputs of the structure of Figure 11.24
a~ indicated in Figure 11.28. which is characterized by a tramfer function

(1 L59)
11.5. Musical Sound Processing 787

··~

'

-10 K,.jj25

figure 11.26: Gain resp:;nses of the tow-frequency shelving filter of Figure ll.25 for various values of the parameter
K with a =0,9.

K,.15

(a) (b)
F'tgUre 11.27: Gain responses of the low-frequency shelving filter of Figure 11.25 for various values of the parameter
a:;wi"!h two fixed values of the p:u-ameter K.

x[n] y{nl

Figure Il.28: High-frequency shelving filter.

Figures 11.29 and 11.30 show th.e gain responses of !be above transfer function obtained by varying the
multiplier constants K and a. Note that, as in the case of the low-frequency shelving filter, here the
parameter K controls the amount of boost or cut at high frequencies, w-hile the parameter a controls the
OOosl or cut bandwidth.
788 Chapter 11: Applications of Digital Signal Processing

Figun ll.29: Gain re:o;ponses of the high- frequency shelving filter of Figure ~ 1,28. for various values of the parameter
K with u = 0,9.

K~04

"
•~ -~ ' __-·_,,
-....0.5
I -J
-W

(a) (b)

Figure 11.30: GaiJJ responses of the high-frequency shelving filter of Figure I L28 for various valueli of the parameter
a with two fixed values of the parameter K.

Second-Order Filters and Equalizers


Thl~ transfer functions of a second-order bandpass and a bandstop filter with tunable cutoff frequencies and
3-rlB bandwidths are given by Eqs. {4.113) and (4.118), respectively, and are repeated here for convenience:

1- a I - z- 2
HB?{Z} = ~,~.
"" l - ,.,.
"(l 4-.az:
)
.az l ...... 2. (1!.61Ja)

l+a 1-2/3::.-l+z-2
Hns(z) = ~2~. 1 - ,8(1 + a)z 1 + az 2"
(ll.60h)

The center frequency Wa uf the bandpass fitter and the notch frequeru::y w 0 of the bandstop- filter are related
to the constant {J through
1
Wo = cos- {.8), (1!.61)
while the 3-dB bandwi.dth Bw of both transfer functions is related to the constant a through

(I L62)
11.5. Musical Sound Process:ng 789

ln{'"ut
1
' ~{::;-}
r ''
'
Ba.'"ldstop
output

t'>'..::._l
' Bandpru;s
+ output
(a)

-~
Input +

(b)
Figure 11.31: A pa:rame:rically tunahle secor:d-onier bo:m.dpas.\/hand:-;tnp fi:ter: (a) overall strudure, and (b) allpass
;ection.

A composite realization of both filters is as indicated in Figure 11.3: (a) ba<;ed on a sum of allpas~ decom-
positions of Eqs. (l L60a) and tll.60b):
. 1
Hsp(ZJ =2 [1- A2(;:)], (ll.63a)
j
Hes{?) = 2 (1 + A2(z.}]. (1 L63b)

where A2(z) is a sec011d-order allpass tnmsfer funetion givn: by


a-{30 +alz:- 1 +z- 1
A2(z) = 1 - ,6(1 +
.
Cl:")Z 1 -;- uz "
L
( 11.64)

and~~ realized using the cascaded lattice sLructure of Figure i L31 (b) for independ.em tuning of the center
(notch) frequency Wo and the 3-dB bandwidth R.,,.
As in the fin;t-ordcr case, a weighted combination of the outputs of the structure of Figure 11.3 l(a)
results in a senmd-order equalizer (Figure 11.32) with a t!'.c.n!<ter function G2(::) given by

(1 J .65)

where K i:-; a positive oomtant iReg87bj. It follows from the ahovc d.iscu~sion that the peak or the dip of
the magnitude response occurs iit the frequency u.>,.,. whlch is controlled independently by the parameter /3
according to Eq. {ll.61 ). and the 3-dB bandwidth Bw of !he magnitude response is determined solely b)'
the par&metcru as indicated by Eq. ( 11.62). Moreover, the height of the peak or the dip of the magnitude
re~.ponse is given by K = G2(e;<.;,,). Figure.\ 11.33 to 11.35 show the gain responses: of G2(z) obtained
by varying the parameters K. u, .and ,8.

Higher~Order Equalizers
A graphic equaliz_e.r with tunable gain respon;,e can be built usmg a cascade of first-order .and second-order
equalizers with extemal control cf the maximum gain values of each sectlOfl in the cascade. Figure tl36(a)
790 Chapter 11 : Applications of Oigltal S1gnal Pfocessing

xlnJ-4-4 ~(;:)

Figure 11.32: A parametrica;!y tunable :..ccond-urder equah.Ler.

[•.2

Figure 11.33: Gain response., of the second-order equalizer of Figure 11.~2 for VariilUS vah:.es of the parameter K
with u = O.S a11d fi = 0.4.

Figure 11.34: Gain respon:-;e~ uf the second-order equ<t!izer of Figure 11.32 for v;orious values of the pllrameter fJ
with a = 0.8 and K = 3.5.

shows the block diagram of a i":a:-;cade of one itrst-onier and three second-order equahzers. with nominal
frequency response parameters as indicated. Figure 1 f .36.:b) llhows its gain response for some typical
values of the parameter K (maximum gain vaiues) o:f l:he individual section;::.

11.6 Digital FM Stereo Generation


Frequency-division mu]tiplexing of stereo used in FM (frequency modulation) radio for the transmission
of left anrl right channel audio signals has been described earlier in .Section L4.3. We now COllsider the
1J .6. Digital FM Stereo Generation 791

Figure 11..35: Gain response& ofthe ;;.econd-order equalizer of Figure ll.32for various values ofthe parame~:er a with
{) = 0.<!. and K = 3.5.

~
' s -oc & secon Second-order
" ~0
"" d '
H'
02> Q.4n 1
= 0 2n: rn, ill 0
I ru, tum
~
L _

ll)c
lupur- f--. B,.o:::0.2;t f--. 8...,=0.21!: B,._-"'0.211:
' K=U K- L2 K = 0.95 ' . K-U

<a!

/'
-----
_/ \
='
" \\
~- \
' '-------"~
'' 0211 04-.: 0.61< {)..lh;-
"""'"'"hud fre<(Oeoq
(b)

Figure 11.36: (a) Block diagram of a typical graphic equalizer, and (b) its gain re&ponse for the section parameter
ViJ.lues shown.

digital stereo generati-on for FM transmission. A simplified block diagram representation of a digital
FM stereo generat<»' is shown in Figure 11.37. As indicated here. the analog outputs of the left and
right microphones, .'iL(t) and SR{t), are first converted into digital signals, SL[n] and SR[n), by means
of ir.dividual A/D converters. In practice, high-frequency components of the modulating signa.i have
much smaller amplitudes, while low-frequency components have much larger amplitudes. However,
smaller amptitude components produce a correspondingly smaller frequency deviation Consequently, the
resulting FM signal does not fully utilize the bandwidth allotted fm its transmission, lowering the signal-
to-noise ratio considerably at the high-frequency end. The output SNR at the PM" receiver is increased by
ennphasizing the higher frequencies of s L{n J and s R [n] by means of digital p-r~mphasis filten:., aE indicated
in Figure ) 1.37,
The sum of the pre-emphasized left input discrete-time signal x L [n] and the preemphaslzed right input
discrete-time signal x R[n i is tr.tnsm.iued in its baseband form for monophonic reception. 1be difference
signal X[.[nl- xe[n] is transmitted by DSB-SC {double-sideband, suppressed. carrier) modu1ation4 using a
38-kHz subcarrier. The transmitted multiplexed signal y[ n] indudes the sum signal, the DSB-SC modulated
' 1See Sect10n 1.2.4.
792 Chanter 11: Applications of Digi1al Signal Processing

y[nl

Pr~mpt'.asis
f.l!Cr
'•',,.---C+l£H±L------(+tJ'J'-1
r r- '---'
r
Oscillator
<.:us(w,-n;

Fignre l J.37; Wo<.:k diagram repre.Y:;nt:;tior. of the FM stereo transmiUer.

Spedrum ol
.rL[n] +:tN nj Spe~-1:mm of
Pilot tone DSB ou:;:a1t
l
l

f
0

Figure J 1.38: Spectrum ot a ;.;ump..'JoSite ~tereo di~crete-time broaffi:l'tst signal.

difference signal, and a 19-kHz pilot carrier:

YlnJ = (;;L[n! + XRin]) + (xiJn]- XR[!1 ) COS(2w,_-n) +r ::os(wcn), (11.66)

w~ert w, =- 2Jr F.-/ TT ~the nom1alizcd mgular frequency of the. pilot carrier J-.~- = 19kHz and Fr is the
sampling frequency in Hz, which i;, typically 32 kHz. The ! 9-kHz pilot carrier is included to provide a
reference i(rr coherent demodula!icn at the stereo receiver. Figure 11.38 shows the power spectrum of a
typical composite ·~aseband signal v[n]_ The composite sign;;l y[nj frequency-modulates the main carrier
to generate the transmitted signal_ The value of the gain com.tant r for the pilot signal is chosen s.uch that
the pilot is allotted about I 0 percent of the peak frequency deviation. The originaJ signal-power distribution
is restored at the receiver output through adeemphasis network.
In the analog FM stereo generation. a first-order analog prcemphas1s network with a transfer function

Ga(s)= s+Ql (II.67a)


J+£21 +S12
is~:n19loyed :Pan65J_ Typ-ically. Qi « 0::;. m which -case. the asymptotic magnituC.e response of the
preemphasis network is of the form shown in Fjgure 11.39. In the frequency range of interest the transfer
fuoction of the analog preemphmis network can be approximated as

(11.67b-)

wh,~re I j rl1 is typically 75 m;,ec Lquation (! 1.67a) can be rewriuen as


s-t-KJ..
(J 1.68)
.~ + ), '
11.6. Digital FM Stereo Generation 793

6-UB/QClave

Figure 11.39: Asymptotic :-n<~gni!u~e response of the preemphasis. netwurK.

where K is the de gain

K (I J.69)

and
II 1.70J
Now, Ga(s} can be decomposed as. [RegS7b]
l K
Ha(s) = 2 [1 + Aa{s)] + ""'·j [1- Aa(<~)J, (I 1.71)

where A .. (s) is a stable analog allpass transfer function:


s - ).
A,(.s) = - - . (I 1.72)
. s + ),
Tile tran.-.,.f<:r function >Jf the corresponding digital preempha<;is network is obtained by applying the
bilinear tnmsforrmrtinn of Eq. (7.20), repeated below for convenience:

,~:'_(1-z-')
- l, (11.73)
T ,1-!-;: .1

where T = 1/ FT. Applying the al:hlve transformation to Eq. ( t 1.71 ), we arrive at


l K
GV.i=z[l +AJ(<::)J+ 11 -A;(:;:)]. (l_L74)
2
where A1. (z} is a stable firM-order digital allpass transfer function
a - --1
41 ( .. \ - <.. (1 1.75}
. .~, - 1 Uc... l '
with the parameter a given by
(ZjT)- J.
{lL76)
(2/T) +A
The c.\pression for G(;:) given by Eq. (1 L74) is precisely the same as that of the first-order equalizer
di~cu.;sed i.n the previous section at:d given in Eq. (11.58) with a realization of G(z) as indicated in Figure
11 25. Now. from Eq. (11. 73), the pole of the analog tram.fer function at s = A is mapped onto the
cwrespondillg digital angular fiequency w, acc.'Jrding to
2
A. = T tan(m, /2). ( 11.77';

The 19-kHz tone signal in Figure 11.37 can be generated using a look-up table method, u~ing the
tri;~ooomctric function approximation method of Section K8.1, or using !he sine generator described in
Section 6.11 .
794 Chapter 11 · Applications of Digital S1gnal Processing


----------~~----------ffi
0 •
(a)
------~--~~--~~----ffi
0 ;rJ2
(b}
;r

Figure UAO· (a) Frequency respo:~se of :he discrete-lime filter generating an amdytic signal, .awl (b) haif-b.:md
h"YWpaso; fiher.

11.7 Discrete-Time Analytic Signal Generation


As discussed in Section I _2.3, an analytic continuous-time sigoat bas a zero-valued spectrum for all negative
f:-equencies. Suc-h a signal finds applications in single-sideband analog communication systems and analog
frequency-division multiplex systems. A discrete--time signal with a similar property finds applications in
digital communication systems and is the subject of this section. We Hlustrate here the generation of :m
analytic signal y(n] from a discrete-time real si.gnat x{n 1 and describe some of its applications.
:"low, the Fourier tr.ansform X (ej"') of a rea! signa! x[nj. if it exists, is nonzero for both po.<;ltive and
negative frequencies. On the other hand, a signal y[n] with a single-sided spectrum Y(el"') that is zero for
negative frequencies must be a complex signaL Consider 6e complex analytic signal

y[nl = x(n] + jLfnJ, 0 1.78)

where x[n] .and i[n] are real. Jts DTFT Y(e 1"') is given by

Y(el"") = X{e-'"") + ji(ej"'}, (1 L 79)

wtlere X{ej"') is the DTFT of X{nJ. Now, x[n] and i[nJ being real, their corresponding DTFTs are
conjugate .symmetric, i.e., X(ei"-') = X"(e-i""} and k(ei'c} = i*{e- J<»). Hence, from Eq. (l 1.79) we
obtain

X(eiw) = 1[Y(e 1"") + Y"(e-.i'")J, {ll.SOa)

jf:(ef&) = ~ [nef"')- Y"'(e-Jw)]. 0 L80b)

Since by assumption, Y (efw) = 0 for -n ~ w < 0, we obtain from Eq. ( ll.BOa)

Ol.8D

Thus, the analytic signal y[nJ can be generated by passing x[n 1 through a linea: discrete-time system with
a freql.lency response HleJ'") given by

H( J"') = J2, O:::;::u)<Tf,


e · 10 (11.82)
-J< :.5 w < 0,

as indicated in Figure 11.40{a).


11.7. Discrete-Time Analytic Signal Generation 795

4n]
:-li!bert
Transf(,nner

Figure 11.41: Gcn::ratior: of an ar.alytic signal u;;mg a Hilbert !Iansfunnei.

11.7. ·t The Discrete-Time Hilbert Transfonner


We now relate the imaginary pan iln] ofrhe analytic sigm:l y[ni to its real part x[nj. From Eq. (J L80b),

XuJ'") = ~j [ r kJ''') - Y"'(e- 1 """)] . (l 1.83)

ForO.:::; bl < ;r, Y(e-Jw) = 0, and for -rr::::: t.:> < 0, Y(ei"') = 0. Using thi~ property and Eq. (11.81) irt
Eq. ttl.83). it can be easily shown that

X(ei"') = l ~ j X c_el"-')' 0 'SO w < :rr' (11.84)


- lJX{c'ili'}, -rr ~w<O.
Equauon ( 11.84) that the imaginary part i[n] of the anai}1l.C signal y;_nj can be genemted by passing its
real part xlnJ through a line<ir discrete-time system with a f;equency response HHT(ef"') given by

HHr(ej"') =I~ j. 0 ::: &J < n· (11.85)


}, -:r s OJ< 0.
The Jine<tr system defined by Eq. (11.85) is. usually referred to as the ideal Hilben transformer. Its. output
i(.nll'> called the Flilberl tmnsform of its input x[nj. The basic scheme for the generation of an analytic
signal yLnl = y...,[n] + }Yin[n] from a real si:gnal x[nJ i~ thus as indicated in Figure 11.41. Observe
I
that !HHT (ej"') = I for all frequencies and has a - 90-degree phase-shift fur 0 :::: w < 1r and a + 90-
degree phase-shift for -IT :S w < 0. As a result, an ideal Hilbert transformer is also called a. 90-degn:c
phase-shifter.
The impulse response hHrfnl of the ideal Hilbert transformer is obtained by taking the inverse DTFT
of H HT(eJw) and can be shown to be (Problem 7 .35)
for n even.
(11.86}
forn odd,
Siltce the ideal Hilbert transformer has a two-sided infinite-length impulse response defined for -rr < n <
;r, 1t!San unrealizable system. Moreover. its transfer function HHT(ZJ exists only on the unit circle. We
describe later two approaches for develop-ing a realizable. appro.'limarion.

11.7.2 Relation with Half-Band Filters


Consider the filter with a trequen<--y response G(_eiw) nbta-ined by shifting the frequency response H(ejw)
of Eq. ( l 1 .82) by,;- !2 radtans and !>caling by a factor ~ (see Figure II AO):

}'"-" _ lH( ,JU»-<-Jr/2))


G( e•-,t;

_
- '"
\!.
i
I, 0 < !wJ -< ~·
'2"< ]W]' < 1f. (11.87)

Fmm our discussion io Section 10.7.2, we observe that G(ej'"") is a half-band lowpas.s filter. Because of
the rdati.on between H(e 1"') of Eq. (I 1.82) and the real coefficient half-band lowpass filter G(el"'} of
Eq. {_ 11.87), the filter H(ej"') has been referred to as a complex half-band filter [Reg93-!.
796 Chaoter 11: Applications of Digital Signal Processing

xln J

Figure 11.42: HR n:alizatnm nf a co::-1plex half-band filler_

11.7 .3 Design of the Hilbert Transformer


It also follows from the above relation that a complex h:~lf-band filter can be designed sir.1ply by shifting the
frequency resp.._>nseofahalf-hand lowpass filter by 7T j2radians and then scaling by afacmr 2. Equivalently,
the relation between the transfer functions of a G.>mplex. half-band filler H (z) and a real half-band lowpass
filter G{z_) lS given by
H\::.) = j2G(- j:). 0 }.88)
De.sigo of the real half-band filter wa<; briefly toud:ed upon in Section 10.7.3. We outline below two other
novel approacheo..

FIR Complex Haft-Band Filter


The real half-band filter de:>ign problem can be transformcJ into the design of a single pa'isband FIR filter
with no stopband, which can be easily de;;;igned u~ing_ the popular Parks-McCle-llan a1gorithm 5 {Vai87bj.
Ar inverse transfOrmation on the realization of this v.i.deband filter t'len y1elds the implementation of the
de·>ircd real haif-band filter.
Let the specifications of the real half-band lowpa.'is filter G (z) of length N be as follows: passband edge
at <Vp. 'itophand edge .at<»~, pa;;;.;;bar.d ripple of 3,, and slop band ripple of 0,. As discussed in Sec! ion 10.7 .2,
the passband and stop hand ripple> of a linear-phase lowpass half-band FIR transfer function G (z) are equaL
i.e. Dp = ,), = S, and the length N is odd. Moreover, the passband and stopband edge frequencies are
related rhrough Wp + ~ = ::rr, It has also been shO\\'ll that (IV - 1)/2 mus1 be odd.
Now, consider a wideband linear-phas<! filter F(z) of degree (_N- I )/2 wjL'I a passband fromO to 'hop.
a transition band from 2wp ton. and a passband ripple of 2.5. Siru.,e (N- 1 )/2 is odd, F(:) has a zero at
::: === - I. The wideb.and filt-er F(:) can be designed using the Parks-McClellan method. Define

(l1.89)

It lOllows from Eq. (1L89) that G(z) is indeed the de~ired half-band lowpass filter and has an impulse
:-cspunc.e
Jr[>J. n even,
J?[nJ ={ i· {l L90)

wh~re ftn 1 i::; !he impulse response of F(z)_
Snbstiruting Eq. (1[,89) in Eq. (t 1.88;, we obtuin

(ll,9lj
11.7. Discrete-Time Analytic Signal Generation 797

- -------~--- -------"
\
\
'\
\

C'-21< OA>< a""' (U3<


N..--.;1z<.O f"'''""'9
(a) (b)
Figure 1IA3: Magnitude responses of (a) the widebarid FIR filter F(z) and (b) the approxinrnte Hilbert transformer
F( -z2).

Figure 11.44: The magnltm1e response of the Hilbert transfonner designed directly using MATLAB.

An FIR implementation of the complex half-band filter based on the above decomposition is indicated in
Figure 11.42. 1he linear-phase FIR filter F( -z.2 } is thu~ an approximation to a Hilbert transformer.
We iJlustrate the above approach by means of an example.

The FIR Hilbert transformer can be designed directly using MATLAB. To this end, the version of the
functionremezemp1oyediseitherb""' re~ez(N, f, m,'Hilbert') orb"' remez(N, f,
m, wt:, 'H::.lbert'). The following example illustrates this approach.
798 Chapter 11: Applications of Dlgttal Signal Processing

Figure 11.45: IJR realization of a complex ha1f-band filter.

,,,____ ~

-LJC I
·-ID
g_, \I I11

~'Li--~----~r~~-~~~,;~('-~L1~'\
0 (l51: ,. l5u ln:
~ ftt.quency

Figure 11.46: Gain responses of the cornplell. half-band filter (normalized to 0 dB maximum gain).

DR Complex Half-Band Flits<


We indicated in Section 10.8.6 that a large class of stable IIR real coefficient half-band filters of odd order
can be expressed as [Vai87t]
(l L92)
where .Ao{z) and At (z) are stable allpass transfer functions. Substituting Eq. (11.92) in Eq. {l 1.88), we
therefore arrive at

(1!.93)
A realization of the complex half-band filter based on the above decmnposition is thus as shown in Figure
11.45.
We :illustrate the above approach to Hilbert tr.msformer design in the fullowing example.
i 1 .7. Discrete-Time Analytic Signal Generation 799

i
.. J '
i
I
~-- __ j

~-~1
Figure 1L47: Pha;;e difference be\ ween lbe two allpas;; sections of the o;;omplex half-b;;nd filter.

_-n_,_________-m
~c_-4k-t:----------,;- "'
M o··- "- __Jm M n
{a}

V(ejw)

/'
-(OOc+!llM}
.......,__-+1
--Jt·---+-1:::-\c..-··
m,. !'
--~\W,--WMj
~+v~;:\c-+..-~·
Q /
'-'\--WM
WL> ~'- l f
-~OO,.+OJM
w

(b)

Figur~ 11-48: Spectra of a reai ;;ignal and ib modulated ver;;inn. {Solid lines rep~~ent the real parts wfule dashed
line;. rcpre~nt !he imagn•.ary parts.)

11.7 A Single-Sideband Modulation


Fur efficient tr.msmission over long distances. a real low-frequency bandlimited signal xfnl, such as
speech {_"\f mu;;ic. is modulated by a very high frequency sinusoidal carrier signal cos wen, with the carrier
frequency w, being less than half of the sampling frequency. The spectrum V (ej""') of the resulting signa!
v[n I = x [n 1cosw,-n is gi.ven by

0 l.95}

As indicated in Figure i 1.48. if X(ei"') is handl:mited to WM, the spectrum V(ej"') of the modulated
<;ignt~l ! 1 [nl has a bandwidth of 2wM centered at ±w~·- By choosing widely separated carrier frequen-
cies, one can modulate a number of low-frequency signab to high-frequency signals, combine them by
frt:quency-divis10n multiplexing, and tran~mit over a common channel. The carrier frequencies are cho-sen
appropriately to ensure that there is no overlap in the spectra of the r.mdulate-d signals when combined by
frequency-div:sion multiplexing. At the receiving end, each of the modulated signals is then separated by
a bank of handpass filters of .center frequeucies corresponding to the different carrier frequencies.
8CO Chapter 11: Applications of Digital Signal Processing

It is evident from Figure 11.48 that. for a real low-frequency signal x[n], the spectrum of its modulated
ve!-sion v[nj is symmetric with respect to the -earner frequency We. Thus, the portion of the spec-trum in
tht; frequency range from 0-',· to (w,. + w_:w ). called the upptr >idehand, has the same irrformation content
as the portion in the frequency range fwm (w'-' - WM) tu (!),.,called the luwer sideband. Hence. for a
more cffit:icnt utilization of the ch;mncl bandwidth, it is sufficient to transmit either the upper or the lower
siCeb:md signal. A conceptu?.lly s:imple way of eliminating one of the sidebands is to pass the modulated
~ignal v[n llhrough a sideband filter whose passband covers the frequency range of one of the sidebands.
An alternative. often preferred. approach for single-sideband signal generation i~ by modulating the
an.±lytic signal whose real and imaginary part:. are. respecti-.ely, the real signal and its Hilbert lransform.
To illlls1rate this approach, let y;n; = xtn] + ji(nl where iln~ is the Hilbert transform of x[nj. Consider

s(n] = y[n]e1 "'·" = (Yrelnl + jy;,fnlJ (CiliWcn +} sinwcu)


= (xlrdcosw,-n- i[n]s.lnw,n)
~ j{xfn]s-int<.>,-n+.\[njcosu.•,n). (11.96!

Fr>)m Eq. (11.96), the real and imaginary parts of s[n J arc !has given by

sTein l = x[n]c~ Wen - ..ilnj si.n w,n. (11.97aJ


S101 [n] = x[nJsinwcn + i[n]cosw..r<- (l : .97h)

Figure I L49 shows the spectra nf-' [nJ, X!nj, y[n], s[nl, s,dnJ, and .'itmfnl. It therefore f<..)llows from these
plots that a single-sideband signa! can be generated using either one of the modulation schemes described
by Eqs. (l L97a) and (I J .97b}, respectively. A block diagram representation of the scheme ofEq. { JL97aJ
is sketched in F1gure I L50.

1·1.8 Subband Coding of Speech and Audio Signals


In many applications, the digital s.ignal obtamed by the analog-to-digila; conversion of a_'l analog signal lm:-.
to :x transmitted over a channel with a limited bandwidth or stored for future me in a storage medium with
limited capaci!y. Signal compreS-!.ion methods are being increasingly employed to increase the efficienq-
of transmission and/or storage. For example, Lhe speech signal in a relephone communicution system ha:-
a bandwidth of about 3.4 kHz. For digitaltr.Jnsmission, it i:_..; sampled at a rate of 8 kHz and then mnverted
by mean~ of an .S-bit AID converter, resulting in a 64,000 bits/sec or, equivalently, 64 kbit5/sec digital
signal. Up to 24 such 64 kbitsisec digital signals can be transmitted over the T -1 carrier chanael in the
United States by time-division multiplexing. By c,:,mpress.lng the speech .signal to a mte of 32 kb[tsfsec,
the number oflelephone signals sharing the channel-can be doubled to a tota1 of 48. Likewise. the storage
capacity of a compact disc can he increased by compressing ~he digilized music.
One of the most popular schemes for signal oompre!';sion is the subband coding that employs the
quadrnture-m)rror filter bank discussed in Sections 10.9 to 10.12, and efficiently encodes the signal by
explo;ting the nonuniform distribution of signal ene-rgy in the frequency band. In this section, we review
the principles of s-ubband coding of speech and audio signals and describe one specific coding system.
A simplified representation of the subb.and coding scheme is shown in Figure 11.51. As indicated here,
in this scheme. the input signal x(nj at the lransmitt:.ng e-nd is decomposed into a set of narrowband slgnals
occupying contiguous freguency bands by means of an analysis filter bank. These narrowband signals
are then down-sampled, producing the s.ubband signals. Each subband signal is nexl compressed by an
encoder, and all compre-;sed subband signals are m'Jltiplexed and sent over the channel to the receiver, At
the receiving ee>d, the composite- signal is. first demultiplexed into a set of .subband signals. Each subband
sigrtal is then decompressed by a decoder, up-sampled, and passed rhrough a synthesis filter bank. The
11.8. Subband Coding of Speech and Audio Signals 801

(a)
n 'X "'

(b)
-n
.·· .. "'
.,:roM
X

Y(elm;

(c) .
\
n ·. n "'
"'"
S(ejC<)

(d) ~~------------------L---------~00~-'---"--t-"'
-1t (' ·-.. 1t
·.. :ro +ro
·.: - M'

(e)

(f)

-1t /i 0
-{W., + WM}"

Figure 11.49: Hlustration of the generation oi single--sideband signals via the Hilbert tr.msfonn. (Solid lines represent
th<! real parts while druJhed lines represent the imaginary parts.)
802 Chapter 11: Applications of Digital Signal Processing

;;[n l ------+
Hilbert
transformer

Figure 11.50, Schematic ofihe single-sideband generation scheme ofEq. (I Ul7a).

xfnj q[n]
To channel

{a)

(b)
Figure 11.51; The bask: subband coding and dt:t::ixling scheme.

output-s of the synthesis filter bank are finally combined to produce a signal yrnl that ili an ac-ceptabk
replica of the original input ~ignal xfnJ. The subband coding offers a better compression ratio than the
direct compression of the originat s1gnal x[n] since it allocate~ a different number of bits to each subband
signal by taking advantage of it-<>. spectral characteristics resulting in a lower average bit rate per sample.
Subhand coding of 7-kHz wideband audio at 56 kbitslsec baset.l on a five-band QMF bank has been
investigated by Richardson and Jayant !Ric-861. The five bands occupy the frequency ranges 0--875 Hz,
875-1750 Hz, 1750-3500 Hz, 3500-5250 Hz, and 5250--7000 Hz. The gain responses of the five filters
used are shown in Figure 11.52 <~.long with the sum of the squares of their magnitude responses.
The five-band partition has been obtained using a three-stage cascade of two-channel QMF banks
as indicated in Figure l \53, which also shows the sampling rate of tbe input and the subband signals
along with the order of tlte filter~ used. The encoding delay ::au..-;ed by the subband filtering operations is
around 10 msec The :mbband si.gnals have been encoded u.->mg an adaptive differential PCM (ADPCM)
scht~me 1Jay741. The bit allocations m;ed are 5 blh/sample for each of the two '!ow-frequency channels,
J bhs/s.ample fDr the highest-frequency channel, and 4 bi!slsample for the two intermediate-frequency
channels. The lotal bit rate of the subband ADPCM coder is !hus 56 kbits!sec, which i;; equivalent to an
average bit rate of 4 bitsJsample of the input signal at the 14-kHz sampling rate.
11.9. Transmultiplexers 803

••r--------------------------,

(b)

Figur-e 11.52: {a) Gain resp.:msesofthe analy~1.'i filter.;; and(~) sum ofl!lagnitude-squa:red responses in dB. (Repwducni
with permission from IR:c86) ©l9S6 [EEE)

Further details of the above subbarui coder along with a comparison of lts performance with !hat of
two other coding schemes can he found m [Ric86].

11.9 Transmultiplexers
In the United States and most other countries, the telephone service employs two types of multiplexing
schemes to transmit multiple lnw-frequcncy voice signals over a wideband channeL In the frequency-
division mul!iplex (FDM) telephone- system, multiple .analog voice signals are fin:t modulated hy single-
sideband {SSB} modulators onto several subcarriers.. combined, and transmitted simultaneously over a
common wideband channel. To avoid cross-talk, the subu.rriers are chosen to ensure that the spectra of
the modulated s.ign.ab do not overlap. At the receiving end, lhe modulated subcarrier signals are separated
by analog bandpass filters, and demodulated to reconstruct the individual Yoi;;:e signals. On the other
hand, in the time-division multiplex (TDM) telephone system, the voice signals are first converted int-o
digital signals by sampllng and AtD conversion. The samples of the digital signals are time-inlerleaved
by a digital multiplexer, and the combined signal is tran:-miued. At the receiving end, the digital voice
si§,.rn.als. are separated by a digital Jcmuhiplexer and then pas-;ed through aD/A converter and an analog
reconst.nJL:tion filter to recover tlie origmal analog voice signals.
The TDM system is usually employed for short-haul communication, while the FDM scheme is pre-
ferred fo.r long-haul transmission, Until the telephone service becomes all digital, it 'is necessary to translate
signals. between the two formats. This is achieved by the tn;ns.multiplexer system discussed next.
The transmultiplexer is a multi-input, multi-output. multi rate structure. as shown in Figure 11.54. It is
exactly the opposire to that of the QMF bank of Figure 10.63 and consists of an L-channel synthesis fiTter
804 Chapter 11 : Applications of Digital Signal Processing

x[n}
3.5 kHz

3.5 kHz

3.5 kHz

-"{n]

14 klli

(b)

Figure lJ.53: (a) Three~ stage realization of the five-band analysis bank ~ (b) its eq\li"<llent representatioo.

bank at the input end followed by an L-cllannel analysis filter bank at the output end. To determine the
input-omputrelation of the transmultiplexec, consider one typical path from the kth input to the lth output as
indicated in Figure 1 I .55{a) [Vat93l A polyphase r,epre~entahon of the structure of Figure 11.54 is :.hown
in Figure l L55(a). Invoking the identity of Section 10.4.4. we note that the structure of Figure t 1.55(a}
is equivalent to that shown in Figure ll.55(b), consisting of an LTI branch with a transfer function Ftt(Z)
that is the zeroth polyphase component of Hit:(Z)G,(z). The input-output relation of the transmultiplexer
is therefore given by
L-'
Y,~:(z) = L Fu(z}Xt{z), 0 S k .:S: L - L (11.98)
f=C

DeiJ:oting

Y(z) ~ [Yo'.z) YJ(Z) YL-I(z)j', (1L99a)


X(z) = IXo(z) X t (z) XL-J(z)jT, (f 1.99b)

we can rewrite Eq. (! 1.9S) as


Y(z) = F(z)X(z). {1 1.100)
11.9_ Transmultiplexers 805

11[!1 1

f'DM

Figure I LS4: The basic L-cbannel transmultiplexer structure.

_1-k[n] ;;; x,[r.] ~ Yt[nl

(b)
Figure 11.55: The- k, £-path of tlJe L-channel transmu[tiplexer structure.

where Ftz) is ao L x L matrix whose (k, t:)th element is given by F~e(z). The obje-etive of the transmulti-
plexer design is to ensure thal Ykfnj is a reasonable replica of xk[n]. If y~;[n] contains contributions from
x,lnl with r -=F- n. then there is cross-talk between these tv.·o channels_ It follows &om Eq. (11.100) that
cnt,;s-blk is totall)' absent if F(z) is a diagonai matrix, in which ca&e Eq. (U.IOO) reduces to
(! !.10!)

A;;. in the case of the QMF bank. we can define three types of transmultiplexer systems. it is a phase-
pt"escrving system. if Fa(::) is a linear-phase transfer function for all values of k. Likewise. it is a
magnitude-preserving sysrem, if Fkk(Z) is an allpass ftmction. Finally, for a perfect reconstruction trans-
multiplexer,
(l 1.102)
where nl< is an integer and r:x;;; is a nonLem constant. For a perfecl reconstruction system, Yk fn] =
a.~;x,,Jn- n.,j.
The perfect rcnmstruction condition can also be derived in te-rms of the polyphase components of the
synthc-~is and analysis filter banks of the cransmultiplexer of Figure 11.54, as shown in Figure 11.56(a}
[Kol91j. Using tbe cascade equivalences of Figw-e 10.14, we arrive at the equivalent representation
lndicati!d in F1gure l1.56(b). Note that the structure in the center part of this figure is a special case of the
system of Figure i l.54, where Gt(z) = c\L-J-i) and H;J::.) = z-k, with C,k = 0, I, .. _ ,L- 1. Here
the zeroth polypha'>e component of H£+ 1(z)Gt(zj is z- 1 fort= 0, I, ... , L - 2. the zeroth polyphase
component of Ho(z)GL-1 (z) is J, and the zeroth polyphase component of Hi,{z)Gi(z) is 0 for all other
ca~es. As a result, a simplified equivalent representation of Figure 1 1.56(b) is as shown in Figure 11.57.
Tbe transfer matrix characterizing the transmultiplexer ~s thus given by

M(z) = E(z) [ z-l:L-l !J R(z), (11.103}

where IL-l if. an (L- 1) x (L- l) identity matrix. Now, for a perfect recoru;truction system it is sufficient
to en>.urc that
(11.104)
806 Chapter 11; Applications of Digital Signal Processing

xoln]

Yc__1 [nl
XL-I[nJ

(•l

Xo-fn] y 0 [nJ

x1 [n] y 1 !n]

~ YL-l [n]
c___,
(b)

Figure 11.56: (a) Polyphase representalion of the L-channel transmulliplex:er, and (b) its computationally efficienl
realization.

y 0 1_n]

yl[n]

)'2 fn]

XL-2[nJ YL-2[nJ

xl.-l[n] J'L-l[n]

Figure 11.57: Simplified equivalent circuit of Figure l LS!i

where n 0 is a positive integer. From Eqs. (11.103) and (11 104) \Ve arrive at the condition for perfect
reconstruction in lenns o-f the polyphase comp<:~nents as

R(z)E(z) = d::.-m" [ z~ 1 \)
1
J, (1 L105)

where mu is a suitable j)OSitive integer.


It is possible ro develop a perfect reconstruction transrnultiplexer from a perfect reconstruction QMF
bar,k with analysis fitters Ht(Z) and synthesis filters Gt(z). with a distortion transfer function given by
T{;:) = dz-K. whece dis a nonzero constant and K is a positive integer. It can be shown that a perfect
rec•)nstruction transmultipiexer can then be designed using the analysis filters Ht(z) .and synthe~is filters
::-RGc(:::), where R is a positive integer less than L such that R + K is a multiple of L {Kof9l]. We
illustrate this approach in the following example.
11.10. Discrete Multitone Transmission of Digital Data 807

lUI

=X..>r>'+::;: l1t1t1= i ·~: .; "'r•+ ..


**l·+·:c:~ ·+7~: 1't;{$f"" { ~.1: ·' •""': .. ;;:

---tr!¥
¥=41110<1$:

"iit'% - !itDXitttl(i -

In a typical TDM-to-FDM format translation. 12 digitized speech signals are .interpolated by a factorQf
12, modulated by single-sideband modulation, digitally summed, and then converted into an FDM analog
signal by D/A conversion. At the receiving end. the analog signal is converted into a digital signal by
AID conversion and passed through a bank of 12 single-'-1deband demodulators whose outputs are then
decimated, resulting in the low-frequency speech signals. The speech signals have a bandwidth of 4kHz
and are sampled at an 8-kHz rnte. The FDM analog signal OCC'Upies the band 60 kHz to 108 kHz. as
iUustrated in Figure 11.58. The interpolation and the single-sideband modulation can be perfonned by
up-sampling .and appropriate filtering_ l.ikewise, the single-sideband demodulation and the decimati.o!l
can be implemented by appropriate filtering and down-sampling.

11.10 Discrete Multitone Transmission of Digital Data


Binary data are normally transmitted serially as a pulse train. as indicated i.n Figure l1.59{a). However,
in order to faithfuUy extract the information transmitted. the receiver requires complex equalization pro-
cedures to compensate for channel imperfection and to make tun use of the channel bandwidth. For
example, the pulse train of Figure ll.59(a) arriving at the receiver may appear as indicated in Figure
11.59(b). To alleviate the problems encountered with the transmission of data as a pulse train, frequency-
division multiplexing with overlapping subchannels has been proJXlsed. In such a system, each binary
digit ar. r = 0, J, 2, ... , N- 1, modulates a subcarrler sinusoidal signal cos(21l"rtfT), as indicated in
Figure ll.59(c) for the transmission of the data of Figure ll.59{a), and then the modulated subcarriers
are sunnned and transmitted as one composite analog signal. At the receiver. the analog signal is pas-sed
thruugh a bank of coherent demodulators whose outputs are tested to determine tbe digits transmitted. This
is the basic idea behind the multicarrier modulation/demodulation scheme foe digital data transmission.
A widely used form of the multicarriermodulation is the discrete multi tone transmission (DMT) scheme
in which the modulation and demodulation processes. are implemented via the discrete Fourier transfonn
(DFI) efficiently realized using fast Fourier transform (PFT) methods. This approach leads to an all-digital
system. ellminating the arrays of sinusoidal ge-nerators and the coherent demodulators lCio91}, (Pel&OJ.
808 Chapter 11- Applications of Dfgital Signal Processing

"
~ ~~
0 4kHz
N

• Lf\
<
0

u" I ~
~

0 4kHz

• " IH4 kHz IG8 kHz

u

N
FDM Signal
jj
~
~
0 4kH<
" IDM Signals

Figure 11.58: Spectrums of TDM signals and the FDM sig!llll.

We outline here the basic idea behind the DMT scheme. Let ~ak~nJ} and {bk[nl), () -s k ~ 1'1-f ~ L be
two M- 1 real_-va.lued data sequences operating at a sampling rate of Fr that sre to be transmitted. Defint:
a new set of complex sequences {a;.:[n]} oflength N = 2M according to
D, k=O,
a.t[n] + jbk[n]. l<k<~-L
- - 2
adnl = O, k = T,
N ll L 106)
{
aN-k[n]- jbN-k[n], '
-+l<k<N-1.
2 - -
w(~ apply an inverse DFT, and the above set of N sequence" is transformed into another new set of N
signa~s {uefn]} given by
l .'V-l
u1 rnJ=- "oi:fn}W,Va:,
,vL-
(lLHi7)
k=CI

where W N = e- j 2 " 1N.Note thar the method of generation of the complex sequence set {akfn }} ensures
that its IDFT {u tfn]) will be a real sequence. Each of these N signals ls then up-sample-d by a factor of N
and time-interleaved. generating a composite signal {xi_n ]} operating ar a rate of N Fr that is assumed to
be·~ual to 2Fc. The ~"'mposite signal is converted 'Jlto an analog signal x.,.{t) by pas"$ing it through a DIA
converter foilowed by an analog reconstruction filter. The analog signal xu(t) is then transmitted over the
channel.
At the recei-ver, the received anal.og signal y,.(r) is passed through an analog anti-aliasing filter and
then convened into a digital signal fy[n]l by an S/H circuit followed by an AID converter operating at a
rak- of N Fr = 2Fc. The received digital £tgnal is then deinterleaved b-y a delay chain containing N - I
unit delays whose outputs are nexr down-'!ampled by a faclor of N. generating the set of signals 1u.e[n 11-
APPlying the DFf to these N signa1s. we finally arrive at N ~ignals (th[n]}
N-l
.Pkln] = L w[n]W,if, O:sk:::N-L (l 1.108)
'~
11.10. Discrete Multitone Transmission of Digital Data 809

"o al ~ a, a, "s
+I "
I
Tl
I

(a}

(b)

2.----------------,
~ 1.5 j!
3
~ '
H- ---------------------4'
e :
-< 0.5
0
L - ________________]
0 0.5 0 0.5
Trme Tbne

'[\ f \ \-- 1 ~ o::\ . (\ {\


t,OV v'
] o.si

-lL_______________~
0
1

0.5
- / ,

~o.s
Q. Q

0
\

_,L_______
V
I
I ' ) \

~------~
0.5
,II
v
Time lime

] 0 :1[ {\1 /\ {\ /\ .
"'o,
lo.s 1 _, 1
\1v f\j'\'\
'I
'v v
0 OS
Time
{c)

Fig~lln' 11.59: (a) Serial bmruy data stream, (b) baseband serially transmitted signal at the receiver, and (c} signals
gen1~ate<l by modulating a set of subcarr:iers by the digits of the pulse train in (a}.
810 Chapter 11: Applications o1 Digital Sfgnal Processing

- rn
01J[n]

a1 I !;:
e
• -~
• ~

UN_Iln]
"'
(al

Y.,(t) Lowpa.ss "O[n]


f>o[n]
From channel 1 filter
v 1[n] !;:
Q Ji1!nj

·~ •
• ::: •
VN-lfnJ
!3/\'-J ( II ]
{b}

Figure 11.60: The DMT scheme: ia) trr.n;;mitter.and (b)recei¥er.

Figure 11.60 shows schematically the overall DMT scheme. If we assume the frequency respon._~ of
the channel to have a flat passband. and assume the analog reconstruction and anti-aliasing tilters to be
ideatlowpas.s filter.~;, tben neglechngthe nonideal effects of the Dl A and theAJD conVerters. we can assume
y[n] = x{n j. Hence, the interleaving circuit of the DMT structure at the transmitting end -connected [o the
delnterleaving circuit at the receiving end is identical to the same circuit in the transmulriplexer structure
of Figure 11.56(b) (with L = ,\'). From the equivalent representation given in Figure 11.57, it follows
then 1hat

v.::lfd = Uk-l [n- 1). O_::k :S N-2,


V(j{.'lj = u,v_![n!, (1 L 109)

or in other words,

,BJ:!ni = ll'k-J[n- i). 0 _:::: k .:S N- 2,


,6o[nJ = a,v_!(n}. (\LIIO)

Transmission channels. in general, have a bandpass frequency response Hch (f) with a magnitude
response dropping to z:ero at some frequency F~·· In some cases, in the passband of the channel, the
magnitude response, instead of being fiat. drops very rapidly outside its passband, as indicated in Figure
11.61. For reliable digital data transmission over such a channel and its recovery at the receiving end. the
channel's frequency response needs to be compensate-d by essentially a hig:hpass equalizer at the receiver.
However, such an equalization also amplifies high-frequency noise that is invariably added to the data
s.ignal as it passes through the channel.
1·1. 11. Digital Audio Sampling Rate Conversion 811

; -.-

:--. --.

:--- ---,

Figure 11.61: Frequency rcspiJnsc of a typical bandlimited channel.

For a large value of the DfT length N. the channel can be assumed to be composed of a series of
Cont:.guous narrow-bandwidth bandpass subchannels. If the bandwidth is reasonably narrow, the corre-
sponding bandpa5s subcbannel can be consl(iered to have an approximately flat magnitude response, as
indicated by the dotted lines in Figure 11.61, and the channel can be appi"oxi.mately characterized by a
single complex number gi,•en by the ..-alue of its frequency response at w = 2nkjN. The values can
bt: determined by first transmitting a known training !>lgnal of unmodulated carriers and generating the
respective channel frequency response samples. The real data samples are then divide.d by these complex
numbers at the receiver to compensate for channel distortion.
Funher details on the performance of the above DMT scheme under nonideal conditions can be found
in {B:in90]. [Cio91J.lShe95J.

11.11 Digital Audio Sampling Rate Conversion


A;; indicated In Chapter J 0, fractional sampling rate conversion of digital signals is often required in
pmfessional audio and video applications. A classical approach to such conversion is fo transfonn the
input digital signal into an analog signal by means of aD/A converter followed by an analog 1owpass
filter, resample the analog signal at the desired sampling rate, and convert the resampled signal into a
digital form by means of an AlD converter. To reduce the effect of aliasing to a minimum, the analog
reconstruction filter must have a very sharp cutoff with sufficiently iarge stopband attenuation and V"ety
small passband ripple, in addition to exhibiting nearly linear phase response in the passband. It is difficult
t(1 build an economical analog filler with such stringent requirements. Moreover. to achieve the desired

ru::curacy in the output digital signal, hoth the D/A converter and the A/D converter must have large enough
resolutions. An alternative approach is to use an all-digital conversion technique. the basics of which have
bt:en discussed in Section 10.2.2.
The complexity of the design of the fractional sampling rate converter depends on the ratio of the
sJJ.mpling rate~ between the input and the output digital signals. For example, in digital audio .applications,
the three different sampling frequencies employed are 44.1 kHz, 32 kHz. and 48 kHz. As :a consequence,
there are three different values for the sampling rate conversion factor- 2:3. 147:160, and 320:441.
Likewise, in digital video applications, the sampling rates of composite video signals are 14.318!818 MHz-
ar:d ! 7. 734475 MHz, whereas the sampling rates of the digital component video signa! are 13.5 MHz and
6.75 MHz for the luminance and the color-difference s.i:gnah, respectively, for the NTSC and PAL systems.
H ~e. lhe sampling rates for the component and the NTSC' composite video s.ignals are related by a ratio
35:33, whereas the sampling rates for the component and the PAL composite video signals are related by
812 Chapter 11 : Applications of Digital Signal Processing

--£~
(a)

F"tgare J 1.62: Various steps in the development of the 48-32-kHz sampling rate ~XJnverter.

a ratio 709,379:540,000. There are applications, such as ir: the pitch control of audio signals, where the
ratio is irrational. in whtch case there is no periodic relation between the sampling instants of the input anC
the output digital signals.
T.>re desigr, of a fractional sampling rate converter for conversion between t\vo digital signals related
by a :;atio of two low-valued integers can be carried out using the method discussed in Section 10.2.2
However, for a sampling rate conversion factor that is a ratio of two very large integers or an irrational
number, the design i:s somewhat more "Complex.
In this section we discuss the design of fractional sampling rate converters for both of the above two
cru~es as encountered in audio applications {Cuc91}, [Lag81J. IRam84]. A similar procedure is followed
in -.,ideo applications [Lut91].

11.11.1 Conversion between 32-kHz and 48-kHz Sampling Rates


The conversion of a digital audio signal of 48-kHz sampling rare to one of 32-kHz sampling rate requires
the design of a fracticmal sampling rate decimator with a decimation factor of 213. Its basic form is as
indicated in Figure 11.62(a). where the l<M·pas.~ filter H(z) has a stopband edge at :rr/3. Note that in this
structure, the filter H(z) operates at the 96-kHz rate. We now outlinethedeveJopmentof a computationally
efficient realization in which all filters operate at the- 16-kHz rate {Hsi87}, [Vai90]. Replacing the Jowpass
filter with its 1YPe II polyphase realization and then making use of the cascade equivalence of Figure
1O.I4(b), we arrive at the equivalent reaHzation indicated in Figure I L62(b) in which now the filters Ro(z)
and R1 (;:}operate at the input sampling rate of 48kHz. Simp~ block diagram manipulations and use of the
cas~e equivalences of Figure 10.13 lead finally to an equivalent realization depicted in Figure 11.62{e).
The computational efficiency of the sampling rate converter can be improved further by realizing the
filk:rs Ro(z) and Rt (z) in Type I polyphase forms and tben applying the cascade equivalence of Figure
10.14(a). The final realization is as shown in Figure 11.63, in which aU filters now operate at the 16-kHz
rate.
11.12. Oversampling NO Converter B13

~ '

l<igure 11.63: Computatmnal!y efficient reaiization of lhe 48-32-k.Hz sampling rate converter.

The transpose of the strucrure of Figure 11.63 yields the realization of a fractional rate interpolator with
an interpolation factor of 3!2 and can be used as the 32-48-kH;o: sampling rate converter (Problem 1!.23 ).

i 1.12 Oversamp!ing AID Converter


For t:::'lc digital processing of an ana!,lg continuous-time signal, the signal is first pa"sed through a sample-
and-hold cin:cit whose output is then converted into a digital fonn by means of an AID converte-r. However.
according to t!le sampling theorem, discussed in Section 52.1, a bandlimiled continuous-time signal with
a !owpass spectrum can be fully recovered from its uniformly o.arnpled version if it is sampled at a sampling
frequency thal i;; at least twice the highest freque:1cy contained in the analog signaL If this condition is
nm satistied, the original continuous-rime signal cannot be recovered from its sampled version because of
a! !.asing. To prevent aliasing, the analog signal is thus passed through an analog anti-aliasing lowpass filter
prior to sampling. which enforces the condition of the sampling theorem. 1be passband cutoff frequency
ot the lowpass filler is chosen equal to the frequency of the highest signal frequency component that needs
to be preserved at the output. The anti-aliasing filter also cuts off all out-of-band signal components and
;u~y h.igh-frequency o01re that may be present in the original analog signal, which otherwise would alias
into the base:band after sampling. The fil:ered signal is rhen sampled at a rate thar is a~ least twice that of
th<;o. c-,Itoff frequency.
Let the- signal band of interest be the frequency range 0 _:::: f :S: Fm. Then, the Nyquist ra1e is given
hy F,..,- = 2F.,. Now, if [he sampling rate Fr is the same as the Nyquist rate, we need to design an
anti-ali:.sing lowpass filter with a very sharp cutoff in its frequency response, satisfying the requirements
ns giYen b; Eq. (5.66). 6 This fC'Juires the design of a very high-order anti-aliasing filter structure built with
high-precision analog components. and it is usually d:ifficult to implement such a filter in \-''LSI technology.
~1Dreov ...-r. such a filter a1su introduces undesirable pha~ distortion in its output. An alternative approach
m.mtioned in Section 5.8.5 is to sample the analog signal at a rate moch higher than the Nyquist rate, use
:.t fast low-resolution AID converter, and then decimate the digital output of the converter to rhe Nyquist
nue. TI1is approach relaxes the ~harp cutoff requirements of the analog anti-aliasing fil-ter. resulting in a
simpler filter structure that can he buiJt using low-precision anaJog components while requiring fast, more
complex digital signal processing hardware at later stages. The overall structure :is not only amenable to
814 Chapter 11. Applications of Digital Signal Processing

VLSI bl:>ricat.on bul abc can be deo.igned w prov1de linear-pha:o.e re:>pon-.c in the- i.igfl<ll hand of :rteresL
The ovcrszmpling approach i:, an elegant application flf multin:r:e digital ~igoai proce._•,sing and i-..
incrcu~ingly bcir:g emplu:.~3. in 1h.; 'kc.;gn or high-r.c~olulioc1 AJD :xmverlers for many practical systcmc.
lCun'-J2J, ~hc94J. II: !hi;; '>Cction. we ana!y;-c the quantizatio•1 noi.;,e performance of rhc conventmna! AID
c-onvert<.:r <~nd sho"" analytical!: how the o\·crsamptine:: approach deneascs !he yu<mtl,:at:on no'se power
in the signDI bund of i~;tac-.1 11-'tc:t;I:J.l We :hen show that ftFlhcr impwvcmenl in the noise performance of
ar. overo.ampling ;VD convo:rlcr c:::m he oht:.;.i:wd ~y empln)- ·ng :. -;igm.a-detta (I: & ) quan:inti!OT'. scherm::.~
For scnplici:y, Wl' rc>.trict our di:<cassiun w the ca~e of a ha:-.ic: flf:"t-orckr sigma-Jelta quantizer
T;> dlustm~e the ahnvt: properly nmsiJer :t h-biL-\/J.) Cl)i1' cn:er operating at Fr- HL Now. for a full-scale
pe~lk-to-JY-ak input D.!lalu~ volta.gr:. of Hro~- t:"le sm<dk:-;t volugc '>tep 1·epre'>cnted by h bits is

.
i\ l; -·Np;
= 1/o ···--
·- I
nL.IIO

Fn:m1 Lq. (9.6-9), the rm-.qu.:lnti:;.ut;vn nni:;c power a} o.f the error voltage. a~~uming a uoifonn distribution
ilf the crn1r herween - Ll \/" /2 and .. ~ \//2. J<,; given by

i..:'l. nc c 1.112)
The rms noise voltage, given by rr,., 1hcrdore has a fl.nt spectmm m rhc frequency range from{) lu Fr /2.
The noJse power per unit bwdv,idth, (\tiled the r.oi.H' dnnity. is 1he:1 given b)

(11.113_1

A plo1 of the noise den~ities for two different ~amphng: rok~ is shov.n :in hgux l !.64. where the shaded
portion md:ica1cs :he signal banJ of intere>t. As can be seen from this !lgure, the total a!1.\0unt of noise i~
the sigr.al band of intec.<;t tO; ~be hi!;h sampling rate ,_·asc i;, s.maller than that for the low sampiing rate
c.:~se. The fffial noi;.e in the signal band of interest. ("ailed th~ in-hand noise power. is given by

(ILI14)

It io. lnterc!.<ing to compute the needed wodleng>h p of the A/D convt-TI:er opera1ing at the Nyquist mte
in order tha! it.-; total noise in the ;.!tpal band of intereo.;t be t:qual to that of a b-bit AID converter operating
at ah;gher rate. Suhstitutmg FT = 2F, m::d replacing h with/fin Eq. (I L 1 !4), we arrive at

(RFs/2''if 2 (Rrs!2t>) 2 Fm
12 - . h·/2· (lLJI5)
12

w!tich lcad:o to the desired relation


(ILl 16)

',>.·here M = Fr /2Fm denotes the ovenampfing ratio (OSR). Thus, f3 - b denotes rhc im:rea;;e in the
resolution uf a b-bit com·erter whose overc.ampled output i;. filtered by an ideal brick-wall lowpass tilter.
A plot of the irtcrca.-;c in :e::;oluli,,n as a functton of the ~wersampling ratio is shown in Figure 11.65. For
~xamp!e, for an OSR of M = 1(}(.0, a.n K-bit ovcr,;ampling A:D converter has an effeL-rive resolution equal
to that of a 13-bit AID convt..'Itcr u;>erating at the Nyquist rJte. Note that Eq. (11.116) implies that the
increare in the res.olution is l/2-hit per do;.;bling ofthc OSR.
11. i 2. Oversamphng A.iD Corwer!er 815

/ "
/"
"/
"/ H1gh sarnJ!iJl~ rak
/
·./ /
;;;
;c;
0 f,, Fj'. F7 Frequcm:y
2
Fi~rc 11.64: ,v~ ..::o..wcrter no1se density.

•>1-

- .....
- .. ··

w'

Figure I 1.65· Ex~·es~ rcsdulmn as a fuun:on {Jf the m•er:sampling ntio !of.

2MFm 2F
:!.:\.11 2MF,
~ ' "c- - - - , r--cc:---, l :-"':;:;::::::;-c 1 l
•. An::oloj': 1-hit , r~~·· Mt~band ~· lM; Dign"l
AiD conve:-1~ 1 • (l!gtw.l 'h · ;---+ DIJ!P"
mtegrator : ·to"' pass fil!~r: ~

~~ "
L_-----;! c~~~ertet rf-----'~ ;
D: A
1

'~---- Sigma-deha qomntiz<:r - - - - • ' - - - Deci'Dator........_;

Figure 11.66: Over;;a!T.pling sigm.a-delc{L A/D converter ~trucwre.

\'•.ie now i!Ju\trate the impm\-ement in the noise pert'ormance obtained by employing a sigma-delta
n::t-.) quant"m:ttion scheme. The -.igma..de.ltaA/0 converter was briefly introduced in Section 5.8.5 and
i<; shown in hlod, Jiagram J;nm in Figure 11.66 f•.Jr convenience. This figure also indicates the sampling
r.ues at variotki '>tagcs cf the structure. It stwuld be noted here Ihat the !-bit output samples ufthe quantizer
after dccim<!!ion hccomc h-hit samples. at the output of the sigma-delta ND ronvcrter due to the filtering
operation~ in\olving b-bit mul{iplier coefficients of the Mth -band digi!al J.owpa.'is filte:.
Since rh-.: oversarnphng ratio M is typically very farge in practice, the sigma-delta AID convener is
most useful in low-frequenq· application-; such a<> digital telephony, digital audio, and digital spe"-:trum
analyzer;._ For o;ample. hgur~ : 1 67 shews the block t.iiagram of a typical compac! disc encoding system
used to com·ert the input ~malog audio signa~ into a digital hit stream that is then applied to generate the
master (\isc tHcc82l Here the uvcrsampliog sigma-delta AID converter employed ha;;. a typical input
sampling rate of 7-175.2 kHt: and an output :.ampl.ing rate of 44,1 kHz [Kam86].
816 Chapter 11: Applications of Digital Signa! Processing

H
Digital
Analog Parity Multi- Multi- bit
au din ·~ AiD I-< Modulator
CCIJYCrter codmg piexer plexer stream
signal

ICu"'rol
clispJay
&I ) Sync
1 pattern
I codmg 1 ) generator

Figun 11.67: Compact di~c encoding system for one channel of a stereo audio.

f---.j~ AID ~f---r~ ;tn]


lntegrntor
!converter J
FT Cloc~
L-----1 D/A
converter
f - - - - -'
(a)
Accumulator Qu;mtizer

x[n] 11+_+ +~w[n] <f>------+]-r-+• yfn]


;:-1 ; --~~~--.J
L__-J '_, f - - - - - _ j
(b)

Flgure 11.68: Sigma-delta quamizarion scheme.

Tc understand the operation of the sigma-delta iVD converter of Figure 11.66, we need to study the
operation of if'.e sigma-delta quantiz:er shown in Figure 11.68(a). To this end it is convenient to use
the discrete-time equivalent -circuit of Figure l L6S(b), where the integrator has been replaced with an
accumulator.'& Here, the input x[nj is a discrete-time sequence of analog samples developlng an output
sequence of binary-valued samples y[n 1. From thi:; diagram, we observe that, a1: each discrete insrant of
time. the circuit forms the difference{~) between the input and the delayed output, which is. accumulated
by .a summer (I:_} whose output .i~ then quantized by a one-bet AID converter, i.e., a comparator.
Even though the input-output relation of the sigma-dcl!a quantizer i.<;. basically nonlinear. the low-
frequency content of the input Xc(l) can be recovered from thE'" output y[nl by passing it through a digital
lowpass filter. This property can be easily shown for a constant input analog signal X 6 {f) with a magnitude
le_-;;; than + l. In this case, the output win] of the accumulator i" a bounded sequence with sample values
equal to either - I or + l. This can happen only if rhe input to the >t<:cumuJator has an average value of
zem. Or in other words, the average value of w[nJ must be equal to the average value of the input x[nJ
[Sch9lj. The foUowing two examples illustrate the operation of a sigma-delta quantizer.

~~·----~

~In p.-litt..:f', the integrator i~ :mplemenled a~ a discrete· lime sw:itched-<::llf'!!Citor inlegralof.


11.12. Oversampling NO Converter 817

fnput analog ~goo] Oulput of sigma-delta modulatm


I 0 9 0 0
j 0.5 ~ 0. 5 r' ''
i '
''

I
''

~ 0 ~ 0 ''
...: -05 ...: -o.s I '
~i L__ _ _ _--cc--~-_j
w
~ ' ! l
0 10 15 5 w zo
j
Time
0
Time "
(a) (b)
F"JgUre 11.69: Input and output waveforms of the sigma-delta quantizeJ- of Figure 11.68(a) for a constant input .

•;;;
t :!IIi;
:u
~ AP,£1\14 {
i:/} ,hf+
"* ' 4
4

il 4

'* ~

.4
1BuP k "' Jt &t?,f,;:,',l;
rl: * wn:.k :r Jt 0 *·

-
Jf{ a TI\\'tjr:;{tt-\•f;:
+>N + ·.-,,z'

341, Tt "<JY \/·J --'"


,_: f{t\'i
818 Chapter 11: Applications of Digital Signal Processing

9 ' t ' ,;,t ,,, \~.,

+ ,, . 'JF~\ ;::,,;:., : ; " ,:, · ;;, ,; •+ ,, 1 : 1"'1 ;;""Vr:w t :uvn


"{
"~"
~" "'"' «
~~

"" ,,Le ~"

'" { J\>'

"'
'"
"'
A '" ">,,
&'
; y·,
,{, f ''
&

n p;
;, ; 'lf' "': '

«}Dlr 0 i ;'t&YP'/' i1'll";, ~1ft r 1;;


X'\ ( \ & r >pp', +{t'' }';:; T'!i,(}t,cy1 ' t ,
{U'f!!"/\(

t>:t; LL#" '

r't:r { ~
"' Zh \,
I;

'\":,
' '
\t< "
~" "
Y' * ~ "/'f: {
"'
" '/ t >. ": {" "
"'
~A

""
&l:S«'
y,J,j A l /
wtnd
<17 ,q:,

"""' H¥,]{,
"'"
/U0l1\ rj '?!' + 0
tlt:{ii\I'W)y, fY}ij
ilhitrii#rl ; ;"Ohtf'; ,
'L{ t I;( j ·;;>4\4):&1\ At'f
::r , r<>t ty:n J";
(hN/nni\

'\(f ' \; ,'?>1;


'::AiL +Xft: i\';0 ;
if,j J;,'Ji 1, 4 Li/j,;;
;:<}, t::, { S L'UL i f
;t,£ A ~;t;,u; t; 7<s'!F' } ,; v { q;,'\itnL +

lt follows fmm Fig. I l .6S(bJ that the output y{nj of the quantizer is gi"en by

y[nJ = u:[nj + efn]. (1Lll7)


where
w!nJ = xln]- J'[n- I]+ w[n- J]. (I Lll8)
11.12, Oversampling AID Converter 819

!npu! ~na1o;! ;igmd


-,-T-
:- ,j-
::II
:ill
Ill
"!!! II 1

r.,(j
I
lilO
Ti::tc
(b)

Figure 11.70: Tnpilt and otHpu! wave:-oml'> o_" the: '>igrra-Jdta qtJ<ml!zet of F:gurc I L68f a! with c. :.inc ....-ave •nput

., 4G no
. _ _ _j
l()(j

Figure 11.7 J: The Jowpass. filte:e-d w~ton of the waveform of f'1gure I J_7\}(1>J-

From the aho"'e equati-on'\, we obtain after some algebra

y[nj = x[n] + (e[nJ- e:n- 1]). (]1.1!9)

where the quantity inside the parenrhese<, represents the mli,~e due to ;.igma-Celta muduli.Jtion. The noise
transfer function is simply G(;:) = 0 -- ::-l ). The power ,<;pectral density ot the modula!iorr noise is
therefore given hy
Py(f) = ' " , I'
IG(eJ~Jl_,T) Pe(fl =4;;in- , (2rriT)
_------.t-_ Pr-(_/). 11 u2m

where we have assumed the power spectra! density P,,(u;) of the qu<.tnlization noise to be the one-sided
power spectral density defined for positive frequencies only. For a random signal input .~ [n J. P,.(_f) is
con,<,iant f-or all frequencies and i:<. given ty

11Ll2l)
Fr/2
Substituting the above in Eq_ ( 11.120), we arrive at the pov,.er spectral density of the output noi~, given
by
2 (Ll. Vt 2 _,
P--,-if) = ---~-- sin-<::r[T;. n 1.1221
3 1-r
Tbe noise-shaping provided by the sigma-delta qlKlnlizcr i' similar to ~hat encountereJ in ;h,; fir-.t-order
error-feedback structures of Section 9.10.1 and shown ln. Figc1re 9.42 For a ver:r large OSR, a:> i;. usually
8.20 Chapter 11: Applications of Digital s:grlal Procc~sing

tl~e ca~. the fn:.quencJ~S in :he ~1gnal band of inlcre~t arc Jiluch smaller than F1 . the '<ampling fre-quency.
Thus, we can approximate Py(f) of Eq. (11.122) as

P(f) :::oo~(llV)
2

,. _, (_:r1• 1 .!2=_;!:·
,rr J.(D.'ll2.T3J·'
'· , f <<Fr. (11.123)
-' FT -
From the above, the in-hand nolse power of the sigma-dclru A/0 convener is thus given by

2 "fl·V)2~J {F"' J'd-- ~


Ptutal.sct=
L F, p ,.( f ) dl'-
- - 31(,.:....>. 1 1- t
•o
<};r
l·6.V)2T3iF
( .
·'
'..,) .

It is instructive to compare the noise pert:.·nmance of the sigma--delta AJD converter with that -of a direct
(I 1.124)

nver;;ampling A/D converter operating a: a sampling rule of FT with .a signal ba:Jd of imere:<.;t from de to
F,,. From Eq. 1J I. 115), the in-band noise power of the latter is given by

{ll.l25)

The 'mprovet:ient in the noise performance is therefo:e gh:en by

(11.126)

wili::re we have used M = F1 /2Fm to denote the OSR. For example, for an OSR ~,f M J{X}(), the
impr;:wemcnt in t'le noise performance using the '>igma-delta modulation scheme i;; abnut 55 dB. In this
case. the increase in the resolution is about I .5 bits per doubling of the OSR.
The 1mproved noise peffonnance of the sigma-CeltaAID converterresufts from the shape of !Gk 1 2~" !T J j,
w:m.:h decreases the noise power spectral density in-band{(} :S f :S F,.,) while increasmg it oursidc the
signal band of interest (f > Fm'l· Since this type of converter also employs oversamphng. it requires a
le:;s !itringent :malog anti-aliasing filler.
The AID convert« of Figure 11.66 employs a single-loop feedback and is often referred to ao. a first-
uder sigmu-delta c-.mverter. Multi pic feedback ioop moduhuion :o:cheme;. have been advanced w reduce the
in--band noise further. However, the use of more than two feedback loops may result in unstable operation
of the system. and care must be taken in the design to ensure l.lable operation [Can92].
A<; indicated in Figure 11.60. the quantizer output is passe-d through an Mth-band lowpass digit<:! filter
whose output is then down-sampled by a fact& of,..., to reduce the sampling rate to t:.e de.~ired Nyquist
rate. The funclion of the digital lowpass filter is to eliminate the out-of-band quantization rwi~e <tnd the
out-of-band signals tha: would he aliased into the passband by the down-sampling operation. As a result,
tl1e filter must exhibit a very sharp cutoff frequency re:.pon'ie with a passband edge at f;.,_. This necessitares
tht: use of a very high order digital filter. In practice. it is preferable to use a filter with a tramfer function
ha·<ing simple mreger-valued coefficients to reduce the .cos.t of hardv.are implementation ant'! to permit all
multiplication operations to be carr;ed out al the down--;amplcd rate. ln addition, most appli-cations require
the use of linear-p:tmse digital filters, which can be ea~ily implemented using FIR filter&.
The simplest lowpa->s FIR filter is the moving-average fiiter of Eq. (2.56). repeated btlow for conve-
nience:"
H (:) = I + z . l + z-2 + . . + .:-U<-l!. { 11.127}
A more convenient form of the above tr.ansfer func:;on for realization purposes is gJv-en by

(ll.l28)

":-·or >lmp'!city. "'"' have igror"'d the ~calo: :·actor or i / N which ·~ lliTdetl ;o provid" a de gain Df G dB.
11.12. Oversampling ND Converter 821

Figure 11.72: A •ery simple factm-of-N decimator stmnure.

Figure 11.73: A two-stage CfC dcc1mmor \tructure.

Digital
Ylni1;;r+
+
----- +++ ----~+
_ Oli'P"'
__ , -N
'
K sections K sections

Figure 11.74: A CIC dedmalor ~tructure wi[h cascaded &ections.

a!<;O known as a recunive running-sum filter or a boxcar fi!rer. A realizatior. of a factor-of-N dedmator
b<:1sed on the decimation filter of Eq. (11.128) is sketched in Figure 11.72. 10
Since the dedmator based on a running-sum filter does not provide sufficient out-of-band attenuation.
often a multistage decimator formed by a cascade of the running-sum decimators. more commonly known
as aJScaded integrator comb (QC) filters. is used in practice [Hog81J. The structure of a two-stage
CJ"C decimator i<> shown in Figure ll.73. It can be easil)" shown that the structure is equivalent to a
fa·:tor-of-R decimator with a length-RN running sum decimation filter. Further flexibility ln the del'lign
of a CIC decimawr is obtained by iru::luding K feedback paths before and K feedforward paths after the
down-sampler, as indicated in Figure ll.74. The corresponding transfer function is then given by

(l U29l

Tf.e parameter<> N and K can be adjusted for a given down-sampling factor R to yield the desired out-of-
band attenuation.
In ~ome aPPlications, it may he preferable lo use a multistage decimation proces:. in which aH but
tht~ last stage employs the CIC decimatOT1i in various fofiiDl. followed by an FIR Iowpass filter providing
a much sharper cutoff before the final down-sampling. For example, in digital tel.ephone app\i.cations.
the folirm.•ing IIR transfer function provides a very good frequency response and can be used to design a
--·----
lf>rne nllegmlor ovu!oad C.!.~ by the mpul add;;;,. {IVerft.ow can be ea~iiy bandied with binary arithmcri.:
822 Chapter 11: Applications of Dig;tal Signal Processing

!.th h,Jfld MSB,!, Analog


h1wpa•s
fi lt"r
' ()>:J(f'\.<(

R-1

}'igun; 11.75: Hlod, thagram reprl:~ntation of :m u-.crwmp!ing ~igma-deita D/A Um>erte<.

factor-of-4 dcclm;,.tor [CanYZj:

H(:::}=\1 i ; 'I --I j ,;;__ -2)1·.1


+z -2-H-)<-.:-
")
··s-
2 --

7 --., i I +::: ')(1 +,: -~J(l -- .::- 5 )


~- :;;;:: ~_}. -----·- (! Lf 30)
(l :: 1 _)

Further details on first- and higher-order sigma-delta t'unverters can be found in Candy and Teme-s
iCm92j.

11<.13 Oversampling 0/A Converter


As indicated earlier in Section 5.1. t!te digital-to-analng conversion process consist~ of two step>.: the
converSion of input digital samples into;, staircase continuou~-timc wm·eform by means of a DJA convel't.:!r
with a?.ero-order hold at ih outptJI. followed by an a:mlog !owp:lss rcconstructi~.;n filter. If the sampling rate
FT of the input digital signal is the same as Jh<e: Nyqui:;r mle. !.he analog lowpass reconstruction fiher must
!la\e :1 very sharp cuwffin its frequency respon;;c, :wtlsfying the requirements of Eq. (5.75). As in t.he case
of the an:.i-alia•ing filter, this involves the des1gn of a very high order analog reconstrut:!:on filter requiring
high-precision analog circuit components. To get around the above problem. here also an oversampling
apf'ruach is often used, in which ca.<;e a wide transition band can be tolerated in the frequency response of
the reconstruction filter allowing its implementation usmg low-precislon analog circuit components while
requiring u more complex digital interpolation fi!ter at the front end.
Funher improvement in the pafurmance of an oversampling D/A convener i:. obtainct: by employing
a digital sigma-delta 1-bit quantizer at the output of the digH:Jl inttrpolmor, as indicated in Figure 5.45 and
rep.~ated in Figure 11.75 for convenience [Can86J, !Lar931. The quantizer extntcts the MSB from iih input
<1nd subtracts the remaining LSBs, the qua:tli.zation noise, from its input. The MSB output is then fed into
a r- bit D! A corcVcrter and passed through an a..'lalog lnwpa.<;s rcc<>n.struction filler to remove all frequency
components beyond the signal lx!nd of interest. Since the ~ij!nal band occupic~ a very srr:.ull portion of
the ba<;chand of the high-<;ample-rate signal, the reconstruction filter in this c:.t<;e can have a very wide
tmnsiton band. permitting its rc-ali.t:ation with a low-order !'!Iter that, for example, ca11 be implemented
u:>.if:g a Be"'-"d filter to provide atl appro-ximately linear phose in lhc ~ignal band. n
Thl; ~peclrum of the quantized l-bit output of the digital ~igma-deita quantizer is nt:arly the same as
thal of iu, input. .'v1oreov-er, it als<J ~hapes t~ qu:mt!zation nol;.,e spectrum by JT,nving <he noise power out
uf the ,jgnal hand of int:;;re~L To verify this re~ult analytkal!y. consider tbe sigma-delta quantizer shown
separately in Figure 11.76. It fo11ow;, from this figure !hat the :Enput-::mtput relation of the quantizer is
given by
11.13. Oversampling D/A Converte.r 823

x:nJ~y[nj

-e!n -~~~ -e(n]

Flglll."e {).76: The sigma-delta quantizer.

Digital inpul A11aiog ourprn: cfDAC

o~t
;r~·.•
·t
]

'a o:' '
Q

I 9
'
Q
',
'
•o:r==' I
~@' 0[
~
E
-05
''
~
'~'- -
"TTl
4
c'
<.o.sL
,I
0 w
L
~
~ w ., 100
Q 2
Sampk,ndeJI' ' JO
'Il~

(a)

i< -0.5

Sample irn:lex
(b)
Figure 11.77: Input and output signal~ of (a) lower-rateD/A converter and (b) oversampling DIA converter.

or equivalently, by
y[n] = x[n] +e[n]- e[rr- 1], (11.131)
where y[n) is the MSB of the nth sample ofthe adder output, ande[n] is the nth sample of the quantization
noise composed of aU bits except the MSB. From Eq. (11.131) it can be seen that the transfer function
of the quanrizer with no quantization noise is simply unity and the noise transfer function is given by
G(z} = 1 - C 1• which is the same as that foe 1he first-order sigma-delta modulator employed in tht::
overs.amplingAID convener discussed in the previous section.
The following example illustrates by computer simulation the operation of a sigma-delta D/A converter
for a discrete-time sinusoidal input sequence.
824 Chapter 11: Applicatfons of Digital Signal Processing

Fi::en-d ornp~.~r of con•·cntmm!l DfA converter Filtered u:l.lpu of cn:rsampling DJA <:ODH~rter
,_ ------.
o.sf'
-~
\ 0 '-~\ ] n;;oi:
~
l~

' ' "-, /


-·-~

/ / l
l ' -0.5 f
'
~,
\ <.o.s· ""'
~-~/
// J.
_, 0'----c~---c~---c~---co-
20 4D 60 110 HXI
Time T:me
(a) (b)

~ 11.78: Lowpas.'i filtered output signahiuf(a) conventional DIA converter, and (b)oversarnplmg D/A convener.

FilterW output o' conventional DJA c•x:vene~

~ osf ~ /
~~:1_, '~/ 1
020406080 100
Time

Figure U. 79: Filtered output signals of lhe conventional D/A converter employing a sharp ct:toff lowpass filter.
11.13. OversamptingD/AConverter 825

W: 0"0i1i4if:r An } LJL
II '.L"ttl Silt i>/A ;:::crttdi!r2LJE.±'

't<U;
* il~.Jl#:Bt
0f " P'>:;:wr t t '";f·:rr"V ,
.0 l 248>1¥1 J$4, '00'4NVi<J'tf;
TI £ t(V\1i :U{j;d Z illiJ;'\0jL}ttt}t%ill "'
\\ "' :; ' t 'r ~!'¥!'!' ''"''~* it{ 4ht+ itt)ft!'\ " 'X{¥
U'J & #"' "-\! •• [fj;;
Tf : 't+:

'} ltstJ <t£ h # 41Vfi <TILLS'"'


X:. r Ltt3 J U\} :r
:&' 'V:V:Xt;i11: JJ + f\>t j j
iB "' y;qw:n# d .ff., l
\ۥ 4' tit
tT;r il Ar ;.f<, t
i&{ v. · {'; DH
iT i11ii tG+ «"' tr
JiiiT*Gf "" A

17
+tFH'\
£'18 A jA
'\(12$.)10 14 -% 2 l ;;t j f
0.Xsr:mt¥t, 0; )~~'i<rt. +rna fl';w;f'*L~',•l!il:' ·1kwLt>kk!:
?L:S~ ·;: '· ' :ix:i:i> t ' r :,;:;,',!; ( '
't.::t.Y.::.%2 } t. {k (j

,. ¥it. t~$\H r
" H h 'tV"> 1LW¥\ht r:. ·+.~,/Yfz%
"' JU +!+:; & FH \Y>V% hr f±M. v';x;'V21'1 h
tl~H ' \ ft t:i¥¥7 t
:n.+nt i41P,<0<t:: L,.

One of the most common applications of the oversampling sigma-delta D/A converter is in the compact
disc (CD) player. Ftgure 11.8 I depicts the block diagram of the ba.'>ic components in the signal processing
part of a CD p:.ayer, where t:;-pically a factor-of-4 mersampling D/A converter is employed for each audio
825 Chapter 11 : Applications of Digital Signal Processing

i!lp'.J! Cigiro.l signal

'' 9 9 ;,,
,,,
'' """
• H.~ li ''' '''
~

'' "'
i
< ""
<( -0.5 ;
'

_j
"'
(a) (b)

(c)
Figure 11.&>: Input and output waveforms of the sigma-delta quantizer of Figure ll. n.

/Clock}
. ----------------~--- -..----------------------r-··--1
Digital E= Ett= !--< ~~
audio - Demodulatoi correctioli 1- concealment Filter '---+
signal ClfCUJt circuit I-< DIA
''
'
Buffer memory

Figure ] 1.81: Signal processing part of a CD player.

channel {Goe82J. Here, the 44.1-kHz input digital audio signal is interpolated first by a factor of 4 to the
176.4-kHz rate and then converted into an analog aadio signaL

11.14 Sparse Antenna Array Design


Li:near-phased antenna arrays are used in radar, sonar, ultrasound imaging, and seismic signal processing.
Sparse arrays with certain elements removed are eccnomicai and as a Jesuit are of practical interest. There
is a mathematica1 similarity between the far-field radiation pattern fur a linear antenna array of equally
spru;ed elements and the frequency response of an FIR filter. This similarity can be exploited to design
spru-se arrays with specific beam patterns. In this section. we point out this similarity and outline a few
slm~le designs of span;e arrays. We restrict our attention here on the design of sparse arrays for ultrasound
scanners.
1 -~ . 14. Sparse Antenna Array Design 827

II
Figure 11.82: Uniform linear antenna array.

Consider a linear array of rV + J isotropic. equispaced elements 'Wl.th interelement ~pacing d and located
at x,. = n . d for 0 :::; 11 ::; N as shown in Figure 11.82. The far-field radiation pattern at an angle fJ away
fmm the broadside {i.e.. the normal to the array), is given by

P\u) = L' tu[n]eii2.r.:-!••P·idln, (ll. i32)


J!={)

where w[nJ is the complex excitation or weight of the nth element, A is the wavelengt.'r:!, and

u.=sinfl.

The function P (u) thus can be considered as the discrete-time Fouriertransform of wln J with the frequency
variable given by 2n(u/A)d, The array element weighting a& a function ofrheelement position is called the
arerture function. For a uniformly excited army, w[nl =a constant, and the grating lobes in the radiation
puttan are avoided if d ~ 1j2. TypkaUy d = ).j2. in which case the range of u is between ~n- and n.
From Eq. (!1.132) it can be seen that the expression for P(u) i!'> identical to the frequency response of an
FIR filter of length N + I. Hence, the FIR filter design methods can be readily applied to design antenna
arra)S with specific radiation patlerns. An often used element weight is win I= I whore radiation pattern
is thi.'i same as the frequency response of a running-sum or boxcar FlR filter.
Sparse arrays with fewer elements are obtaineC by removing some of the elements which increases the
interelement spacing between some consecutive pairs. of elements to mOI'e than 1/2. This usually results
in an increase of sideJobe levels and can possihly cause the appearance of grating lobes in the radiation
pc;J:tern. However. these unwanted lobes ean be reduced significantly by selecting array element locations
appropriately. ln the case of ultrasound scanners, a two-way radiation pattem is generated by a transmit
array and a receive array. The design of such arrays is simplified by treating the problem as the design of
ar, "'effective aperture function" which is given by the com.-o1ution of the rransmit aperture function and
th.::- r«eive aperture function rLoc96J,
The efte<:tive apenure function of a single-element rransmit array and a 16-element nonsparse receive
array is also a 16-e\ement nonsparse array. This array system thus requires 17 elements. However, the
to::al number vf elemenls can he reduced by using either a sparse transmit array or a sparse receive array or
hoth. Consider for ex.ample a non.sparse transmit hl'ffiY and a sparse receive a'Tay with aperture functions
gh•en by lLoc96}

WR[nj = !1 0 I 0 I 0 1 0 1 0 I 0 I 0 l}, (ll.l33)


828 Chapter 11: Applfcations of Digital Signal Processing

--~------

..
: ')
- - -.

"
~ , ,•• (-,.a(I0\2:
5 'l<:>L'>o·
~ ,q.' '
': ',1

·· ~
I >'· {\/·:··J
tL::.v _y__ ~ 'i..__j,'_, __ j _ _ ___l_'i
'.\/\.-\:~-- t\F'i
"----"---. ~ \" -_j
-) --05 [) 0.5

Fi:;un- 11.83: Radiation pattcn;s of transmit <LTTay (dotted line), "e<..:eive :arr.ay (dashed !mt:i, ::md (\>eo-way radiation
pn!tem i ~lid liv.e )- TCe radia:ion patterns have been -'>caled hy a fad or of 16 t0 make the value of the two--way r.adiatinn
pattern at" = n umty.

where 0 in WR[n] indicates. the ab:->cncc ofao element. It is easy to shmv that the effective apenure. function
he:e is given by

{!1.134)

Figure 11.83 shows the radiation patterns of the individual array;-; and the two-way radiation puttern of the
compos ire array. Note that the grating Jobes in the radiation part{'.m ofthereceivearrayare being suppressed
by t.'It: radiation pattem of the lran~mit array Tim~, a nonsparse transmit array with two elemenb and a
;;par-se recdve array with ::ight elt:ments ha5 the same mdl.:ttion pattern of a single-element tmmmit array
and a !6-element nons.parse re:::eive array.
More eccnomic sparse mr.ty designR with eight dements are as follows [LK:96!:

tt~dnl = fl 00000011}, w,.,.[nl=PO 0 1 0 l}.


U:T[n]={l J!. WR(nJ = {J 0 0 0 l lJ 0 0 0 0 0 lj.

with the same effective aperture func"tion w.,ff[lil as given in Eq. O 1.134_).
The basic idea behind the sparse transmit and receive array designs given above is that oue sparse
array essentially "fills i.n" the missing elements in the second sparse .array by some type of interpolation
su11ilar to the concept ofinlerpolatetl HR f:Jter design ofNeu-;:o et ai. [Neu84b). The shape of the effective
aperture function can be made smoother tQ reduce tte grating lobes by controlling the shape ofthe transmit
and receive ~pcrture function:;. For example. rhc transmit and the receive aperture functions given by

wy[nJ ={I 0 0 0 0 I),


'-'-"R(nJ=<,l {) 0 0 0 ll
result in a tri.wguJar-shaped effective aperture func1iun given by

wc~dn] =II 2 2 2 2 1 3 1 2 2 2 j

The ccrresponding scaled radiation patterns are "hown in Figure 11.84(aj. Additional ~moothing can be
ohrained by apodizillg the individual aperture ftmction~ implem~nted by reducing the weigh\;; applied to
the outer elemen:s. For example. ;he transmit and tile receive aperture functions given hy

wr[nj = {l I 0 0 l J 0 0 I J},
WR[t!I={0.5 0 I 0 1 0 1 0 0.5}
11.15. Summary 829

,.
--~------
···-,--·

~u-.i ~ 'lto

'i ~'
i- iLl- 3" "-J:

"-': '' -' - .!


';-.: - ~- - .__:
_,' '"
""
<hl
Fi!~Ure IL84: H!u~tra!Hm <>! dk...:u•.c &:Lnun; ·.mnoth;;u; l~y ·-h";'tn.:' lJa:;<;mt :ul<i rccciv.: a~r:un: functions. The
:a.:i1aNm ?<>!lem~ ha'<i'." been ,,-:fled to oru.kc' ih~ value' ollh~· n.n,-;,,<y c,;,hatnn pattern al u = 0 unity.

LS U- l.5 0.5 O.SJ_

TI-e corrc-~p-om.ling scakd r,tJialion pattern:, an: >.how•~ ,;1 hg-t:r~· ll.S4thJ.

11.15 Summary
Tt.c Jiondo: Fut;ricr tramform 1DFT) i:- a \<,'idely u~d digital signal p_n_>ccs,ing algorithm. On~ of the
reasons for it)' widespread use i<> ti'_e an1ihtbility of fast Fourier tr.msfomJ (FFf) algorithm" for i£s cumpu-
t:ltion. Earlier in Section 3.6 we discusse:i the :mrkmcntation of bigh-~po;;-ed cnnvolut:on using the DFf.
It abn plays a majo: wle m the impkme-ntation d many fum.-tions in the Signal Prv,ys:.inf{ Toolbox of
\1 >\ 11 ,\_IJ_ Three other ap!)lication.• are ronsiden~d in thi~ chapter_ The !irs.t apphcali:un di"cH-'>~d i;> in the
d!ici~nt and robust detection of dl:ul-tonc multifrequcncy (DTMFi tones employed m d:al-pulse telephone
signaling. AT:'\1 :r::achinc'>, vui<T main~. ell·. The next apphcation treated here!'> in th.:- spectral analysi!-;
nf;,t~tlon;_;ry 3ipnah and is the l•<~'>l'i of mo-:t commerc1al spectrum analy.:crs For the spectral analysis. of
nons1a1ionary '>ign<tb, su.::h a., '>p<-'c·c·h and radar signaL-<. the DFTs of smaJl windowed <>egmem~ of these
siE~nals are computed. and a thnx-J.imen•.lonal display of the re5u!ting spectmms, called spcclrngrams,
;m~ employed. This type (>f "ip:,;! <~naJy,_;._ is more P'-''fll!lady called !he short-term Fourier tr<Jnsfonn, ur
ttw time-depende-nt Fourier tr<.uJ:-.fWT:l whi-:~1 JS cc-vcro..'d ne:>.L Thi;; is follov.-ed by a brief discus-;ion of
-;p·~.:tral <!nalysi,; of nmdorn \if!!;.d---.. Ht:r,· both :wnp.aramt:tric and p,c;rametrk spe<.:tral analysi~ m-ethod:-.
are re\ iewcd.
D:gitul oci,!!nal proce.~sing mdiwd,; ;;.re in;::·;;;;!Sinf:IY bt-mg employed in dig1tal audio applications fnr
spo:x:t~·al .;;haping an(f fur th~ g:encralion of -sp~·ci~d audi(1 ~-tlcc1>'. A number of these appli.;.·ations are outlined.
The~ an; fol!n\'<:d by ,, r..:vicw of digit.J.l <>tereo generation fnr F!v1 ,q~:~eo rransmissinn and a disv-'-""io.n
on the u:-c ,_,! mgi1al llll:ring for pnc-cmph"-~j~ of audiO '-Jg!Wb,
The discreh.'-lintc analytic signal ha..;; a 7.ern--valued o;pectrum fur ull negativt• frequem:ics and. as;,
rc>ult i;, a comp~e-x '-~i;ll:>l ':l.uch ;1 -,ignal c.ar: he gener<Jte-d from a real -;ignal by pussing the la1tcr thwugh
a c!Jsn.:le-limc Hilhert rr:u~formcr. Several me!!locls for designing the Hilbert tmnsfomTer arc de"crihed_
One zpplKJ.tion of 1hc analytic <>i)!ll:Jl cnmu.lercli .in this chap!er is m the design of a digital siogle-sideb;md
cnrnmunicatiun ") ::.H:m
The n~·xJ four applicati~mo, treated involve multirate dig.ita.l signal proce~;_;;ing. The first twoapphcatium
an: cmKemed <\ ah !he :.ubband eoding of speech and uudio signals for s1gnal compressiOn, and lhe design
830 Chapter 11: Applications ot Digital Signal Processing

of lransmnhiplexers ror intt·rconnccring frequency-divic.ion multiplex (FDM) and time~division mulhplex


(TDM) ~ommunicGtion -.;y:>rcms. Each of ~:h.:t>e app!ic::tticms require;; the UEe of the ~-called quadrature-
mirror filter (QMF) bunks.
A method for the efficient digital data transmi,,ion based on rnultirme tccimiques ~~ diM.'USSed next This
method makes use of the discrete Fourier transform (DFT)computation am~ as a result. can be implemented
usi:1g fast Fourier tmnsfnnn (FFT) algorithms. A noveJ method of designing a computationally cHicient
ftaclior.<~i sampling rate cnnverter for dig:i~al audio application..'> is then discus!>ed.
The m::xt two apphcations considered in this ;:;h;:;.pter are the oven.mnpling "igma-delta analog-to-digital
(A/0) <md digital-to-analog (DIAJ conv<rter design. Such converter<; are being increac.ing!y employed in
many .<..yslem" bccuusc of lhe:ir improved sJgnal-to-noti>e ra~ins.
Finally. the chapter shows tile ~imilarit) bet wr-en the fa;-tie!d pattcm ofalin<:ar antenna array of equally
spaced elements and the frequen;_:y response of an Flf{ filter. This stmilarity i'> then made U"-c oi· in l:hc
de<;ign of sp;m.;e ante11na array;,.
This chapter has touched upon a variety of practical a.oplic.attuns of digitai signal pnx:eo.c;ing. There
are numerous other applic::Hlons wiL~ many reqwring a knowledge of other ;,uh_jects. and as a re~ult. they
are beyond the scope of this book. Addilion:J! upplicatwn:-. C3n he fuund in vanous other book~; see, for
example, !Bab95J. (Fre94J, [LinE?!, l.Ma.-92). {Opp78]. iPapYOL {Sbe95).

11.16 Problems
11.1 A band limited coll!inuout.-!ime signal b ~<Jmpled at a :-.ue of 7500Hz to en~me nD aliasing, The sar.>pleLi signal
is windowed generu1ing a krrgth-1250 sequ:-nc~ who:-e R-pom! OFf is then computed. (a; Wtta.t i~ the frequeru;y
resolution of the DFf ~amples in Hz 1f R = 1250? {hl \\'hat shnu:tl he the V".liue of R :fa frequern.--y rcsoh.• tion uf
4.5 H:r. ili de~oiret!':'

11.2 A ~pcech stgnal i~ ~ampleJ al the !2-kH.;: rate. We wish to nnalyle, u~in~ the DFT, the spectrum of l~e sampled
speech of length 256.
(.aJ Jf we take a 256-pnim DFT nfth;& ~gcnem, whn would be the rt:'.~Oiution of rhe OFT ~ample,;?

(b) Describe a te<:hnU.f'le !u a1:hieve '-~ 16-Hl resolutmn of the 0£-L !><UTipks.
(c) De:-.cnhe a method W ac:hieve a 128--H:-- resoluti1m uslr.g a minimum !cngrh DFT.

Il_l A b.:md!1mitcd continuous-time .:-eal signal g"(t) it. sampled .:n a n!lt of FT H7 wher>: F 7 = 2F.,. w;th f;,
dei10Cing the highe~t frt'quen~y com:am<O'<J in g., ft l. An R-p<:~int OFf G!kj oi the ~cquence gin i obtained by sampling
g"!!) is ne:\CI \.'o,nputed
(a) If fm
= 5kHz. and R = !000. olctennme 1he continuou:Hlw.e fr""':!Ie3de~ corre.. pvmllng to 1he DFr ..ample
md;ces k = 200, 350. m~.d &24.
<b) IfF, = 7 .kHz and R = 1010. determ:ne the cuminuou;,-t11ne fn:quencit><; cocrcspon.Jing to the DJ-T sample
mdin~s/. = !95,-~W.and9-l7.

(c) E 1-;>; "-'5kHz and R = 517. determine 1he continuous-lime frequeocie'i corresponding ro the DFT sample
!rul!ee" k = ')7, \87, a..-Jd 3UI.

11.4 A cnntimmuH.Hne rr;;;J sinm;oidal signal w1!h a cotllinu"'-L<>-ti:ne Fumier transform Ga(jQ) is sampled m a mre
of FT Hz and pa-,.seU lhmugh a rectangular window gencrating a length-N 1.equeru:e y[nj, 0 ::=: 11 :S N - 1 Let l'!k]
deWJ!e ib .V-pomt DFT.lf lfkoJ is known, for what values of Q can the values of Gn(}Q) be d~t=minetl frum this
value of the Df'T'> Whal are the values of G,!]nl ut these angular frequencie\·o

11.5 A coolinuou:Hime re;;! sitm;;oidal slgillll !:ail) = l.'OS-\200nt) i~ sampled at a rute of FT Hz and passed through
a rec[;:tngularwindow genermiu.g a knglh--512 sequence Yinj, U ~ n _s 51 L Lc! f[kj denote ih 5 12-point Df-T. What
should be the Wille of FT for wbich :-'[k} = 0 for all \la!ues k except k = 64 and k = 448''
11.16. Problems 831

11.6 A bandlimited contmuons.-rime real signa: g6 {ri is 1>ampled at a rare of Fy Hz. where Fr = 2Fm with f;,,
d·:nvting the highest frequency contained in ga(t}. An R-polru DFf Glk !of the sequence x!n Job!alned by s~mpling
g~ft) is next computed.
(a) If F.., = 6kHz, ""hat .u-e the ••dues of R and the ~a1nphng rate FT to provide a DFT r~olution of 2.6 HL wilh
no aliasing"!
(b) If R ha;; to be a power 9f 2 to make u:<e of efficiem FFf algorithms frr the cornpul.almn of the DFT, what should
then be the v·afue of R to achieve the resoluti0£l of 3Hz if/;« {t) is sampled at a rare lu pnKiucc no alia~ing~

t 1.7 Consider i:he Lenglh-64 scqueno::e

l! is known that its 64-pcint OFf Xf k; has zero-Yalued sample~ ro:r all values of k exctopl k = 15. 27. J7 . .and 49. If
=
JXtl:SJ! 32 Jnd 1Xf27Ji = 16, determine the exact ex pres..~ ion foe x!n J without evalua!ing th>:: I OFf. 1:-, your am\\-er
unique? II not, how n1auy ~her sequern:es have tfle <:>am.: DFT magnitude:> ;;.s X !k j? lktenn.int th,; ex<'d e;;pres,.;;iorL'>
fc'l" these sequences.

tl.8 A bandlimitedcontinuous-tirne real signal Xa (t) is ;;ampled at a rate of 100 sample~-;ec ,generating ad!\ieretc-time
sign;;! x[n] of length 500 whose 500-poim OFf X{kj i~ then computed.
{a) Determine the totcl number of o::omplex multiplications anrl addJtwns needed for the d1n."d ev<1!uation of X!k !-
{b) \Vhat is the digital frequency rehlliuhon of the DFT sampl.~?

(c) What is the analog frequency re.~olutiun ofthe DFT ~arnples?

(d'• What should be the stopband edge frequency in Hz of the iillalog anti-aliasing !i:ter prior to sampling-,
(e) Determine the digital frequeru.."Y m rad:ans and the analog frequency in rad/s.ec ofth<:- Of-T .>amples X!31 J and
X!390J.
(f) Tf a Conley-Tukey type FFT algorithm is n~ed to compu!e the DFf samples, what shculd he the minimum
length of the DPT and how many zero-valued samples should be append<Xi ((> .r!nJ"_l
f_g) What are the samples of the mocbficd OFf that are doses! to the DI-T samples X!3ll and X[390!?
(h) What are the total number of complex multiplicahons and additions 11eed~ f<:} compute the ncodified DFf?

11.9 A spectral analysis of a sagnal o_-umposed of two sinu~idal sequences of notmali.~:ed frequencies j 1 and h, With
fl < h, is: to be carried out u_sing the OFT· based approach. To this end, the signal is wirxto'"ved by a Jeng:th-N window
ge·rx:cating a !Cilgth-N sequence xrnl If N = 60 and tl:e frequen,."YOf one of the smuwidalsequence is fl = 0.25.
de·:ennine ihe minimum valu:! of the frequency f2 of the second ,;inusoidtll sequence 'iO that both ,;inu!Soids can be
resolved using & length-R OFT, R > N, for each of the following windows;
{a) Rectangular window, {b) Hamming window, \c) Hann windO"N. and (d) Blackman window.

U.JO Repeat Problem 11.9 for N = 110.

11.11 A fumdlimitcd continuou~;-time Mgnal g"'(!) is sampled at a sampling rate of FT Hz generating the sequence
g[t:'] whi;;:h i~ then windowed by a length-N v.indow resulting in a length .. N ~equenee yin 1- Let the highe;,t frequency
cor,tained in 3a (t) be F,_ HL To analyze the spec!ml contents of fla ( 1) m1 N-point DFT of y[n 1 h carried ou!,
!a) What i:s the minimum value of Fr?
{b) Wba! is the maximum value of FT if the desired h-cquency re~olution of the DFT sample~ is less Ehan ~ F''
(c) lf Fm =4kHz and !'.!.F = IOH7- and N = 2f wi!h (.a positive integer, determine the minimum andmaxi1num
values of the sampling ra•e FT

11.12 l.el XsTFT(eJcv, n) and Ys'JfT(<"Jw, n) denote the iliurt-tenn Fourier transforms of two ~ucnces .dn 1 and
y!nJ .. obtamed by apptying the same window sei:JUCOCe t<>ln l- Prove the following p<operties of the STI--T:
832 Chapter 11; Applications of Digitai Signal Processing

(a) Linearity property: tf g(nl = axl11j ~ p_vjnl, then GSTFT(i>l"', n) = aXsTFT(ei"', n) + JH'STFf(eJ"',r]
where GsTFT\<:'1"', n} i~ the STl-'T of xlnl obtained using the wmdow U.'!nl.
{b) ShiftitlE property· If vlnl = t[n- no I- then rsTFTkJ"-', n) = X~JFT!el"-' .n- noJ, where no is any posith·e
or negative integer.
(c) Mndulation pn>perty: If _vjn] = ej""':lxjnj, tl'.en Y<;TFf(ej"'- n) = XsTFr(eJ{_w--('-'V; ,n).

Il.13 An alternate derln:ition of the short-tern: Fourier tran<:form is ~pven by


00
-~sn-rcej"'.nl = L x!m]w!n-mje-J<n_,.,_ (11.135)
nr=-oc

~ress X STf-'T(~·; '"', n) in tenns uf the short-term Fourier transform X ~rrFrkjru, n) defined in Eq. {11.16). What are
the main differences bet\\-een the two definitl(lns':'

II .14 Sh<ru.· th:ll the inve;,.e STFf ;;,;n be compute<J u>;ing {he following expres1>irnt:

(I Ll36)

provided w[O] #- 0.

11 ..15 Show that the inverse STFT ;:an abn be C(lmputed Ubing the following expression [NawliSl:

~rnl = (11.1371

11.16 Show t.hat t."re sampled short-term discrete Emrier transform Xs1FT(k,n] defined by Eq. (tl.l7) can be in-
terpreted as a bank of N linear time-mvariant titters .as indkated in Figure Pll. [ whose output& Ykfnl arc precisely
XsTFTfk, nj. k = 0, L __ .. N -- J. Determme lhe expre>~inns fnrlhe impulse response>< h.1Jn I <Jfthesc N fHten:. •



l''igure Pl I. I

1
11.17 Let XSTFT(f'_iw, n) denote the STFT uf" real sequence x[n]. Denote the inverse DTFT nf jxsTFTkj"'. n)j
as rlk. n ~- What isthe rela1i{1n between r tk. n J and xln n
J 1.18 A non stationary stgnal Xa {t) w:th lime-varying signal par.tmeters is >;;,impled at .a sampling r..rte- of Fr and the
spe\:tra\ analysis of the 'an1pled signal xln] is carried out uo;.ing the short-time dis..'Tete Fourier transform method with
an /II -point OFf_ {a! If the wmdow u~ed is of Guration r ~onds. what is the length of the wirn:Jo.w in samples:? (b)
If tfte window i"l advanced by K sample~ between two consecutive DFT computntions, what is the number of DPI'
compulations per second?
~1.16. Problems 833

Jl.l9 The short-term autocorrelation function of a deterministic sequence x[n] is defined by [Rab78]

00
:PST[k, n1 = L _-rfm]w[n- m]x[m +klwln - k - m], (ll.J38}
m=-::.>0

where w[nl is an appropriately chosen window sequence.


(a) Show t.i.at ?Srlk n)lS an even functim:. of k. i.e., 'PST[k. n] = 'PST[ -k. n].
(b) Slli.'W that <;JS,-{k, n] can be computed using the digital filter struclure of Fi:gure Pll.2, where hk[n] is the
impulse response of an LTJ discrete-lime system. Determioc the expression for hJ:[n].

k delays

FigurePU.l

1 1.20 The structure of Figure Pll.3 has been proposed as an allpass reverberator [Sch62]. Develop an equiv-alent
realization wlth the .~rnal]est number uf multipliers.

-a
xln I - 0 ---{>--''--<;t+)---- y!n]

Figure P11..3

1 L21 The structure of Figure Pll.4 has been proposed as an allpass reverberator with a variable ratio of direct-to-
rtrverberated !00\md, where the box tabeled "cllpass reverbemtor" is typically a cascade Qf the a~lpass reverberators of
tt;e f:mn of Figure P ll.3 [Sch62j. Develop an equivalmt realization with the smallesr number of multipliers.

x[n] -0------J:>'--""'----~(+ij--- y[n]

Figur-e PIIA

11.22 A generalization of the second-order equalizer of Eq. (11.65) !s given by

K1 K2
G2(z) =
2 { l - Az(z)j +T {1 + Az{z)}, (1U39)
834 Chapter 11: Applications of Dtgital Signal Processing

wtw:re A>( -;:J i\l. the o.ecood-mder all pass tnmsfe~ func(ion of Eq. ( ll.64). De•.-elop the pertinent ill: sign equations for
this equnliLer and plot the gain responses for various values of the- filter parameters K 1, K2, a, and /J.

11.23 Develop a computanonally efficient structure for realizing a fractiQnal-rate interpolatfK with an interpolation
factor of 3!2 by raking the cranspose of Figure 11.63.

11.24 De·.rdop a compu.mtionally eftk-ient strw.::tur-e for realizing a fractionaj-rate interpolalor 'With an i.nterpolaJion
ftlctm of 3/4 using the method outlined in Sa.:tion 11.11 .1.

11.17 MATLAB Exercises


M 11.1 Verify 1he dual-tone muhifrequency {DTMF) tonedetect1un Program ll_l by runniog it fCH" variow (e!ephone
key symbols as inputs.

M ll.l Using Program ll_2 analyze the spectral content of a Iength-16 sequeoce x[nf composed of a M:m uftwo
sious.oidal sequem:es of the fonn ofEq. (1 1.14) with frequencies fr = 0.18 and h = 0.3 for OFT lenglhs R = 16,
32, 64, and 128, resptttively. Comment on your results.

M 11.3 Using Program i 1_2 analyze the spectral content of a length~l6 sequence x{n j composed of a sum of lWO
sinu~oidal sequences of rhe form ofEq. (11.14) with frequencies !J = 0.18 and h = 0.3, 0.27, 0.24. and 0.21,
respectively. Use a DFT kmgth of R = 128. Comment on your results.

M liA Modify Program ll _2 to in'"estigate the effect of a 1apered window on the spectml aru~lylis uf a sequence
.rinl composed -nf a sum d :w;;, sinlLsoidai sequences. with closely spaced frequencie!\. Let x!:n] be of the form
x[ni = sm{h f1nJ + 0.9 sint2nf1n), where /1 = 0.[8 and h = 0.2L Use a Iength-N Hamming window, where
N =' l6, 32, 64, and 121:1. re~pcctively. Comment on yow results.

M U.S Repeat Exercise M! 1.4 using a Hann window.

M i 1.6 Repeat Exercise Mll.4 using a Blackman window.

M 11.7 Modify Program 11.3 to perform a DFf-based spectral analysis of a noise-corrupted sinusoidal sequence of
frequency 0.1 J. Arc you able to detect the frequency of tbe sinusoid? Justify your answer.

M 11.8 Repeat Exercise M11.7 for a frequency of0.l6.

l\f lVJ A signal composed of two sinusoidal components of angulai" frequendes 0.1Jr and 0.2Jr is corrupted with
a Gaussian di<Hnbuted random signal of zero mean and unity variance, and is. windo-w-ed by a rectangular window.
Evaluate and plot the power spectrum of the nois.e-eorrupted signal for two different window lengths: N 64 and =
N = 1024. Comment on ycur results..

.M lJ .10 E\·aluate and plm the Bartlett and Welch estimates of the pcl\\-'eJ" -~truro of the noise-rorrupred signal of
Exercise Ml 1.9 and windowed Dy a Hann window of length 1024. Ev-aluate the Welch estimate for overlaps ofM and
128 samples, respectively.

M 11.11 Design using the function remez a linear-phase lowpass FIR filter of Ol"deT l I widl passband edge al 0.31<
a.nd stopband edge at 0.5:rr. Use equal weights in the passband and the stopband. Next. using the function lpc,
develop an equivalent all-pote IIR model of the F1R lowpass filter for the following values of the orrler: 4, 5, am1 6.
Plot Ihe magnm:Kk response of the FIR filleT and the llR equh'alent in the same figure for each value of the order.
Comment on your result;:_
11 .17. M..uLAB Exercises 835

M 11.12 Design using the fun>.:tlon rente~ a widehand FIR filter F(.z} of degn~e 17 with a passband fmm 0 to U.9Jr
i:nd a stopband from il95J'! to 11". Use an appropriate weight vecror to weigh the passband and the stoph:md of the
w.ideband fiitec Plot the =gnitude response of F(z) and its corresponding Hilbert transformer F( -z 2 ). Hew• many
uwlttplien are needed to implement the Hilbert transformer'?

M 11.13 Design using the function remez a Hilbert transformer of degree 34 with a passband from 0,05J'! to 0.95ll".
Compare its computational complexity with that designed in Exercise Mll.l3.

M. H.l4 Design a real-coefficient baif-band elliptic filter G(;.) with the following specifications: Wp = 0.3tin,
Mr = 0.64n, and <is = 0.014. Plrn: its pole-zero locations using the function z-plane. Express G(rl in the form
e>f Eq. (11.92) and detennine the transfer functiOfls uf the allpass section$ .Ao<zh and A1 {z2 ). Next, derennine the
expression for the complex half-band filter H (;_) according tu Eq_ {1 1.93). Plot the gain responses of the real half-band
filter G(.:) and the complex half-band filter H(z}, along with the phase difference between Ao(-.: 2) and z-1 A 1( -z2).

M 11.15 Verify the operation of the sigma--delta quantizeron a constant-amplitude discrete-time input sequence using
l"rogram ll_6. Choose different values for the length N of the sequen.::e and its amp!itude A.

M-11.16 Verify the operation of the sigma-delta AID converter on a discrete-time sinusoidal input sequence using
Program 11_7. Choose different values for the length N of the sequence, its amplitude A, and angular frequency «.t•·

M 11.17 Verify the operntion of the sigma-delta DIA converter on a discrete-time sinusoidal input sequence using
Program 11_8. Choose different values [OI" the length N of the sequence, its amplitude A, and angutar frequency w0 .
Index
A/D, see Analog-to-digital (A/D)
Accumulator, 63, 67, 71, 77, 87
Adder, 45, 360
Addition
   fixed-point, 556
   floating-point, 561
Aliasing, 59, 62, 299, 667
   time-domain, 139
Aliasing distortion, 309
Alternation theorem, 464
Amplitude change function, 508
Amplitude response, 226
   Type 1 FIR filter, 228, 462
   Type 2 FIR filter, 229, 462
   Type 3 FIR filter, 230, 463
   Type 4 FIR filter, 230, 463
Analog comparator, 339
Analog filter
   Butterworth lowpass, 315
      order of, 316
   elliptic lowpass
      order of, 320
   Type 1 Chebyshev lowpass
      order of, 318
   Type 2 Chebyshev lowpass
      order of, 319
Analog filter design
   bandpass, 332
   bandstop, 334
   Butterworth approximation, 315-317
   Chebyshev approximation, 317-319
   elliptic approximation, 319-320
   elliptic lowpass, 319
   highpass, 331
   linear-phase approximation, 321
   lowpass, 313-329
      specifications, 313
   lowpass-to-bandpass transformation, 332
   lowpass-to-bandstop transformation, 334
   lowpass-to-highpass transformation, 331
   Type 1 Chebyshev lowpass, 317
   Type 2 Chebyshev lowpass, 318
   using MATLAB, 321-328
Analog signals
   definition, 2
   digital processing of, 37-43
   mathematical representation of, 2
   operations on, 3-12
Analog-to-digital (A/D) converter, 38, 299, 338-344, 600
   full-scale range, 601
   granular noise, 602
   overload noise, 602
   oversampling, 813-822
   quantization noise
      analysis of, 600-611
      model, 600
   saturation noise, 602
   signal-to-quantization noise ratio, 603
      effect of input scaling on, 605
Analytic signal
   analog, 6
      generation using a Hilbert transformer, 6
   discrete-time, 794
      generation of, 794-798
      generation using a Hilbert transformer, 795
Anti-aliasing filter, 299, 600
   design, 335
Aperture function, 827
AR, see Autoregressive (AR)
ARMA, see Autoregressive moving-average (ARMA)
Attenuation function, 205, 314
Autoregressive (AR) model, 776
Autoregressive moving-average (ARMA) model, 776


Backward difference operator, 501
Bandwidth
   3-dB, 238
   3-dB notch, 240
Baseband, 302
Bessel filter, 321
Bessel polynomial, 321
BIBO, see Bounded-input, bounded-output (BIBO)
Bilinear transformation, 430
Binary number
   canonic signed-digit representation, 556
   fixed-point representation, 553
   floating-point representation, 553
   hexadecimal representation, 556
   negative
      ones'-complement representation, 555
      sign-magnitude representation, 554
      two's-complement representation, 555
   offset binary representation, 555
   representation of negative, 554
   resolution of a, 553
   signed digit representation, 555
Binary point, 552
Bit, 552
   sign, 554
BR, see Transfer function, bounded-real (BR)
Butterfly computation, 543
Butterworth polynomial, 316
Byte, 553
Cauchy's residue theorem, 167
Cauer filter, see Analog filter design, elliptic lowpass
Causality condition
   in terms of impulse response, 80
Cepstrum, 198
Characteristic polynomial, 81
Chebyshev approximation, 464
Chebyshev criterion, 460, 471
Chebyshev polynomial, 317, 355, 413, 456, 465
Chirp z-transform, 197
Circulant matrix, 146
   pseudo-, 746
Circular convolution, 143
   matrix form of, 146
   using the DFT, 145
Circular shift, 140
Circular time-reversal, 143
Clutter, 236
Codec, 755
Coefficient quantization effects
   analysis of, 588-597
   analysis using MATLAB, 589-593
   estimation of pole-zero displacements, 593-597
   in FIR filters
      analysis of, 597-599
Complementary solution, 81
Continuous-time Fourier transform (CTFT), 6, 300, 352
Continuous-time signal, see also Analog signals, 2
Convolution sum, 72
   block, 419
Correlation computation
   using MATLAB, 90
Cosine-modulated filter bank
   L-channel, 730-734
   prototype lowpass filter design, 731-734
Critical sampling, 304
Cross-energy density spectrum, 128
Crossover frequency, 250
Cross-power spectrum, 178, 270
Cross-spectral density, see Cross-power spectrum
CTFT, see Continuous-time Fourier transform (CTFT)
Cutoff frequency, 222
   3-dB, 214, 237, 238, 250, 315
D/A, see Digital-to-analog (D/A)
Data smoothing
   polynomial fitting approach, 507
   Spencer's formula, 507
Decimation, 48
Decimator, 671, 821
   cascaded integrator comb (CIC), 821
   computational complexity comparison, 676
   computationally efficient realization, 686-688
   multistage design, 680-684
Deconvolution, 255
Delay equalizer, 246
Demodulation, 7
Circular time-reversal, 143

DFT, see Fast Fourier transform (FFT) algorithms
DFT matrix, 133
DIF, see Fast Fourier transform (FFT) algorithms
Difference equation, 80
   block, 420
Differentiator
   frequency response of ideal digital, 449
   impulse response of ideal digital, 449
Digital filter
   allpass
      maximally-flat group delay, 505
   bandpass
      frequency response of ideal, 222
      impulse response of ideal, 448
      second-order IIR, 238
   bandstop
      frequency response of ideal, 222
      impulse response of ideal, 448
      second-order IIR, 239
   comb, 241
      luminance and chrominance components separation using, 243
   conjugate quadratic, 252
   droop compensation, 350
   finite impulse response (FIR), 222
      clutter removal using, 236
      extra-ripple, 465
      fractional delay, 505
      impulse response with smooth transition, 459
      linear-phase condition, 226
      luminance and chrominance components separation using, 249
      realization of low-sensitivity, 633-635
      television aperture correction, 297
   highpass
      first-order FIR, 235
      first-order IIR, 237
      frequency response of ideal, 222
      impulse response of ideal, 448
   infinite impulse response (IIR), 222
      cascaded-lattice structure realization, 390
      clutter removal using, 285, 296
      multiple notch, 502
      parallel allpass realization, 401-405
      realization of low-sensitivity, 631-633
      signal-to-noise ratio in low-order, 625-629
      tapped cascaded-lattice structure realization, 391
   integrator, 504
   lowpass
      first-order FIR, 234
      first-order IIR, 236
      frequency response of ideal, 222
      impulse response of ideal, 121, 225, 448
   low-sensitivity, 629-635
      requirements for, 630
   multilevel
      frequency response of, 448
      impulse response of, 448
   notch, 240, 283
      frequency response of ideal, 505
   output noise power spectrum, 606
   output noise variance, 606
   passive, 233
   power-symmetric, 252
   Type 1 FIR, 227
   Type 2 FIR, 228
   Type 3 FIR, 229
   Type 4 FIR, 230
   zero-phase, 224, 231, 286
      implementation scheme, 224
Digital filter bank, 696-700
   analysis, 696
   synthesis, 696
   uniform DFT, 696-700
      polyphase implementation, 698-700
Digital filter design
   basic approaches, 426
   computer-aided, 460-468
   finite impulse response (FIR)
      amplitude sharpening approach, 508
      constrained least-square method, 470-472
      equiripple linear-phase, 461-468
         using MATLAB, 479-489
      frequency sampling approach, 508
      impulse invariance method, 499
      interpolated finite impulse response approach, 513
      least integral-squared error method, 447
      least-mean-square error method, 468
      least-squares error method, using MATLAB, 494-497

      lowpass-to-lowpass transformation, 581
      order estimation, 428
      order estimation for Dolph-Chebyshev window-based, 456
      order estimation for Kaiser window-based, 457
      order estimation using MATLAB, 476
      prefilter-equalizer method, 514
      step-response invariance method, 501
      tunable, 565-568
      using MATLAB, 476-497
      window-based, using MATLAB, 489-494
      windowed Fourier series method, 446-460
   infinite impulse response (IIR)
      bandpass, 438-440
      bandpass-to-bandpass transformation, 443
      bandstop, 440-441
      bilinear transformation method, 430-441
      highpass, 438
      highpass-to-highpass transformation, 443
      lowpass, 435-436
      lowpass-to-bandpass transformation, 444
      lowpass-to-bandstop transformation, 444
      lowpass-to-highpass transformation, 444
      lowpass-to-lowpass transformation, 442
      order estimation, 427, 477
      second-order notch, 434
      tunable, 562-565
      using MATLAB, 472-476
   lowpass specifications, 423
   using MATLAB, 472
   weighted error function, 460
Digital filter structure
   allpass, 378-387
      cascaded lattice realization, 382-387
      delay-sharing in cascaded, 417
      realization using multiplier extraction approach, 379
      Type 1 first-order, 380
      Type 2 second-order, 380
      Type 3 second-order, 380
   analysis of, 361
   basic building blocks, 360
   block diagram representation, 359-363
   canonic, 363
   cascade form IIR, 370
   computability condition, 518
   computational complexity, 408
   delay-free-loop problem, 362
   Farrow, 693
   finite impulse response (FIR), 364-368, 395-400
      cascaded lattice realization using MATLAB, 398
      block, 419
      cascade form, 365
      cascaded lattice, 395-400
      direct form, 364
      linear-phase, 367
      nested form, 413
      polyphase, 365, 686
      power-symmetric cascaded lattice, 399
      simulation of cascaded lattice, using MATLAB, 532
      simulation of direct form, using MATLAB, 530
      Taylor, 413, 581
   infinite impulse response (IIR), 368-378, 389-395, 401-405
      block, 420
      direct form I, 369
      direct form II, 370
      lattice, 384
      optimum ordering and pole-zero pairing of cascade form, 621-625
      parallel form I, 372
      parallel form II, 372
      scaling of cascade form, 617-619
      simulation of cascade form, 527
      simulation of cascaded lattice, using MATLAB, 532
      simulation of direct form, 524
      tunable first-order, 387
      tunable second-order, 388
   limit cycle free, 644
   matrix representation, 519
   noncanonic, 363
   normal form, 645
   realization using MATLAB, 374-378
   simulation and verification using MATLAB, 523-535
   tapped delay line, 364

lr.msversal FIR, 364 length of a, 43


verification method, 520 infinite-length, 43
Digital filter structures lowpass, 124
equivalent, 363 mathematical representation of. 2
via transpose operation, 363 sample of a. 42
D.igital FM stereo genecation, 790--793 time-domain representation of a, 42-44
Digital signal, 2, 43 Discrete-time system, 44,63
Digital sine-cosine generator, 405--4{}8. backward difference, 77
Digital-to-analog (DIA) converter, 38, 299, causal,69
344-348 classification of a. 67-70
oversampling, 822-826 finite--dimens:onal LTI, &0, 203
Digital two-pair, 259 output calculation, 80
chain matrix, 260 finite impulse response (FIR). 86
interconnection schemes, 260 infinite impulse response (IIR), 87
transfer matrix, 260 interconnection schemes, 76
Discrete cosine transform, 195 cascade connection, 76
Discrete Fourier series, 185 parallel connection, 77
Discrete Fourier transform (DFT) linear, 67
bin number, 755 linear time-invariant (LTI), 69
computation, 535-552 difference equation representation, 80
using Goertzel's algorithm, 535, 755 frequency response of, 215
using MATLAB, 134-136 time-domain characterization, 71-75
computational complexity of, 133 lossless, 70
definition, 131 noncausal, 69, 80
generalized, 191 nonlinear, 68
inverse, 131 nonrecursive, 87
matrix relations, 133 passive, 70
properties, 140 recursive, 87
relation with DTFT, 131 stable, 70
Discrete Hartley transform, 196 structurally bounded, 630
Discrete multitone transmission, 807-811 structurally passive, 630
Discrete-time Fourier transform (DTFT), 301 time-invariant, 69
computation using MATLAB, 128-129 Discrimination parameter, 315
computation using the DFT, 139 DIT, see Decimation-in-time (DIT)
definition, 118 Down-sampler, 48, 69, 661
frequency sample of, 131 frequency-domain characterization, 667
from DFT by interpolation, 138 time-domain characterization, 661
inverse, 120 Down-sampling, 48
linear convolution using, 130 Droop, 338, 349
mean-square convergence condition, 121 DTFT, see Discrete-time Fourier
properties, 124 Transform (DTFT)
sampling the, 138 DTMF, see Dual-tone multifrequency
uniform convergence condition, 120 (DTMF) signal
Discrete-time impulse, see Sequence, unit Dual-tone multifrequency (DTMF) signal, 30,
sample 753
Discrete-time signal, 2 detection, 538, 753-758
band-limited, 124 Dynamic range scaling, 614-629
bandpass, 124 L2-bound, 616
finite-length, 43 L∞-bound, 615
absolute bound, 615 Function approximation, 568-571


general rule, 616
using MATLAB, 619-621 Gain function, 205
Gibbs phenomenon, 122, 449-452, 461, 471
Echo cancellation, 35 Gray-Markel method, 391
Eigenfunction, 204 realization using MATLAB, 392
Electronic music synthesis, 33-35 Group delay, 213, 277
Energy density spectrum, 125
Energy signal, 53 Hadamard matrix, 743
Ergodic signal, 104 Hadamard transform, 197
Error-spectrum shaping, 637 Half-band filter, 701
Expander, see Up-sampler complex, 795
design of FIR, 796
Fast Fourier transform (FFT) algorithms, design of IIR, 798
538-552 Hilbert transform
MATLAB implementation, 549 discrete, 184
decimation-in-frequency (DIF), 547-548 Hilbert transformer, 10
decimation-in-time (DIT), 539-546 digital, 795
inverse DFT computation, 549 design, 796
mixed-radix, 545 relation with half-band filter, 795
split-radix, 549, 576 single-sideband modulation using,
FDM, see Frequency-division multiplexing 799-800
FFT, see Fast Fourier transform (FFT) frequency response of ideal analog, 6
algorithms frequency response of ideal digital, 449
Filter impulse response of ideal analog, 6
decimation, 671 impulse response of ideal digital, 449
specifications, 672 Hold circuit
digital, 209 first-order, 356
for fractional sampling rate alteration, 674 zero-order, 348
interpolation, 671 Hurwitz polynomial, 294
specifications, 672
median, 109 IDFT, see Discrete Fourier transform (DFT),
moving-average, 64, 70, 87, 88, 153, 205, Inverse
234, 235, 271, 820 IDFT matrix, 133
frequency response, 206 IIR, see Discrete-time system
transfer function, 217 Imaging, 664
recursive running sum (RRS), 88, 820 Impulse response, 70
running sum, 88 analytic calculation of, 83
FIR, see Discrete-time system calculation using MATLAB, 84
FM stereo, 32 from transfer function, 216
Forcing function, 81 In-place computation, 544
Fourier spectrum, 118 Initial condition, 64
Frequency-division multiplexing (FDM), 9, 32, Initial value theorem, 191
790 Input sequence, 44, 63
Frequency response, 205 Interpolation, 48
computation using MATLAB, 205 ideal bandlimited, 307
geometric interpretation, 219 Lagrange method, 195, 466, 505, 691-694
relation with impulse response, 205 linear, 66
Frequency warping, 431 Interpolator, 71, 80, 87, 671
computationally efficient realization, conv, 75, 91, 149, 376


686-688 cos, 59
fractional rate, 692-694 decimate, 676
multistage design, 684 deconv, 256
Inverse system, 76, 253 dftmtx, 134
direct2, 525
Kuhn-Tucker conditions, 472 ellip, 324, 473
ellipap, 324
Lag, 89 ellipord, 324, 473
Lagrange multiplier, 470 exp, 59
Lagrange polynomial, 691 fft, 134, 549, 761
Lagrangian, 470 fftfilt, 153, 529
LBR, see Transfer function, lossless filter, 84, 172, 270, 524
bounded-real (LBR) filtfilt, 224, 531
Leakage, 551, 760 fir1, 270, 479, 490
least significant bit (LSB), 552, 585 fir2, 491
Least-squares criterion, 461 fircls, 495
Limit cycles, 639-646 fircls1, 495
granular, 639-642 firls, 494
overflow, 643 freqs, 325
suppression using random rounding, 646 freqz, 128, 140, 205, 474
Linear convolution, 72 gfft, 538, 755
overlap-add method, 151-153 grpdelay, 239
implementation using MATLAB, 529 hamming, 490
overlap-save method, 154-155 hanning, 490
using the DFT, 149-155 ifft, 134, 549
Loss function, 205, 314 impinv, 510
LSB, see Least-significant bit (LSB) impz, 85, 172
Lth-band filter, 700-705 interp, 676
design of linear-phase FIR, 702-704 kaiser, 490
LTI, see Discrete-time system kaiserord, 478
latc2tf, 393
M-file
latcfilt, 532
a2dR, 589, 640, 643 lp2bp, 333
a2dT, 589, 643 lp2hp, 331
besselap, 324 lpc, 779
besself, 324
make_bank, 733
blackman, 490
opt_filter, 732
buttap, 322
poly2rc, 266, 386, 398
butter, 322, 473
psd, 774
buttord, 322, 473
rand, 64, 92, 571
cheb1ap, 322
randn, 116, 270, 571, 773
cheb1ord, 323, 473
remez, 479
cheb2ap, 323
remezord, 477
cheb2ord, 323, 473
resample, 676, 678
chebwin, 490
residue, 376, 609
cheby1, 323, 473
residuez, 169, 376
cheby2, 323, 473
roots, 374
comp_fiat, 733
sin, 59
sinc, 225 high-frequency shelving, 786


sos2tf, 625 low-frequency shelving, 786
sos2zp, 625 tunable second-order equalizer, 789
specgram, 769 flanging effect generation, 783, 784
strucver, 523 frequency-domain operations, 784-790
tf2latc, 392, 398 phasing effect generation, 784
tf2zp, 163 reverberation generation, 780-783
tfe, 270 time-domain operations, 780-784
unwrap, 128, 206
xcorr, 92 N-path filter, 747
zp2sos, 163, 374, 624 Nonuniform discrete Fourier transform, 194
zp2tf, 163, 322, 474 Normal equations, 469
zplane, 164, 218 Notch bandwidth, 434
MA, see Moving-average (MA) model Notch frequency, 240, 434, 505
Magnitude function, 118 Numerical integration
Magnitude response, 205 rectangular method, 113, 504
Magnitude spectrum, 118 Simpson's method, 279
Matched filter, 274 trapezoidal method, 87, 279, 431, 504
in spread spectrum communication, 274 Nyquist condition, 302
Maximally flat, see Butterworth polynomial Nyquist filter, see Lth-band filter
Maximum passband attenuation, 425 Nyquist frequency, 302
Minimax criterion, see Chebyshev criterion Nyquist rate, 304
Minimum stopband attenuation, 314, 424 Output noise variance
Mirror-image symmetry, 244 algebraic computation of, 606-608
Modulation, 7 computation using MATLAB, 608-611
amplitude, 7 computation using MATLAB. 608--6ll
double-sideband (DSB), 9 Output response
double-sideband suppressed carrier calculation using MATLAB, 84
(DSB-SC), 7, 212 Overflow, 558
quadrature amplitude, 11 handling of, 562
single-sideband (SSB), 10 saturation, 562
Modulation matrix, 716, 723 two's-complement, 562
Modulator, 45 two's-<:omplement, 562
Modulo operation, 140 Overshoot, 111
Most-significant bit (MSB), 552. 585 Overshoot, Ill
Moving-average (MA) model, 776
MSB, see Most-significant bit (MSB) Parallel-to-serial converter, 419
Multichannel signal, 42 Paraunitary matrix, 719
Multiplication Parks-McClellan algorithm, 461, 479, 796
fixed-point, 559 Partial-fraction expansion, 168
Booth's algorithm, 560 using MATLAB, 169
floating-point, 561 Particular solution, 81
Multiplier, 46, 360 Passband, 222, 313, 424
Multirate system Passband edge frequency, 313, 424
cascade equivalences, 669 Peak passband ripple, 314, 424
fractional delay, 742 Periodic convolution, 184
Musical sound processing, 780-790 Periodic impulse train, 123, 301
chorus effect generation, 784 Fourier series representation, 302
filter
Periodogram perfect reconstruction, 724


analysis of random signals using, 772-775 perfect reconstruction condition, 727
averaging, 774 polyphase representation, 725-729
Bartlett estimate, 774 multilevel, 734-738
definition, 772 equal passband widths, 734-736
estimate computation using MATLAB, 774 unequal passband widths, 737-738
modified, 772 two-channel, 705-721
Welch estimate, 774 alias-free condition, 707
Phase delay, 212 alias-free FIR, 710-713
Phase function, 118 alias-free IIR, 713-714
principal value, 118 an alias-free realization, 709
Phase response, 205 analysis of, 707
Phase spectrum, 118 biorthogonal, 721
Phase unwrapping, 118, 206 computationally efficient realization,
Pick-off node, 46, 360 709
Pipelining/interleaving technique, 748 magnitude preserving, 708
Pole interlacing property, 403 orthogonal, 716-721
Pole-zero pairing rule, 623 paraunitary, 720
Polynomial perfect reconstruction, 708
antimirror-image, 231 perfect reconstruction FIR, 714-721
degree, 159 phase preserving, 708
mirror-image, 231, 244 Quantization
Polyphase component matrix, 726 of a fixed-point number, 585
Polyphase components, 684 quantization error range, 588
Polyphase decomposition, 365, 684-686 rounding error, 586
structural interpretation of, 685 truncation error, 585
Type I, 686 of a floating-point number, 587
Type II, 686 relative error, 587
Power density spectrum, 177 relative error range, 588
Power signal, 53 Quantization level, 553
Power spectral density, 269 Quantization process and errors, 584-588
Power spectrum, see Power density spectrum Quantization step, 585
Power-symmetric condition, 252, 399 Quantized boxcar signal, 2
Precedence graph, 519
Prime factor algorithm, 578 Random process, see Random signal
Probability density function, 95 Random signal, 2, 95
Gaussian, 97 autocorrelation of a, 101
joint, 98 autocovariance of a, 101
uniform, 97 average power of a, 177
Probability distribution function, 95 discrete-time processing of, 267-272
joint, 98 mean value of a, 100
mean-square value of a, 100
QMF, see Quadrature-mirror filter (QMF) bank power in a, 104
Quadrature frequency, 709 realization of a, 95
Quadrature-mirror filter (QMF) bank statistical properties of a, 100-102
L-channel, 722-729 transform-domain representation, 176-179
alias-free condition, 723-725 variance of a, 100
analysis of, 722 white, 179
magnitude-preserving, 724 wide-sense stationary (WSS), 102
Random signals anti-causal, 44, 161


cross-correlation function of, 102 antisymmetric, 51
cross-covariance function of, 102 aperiodic, 51
Random variable, 95 aperiodic autocorrelation, 268
mean value of a, 96 autocorrelation, 89
moment of a, 96 properties of, 90
standard deviation of a, 96 average power of a, 52
statistical properties of a, 95-100 bounded, 53
uniformly distributed causal, 44, 161
mean value of a, 97 classification of a, 49-53
variance of a, 97 conjugate-antisymmetric, 49
variance of a, 96 conjugate-symmetric, 49
Reconstruction filter, 299 cross-correlation, 89
design, 348-350 delay estimation from, 91
Reflection coefficient, 390 properties of, 90
Region of convergence (ROC), 157 delaying of a, 46
Relaxed system, 83 down-sampling of a, 48
Remez exchange algorithm, 465 energy of a, 52
Residue, 168, 193 even, 49
Reverberation, 28 exponential, 55
ROC, see Region of convergence (ROC) generation using MATLAB, 59
Round-off error left-sided, 44
in FFT algorithms, 646-649 modulation of a, 45
product odd, 49
analysis model, 613 periodic, 51
analysis of, 611-614 fundamental period of a, 52
reduction using error-feedback, 635-638 period determination of a, 93
period of a, 51
S/H, see Sample-and-hold (S/H) circuit periodic conjugate-antisymmetric, 51
Sample-and-hold (S/H) circuit, 38, 299, 337, periodic conjugate-symmetric, 51
600 right-sided, 44
averaging in, 351 sampling rate alteration of a, 47
Sampled-data signal, 2 scalar multiplication of a, 45
Sampling frequency, 42, 300 sinusoidal, 55
angular, 302 amplitude of a, 55
Sampling interval, 42 angular frequency of a, 55, 58
Sampling period, 42, 300 frequency of a, 58
Sampling rate alteration fundamental period of a, 57
using MATLAB, 676 high-frequency, 58
Sampling rate alteration ratio, 47 low-frequency, 58
Sampling rate converter phase of a, 55
arbitrary-rate, 690-695 square-summable, 53
digital audio, 811-813 symmetric, 51
ideal, 690 time-reversal of a, 46
Sampling theorem, 62, 302 time-shifting of a, 46
Schwartz inequality, 273 two-sided, 44
Sequence, see also Discrete-time signal unit sample, 54
absolutely summable, 53 unit step, 54
advancing of a, 46 up-sampling of a, 48
Sequences bounded-input, bounded-output (BIBO),


addition of, 45 70
operations on, 44-48 in terms of characteristic equation root
product of, 45 locations, 86
Serial-to-parallel converter, 419 in terms of impulse response, 78
Shift-invariant, see Discrete-time system, time in terms of transfer function pole locations,
invariant 220
Short-time Fourier transform (STFT), 767-770 Stability test
computation using MATLAB, 769 algebraic procedure, 263-266
sampling in the time and frequency Stability triangle, 262
dimensions, 767 Steady-state response, 207, 208
speech signal analysis using, 770 Step response, 70
Sidelobe, 451 STFT, see Short-time Fourier transform (STFT)
Sigma-delta quantization scheme, 814 Stopband, 222, 313, 424
Signal, see also specific signal Stopband edge frequency, 313, 424
band-limited, 302 Structural subband decomposition, 742
bandpass, 310 System identification, 256-259
sampling of, 310
deterministic, 2, 94 TDM, see Time-division multiplexing (TDM)
diesel engine, 15 Telephone dialing, 30
electrocardiography (ECG), 12 Three-pulse canceler, 236
electroencephalogram (EEG), 14 Time constant, 289
image, 19 Time-dependent Fourier transform, see
lowpass, 310 Short-time Fourier transform (STFT)
musical sound, 17 Time-division multiplexing (TDM), 39
narrowband, 212 Time-invariant, see Discrete-time system
sampled-data, 43 Toeplitz matrix, 778
seismic, 14 Total solution, 81
time-series, 18 Transfer function, 216
Signal flow-graph, 518 allpass, 244
Signal processing applications, 22-37 properties, 246
Sound recording, 22-30 bounded-real (BR), 232, 430, 630
Sparse antenna array design, 826-829 distortion, 707, 724
Spectral analysis, 758-779 frequency response from, 218
of nonstationary signals, 764-770 linear-phase
of random signals, 771-779 types of, 226-232
autoregressive (AR) model-based, linear-phase FIR
778-779 zero locations of, 231
nonparametric method, 772-775 lossless bounded-real (LBR), 233
parametric-model-based, 776-779 maximum-phase, 248
of sinusoidal signals, 758-764 minimum-phase, 248
Spectral factorization, 717 of a finite-dimensional IIR filter, 216
Spectrogram, 767 of an FIR filter, 216
narrowband, 769 pole, 217
wideband, 769 scaling of a, 430
Speech signal, 15 types of, 222-233, 243-253
subband coding of, 800-803 zero, 217
Square-error criterion, 470 Transfer functions
Stability condition allpass-complementary, 249
complementary, 248-253 inverse, 167


delay-complementary, 248 by partial-fraction expansion, 167
doubly-complementary, 250, 401 formula, 167
magnitude-complementary, 252 using MATLAB, 172
power-complementary, 250, 401 via long division, 171
Transient response, 208 Parseval's relation, 174
Transition band, 223 pole of a, 159
Transition ratio, 314 power-series expansion of, 171
Transmultiplexer, 803-807 properties, 173
Twiddle factor, 577 rational, 159
Two-pulse canceler, 236 ROC of a, 159-166
zero of a, 159
Undersampling, 304 Zero-input response, 83
Unit circle, 157 Zero-padding, 44, 134, 145
Unit delay, 46, 360 Zero-phase filtering, 531
Unit sample response, see Impulse response Zero-phase filtering,
Up-sampler, 48, 66, 69, 660 simulation using MATLAB, 531
frequency-domain characterization, 664 Zero-phase response, see Amplitude response
time-domain characterization, 660 Zero-state response, 83
Up-sampling, 48

Warped discrete Fourier transform, 509


Wave digital filter, 629
White noise, 179
Wiener-Khintchine theorem, 177
Window
Bartlett, 507
Blackman, 452
properties, 454
Dolph-Chebyshev, 456
Hamming, 452
properties, 454
Hann, 452
properties, 454
Hanning, see Window, Hann
Kaiser, 456
main lobe width, 453
rectangular, 451
properties, 454
relative sidelobe level, 453
Word, 553
Wordlength, 338, 553, 585
WSS, see Random signal, wide-sense stationary
(WSS)

Yule-Walker equation, 778
z-transform
definition, 157
