Note For Signals and Systems
Stanley Chan
University of California, San Diego
Acknowledgement
This class note was prepared for ECE 101: Linear Systems Fundamentals at the University of California, San Diego, in Summer 2011. The program was supported by a UC San Diego Summer Graduate Teaching Fellowship.
I would like to give special thanks to Prof. Paul H. Siegel for sharing his handwritten notes, which became the backbone of this class note. I also want to thank Prof. Truong Q. Nguyen for proofreading and providing useful feedback. Part of the LaTeX manuscript was prepared by Mr. Lester Lee-kang Liu and Mr. Jason Juang.
The textbook used for this course is Oppenheim and Willsky, Signals and Systems, Prentice Hall, 2nd Edition.
Stanley Chan
La Jolla, CA.
August, 2011.
Contents
1 Fundamentals of Signals 5
1.1 What is a Signal? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Review on Complex Numbers . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Basic Operations of Signals . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Periodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 Even and Odd Signals . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.6 Impulse and Step Functions . . . . . . . . . . . . . . . . . . . . . . . 17
1.7 Continuous-time Complex Exponential Functions . . . . . . . . . . . 22
1.8 Discrete-time Complex Exponentials . . . . . . . . . . . . . . . . . . 24
2 Fundamentals of Systems 27
2.1 System Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.2 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.3 System Properties and Impulse Response . . . . . . . . . . . . . . . . 37
2.4 Continuous-time Convolution . . . . . . . . . . . . . . . . . . . . . . 41
3 Fourier Series 43
3.1 Eigenfunctions of an LTI System . . . . . . . . . . . . . . . . . . . . 43
3.2 Fourier Series Representation . . . . . . . . . . . . . . . . . . . . . . 47
3.3 Properties of Fourier Series Coefficients . . . . . . . . . . . . . . . . . 54
6 Sampling Theorem 83
6.1 Analog to Digital Conversion . . . . . . . . . . . . . . . . . . . . . . 83
6.2 Frequency Analysis of A/D Conversion . . . . . . . . . . . . . . . . . 84
6.3 Sampling Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.4 Digital to Analog Conversion . . . . . . . . . . . . . . . . . . . . . . 90
7 The z-Transform 95
7.1 The z-Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
7.2 z-transform Pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
7.3 Properties of ROC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
7.4 System Properties using z-transform . . . . . . . . . . . . . . . . . . 104
Chapter 1
Fundamentals of Signals
A complex number z can be written in Cartesian form as

    z = x + jy,

or in polar form as

    z = re^{jθ}.

Figure 1.1: A complex number z can be expressed in its Cartesian form z = x + jy, or in its polar form z = re^{jθ}.
A complex number can be drawn on the complex plane as shown in Fig. 1.1. The y-axis of the complex plane is known as the imaginary axis, and the x-axis of the complex plane is known as the real axis. A complex number is uniquely defined by z = x + jy in Cartesian form, or by z = re^{jθ} in polar form.
Example. Convert the following complex numbers from Cartesian form to polar
form: (a) 1 + 2j; (b) 1 − j.
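Python's built-in cmath module can be used to check such conversions numerically. A small sketch (the variable names are ours, for illustration only):

```python
import cmath

# (a) z = 1 + 2j: r = sqrt(1^2 + 2^2) = sqrt(5), theta = atan2(2, 1)
r_a, theta_a = cmath.polar(1 + 2j)

# (b) z = 1 - j: r = sqrt(2), theta = -pi/4
r_b, theta_b = cmath.polar(1 - 1j)

# Going back: r * e^{j theta} recovers the Cartesian form.
z_back = cmath.rect(r_a, theta_a)
```

Note that cmath.polar returns (r, θ) with θ in (−π, π], i.e., the principal value of the phase.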
Example. In Fig. 1.2, the left image shows a continuous-time signal x(t). A time-
shifted version x(t − 2) is shown in the right image.
Example.
For the signal x(t) shown in Fig. 1.5, sketch x(3t − 5).
Example.
For the signal x(t) shown in Fig. 1.6, sketch x(1 − t).
Decimation.
Decimation is defined as

    y_D[n] = x[Mn],    (1.5)

where M is a positive integer.

Expansion.
Expansion is defined as

    y_E[n] = x[n/L], if n is an integer multiple of L, and y_E[n] = 0 otherwise.    (1.6)
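Both operations can be sketched in a few lines of numpy (the function names decimate and expand are ours, not a standard API):

```python
import numpy as np

def decimate(x, M):
    # y_D[n] = x[Mn]: keep every M-th sample.
    return x[::M]

def expand(x, L):
    # y_E[n] = x[n/L] when n is a multiple of L, 0 otherwise.
    y = np.zeros(len(x) * L, dtype=x.dtype)
    y[::L] = x
    return y

x = np.array([1, 2, 3, 4, 5, 6])
yD = decimate(x, 2)   # keeps x[0], x[2], x[4]
yE = expand(x, 2)     # inserts a zero after every sample
```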
1.4 Periodicity
1.4.1 Definitions
Definition 1. A continuous time signal x(t) is periodic if there is a constant T > 0
such that
x(t) = x(t + T ), (1.7)
for all t ∈ R.
Signals that do not satisfy the periodicity condition are called aperiodic signals.
Example. Consider the signal x(t) = sin(ω0 t), ω0 > 0. It can be shown that x(t) = x(t + T), where T = k(2π/ω0) for any k ∈ Z+:

    x(t + T) = sin(ω0(t + k·2π/ω0))
             = sin(ω0 t + 2πk)
             = sin(ω0 t) = x(t).
    x(t) = x(t + T)
    ⇒ e^{j(3π/5)t} = e^{j(3π/5)(t+T)}
    ⇒ 1 = e^{j(3π/5)T}
    ⇒ e^{j2kπ} = e^{j(3π/5)T}, for some k ∈ Z+
    ⇒ T = 10k/3.

The fundamental period is T = 10/3 (k = 1).
For (b), we let x[n] = ej3πn/5 . If x[n] is a periodic signal, then there exists an integer
N > 0 such that x[n] = x[n + N ]. So,
    x[n] = x[n + N]
    ⇒ e^{j(3π/5)n} = e^{j(3π/5)(n+N)}
    ⇒ e^{j2kπ} = e^{j(3π/5)N}, for some k ∈ Z+
    ⇒ N = 10k/3
    ⇒ N = 10 (k = 3).
Figure 1.9: Difference between (a) x(t) = cos(πt²/8) and (b) x[n] = cos(πn²/8). Note that x(t) is aperiodic, whereas x[n] is periodic.
On the other hand, x[n] is periodic, with fundamental period N0 = 8. To see this,
consider the periodicity condition x[n] = x[n + N ], which becomes:
    cos(π(n + N)²/8) = cos(πn²/8),

which requires (π/8)(n + N)² = (π/8)n² + 2πk, i.e.,

    (n + N)² = n² + (8/π)(2πk) = n² + 16k,

or

    n² + 2nN + N² = n² + 16k,

implying

    2nN + N² = 16k,
for some k ∈ Z. Next, we want to find an N such that 2nN + N 2 is divisible by 16
for all n ∈ Z. Now we claim: N = 8 satisfies this condition, and no smaller N > 0
does.
Setting N = 8, we get
2nN + N 2 = 16n + 64,
which, for any n ∈ Z, is clearly divisible by 16. So N = 8 is a period of x[n]. You can
check directly that, for any 1 ≤ N < 8, there is a value n ∈ Z such that 2nN + N 2 is
not divisible by 16. For example, if we consider N = 4, we get

    2nN + N² = 8n + 16,

which is not divisible by 16 when n is odd (e.g., n = 1 gives 24).
mN ≡ 0 ( mod N0 ).
Example 1: N0 = 8, m = 2, then N = 4.
Example 2: N0 = 6, m = 4, then N = 3.
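The claim that N = 8 is a period of x[n] = cos(πn²/8), and that no smaller N > 0 works, is easy to confirm by brute force; a quick numerical check:

```python
import numpy as np

def is_period(N, n_range=1000):
    # Check x[n + N] == x[n] for x[n] = cos(pi n^2 / 8) over many n.
    n = np.arange(-n_range, n_range)
    return np.allclose(np.cos(np.pi * n ** 2 / 8),
                       np.cos(np.pi * (n + N) ** 2 / 8))

periods = [N for N in range(1, 9) if is_period(N)]   # only N = 8 passes
```

For instance N = 4 fails because the phase picks up an extra πn, which flips the sign of the cosine for odd n.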
A continuous-time signal x(t) is even if x(−t) = x(t), and it is odd if

    x(−t) = −x(t).    (1.10)

Similarly, a discrete-time signal x[n] is even if x[−n] = x[n], and odd if

    x[−n] = −x[n].    (1.12)
Remark: The all-zero signal is both even and odd. Any other signal cannot be both even and odd, but may be neither. The following simple examples illustrate these properties.
Figure 1.10: Illustrations of odd and even functions. (a) Even; (b) Odd; (c) Neither.
Proof. Define

    y(t) = (x(t) + x(−t))/2

and

    z(t) = (x(t) − x(−t))/2.

Clearly y(−t) = y(t) and z(−t) = −z(t). We can also check that x(t) = y(t) + z(t).
Terminology: The signal y(t) is called the even part of x(t), denoted by Ev{x(t)}.
The signal z(t) is called the odd part of x(t), denoted by Odd{x(t)}.
For example, if x(t) = e^t, then

    Ev{x(t)} = (e^t + e^{−t})/2 = cosh(t),
    Odd{x(t)} = (e^t − e^{−t})/2 = sinh(t).
Similarly, we can define even and odd parts of a discrete-time signal x[n]:

    Ev{x[n]} = (x[n] + x[−n])/2,
    Odd{x[n]} = (x[n] − x[−n])/2.
It is easy to check that

    x[n] = Ev{x[n]} + Odd{x[n]}.

Moreover, if x[n] = y[n] + z[n] where y[n] is even and z[n] is odd, then y[n] = Ev{x[n]} and z[n] = Odd{x[n]}.
Therefore,

    x[n] + x[−n] = (y[n] + z[n]) + (y[n] − z[n]) = 2y[n],

implying y[n] = (x[n] + x[−n])/2 = Ev{x[n]}. Similarly, z[n] = (x[n] − x[−n])/2 = Odd{x[n]}. The converse is trivial by definition, as Ev{x[n]} must be even and Odd{x[n]} must be odd.
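The decomposition x[n] = Ev{x[n]} + Odd{x[n]} is easy to compute on a finite symmetric index range; a sketch for x[n] = e^{n/2}, which should reproduce the cosh/sinh example above:

```python
import numpy as np

n = np.arange(-4, 5)          # symmetric range, so x[-n] is just a flip
x = np.exp(0.5 * n)           # x[n] = e^{n/2}, neither even nor odd

x_rev = x[::-1]               # x[-n]
ev = (x + x_rev) / 2          # Ev{x[n]} = (x[n] + x[-n]) / 2
od = (x - x_rev) / 2          # Odd{x[n]} = (x[n] - x[-n]) / 2
```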
Shifting Property
Since x[n]δ[n] = x[0]δ[n] and ∑_{n=−∞}^{∞} δ[n] = 1, we have

    ∑_{n=−∞}^{∞} x[n]δ[n] = ∑_{n=−∞}^{∞} x[0]δ[n] = x[0] ∑_{n=−∞}^{∞} δ[n] = x[0],

and similarly

    ∑_{n=−∞}^{∞} x[n]δ[n − n0] = ∑_{n=−∞}^{∞} x[n0]δ[n − n0] = x[n0].    (1.16)
Representation Property
Using the sampling property, it holds that
x[k]δ[n − k] = x[n]δ[n − k].
Summing both sides over the index k yields

    ∑_{k=−∞}^{∞} x[k]δ[n − k] = ∑_{k=−∞}^{∞} x[n]δ[n − k] = x[n] ∑_{k=−∞}^{∞} δ[n − k] = x[n].

This result shows that every discrete-time signal x[n] can be represented as a linear combination of shifted unit impulses:

    x[n] = ∑_{k=−∞}^{∞} x[k]δ[n − k].    (1.18)
Figure 1.13: Representation of a signal x[n] using a train of impulses δ[n − k].
where

    ∫_{−∞}^{∞} δ(t)dt = 1.
Sampling Property
x(t)δ(t) = x(0)δ(t). (1.19)
To see this, note that x(t)δ(t) agrees with x(0)δ(t) at t = 0, and both are zero when t ≠ 0.
Similarly, we have
x(t)δ(t − t0 ) = x(t0 )δ(t − t0 ), (1.20)
for any t0 ∈ R.
Shifting Property
The shifting property follows from the sampling property. Integrating x(t)δ(t) yields
    ∫_{−∞}^{∞} x(t)δ(t)dt = ∫_{−∞}^{∞} x(0)δ(t)dt = x(0) ∫_{−∞}^{∞} δ(t)dt = x(0).    (1.21)
Representation Property
The representation property is also analogous to the discrete-time case:
    x(t) = ∫_{−∞}^{∞} x(τ)δ(t − τ)dτ,    (1.23)
Figure: (a) x(t) = (1/2)e^{−t/2}; (b) x(t) = (1/2)e^{t/2}.
    Re{x(t)} = A cos(ω0 t + θ),
    Im{x(t)} = A sin(ω0 t + θ).

We can think of x(t) as a pair of sinusoidal signals with the same amplitude A, frequency ω0, and phase shift θ, one a cosine and the other a sine.
The second claim is the immediate result from the first claim. To show the first claim,
we need to show x(t+T0 ) = x(t) and no smaller T0 can satisfy the periodicity criteria.
    x(t + T0) = Ce^{jω0(t + 2π/|ω0|)}
              = Ce^{jω0 t}e^{±j2π}
              = Ce^{jω0 t} = x(t).
Figure 1.16: Periodic complex exponential function x(t) = Ae^{rt}e^{jω0 t} (A = 1, r = −1/2, ω0 = 2π).
1. When |α| = 1, then x[n] = |C| cos(Ω0 n + θ) + j|C| sin(Ω0 n + θ) and it has
sinusoidal real and imaginary parts (not necessarily periodic, though).
2. When |α| > 1, then |α|n is a growing exponential, so the real and imaginary
parts of x[n] are the product of this with sinusoids.
3. When |α| < 1, then the real and imaginary parts of x[n] are sinusoids scaled by a decaying exponential.
    x[n + N] = x[n], ∀n ∈ Z.

This is equivalent to requiring e^{jΩ0 N} = 1, i.e., Ω0 N = 2πm for some integer m.
This means that we can limit the range of values of Ω0 to any real interval of length 2π.
The periodicity in frequency applies, of course, to the periodic complex exponential
signals, so we have a different notion of low and high frequencies in the discrete-time
setting.
Chapter 2
Fundamentals of Systems
2.1.2 Invertible
Definition 13. A system is invertible if distinct input signals produce distinct output
signals.
In other words, a system is invertible if there exists a one-to-one mapping from the set of input signals to the set of output signals.
1. To show that a system is invertible, one has to show the inversion formula.
2. To show that a system is not invertible, one has to give a counterexample.
Example 1.
The system y(t) = (cos(t) + 2)x(t) is invertible.
Proof. To show that the system is invertible, we need to find an inversion formula. This is easy: y(t) = (cos(t) + 2)x(t) implies (by rearranging terms)

    x(t) = y(t)/(cos(t) + 2),

which is the inversion formula. Note that the denominator is always positive, thus the division is valid.
Example 2.
The system y[n] = x[n] + y[n − 1] is invertible.
Example 3.
The system y(t) = x2 (t) is not invertible.
Proof. To show that a system is not invertible, we construct a counterexample. Consider two signals

    x1(t) = 1, ∀t,
    x2(t) = −1, ∀t.

Clearly x1(t) ≠ x2(t), but (x1(t))² = (x2(t))². Therefore, we have found a counterexample in which different inputs give the same output. Hence the system is not invertible.
2.1.3 Causal
Definition 14. A system is causal if the output at time t (or n) depends only on
inputs at time s ≤ t (i.e., the present and past).
Examples.
1. y[n] = x[n − 1] is causal, because y[n] depends on the past sample x[n − 1].
2. y[n] = x[n] + x[n + 1] is not causal, because x[n + 1] is a future sample.
3. y(t) = ∫_{−∞}^{t} x(τ)dτ is causal, because the integral runs over τ from −∞ to t (which are all in the past).
4. y[n] = x[−n] is not causal, because y[−1] = x[1], which means the output at n = −1 depends on an input in the future.
5. y(t) = x(t)cos(t + 1) is causal (and memoryless), because cos(t + 1) is a constant with respect to x(t).
2.1.4 Stable
To describe a stable system, we first need to define the boundedness of a signal.
Definition 15. A signal x(t) (and x[n]) is bounded if there exists a constant B < ∞
such that |x(t)| < B for all t.
Definition 16. A system is stable if a bounded input signal always produces a bounded output signal. That is, if |x(t)| ≤ B for some B < ∞, then

    |y(t)| < ∞.
Example 1.
The system y(t) = 2x2 (t − 1) + x(3t) is stable.
Proof. To show the system is stable, let us consider a bounded signal x(t), that is,
|x(t)| ≤ B for some B < ∞. Then
    |y(t)| = |2x²(t − 1) + x(3t)|
           ≤ |2x²(t − 1)| + |x(3t)|, by the Triangle Inequality
           = 2|x(t − 1)|² + |x(3t)|
           ≤ 2B² + B < ∞.
Therefore, for any bounded input x(t), the output y(t) is always bounded. Hence the
system is stable.
Example 2.
The system y[n] = ∑_{k=−∞}^{n} x[k] is not stable.

Proof. To show that the system y[n] = ∑_{k=−∞}^{n} x[k] is not stable, we can construct a bounded input signal x[n] and show that the output signal y[n] is not bounded. For example, take the bounded input x[n] = u[n]; then y[n] = n + 1 for n ≥ 0, which grows without bound.
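This counterexample (the unit step, one of many possible choices) is simple to see numerically: the input is bounded by 1, yet the accumulator output grows linearly.

```python
import numpy as np

x = np.ones(100)       # x[n] = u[n] restricted to n = 0..99; |x[n]| <= 1
y = np.cumsum(x)       # accumulator: y[n] = sum_{k <= n} x[k] = n + 1
```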
2.1.5 Time-invariant
Definition 17. A system is time-invariant if a time-shift of the input signal results in the same time-shift of the output signal. That is, if

    x(t) −→ y(t),

then

    x(t − t0) −→ y(t − t0),

for any t0 ∈ R.
Example 1.
The system y(t) = sin[x(t)] is time-invariant.
Now, we have to check whether y1(t) = y(t − t0). To show this, we note that y1(t) = sin[x1(t)] = sin[x(t − t0)] = y(t − t0). Hence the system is time-invariant.
Example 2.
The system y[n] = nx[n] is not time-invariant.

Proof. To show that the system is not time-invariant, we can construct a counterexample. Let x[n] = δ[n]; then y[n] = nδ[n] = 0, ∀n (Why?). Now, let x1[n] = x[n − 1] = δ[n − 1]. If y1[n] is the output produced by x1[n], then y1[n] = nδ[n − 1] = δ[n − 1] ≠ 0 = y[n − 1], so the system is not time-invariant.
2.1.6 Linear
Definition 18. A system is linear if it is additive and scalable. That is, if x1(t) −→ y1(t) and x2(t) −→ y2(t), then

    ax1(t) + bx2(t) −→ ay1(t) + by2(t),

for all a, b ∈ C.
Example 1.
The system y(t) = 2πx(t) is linear. To see this, consider the signal x(t) = ax1(t) + bx2(t), where y1(t) = 2πx1(t) and y2(t) = 2πx2(t). Then

    y(t) = 2πx(t) = 2π(ax1(t) + bx2(t)) = a(2πx1(t)) + b(2πx2(t)) = ay1(t) + by2(t).
Example 2.
The system y[n] = (x[2n])² is not linear. To see this, consider the signal x[n] = ax1[n] + bx2[n], where y1[n] = (x1[2n])² and y2[n] = (x2[2n])². We want to see whether y[n] = ay1[n] + by2[n]. However,

    y[n] = (x[2n])² = (ax1[2n] + bx2[2n])² = a²(x1[2n])² + b²(x2[2n])² + 2ab·x1[2n]x2[2n],

which is not equal to ay1[n] + by2[n] in general.
2.2 Convolution
2.2.1 What is Convolution?
Linear time-invariant (LTI) systems are good models for many real-life systems, and they have properties that lead to a very powerful and effective theory for analyzing their behavior. In what follows, we study an LTI system through its characteristic function, called the impulse response.
To begin with, let us consider discrete-time signals. Denote by h[n] the “impulse
response” of an LTI system S. The impulse response, as it is named, is the response
of the system to a unit impulse input. Recall the definition of a unit impulse:

    δ[n] = 1 for n = 0, and δ[n] = 0 for n ≠ 0.    (2.1)
because ∑_{k=−∞}^{∞} δ[n − k] = 1 for all n. The sum on the right-hand side is

    ∑_{k=−∞}^{∞} x[k]δ[n − k].

Therefore, equating the left-hand side and the right-hand side yields

    x[n] = ∑_{k=−∞}^{∞} x[k]δ[n − k].    (2.3)
In other words, for any signal x[n], we can always express it as a sum of impulses!
Next, suppose we know that the impulse response of an LTI system is h[n]. We want
to determine the output y[n]. To do so, we first express x[n] as a sum of impulses:
    x[n] = ∑_{k=−∞}^{∞} x[k]δ[n − k].

For each impulse δ[n − k], we can determine its response, because for an LTI system:

    δ[n − k] −→ h[n − k].

Consequently, we have

    x[n] = ∑_{k=−∞}^{∞} x[k]δ[n − k] −→ ∑_{k=−∞}^{∞} x[k]h[n − k] = y[n].

This equation,

    y[n] = ∑_{k=−∞}^{∞} x[k]h[n − k],    (2.4)
• Convolution is true only when the system is LTI. If the system is time-varying, then

    y[n] = ∑_{k=−∞}^{∞} x[k]h_k[n − k].
2. Shift
Let us compute the output y[n] one by one. First, consider y[0]:

    y[0] = ∑_{k=−∞}^{∞} x[k]h[0 − k] = ∑_{k=−∞}^{∞} x[k]h[−k] = 1.

Note that h[−k] is the flipped version of h[k], and ∑_{k=−∞}^{∞} x[k]h[−k] is the multiply-add between x[k] and h[−k].
To calculate y[1], we flip h[k] to get h[−k], shift h[−k] to get h[1 − k], and multiply-add to get ∑_{k=−∞}^{∞} x[k]h[1 − k]. Therefore,

    y[1] = ∑_{k=−∞}^{∞} x[k]h[1 − k] = 1 × 1 + 2 × 1 = 3.
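The flip-shift-multiply-add recipe can be coded directly. Here we assume x[n] = {1, 2} and h[n] = {1, 1}, both starting at n = 0 — a pair consistent with the values y[0] = 1 and y[1] = 3 computed above (the original signals appear only in a lost figure):

```python
import numpy as np

x = np.array([1, 2])      # assumed input: x[0] = 1, x[1] = 2
h = np.array([1, 1])      # assumed impulse response: h[0] = h[1] = 1

def conv_direct(x, h):
    # y[n] = sum_k x[k] h[n-k]: for each n, flip h, shift by n, multiply-add.
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

y = conv_direct(x, h)     # y[0] = 1, y[1] = 3, y[2] = 2
```

The same result comes from the library routine np.convolve(x, h).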
2.3.1 Memoryless
A system is memoryless if the output depends on the current input only. An equivalent statement using the impulse response h[n] is that an LTI system is memoryless if and only if h[n] = aδ[n] for some constant a.
Proof. If h[n] = aδ[n], then for any input x[n], the output is

    y[n] = x[n] ∗ h[n] = ∑_{k=−∞}^{∞} x[k]h[n − k]
         = ∑_{k=−∞}^{∞} x[k]aδ[n − k]
         = ax[n].
So, the system is memoryless. Conversely, if the system is memoryless, then y[n]
cannot depend on the values x[k] for k 6= n. Looking at the convolution sum formula
    y[n] = ∑_{k=−∞}^{∞} x[k]h[n − k],

we conclude that

    h[n − k] = 0, for all k ≠ n,

or equivalently,

    h[n] = 0, for all n ≠ 0.

This implies

    y[n] = x[n]h[0] = ax[n],

where we have set a = h[0].
2.3.2 Invertible
Theorem 5. An LTI system is invertible if and only if there exists g[n] such that h[n] ∗ g[n] = δ[n].

Proof. If a system S is invertible, then x1[n] ≠ x2[n] implies y1[n] ≠ y2[n]. So there exists an injective (one-to-one) mapping S such that y[n] = S(x[n]) = h[n] ∗ x[n]. Since S is injective, there exists an inverse mapping S^{−1} such that

    S^{−1}(S(x[n])) = x[n].

Conversely, if there exists g[n] such that h[n] ∗ g[n] = δ[n], then any input can be recovered from the output:

    g[n] ∗ y[n] = g[n] ∗ h[n] ∗ x[n] = δ[n] ∗ x[n] = x[n].

Hence distinct inputs must produce distinct outputs, and the system is invertible.
2.3.3 Causal
Theorem 6. An LTI system is causal if and only if h[n] = 0 for all n < 0.
Proof. If S is causal, then the output y[n] cannot depend on x[k] for k > n. From the convolution equation,

    y[n] = ∑_{k=−∞}^{∞} x[k]h[n − k],

we must have

    h[n − k] = 0, for k > n, or equivalently, h[n − k] = 0, for n − k < 0.

Setting m = n − k, we see that

    h[m] = 0, for m < 0.

Conversely, if h[k] = 0 for k < 0, then for input x[n],

    y[n] = ∑_{k=−∞}^{∞} h[k]x[n − k] = ∑_{k=0}^{∞} h[k]x[n − k].
2.3.4 Stable
Theorem 7. An LTI system is stable if and only if

    ∑_{k=−∞}^{∞} |h[k]| < ∞.

Proof. Suppose that ∑_{k=−∞}^{∞} |h[k]| < ∞. For any bounded signal |x[n]| ≤ B, the output satisfies

    |y[n]| = |∑_{k=−∞}^{∞} x[k]h[n − k]|
           ≤ ∑_{k=−∞}^{∞} |x[k]| · |h[n − k]|
           ≤ B · ∑_{k=−∞}^{∞} |h[n − k]| < ∞.
Example.
The continuous-time convolution also follows the three-step rule: flip, shift, multiply-add. To see an example, let us consider the signal x(t) = e^{−at}u(t) for a > 0, and impulse response h(t) = u(t). The output y(t) is:

Case A: t > 0:

    y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ)dτ
         = ∫_{−∞}^{∞} e^{−aτ}u(τ)u(t − τ)dτ
         = ∫_0^t e^{−aτ}dτ
         = (1/a)(1 − e^{−at}).

Case B: t ≤ 0:

    y(t) = 0.

Therefore,

    y(t) = (1/a)(1 − e^{−at})u(t).
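A Riemann-sum approximation of the convolution integral lets us check the closed form y(t) = (1/a)(1 − e^{−at})u(t) numerically (the step size dt and the value a = 2 are arbitrary choices of ours):

```python
import numpy as np

a, dt = 2.0, 1e-3
t = np.arange(0, 5, dt)
x = np.exp(-a * t)                 # e^{-a t} u(t), sampled for t >= 0
h = np.ones_like(t)                # u(t), sampled for t >= 0

# Discretize the convolution integral: y(t) ~ sum_tau x(tau) h(t - tau) dt.
y_num = np.convolve(x, h)[:len(t)] * dt
y_closed = (1 - np.exp(-a * t)) / a

max_err = np.max(np.abs(y_num - y_closed))
```

The discretization error shrinks with dt; for t large, y(t) settles at 1/a, the total area under e^{−at}u(t).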
Memoryless.
An LTI system is memoryless if and only if h(t) = aδ(t) for some constant a.

Invertible.
An LTI system is invertible if and only if there exists g(t) such that h(t) ∗ g(t) = δ(t).

Causal.
An LTI system is causal if and only if h(t) = 0 for all t < 0.

Stable.
An LTI system is stable if and only if

    ∫_{−∞}^{∞} |h(τ)|dτ < ∞.
Chapter 3
Fourier Series
The objective of this chapter is to identify a family of signals {xk(t)} such that:

1. Every signal in the family passes through any LTI system with only a scale change (or other simply described change):

    xk(t) −→ λk xk(t),

where λk is a scale factor.

2. "Any" signal can be represented as a "linear combination" of signals in the family:

    x(t) = ∑_{k=−∞}^{∞} ak xk(t).
Suppose that x(t) = e^{st} for some s ∈ C. Then the output is given by

    y(t) = h(t) ∗ x(t) = ∫_{−∞}^{∞} h(τ)x(t − τ)dτ
         = ∫_{−∞}^{∞} h(τ)e^{s(t−τ)}dτ
         = (∫_{−∞}^{∞} h(τ)e^{−sτ}dτ) e^{st} = H(s)e^{st} = H(s)x(t).
The function H(s) is known as the transfer function of the continuous-time LTI sys-
tem. Note that H(s) is defined by the impulse response h(t), and is a function in s
(independent of t). Therefore, H(s)x(t) can be regarded as a scalar H(s) multiplied
to the function x(t).
From the derivation above, we see that if the input is x(t) = e^{st}, then the output is a scaled version, y(t) = H(s)e^{st}.
Suppose that the impulse response is given by h[n] and the input is x[n] = z^n. Then the output y[n] is

    y[n] = h[n] ∗ x[n] = ∑_{k=−∞}^{∞} h[k]x[n − k]
         = ∑_{k=−∞}^{∞} h[k]z^{n−k}
         = z^n ∑_{k=−∞}^{∞} h[k]z^{−k} = H(z)z^n,

where we defined

    H(z) = ∑_{k=−∞}^{∞} h[k]z^{−k},
and H(z) is known as the transfer function of the discrete-time LTI system.
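The eigenfunction property y[n] = H(z)z^n is easy to observe with a short FIR filter; h below is an arbitrary example of ours, not from the text:

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])       # an arbitrary FIR impulse response
z = 0.9 * np.exp(1j * 0.3)           # any nonzero complex number

n = np.arange(50)
x = z ** n                            # input x[n] = z^n

# Output via convolution; the first len(h)-1 samples are a start-up transient
# because our input begins at n = 0 rather than n = -infinity.
y = np.convolve(h, x)[:len(n)]

# Transfer function H(z) = sum_k h[k] z^{-k}.
Hz = sum(h[k] * z ** (-k) for k in range(len(h)))

# For n >= len(h) - 1, y[n] equals H(z) z^n exactly.
eigen_ok = np.allclose(y[len(h) - 1:], Hz * x[len(h) - 1:])
```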
3.1.3 Summary
In summary, we have the following observations:
The result implies that if the input is a linear combination of complex exponentials, the output of an LTI system is also a linear combination of complex exponentials. More generally, if x(t) is an infinite sum of complex exponentials,

    x(t) = ∑_{k=−∞}^{∞} ak e^{sk t},

then the output is

    y(t) = ∑_{k=−∞}^{∞} ak H(sk)e^{sk t}.

Similarly, in discrete time, if x[n] = ∑_{k=−∞}^{∞} ak zk^n, then

    y[n] = ∑_{k=−∞}^{∞} ak H(zk)zk^n.
2. In any finite interval of time, x(t) is of bounded variation; that is, there are no more than a finite number of maxima and minima during any single period of the signal.
3. In any finite interval of time, there are only a finite number of discontinuities.
For this class of signals, we are able to express them as a linear combination of complex exponentials:

    x(t) = ∑_{k=−∞}^{∞} ak e^{jkω0 t}.
Given a periodic signal x(t) that is square integrable, how do we determine the Fourier
Series coefficients ak ? This is answered by the following theorem.
is given by

    ak = (1/T) ∫_T x(t)e^{−jkω0 t}dt.
The term ∫_0^T e^{j(k−n)ω0 t}dt can be evaluated as (you should check this!)

    (1/T) ∫_0^T e^{j(k−n)ω0 t}dt = 1 if k = n, and 0 otherwise.    (3.1)
This result is known as the orthogonality of the complex exponentials.
Example 1. Sinusoids
Consider the signal x(t) = 1 + (1/2)cos(2πt) + sin(3πt). The period of x(t) is T = 2 [Why?], so the fundamental frequency is ω0 = 2π/T = π. Recalling Euler's formula e^{jθ} = cos θ + j sin θ, we have

    x(t) = 1 + (1/4)(e^{j2πt} + e^{−j2πt}) + (1/2j)(e^{j3πt} − e^{−j3πt}).

Therefore, the Fourier series coefficients are (just "read off" from this equation!):

    a0 = 1, a1 = a−1 = 0, a2 = a−2 = 1/4, a3 = 1/(2j), a−3 = −1/(2j),

and ak = 0 otherwise.
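The "read off" answer can be confirmed by evaluating ak = (1/T)∫_T x(t)e^{−jkω0 t}dt numerically with a rectangle rule over one period:

```python
import numpy as np

T, w0 = 2.0, np.pi
N = 2000                                       # samples per period
t = np.linspace(0, T, N, endpoint=False)
x = 1 + 0.5 * np.cos(2 * np.pi * t) + np.sin(3 * np.pi * t)

def fs_coeff(k):
    # a_k = (1/T) * integral over one period of x(t) e^{-j k w0 t} dt
    return np.sum(x * np.exp(-1j * k * w0 * t)) * (T / N) / T

a0, a1, a2, a3 = (fs_coeff(k) for k in (0, 1, 2, 3))
```

For a band-limited periodic signal, the rectangle rule over a full period is exact up to rounding, so these values match a0 = 1, a1 = 0, a2 = 1/4, a3 = 1/(2j).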
    x[n] = x[n + N],

and assume that x[n] is square-summable over one period, i.e., ∑_{n=⟨N⟩} |x[n]|² < ∞, or that x[n] satisfies the Dirichlet conditions. In this case, we have
is given by

    ak = (1/N) ∑_{n=⟨N⟩} x[n]e^{−jkΩ0 n}.

Here, ∑_{n=⟨N⟩} means summing the signal over any window of one period N. Since a periodic discrete-time signal repeats every N samples, it does not matter where the window starts.
Example.
Let us consider the periodic signal shown below (within one period, x[n] = 1 for |n| ≤ N1 and x[n] = 0 elsewhere). We want to determine the discrete-time F.S. coefficients.
       = (1/N) ∑_{m=0}^{2N1} e^{−jkΩ0(m−N1)}    (m = n + N1)
       = (1/N) e^{jkΩ0 N1} ∑_{m=0}^{2N1} e^{−jkΩ0 m}.

Since

    ∑_{m=0}^{2N1} e^{−jkΩ0 m} = (1 − e^{−jkΩ0(2N1+1)}) / (1 − e^{−jkΩ0}),

it follows that

    ak = (1/N) e^{jkΩ0 N1} (1 − e^{−jkΩ0(2N1+1)}) / (1 − e^{−jkΩ0})    (Ω0 = 2π/N)
       = (1/N) · e^{−jk(2π/2N)}[e^{jk2π(N1+1/2)/N} − e^{−jk2π(N1+1/2)/N}] / (e^{−jk(2π/2N)}[e^{jk(2π/2N)} − e^{−jk(2π/2N)}])
       = (1/N) · sin[2πk(N1 + 1/2)/N] / sin(πk/N).

For k = 0, ±N, ±2N, . . ., we have

    ak = (2N1 + 1)/N.
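The closed form can be checked against a direct evaluation of the defining sum; here with concrete values N = 10 and N1 = 2 (our choice, for illustration):

```python
import numpy as np

N, N1 = 10, 2                                 # assumed example values
n = np.arange(-(N // 2), N - N // 2)          # one period of indices: -5..4
x = (np.abs(n) <= N1).astype(float)           # 1 for |n| <= N1, else 0

def ak_direct(k):
    # Defining sum: a_k = (1/N) sum over one period of x[n] e^{-j k (2 pi / N) n}.
    return np.sum(x * np.exp(-2j * np.pi * k * n / N)) / N

def ak_formula(k):
    # Closed form derived above, with the k = 0, +-N, +-2N, ... branch.
    if k % N == 0:
        return (2 * N1 + 1) / N
    return np.sin(2 * np.pi * k * (N1 + 0.5) / N) / (N * np.sin(np.pi * k / N))

match = all(abs(ak_direct(k) - ak_formula(k)) < 1e-12 for k in range(-12, 13))
```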
2. Time Shift:

    x(t − t0) ←→ ak e^{−jkω0 t0}
    x[n − n0] ←→ ak e^{−jkΩ0 n0}

To show the time-shifting property, let us consider the F.S. coefficient bk of the signal y(t) = x(t − t0):

    bk = (1/T) ∫_T x(t − t0)e^{−jkω0 t}dt.
6. Parseval Equality:

    (1/T) ∫_T |x(t)|²dt = ∑_{k=−∞}^{∞} |ak|²
    (1/N) ∑_{n=⟨N⟩} |x[n]|² = ∑_{k=⟨N⟩} |ak|²
Let us begin our discussion by reviewing some limitations of the Fourier series representation. In Fourier series analysis, two conditions on the signals are required:

1. The signal must be periodic, i.e., there exists a T > 0 such that x(t + T) = x(t).
2. The signal must be square integrable, ∫_T |x(t)|²dt < ∞, or satisfy the Dirichlet conditions.
In this chapter, we want to extend the idea of Fourier Series representation to aperi-
odic signals. That is, we want to relax the first condition to aperiodic signals.
where x(t) = x(t + T). The Fourier Series coefficients of x(t) are (check yourself!)

    x(t) ←F.S.→ ak = 2 sin(kω0 T1)/(kω0 T).

If we substitute ω = kω0, then

    ak = [2 sin(ωT1)/(ωT)]|_{ω=kω0}.
Pictorially, (4.1) indicates that the normalized Fourier series coefficients T ak are bounded by the envelope X(ω) = 2 sin(ωT1)/ω, as illustrated in Fig. 4.1.

When T increases, the spacing between consecutive ak reduces. However, the shape of the envelope function X(ω) = 2 sin(ωT1)/ω remains the same. This can be seen in Fig. 4.2.
Figure 4.2: Fourier Series coefficients of x(t) for some T ′ , where T ′ > T .
In the limiting case where T → ∞, the Fourier series coefficients T ak approach the envelope function X(ω). This suggests that if we have an aperiodic signal, we can treat it as a periodic signal with T → ∞. Then the corresponding Fourier series coefficients approach the envelope function X(ω). The envelope function is called the Fourier Transform of the signal x(t). Now, let us study the Fourier Transform more formally.
Step 1.
We assume that an aperiodic signal x(t) has finite duration, i.e., x(t) = 0 for |t| > T/2, for some T. Since x(t) is aperiodic, we first construct a periodic signal x̃(t):

    x̃(t) = x(t), for −T/2 < t < T/2,

and x̃(t + T) = x̃(t). Pictorially, we have
Step 2.
Since x̃(t) is periodic, we may express x̃(t) using Fourier Series:

    x̃(t) = ∑_{k=−∞}^{∞} ak e^{jkω0 t},    (4.2)

where

    ak = (1/T) ∫_T x̃(t)e^{−jkω0 t}dt.

The Fourier Series coefficients ak can further be calculated as

    ak = (1/T) ∫_{−T/2}^{T/2} x̃(t)e^{−jkω0 t}dt
       = (1/T) ∫_{−T/2}^{T/2} x(t)e^{−jkω0 t}dt,    (x̃(t) = x(t) for −T/2 < t < T/2)
       = (1/T) ∫_{−∞}^{∞} x(t)e^{−jkω0 t}dt,    (x(t) = 0 for |t| > T/2).

If we define

    X(jω) = ∫_{−∞}^{∞} x(t)e^{−jωt}dt,    (4.3)

then ak = (1/T)X(jkω0), and so

    x̃(t) = ∑_{k=−∞}^{∞} (1/T)X(jkω0)e^{jkω0 t} = (1/2π) ∑_{k=−∞}^{∞} X(jkω0)e^{jkω0 t} ω0.    (4.5)
2π
Step 3.
Now, note that x̃(t) is the periodically padded version of x(t). When the period T → ∞, the periodic signal x̃(t) approaches x(t). Therefore,

    x̃(t) −→ x(t),    (4.6)

as T → ∞. At the same time, ω0 = 2π/T → 0, and the sum in (4.5) becomes an integral:

    lim_{ω0→0} (1/2π) ∑_{k=−∞}^{∞} X(jkω0)e^{jkω0 t} ω0 = (1/2π) ∫_{−∞}^{∞} X(jω)e^{jωt}dω.    (4.7)
The two equations (4.3) and (4.8) are known as the Fourier Transform pair. (4.3) is called the Analysis Equation (because we are analyzing the time signal in the Fourier domain) and (4.8) is called the Synthesis Equation (because we are gathering the Fourier domain information and reconstructing the time signal). To summarize, we have

    X(jω) = ∫_{−∞}^{∞} x(t)e^{−jωt}dt    (Analysis)
    x(t) = (1/2π) ∫_{−∞}^{∞} X(jω)e^{jωt}dω    (Synthesis)
2. Periodic Signal: If the signal x(t) is periodic, then we do not need to construct x̃(t) and let ω0 → 0. In fact, ω0 is fixed by the period of the signal: if the period of x(t) is T0, then ω0 = 2π/T0. Now, since x(t) is periodic, we can apply Fourier Series analysis to x(t) and get

    x(t) = ∑_{k=−∞}^{∞} ak e^{jkω0 t},    (4.9)
    X(jω) = ∫_{−∞}^{∞} [∑_{k=−∞}^{∞} ak e^{jkω0 t}] e^{−jωt}dt
          = ∑_{k=−∞}^{∞} ak ∫_{−∞}^{∞} e^{jkω0 t}e^{−jωt}dt
          = ∑_{k=−∞}^{∞} ak 2πδ(ω − kω0).

Here, the last equality is established by the fact that the inverse Fourier Transform of 2πδ(ω − kω0) is e^{jkω0 t}:

    F^{−1}{2πδ(ω − kω0)} = (1/2π) ∫_{−∞}^{∞} 2πδ(ω − kω0)e^{jωt}dω
                         = ∫_{−∞}^{∞} δ(ω − kω0)e^{jωt}dω = e^{jkω0 t}.
Figure 4.5: Fourier Transform and Fourier Series analysis of a periodic signal: both yield a train of impulses. For the Fourier Transform, the amplitude is multiplied by a factor of 2π. For the Fourier Series coefficients, the separation between consecutive coefficients is ω0.
4.4 Examples
Example 1.
Consider the signal x(t) = e−at u(t), for a > 0. Determine the Fourier Transform
X(jω), its magnitude |X(jω)| and its phase ∢X(jω).
    X(jω) = ∫_{−∞}^{∞} x(t)e^{−jωt}dt
          = ∫_0^∞ e^{−at}e^{−jωt}dt    (u(t) = 0 whenever t < 0)
          = [−1/(a + jω)] e^{−(a+jω)t} |_0^∞
          = 1/(a + jω).

The magnitude and phase can be calculated as

    |X(jω)| = 1/√(a² + ω²),    ∢X(jω) = −tan^{−1}(ω/a).
Example 2.
Consider the signal x(t) = δ(t). The Fourier Transform is

    X(jω) = ∫_{−∞}^{∞} x(t)e^{−jωt}dt = ∫_{−∞}^{∞} δ(t)e^{−jωt}dt = 1.
Example 3.
Consider the signal x(t) = e^{jω0 t}. We want to show that X(jω) = 2πδ(ω − ω0). To see this, apply the inverse Fourier Transform to 2πδ(ω − ω0):

    F^{−1}{2πδ(ω − ω0)} = (1/2π) ∫_{−∞}^{∞} 2πδ(ω − ω0)e^{jωt}dω = e^{jω0 t} = x(t).
Example 4.
Consider the aperiodic signal

    x(t) = 1 for |t| ≤ T1, and x(t) = 0 for |t| > T1.

The Fourier Transform is

    X(jω) = ∫_{−∞}^{∞} x(t)e^{−jωt}dt = ∫_{−T1}^{T1} e^{−jωt}dt
          = [−1/(jω)] e^{−jωt} |_{−T1}^{T1} = 2 sin(ωT1)/ω.
Example 5.
Let us determine the CTFT of the unit step function u(t). Applying the CTFT directly gives

    U(jω) = ∫_{−∞}^{∞} u(t)e^{−jωt}dt = ∫_0^∞ e^{−jωt}dt = [−(1/jω)e^{−jωt}]_0^∞.

That is, we have to evaluate the function −(1/jω)e^{−jωt} at t = 0 and t = ∞. However, the evaluation at t = ∞ is indeterminate! Therefore, we express u(t) as a limit of decaying exponentials:

    u(t) = lim_{a→0} e^{−at}u(t).

Then, applying the CTFT on both sides,

    U(jω) = lim_{a→0} F{e^{−at}u(t)} = lim_{a→0} 1/(a + jω)
          = lim_{a→0} (a − jω)/(a² + ω²)
          = lim_{a→0} [a/(a² + ω²) − j·ω/(a² + ω²)].
while

    ∫_{−∞}^{∞} a/(a² + ω²) dω = tan^{−1}(ω/a) |_{−∞}^{∞} = π, for any a > 0.

Therefore,

    lim_{a→0} a/(a² + ω²) = πδ(ω),

and so

    U(jω) = 1/(jω) + πδ(ω).
2. Time Shifting

    x(t − t0) ←→ e^{−jωt0}X(jω)

3. Conjugation

    x*(t) ←→ X*(−jω)

5. Time Scaling

    x(at) ←→ (1/|a|) X(jω/a)
6. Parseval Equality

    ∫_{−∞}^{∞} |x(t)|²dt = (1/2π) ∫_{−∞}^{∞} |X(jω)|²dω
7. Duality. The important message here is: if x(t) ←→ X(jω), and another signal y(t) has the shape of X(jω), we can quickly deduce that Y(jω) will have the shape of x(t). Here are two examples:
8. Convolution Property
h(t) ∗ x(t) ←→ H(jω)X(jω)
Proof: Consider the convolution integral
    y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ)dτ.
Our objective is to determine h(t) and H(jω). Applying the CTFT on both sides:

    F{∑_{k=0}^{N} ak d^k y(t)/dt^k} = F{∑_{k=0}^{M} bk d^k x(t)/dt^k}.
Example 1.
Consider the LTI system

    d²y(t)/dt² + 4 dy(t)/dt + 3y(t) = dx(t)/dt + 2x(t).

Taking the CTFT on both sides yields

    [(jω)² + 4(jω) + 3] Y(jω) = [(jω) + 2] X(jω),

so that

    H(jω) = (jω + 2)/[(jω + 1)(jω + 3)] = (1/2)/(jω + 1) + (1/2)/(jω + 3).

Thus, h(t) is

    h(t) = (1/2)e^{−t}u(t) + (1/2)e^{−3t}u(t).
Example 2.
If the input signal is x(t) = e^{−t}u(t), what should the output y(t) be if the impulse response of the system is given by h(t) = (1/2)e^{−t}u(t) + (1/2)e^{−3t}u(t)?
Taking the CTFT, we know that X(jω) = 1/(jω + 1) and H(jω) = (jω + 2)/[(jω + 1)(jω + 3)]. Therefore, the output is

    Y(jω) = H(jω)X(jω) = (jω + 2)/[(jω + 1)(jω + 3)] · 1/(jω + 1) = (jω + 2)/[(jω + 1)²(jω + 3)].
• Step 1: Pad the aperiodic signal x(t) to construct a periodic replicate x̃(t).¹
• Step 2: Since x̃(t) is periodic, we find the Fourier series coefficients ak and represent x̃(t) as

    x̃(t) = ∑_{k=−∞}^{∞} ak e^{jkω0 t}.

¹ We are interested in aperiodic signals, because periodic signals can be handled using continuous-time Fourier Series!
By defining

    X(jω) = ∫_{−∞}^{∞} x(t)e^{−jωt}dt,    (5.1)
If we define

    X(e^{jω}) = ∑_{n=−∞}^{∞} x[n]e^{−jωn},    (5.4)

then

    ak = (1/N) ∑_{n=−∞}^{∞} x[n]e^{−jkω0 n} = (1/N) X(e^{jkω0}).    (5.5)
Therefore,

    x[n] = (1/2π) ∫_{2π} X(e^{jω})e^{jωn}dω.    (5.7)
Now, let us consider the continuous-time Fourier Transform (we want to check whether X(jω) = X(j(ω + 2π))!):

    X(j(ω + 2π)) = ∫_{−∞}^{∞} x(t)e^{−j(ω+2π)t}dt = ∫_{−∞}^{∞} x(t)e^{−jωt}(e^{−j2π})^t dt.

Note that (a) (e^{j2π})^n = 1 for all n, because n is an integer, but (b) (e^{j2π})^t ≠ 1 unless t is an integer. Consequently,

    X(j(ω + 2π)) ≠ X(jω).
1. Periodicity:
X(ej(ω+2π) ) = X(ejω )
2. Linearity:
ax1 [n] + bx2 [n] ←→ aX1 (ejω ) + bX2 (ejω )
3. Time Shift:
x[n − n0 ] ←→ e−jωn0 X(ejω )
4. Phase Shift:
ejω0 n x[n] ←→ X(ej(ω−ω0 ) )
5. Conjugacy:
x∗ [n] ←→ X ∗ (e−jω )
6. Time Reversal
x[−n] ←→ X(e−jω )
7. Differentiation

    nx[n] ←→ j dX(e^{jω})/dω
8. Parseval Equality

    ∑_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{2π} |X(e^{jω})|²dω
9. Convolution

    y[n] = x[n] ∗ h[n] ←→ Y(e^{jω}) = X(e^{jω})H(e^{jω})

10. Multiplication

    y[n] = x1[n]x2[n] ←→ Y(e^{jω}) = (1/2π) ∫_{2π} X1(e^{jθ})X2(e^{j(ω−θ)})dθ
5.5 Examples
Example 1.
Consider x[n] = δ[n] + δ[n − 1] + δ[n + 1]. Then

    X(e^{jω}) = ∑_{n=−∞}^{∞} x[n]e^{−jωn}
              = ∑_{n=−∞}^{∞} (δ[n] + δ[n − 1] + δ[n + 1])e^{−jωn}
              = 1 + e^{−jω} + e^{jω} = 1 + 2 cos ω.

To sketch the magnitude |X(e^{jω})|, we note that |X(e^{jω})| = |1 + 2 cos ω|.
Example 2.
Consider x[n] = δ[n] + 2δ[n − 1] + 4δ[n − 2]. The discrete-time Fourier Transform is

    X(e^{jω}) = 1 + 2e^{−jω} + 4e^{−j2ω}.
Example 3.
Consider x[n] = a^n u[n], with |a| < 1. The discrete-time Fourier Transform is

    X(e^{jω}) = ∑_{n=0}^{∞} a^n e^{−jωn} = ∑_{n=0}^{∞} (ae^{−jω})^n = 1/(1 − ae^{−jω}).
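A truncated version of the DTFT sum converges quickly to this closed form when |a| < 1; a numeric sanity check with a = 0.8 (an arbitrary choice of ours):

```python
import numpy as np

a = 0.8
w = np.linspace(-np.pi, np.pi, 101)

# Truncate the geometric sum at N terms; the tail is bounded by a^N / (1 - a).
N = 500
n = np.arange(N)
X_sum = np.array([np.sum(a ** n * np.exp(-1j * wi * n)) for wi in w])

X_closed = 1 / (1 - a * np.exp(-1j * w))
max_err = np.max(np.abs(X_sum - X_closed))
```

At ω = 0 the magnitude is 1/(1 − a) = 5, the largest value over the frequency axis for this positive a.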
Next, let us draw the magnitude |X(ejω )|. To do so, let’s consider
1 1
X(ejω )2 = X(ejω )X ∗ (ejω ) = ·
1 − ae −jω 1 − aejω
1
=
1 − a(e−jω ejω ) + a2
1
=
1 − 2a cos ω + a2
Case A. If 0 < a < 1, then |X(e^{jω})|² achieves its maximum at ω = 0, and |X(e^{jω})|² achieves its minimum at ω = π. Thus,
    max |X(e^{jω})|² = 1/(1 − 2a + a²) = 1/(1 − a)²,
    min |X(e^{jω})|² = 1/(1 + 2a + a²) = 1/(1 + a)².
Case B. If −1 < a < 0, then |X(e^{jω})|² achieves its maximum at ω = π, and |X(e^{jω})|² achieves its minimum at ω = 0. Thus,
    max |X(e^{jω})|² = 1/(1 + 2a + a²) = 1/(1 + a)²,
    min |X(e^{jω})|² = 1/(1 − 2a + a²) = 1/(1 − a)².
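Both cases can be confirmed numerically by evaluating |X(e^{jω})|² on a grid (a sketch assuming NumPy; the values a = ±0.5 are arbitrary test points):

```python
import numpy as np

omega = np.linspace(-np.pi, np.pi, 100001)   # grid containing 0 and ±pi
for a in (0.5, -0.5):                        # Case A and Case B
    mag2 = 1.0 / (1 - 2 * a * np.cos(omega) + a**2)
    print(a, mag2.max(), 1 / (1 - abs(a))**2)    # maximum equals 1/(1-|a|)^2
    print(a, mag2.min(), 1 / (1 + abs(a))**2)    # minimum equals 1/(1+|a|)^2
```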
5.7 Appendix
Geometric Series:
    Σ_{n=0}^{N} x^n = 1 + x + x² + . . . + x^N = (1 − x^{N+1})/(1 − x),
    Σ_{n=0}^{∞} x^n = 1 + x + x² + . . . = 1/(1 − x),   when |x| < 1.
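A quick numerical check of both identities (plain Python; x = 0.8 and N = 20 are arbitrary test values):

```python
x, N = 0.8, 20
finite = sum(x**n for n in range(N + 1))
print(finite - (1 - x**(N + 1)) / (1 - x))     # essentially zero

partial = sum(x**n for n in range(2000))       # long partial sum of the series
print(partial - 1 / (1 - x))                   # essentially zero since |x| < 1
```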
Chapter 6
Sampling Theorem
The sampling theorem plays a crucial role in modern digital signal processing. It concerns the minimum sampling rate required to convert a continuous-time signal to a digital signal without loss of information.
where T is the period of the impulse train. Multiplying x(t) by p(t) yields
    x_p(t) = x(t)p(t)
           = x(t) Σ_{n=−∞}^{∞} δ(t − nT)
           = Σ_{n=−∞}^{∞} x(t)δ(t − nT)
           = Σ_{n=−∞}^{∞} x(nT)δ(t − nT).
Pictorially, x_p(t) is a set of impulses bounded by the envelope x(t), as shown in Fig. 6.2.
Figure 6.2: An example of A/D conversion. The output signal xp (t) represents a set
of samples of the signal x(t).
We may regard xp (t) as the samples of x(t). Note that xp (t) is still a continuous-time
signal! (We can view xp (t) as a discrete-time signal if we define xp [n] = x(nT ). But
this is not an important issue here.)
This means that the frequency response of the impulse train p(t) is another impulse train. The only difference is that the period of p(t) is T, whereas the period of P(jω) is 2π/T.
Shown in Fig. 6.3 are the frequency responses X(jω) and P(jω), respectively. To perform the convolution in the frequency domain, we first note that P(jω) is an impulse train. Therefore, convolving X(jω) with P(jω) basically produces replicates at every 2π/T. The result is shown in Fig. 6.4.
Figure 6.4: Convolution between X(jω) and P (jω) yields periodic replicates of
X(jω).
As T increases, the period 2π/T reduces! In other words, the impulses are more closely packed in the frequency domain when T increases. Fig. 6.6 illustrates this idea.
Figure 6.7: When T is sufficiently large, there will be overlap between consecutive
replicates.
Therefore, in order to avoid aliasing, T cannot be too large. If we define the sampling rate to be
    ω_s = 2π/T,
then a smaller T implies a higher ω_s. In other words, there is a minimum sampling rate such that no aliasing occurs.
Figure 6.8: Meanings of high sampling rate vs. low sampling rate.
Here, let us assume that the signal x(t) is band-limited. That is, we assume X(jω) = 0 for all |ω| > W, where W is known as the bandwidth.
Figure 6.9: Left: A band-limited signal (since X(jω) = 0 for all |ω| > W). Right: A signal that is not band-limited.
    ω_s > 2W,
where ω_s = 2π/T.
6.3.1 Explanation
Suppose x(t) has bandwidth W. The tightest arrangement such that no aliasing occurs is shown in Fig. 6.10.
In this case, we see that the sampling rate ω_s (= 2π/T) is
    ω_s = 2W.
Therefore, to guarantee that no aliasing occurs, we need
    ω_s > 2W.
6.3.2 Example
Suppose there is a signal with maximum frequency 40 kHz. What is the minimum sampling rate?
Answer:
Since ω = 2πf, the maximum frequency in rad/s is ω = 2π(40 × 10³) = 80 × 10³ π (rad/s). Therefore, the minimum sampling rate is 2 × (80 × 10³ π) = 160 × 10³ π (rad/s), which corresponds to 80 kHz.
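The same arithmetic, written out as a sketch (assuming NumPy) that converts between Hz and rad/s:

```python
import numpy as np

f_max = 40e3                        # maximum frequency: 40 kHz
w_max = 2 * np.pi * f_max           # = 80e3 * pi rad/s
w_s_min = 2 * w_max                 # minimum sampling rate: 160e3 * pi rad/s
f_s_min = w_s_min / (2 * np.pi)     # back to Hz
print(f_s_min)                      # ~80000, i.e., 80 kHz
```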
Figure 6.12: Schematic diagram of recovering x(t) from xp (t). The filter H(jω) is
assumed to be an ideal lowpass filter.
Then
    X_p(jω) = (1/T) Σ_{k=−∞}^{∞} X(j(ω − kω_s)).
As shown in the top left of Fig. 6.13, X_p(jω) is a periodic replicate of X(jω). Since we assume that there is no aliasing, the replicate covering the y-axis is identical to X(jω) up to the factor 1/T. That is, for |ω| < ω_s/2,
    X_p(jω) = (1/T) X(jω).
Now, if we apply an ideal lowpass filter with gain T (shown in the bottom left of Fig. 6.13):
    H(jω) = T for |ω| < ω_s/2, and 0 otherwise,
then
    X_p(jω)H(jω) = X(jω)
for all ω. Taking the inverse continuous-time Fourier transform, we can obtain x(t).
Figure 6.13: Left: Multiplication between X_p(jω) and the lowpass filter H(jω). The extracted output X̂(jω) is identical to X(jω) if no aliasing occurs. By applying the inverse Fourier transform to X̂(jω) we can obtain x(t).
6.4.2 If aliasing occurs, can I still recover x(t) from x_p(t)?
The answer is NO. If aliasing occurs, then the condition
    X_p(jω) = (1/T) X(jω)
does not hold for all |ω| < ω_s/2. Consequently, even if we apply the lowpass filter H(jω) to X_p(jω), the result is not X(jω). This can be seen in Fig. 6.14.
Figure 6.14: If aliasing occurs, we are unable to recover x(t) from xp (t) by using an
ideal lowpass filter.
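Aliasing can also be demonstrated directly on sample values: two sinusoids whose frequencies differ by the sampling rate produce exactly the same samples, so once the high-frequency one is sampled, the two are indistinguishable. A sketch assuming NumPy (the frequencies 30 kHz and 130 kHz and the rate 100 kHz are illustrative choices):

```python
import numpy as np

fs = 100e3                          # sampling rate: 100 kHz
T = 1 / fs
n = np.arange(64)
# 30 kHz is below fs/2 = 50 kHz; 130 kHz = 30 kHz + fs is far above it.
x_low = np.cos(2 * np.pi * 30e3 * n * T)
x_high = np.cos(2 * np.pi * 130e3 * n * T)
err = np.max(np.abs(x_low - x_high))
print(err)                          # essentially zero: the samples coincide
```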
• Method 2: Send signals with a narrower bandwidth, or limit the bandwidth before sending:
Chapter 7
The z-Transform
The z-transform of a discrete-time signal x[n] is defined as
    X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n},                (7.1)
and we denote the transform pair by
    x[n] ←→ X(z).
In general, the number z in (7.1) is a complex number. Therefore, we may write z as
    z = re^{jω},
where r ∈ R and ω ∈ R. When r = 1, (7.1) becomes
    X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn},
Figure 7.1: Complex z-plane. The z-transform reduces to DTFT for values of z on
the unit circle.
which is the DTFT of x[n]. More generally, for any r,
    X(re^{jω}) = Σ_{n=−∞}^{∞} (r^{−n} x[n]) e^{−jωn},
which is the DTFT of the signal r^{−n}x[n]. However, from the development of the DTFT we know that the DTFT does not always exist. It exists only when the signal is square summable, or satisfies the Dirichlet conditions. Therefore, X(z) does not always converge. It converges only for some values of r. This range of r is called the region of convergence.
Definition 22. The Region of Convergence (ROC) of the z-transform is the set of z = re^{jω} such that X(z) converges, i.e.,
    Σ_{n=−∞}^{∞} |x[n]| r^{−n} < ∞.
Example 1. Consider the signal x[n] = a^n u[n], with 0 < a < 1. The z-transform of x[n] is
    X(z) = Σ_{n=−∞}^{∞} a^n u[n] z^{−n}
         = Σ_{n=0}^{∞} (az^{−1})^n.
Therefore, X(z) converges if Σ_{n=0}^{∞} |az^{−1}|^n < ∞. From the geometric series, we know that
    Σ_{n=0}^{∞} (az^{−1})^n = 1/(1 − az^{−1})   when |az^{−1}| < 1, so
    X(z) = 1/(1 − az^{−1}),
with ROC being the set of z such that |z| > |a|.
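For a point inside the ROC, the truncated power series and the closed form should agree. A sketch assuming NumPy (a = 0.5 and the test point z are arbitrary choices with |z| > |a|):

```python
import numpy as np

a = 0.5
z = 1.2 * np.exp(1j * 0.7)              # |z| = 1.2 > |a| = 0.5: inside the ROC
partial = sum((a / z)**k for k in range(200))   # truncated series sum_k (a z^{-1})^k
closed = 1 / (1 - a / z)                # closed form 1/(1 - a z^{-1})
err = abs(partial - closed)
print(err)                              # essentially zero
```

Picking |z| < |a| instead would make the partial sums grow without bound, which is exactly what the ROC condition rules out.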
Example 2. Consider the signal x[n] = −a^n u[−n − 1] with 0 < a < 1. The z-transform of x[n] is
    X(z) = −Σ_{n=−∞}^{∞} a^n u[−n − 1] z^{−n}
         = −Σ_{n=−∞}^{−1} a^n z^{−n}
         = −Σ_{n=1}^{∞} a^{−n} z^n
         = 1 − Σ_{n=0}^{∞} (a^{−1}z)^n.
Therefore, X(z) converges when |a^{−1}z| < 1, or equivalently |z| < |a|. In this case,
    X(z) = 1 − 1/(1 − a^{−1}z) = 1/(1 − az^{−1}),
with ROC being the set of z such that |z| < |a|. Note that the z-transform is the same as that of Example 1. The only difference is the ROC. In fact, Example 2 is just the left-sided version of Example 1!
Example 3. Consider the signal
    x[n] = 7(1/3)^n u[n] − 6(1/2)^n u[n].
The z-transform is
    X(z) = Σ_{n=−∞}^{∞} [7(1/3)^n − 6(1/2)^n] u[n] z^{−n}
         = 7 Σ_{n=−∞}^{∞} (1/3)^n u[n] z^{−n} − 6 Σ_{n=−∞}^{∞} (1/2)^n u[n] z^{−n}
         = 7/(1 − (1/3)z^{−1}) − 6/(1 − (1/2)z^{−1})
         = (1 − (3/2)z^{−1}) / [(1 − (1/3)z^{−1})(1 − (1/2)z^{−1})].
For X(z) to converge, both sums in X(z) must converge. So we need both |z| > 1/3 and |z| > 1/2. Thus, the ROC is the set of z such that |z| > 1/2.
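The combined closed form can be cross-checked against the defining series at a point inside the ROC (a sketch assuming NumPy; the test point z with |z| = 0.8 > 1/2 is an arbitrary choice):

```python
import numpy as np

z = 0.8 * np.exp(1j * 1.1)              # |z| = 0.8 > 1/2: inside the ROC
k = np.arange(400)
series = np.sum((7 * (1/3)**k - 6 * (1/2)**k) * z**(-k))   # truncated sum
closed = (1 - 1.5 / z) / ((1 - (1/3) / z) * (1 - 0.5 / z))
err = abs(series - closed)
print(err)                              # essentially zero
```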
5. x[−n] ←→ X(1/z)
6. x*[n] ←→ X*(z*)
Example 6. Consider the signal h[n] = δ[n] + δ[n − 1] + 2δ[n − 2]. The z-Transform
of h[n] is
    H(z) = 1 + z^{−1} + 2z^{−2}.
Example 7. Let us prove that x[−n] ←→ X(1/z). Letting y[n] = x[−n], we have
    Y(z) = Σ_{n=−∞}^{∞} y[n] z^{−n} = Σ_{n=−∞}^{∞} x[−n] z^{−n}
         = Σ_{m=−∞}^{∞} x[m] z^{m} = X(1/z),
where we substituted m = −n.
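For a finite-length signal, the identity Y(z) = X(1/z) can be verified at an arbitrary test point (a sketch assuming NumPy; the signal values and z are arbitrary choices):

```python
import numpy as np

x = np.array([1.0, 2.0, -1.0, 0.5])     # x[n] supported on n = 0..3
n = np.arange(len(x))
z = 1.3 * np.exp(1j * 0.4)

def X(zz):
    """X(z) for this finite-length signal."""
    return np.sum(x * zz**(-n))

# y[n] = x[-n] is supported on n = -3..0, so Y(z) = sum_m x[m] z^{m}.
Y = np.sum(x * z**n)
err = abs(Y - X(1 / z))
print(err)                              # essentially zero
```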
Example 8. Consider the signal x[n] = (1/3)^n sin(πn/4) u[n]. To find the z-transform, we first note that
    x[n] = (1/2j)(1/3)^n e^{jπn/4} u[n] − (1/2j)(1/3)^n e^{−jπn/4} u[n].
The z-transform is
    X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}
         = (1/2j) Σ_{n=0}^{∞} ((1/3) e^{jπ/4} z^{−1})^n − (1/2j) Σ_{n=0}^{∞} ((1/3) e^{−jπ/4} z^{−1})^n
         = (1/2j) · 1/(1 − (1/3)e^{jπ/4} z^{−1}) − (1/2j) · 1/(1 − (1/3)e^{−jπ/4} z^{−1})
         = (1/(3√2)) z^{−1} / [(1 − (1/3)e^{jπ/4} z^{−1})(1 − (1/3)e^{−jπ/4} z^{−1})].
Property 2. The DTFT of x[n] exists if and only if the ROC of X(z) includes the unit circle.
Proof. By definition, the ROC is the set of z such that X(z) converges, and the DTFT is the z-transform evaluated on the unit circle. If the ROC includes the unit circle, then X(z) converges for every z on the unit circle; that is, the DTFT converges. Conversely, if the DTFT converges, then X(z) converges on the unit circle, so the unit circle belongs to the ROC.
Property 4. If x[n] is a finite impulse response (FIR), then the ROC is the entire z-plane (except possibly z = 0 and/or z = ∞).
Property 5. If x[n] is a right-sided sequence, then the ROC extends outward from the outermost pole.
Property 6. If x[n] is a left-sided sequence, then the ROC extends inward from the innermost pole.
Proof. Let us consider the right-sided case. Note that it is sufficient to show that if a complex number z with magnitude |z| = r_0 is inside the ROC, then any other complex number z′ with magnitude |z′| = r_1 > r_0 will also be in the ROC.
Now, suppose x[n] is a right-sided sequence. Then x[n] is zero prior to some value of n, say N_1. That is,
    x[n] = 0,   n < N_1.
Consider a point z in the ROC with |z| = r_0. Then
    Σ_{n=N_1}^{∞} |x[n]| r_0^{−n} < ∞.
For r_1 > r_0 we have r_1^{−n} ≤ r_0^{−n} for every n ≥ 0, and the finitely many terms with N_1 ≤ n < 0 (if any) remain finite, so the sum with r_1 in place of r_0 also converges. Hence z′ is in the ROC as well.
Property 7. If X(z) is rational, i.e., X(z) = A(z)/B(z) where A(z) and B(z) are polynomials, and if x[n] is right-sided, then the ROC is the region outside the outermost pole.
To see this, write the denominator in factored form so that
    X(z) = A(z) / Π_{i=1}^{r} (1 − p_i z^{−1})^{σ_i},
where p_i is the i-th pole of the system, with multiplicity σ_i. Using partial fractions, we have
    X(z) = Σ_{i=1}^{r} Σ_{k=1}^{σ_i} C_{ik} / (1 − p_i z^{−1})^k.
Each term in the partial fraction has an ROC of the form |z| > |p_i| (because x[n] is right-sided). In order to have X(z) convergent, the ROC must be the intersection of all the individual ROCs. Therefore, the ROC is the region outside the outermost pole.
For example, if
    X(z) = 1 / [(1 − (1/3)z^{−1})(1 − (1/2)z^{−1})],
then the ROC is the region |z| > 1/2.
    h[n] = 0,   n < 0.
Therefore, h[n] must be right-sided, and Property 5 implies that the ROC is the exterior of a circle. Moreover, since H(z) = Σ_{n=0}^{∞} h[n] z^{−n} contains no positive powers of z, H(z) also converges when z → ∞ (of course, |z| > 1 when z → ∞!).
7.4.2 Stability
Property 9. A discrete-time LTI system is stable if and only if the ROC of H(z) includes the unit circle.
Proof. A system is stable if and only if h[n] is absolutely summable, which holds if and only if the DTFT of h[n] exists. Consequently, by Property 2, the system is stable if and only if the ROC of H(z) includes the unit circle.
Property 10. A causal discrete-time LTI system is stable if and only if all of its
poles are inside the unit circle.
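Property 10 suggests a simple numerical stability check for a causal system: compute the poles of H(z) and test whether they all lie strictly inside the unit circle. A sketch assuming NumPy; the denominator below is a hypothetical example, not one from the text:

```python
import numpy as np

# Hypothetical causal system H(z) = 1 / (1 - 1.5 z^{-1} + 0.56 z^{-2}).
# Multiplying numerator and denominator by z^2, the poles are the roots
# of z^2 - 1.5 z + 0.56.
poles = np.roots([1.0, -1.5, 0.56])
print(poles)                            # approximately 0.8 and 0.7
stable = bool(np.all(np.abs(poles) < 1))
print(stable)                           # all poles inside the unit circle
```

Here both poles have magnitude less than 1, so this causal system is stable; moving either pole outside the unit circle would make `stable` false.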
Examples.