
EE123 Course Notes

Anmol Parande

Spring 2020 - Professor Miki Lustig

Disclaimer: These notes reflect EE123 when I took the course (Spring 2020). They may not accurately reflect current course content, so use at your own risk. If you find any typos, errors, etc., please raise an issue on the GitHub repository.

Contents

1 The DFT
  1.1 Convolution and the DFT
    1.1.1 Circular Convolution
    1.1.2 Linear Convolution with the DFT
    1.1.3 Block Convolutions
  1.2 FFT
    1.2.1 Decimation in Time
    1.2.2 Decimation in Frequency

2 Spectral Analysis
  2.1 Short Time Fourier Transform (STFT)
  2.2 Discrete STFT
  2.3 Time-Frequency Uncertainty
  2.4 Wavelets
    2.4.1 Discrete Wavelet Transform

3 Sampling
  3.1 Ideal Sampling
    3.1.1 Nyquist Theorem
    3.1.2 Discrete Time Processing of a Continuous Time Signal
    3.1.3 Continuous Time Processing of Discrete Time Signals
    3.1.4 Downsampling
    3.1.5 Upsampling
  3.2 Multi-Rate Signal Processing
    3.2.1 Exchanging Filter Order During Resampling
    3.2.2 Polyphase Decomposition
  3.3 Practical Sampling (ADC)
    3.3.1 Quantization
  3.4 Practical Reconstruction (DAC)

4 Filtering
  4.1 Transform Analysis of LTI Systems
  4.2 All Pass Systems
  4.3 Minimum Phase Systems
  4.4 Generalized Linear Phase Systems
  4.5 Filter Design
    4.5.1 Windowing Method
    4.5.2 Optimal Filter Design
1 The DFT

Whereas the CTFT takes a continuous signal and outputs a continuous frequency spectrum, and the DTFT takes a discrete signal and outputs a continuous, periodic frequency spectrum, the Discrete Fourier Transform takes a discrete, finite signal and outputs a discrete frequency spectrum. This is useful for signal processing because we cannot store infinite signals in a computer's memory.

Definition 1 For a length $N$ finite sequence $\{x[n]\}_{n=0}^{N-1}$, the Discrete Fourier Transform of the signal is a length $N$ finite sequence $\{X[k]\}_{k=0}^{N-1}$ where
$$X[k] = \sum_{n=0}^{N-1} x[n]e^{-j\frac{2\pi}{N}kn}$$

One way to interpret the DFT is in terms of the Fourier series for a discrete periodic signal $\tilde{x}[n] = x[((n))_N]$ where $((n))_N = n \bmod N$. Recall that the coefficient of the $k$th term of the Fourier Series is
$$a_k = \frac{1}{N}\sum_{n=0}^{N-1} x[n]e^{-j\frac{2\pi}{N}kn}$$

Notice that the $a_k$ of the Fourier Series are the DFT values scaled by a factor of $\frac{1}{N}$. In other words, if we extend a finite signal periodically, then the DFT and the DTFS are the same up to a constant scale factor. This gives an intuitive inverse DFT.

Definition 2 For a length $N$ finite sequence $\{X[k]\}_{k=0}^{N-1}$ representing the DFT of a finite signal $\{x[n]\}_{n=0}^{N-1}$, the inverse DFT is given by
$$x[n] = \frac{1}{N}\sum_{k=0}^{N-1} X[k]e^{j\frac{2\pi}{N}kn}$$

Notice that the DFT and the IDFT are very similar in form. It turns out that the IDFT can be expressed as a DFT of $X^*[k]$. Namely,
$$\mathrm{IDFT}\{X[k]\} = \frac{1}{N}\left(\mathrm{DFT}\{X^*[k]\}\right)^*$$
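As a quick numerical sanity check of this identity, here is a minimal NumPy sketch (using np.fft.fft as the forward DFT; the length-8 test signal is an arbitrary choice):

```python
import numpy as np

def idft_via_dft(X):
    """IDFT{X} = (1/N) * (DFT{X*})*, using only a forward DFT."""
    N = len(X)
    return np.conj(np.fft.fft(np.conj(X))) / N

x = np.random.randn(8)              # arbitrary length-8 test signal
X = np.fft.fft(x)                   # forward DFT
assert np.allclose(idft_via_dft(X), x)
```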
Further intuition for the DFT comes by relating it to the DTFT. Suppose we have a finite signal $x[n]$ which is 0 for $n < 0$ and $n > N-1$. The DTFT of this signal is
$$X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n]e^{-j\omega n} = \sum_{n=0}^{N-1} x[n]e^{-j\omega n}$$

Suppose we sample the DTFT at intervals of $\frac{2\pi}{N}$; then the $k$th sample is given by
$$X[k] = X\left(e^{j\frac{2\pi}{N}k}\right) = \sum_{n=0}^{N-1} x[n]e^{-j\frac{2\pi}{N}kn}$$
Thus we can think of the DFT as $N$ evenly spaced samples of the DTFT. One important point to notice is that while the DTFT is often centered around 0 (meaning it is plotted over a range from $-\pi$ to $\pi$), because we are summing from 0 to $N-1$ in the DFT, the DFT coefficients are centered around $\pi$, and thus they are plotted on a range of $\left[0, 2\pi - \frac{2\pi}{N}\right]$.

1.1 Convolution and the DFT

1.1.1 Circular Convolution

When the DFT coefficients of two signals are multiplied, the resulting coefficients describe a circular convolution of the original two signals:
$$x[n] \circledast y[n] \leftrightarrow X[k]Y[k]$$

Definition 3 A circular convolution between two finite sequences is given by
$$x[n] \circledast y[n] = \sum_{m=0}^{N-1} x[m]y[((n-m))_N]$$

The mechanics of the circular convolution are the same as that of the regular convolution, except the signal is circularly shifted as shown in fig. 1. A circular convolution is equivalent to a periodic convolution over a single period.

[Figure 1: A circular shift: $x[n]$ alongside its circular time reversal $x[((-n))_N]$]
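A small NumPy sketch of this relationship (a direct $O(N^2)$ circular convolution checked against multiplied DFT coefficients; the length-8 signals are arbitrary):

```python
import numpy as np

def circular_convolve(x, y):
    """Direct circular convolution of two length-N sequences."""
    N = len(x)
    return np.array([sum(x[m] * y[(n - m) % N] for m in range(N))
                     for n in range(N)])

x, y = np.random.randn(8), np.random.randn(8)
# Multiplying DFT coefficients gives the same result.
via_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real
assert np.allclose(circular_convolve(x, y), via_dft)
```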

1.1.2 Linear Convolution with the DFT

Because multiplying DFT coefficients performs a specific case of convolution, we can compute a linear convolution using the circular convolution. Suppose we have two finite signals $\{x[n]\}_{n=0}^{L-1}$ and $\{h[n]\}_{n=0}^{P-1}$. The linear convolution of these two signals will have length $L+P-1$, so in order to take an IDFT and get $L+P-1$ samples, we need to use at least $N \ge L+P-1$ points.

1. Pad each vector to length L + P − 1


2. Compute X[k]H[k]
3. Take the Inverse DFT

If $N$ is smaller than $L+P-1$, the result is akin to aliasing in the time domain. To see why, consider that the DFT coefficients are essentially the DTFS coefficients of the periodic extension of $x[n]$ (denoted $\tilde{x}[n]$):
$$\tilde{x}[n] = \sum_{r=-\infty}^{\infty} x[n-rN]$$

If we compute the DTFT of each periodic extension, then
$$Y(e^{j\omega}) = X(e^{j\omega})H(e^{j\omega})$$
and the IDTFT of this will be
$$\tilde{y}[n] = \sum_{r=-\infty}^{\infty} y[n-rN].$$
Notice that if $N$ is not large enough, then these copies will overlap (a.k.a. aliasing). Since the DFT is just sampling the DTFT, the circular convolution will represent the true convolution so long as the copies don't overlap.
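The whole procedure in a few lines of NumPy (a sketch; np.convolve provides the reference linear convolution):

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution via the DFT with N >= L + P - 1 points,
    so the circular convolution has no time-domain aliasing."""
    N = len(x) + len(h) - 1
    X = np.fft.fft(x, n=N)          # n=N zero-pads before transforming
    H = np.fft.fft(h, n=N)
    return np.fft.ifft(X * H).real

x = np.random.randn(100)            # length L = 100
h = np.random.randn(10)             # length P = 10
assert np.allclose(fft_convolve(x, h), np.convolve(x, h))
```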

1.1.3 Block Convolutions

In a discrete time system, the input signal might be very long, making it impractical to store in a computer's memory or to transform all at once (especially if we have a real-time system). Thus, to compute the output of a digital filter (with an impulse response of length $P$), we need to compute the DFT in blocks shorter than the signal.
The first method of block convolution is the overlap-add method.

1. Decompose $x[n]$ into nonoverlapping segments of length $L$:
$$x[n] = \sum_r x_r[n], \qquad x_r[n] = \begin{cases} x[n] & rL \le n \le (r+1)L - 1 \\ 0 & \text{else} \end{cases}$$

2. Since convolution is linear,
$$y[n] = x[n] * h[n] = \sum_r x_r[n] * h[n].$$

3. Zero pad $x_r[n]$ and $h[n]$ to length $N \ge L + P - 1$ to prevent time-domain aliasing.

4. Compute the DFTs, multiply them, and take the inverse DFT.

5. The neighboring outputs overlap in the last $P-1$ points, so add the overlapping sections together to get the final output.
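A sketch of overlap-add in NumPy (block length $L$ and filter length $P$ as above; the test lengths are arbitrary):

```python
import numpy as np

def overlap_add(x, h, L=256):
    """Block convolution of a long signal x with a short filter h."""
    P = len(h)
    N = L + P - 1                              # DFT length prevents aliasing
    H = np.fft.fft(h, n=N)
    y = np.zeros(len(x) + P - 1)
    for start in range(0, len(x), L):
        block = x[start:start + L]             # nonoverlapping segment
        yr = np.fft.ifft(np.fft.fft(block, n=N) * H).real
        # Neighboring outputs overlap in their last P - 1 points; add them.
        y[start:start + len(block) + P - 1] += yr[:len(block) + P - 1]
    return y

x, h = np.random.randn(1000), np.random.randn(31)
assert np.allclose(overlap_add(x, h), np.convolve(x, h))
```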

The other method of block convolution is the overlap-save method.

1. Divide $x[n]$ into sections of length $L$ such that each section overlaps the previous by $P-1$ points:
$$x_r[n] = x[n + r(L-P+1) - P + 1], \qquad 0 \le n \le L-1$$

2. Zero pad $h[n]$ to length $L$ (the sections $x_r[n]$ are not padded, so the circular convolution will be time-aliased).

3. Compute the length-$L$ DFTs, multiply the coefficients, and compute the inverse DFT.

4. The first $P-1$ samples of each output are corrupted by the time aliasing, so we discard them:
$$y[n] = \sum_r y_r[n - r(L-P+1) + P - 1], \qquad y_r[n] = \begin{cases} x_r[n] \circledast h[n] & P-1 \le n \le L-1 \\ 0 & \text{else} \end{cases}$$

1.2 FFT

The DFT gives us an easy way to do convolutions. Unfortunately, computing it naively is an $O(N^2)$ operation because we must sum $N$ elements for each of $N$ different coefficients. Thankfully, there is a fast algorithm which can compute the DFT in $O(N\log N)$ time, so we can compute convolutions quickly.

Definition 4 The Fast Fourier Transform (FFT) is an algorithm which computes the DFT efficiently in $O(N\log N)$ time.

It works by exploiting properties of the Nth roots of unity.

Definition 5 The $N$th roots of unity are the complex roots of $z^N = 1$:
$$W_N^k = e^{-j\frac{2\pi k}{N}}$$

The roots of unity have the following properties.

Theorem 1 The $N$th roots of unity are conjugate symmetric:
$$W_N^{N-kn} = W_N^{-kn} = \left(W_N^{kn}\right)^*$$

Theorem 2 The $N$th roots of unity are periodic in $N$:
$$W_N^{kn} = W_N^{k(n+N)} = W_N^{(k+N)n}$$

Theorem 3 When squared, the $N$th roots of unity become the $\frac{N}{2}$th roots of unity:
$$W_N^2 = W_{\frac{N}{2}}$$

Using theorems 1 to 3, we can take two approaches to the FFT: decimation in time, which splits $x[n]$ into smaller subsequences, and decimation in frequency, which splits $X[k]$ into smaller subsequences.

1.2.1 Decimation in Time

The idea here is to break $x[n]$ into smaller subsequences. We assume that $N$ is a power of 2 for simplicity.
$$X[k] = \sum_{n=0}^{N-1} x[n]W_N^{kn} = \sum_{n\,\text{even}} x[n]W_N^{kn} + \sum_{n\,\text{odd}} x[n]W_N^{kn}$$
We let $n = 2r$ and $n = 2r+1$:
$$X[k] = \sum_{r=0}^{\frac{N}{2}-1} x[2r]W_N^{2rk} + \sum_{r=0}^{\frac{N}{2}-1} x[2r+1]W_N^{(2r+1)k} = \sum_{r=0}^{\frac{N}{2}-1} x[2r]W_{\frac{N}{2}}^{rk} + W_N^k \sum_{r=0}^{\frac{N}{2}-1} x[2r+1]W_{\frac{N}{2}}^{rk}$$
These are just the DFTs of the even and odd elements of the signal!
$$\therefore X[k] = G[k] + W_N^k H[k]$$

Both $G$ and $H$ are $\frac{N}{2}$-periodic, and notice that
$$W_N^{k+\frac{N}{2}} = e^{-j\frac{2\pi}{N}\left(k+\frac{N}{2}\right)} = -W_N^k.$$
This means once we compute $G[k]$ and $H[k]$, we can compute $X[k]$ easily because
$$X[k] = G[k] + W_N^k H[k], \qquad X\left[k+\frac{N}{2}\right] = G[k] - W_N^k H[k] \qquad \text{for } k \in \left[0, \frac{N}{2}\right).$$
We can continue this relationship recursively downwards. Once we get to $N = 2$, we can represent this as a simple butterfly operation:
$$X[0] = x[0] + x[1] \qquad X[1] = x[0] - x[1].$$
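A compact recursive sketch of the decimation-in-time recursion (assumes the length is a power of 2; checked against NumPy's FFT):

```python
import numpy as np

def fft_dit(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    G = fft_dit(x[0::2])                             # DFT of even samples
    H = fft_dit(x[1::2])                             # DFT of odd samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)  # twiddle factors W_N^k
    return np.concatenate([G + W * H,                # X[k], k in [0, N/2)
                           G - W * H])               # X[k + N/2]

x = np.random.randn(16)
assert np.allclose(fft_dit(x), np.fft.fft(x))
```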

1.2.2 Decimation in Frequency

The decimation in frequency approach is very similar to the decimation in time approach, except we instead split the frequency components into their even and odd samples:
$$X[2r] = \sum_{n=0}^{\frac{N}{2}-1} x[n]W_N^{2rn} + \sum_{n=0}^{\frac{N}{2}-1} x\left[n+\frac{N}{2}\right]W_N^{2r(n+\frac{N}{2})} = \sum_{n=0}^{\frac{N}{2}-1}\left(x[n] + x\left[n+\frac{N}{2}\right]\right)W_{\frac{N}{2}}^{rn}$$
$$X[2r+1] = \sum_{n=0}^{\frac{N}{2}-1}\left(x[n] - x\left[n+\frac{N}{2}\right]\right)W_N^{n}\,W_{\frac{N}{2}}^{rn}$$

2 Spectral Analysis

[Figure 2: A digital signal processing system: $x(t)$ is sampled by a C/D converter to give $x[n]$, multiplied by a window $w[n]$ to give $v[n]$, then filtered by $H(e^{j\Omega})$ to give $y[n]$]

In real-world DSP systems, we are often converting a continuous time signal into a
discrete one via sampling (as shown in fig. 2). Because input is constantly stream-
ing in, we can’t process all of it at once, especially for real-time applications. That
is why we instead process blocks of length L at a time. This is accomplished by
multiplying our sampled signal by a window function w[n].
All window functions are real, even, and finite. This means they have real and symmetric DTFTs. The simplest window is a box window (a sinc in the frequency domain). When the signal is multiplied by a window, it amounts to a periodic convolution in the frequency domain.

Definition 6 A periodic convolution of the two spectra $X(e^{j\Omega})$ and $W(e^{j\Omega})$ is given by
$$V(e^{j\Omega}) = \frac{1}{2\pi}\int_{\langle 2\pi\rangle} X(e^{j\omega})W(e^{j(\Omega-\omega)})\,d\omega$$

This periodic convolution means our choice of window function has an impact on
our ability to resolve frequencies in the frequency domain.

1. If W (ejΩ ) has a wide “main lobe” at the DC frequencies, then the spectrum
of V (ejΩ ) will be blurred

2. If W (ejΩ ) has large “side lobes” at non DC frequencies, then spectral leakage
occurs because larger frequencies start bleeding into lower ones.

Another factor which impacts our ability to resolve frequencies in frequency do-
main is the length of the window. Because an L point DFT samples the DTFT at L
points, taking a larger window will resolve the DTFT better. If we don’t want to
increase the window length (because doing so would increase the latency of our
system), we can zero pad after windowing because zero padding has no impact on
the DFT besides sampling the DTFT at more finely spaced samples.

2.1 Short Time Fourier Transform (STFT)

By looking at the DFT of a signal x[n], we only get the frequency information across
the entire duration of the signal. Likewise, just by looking at x[n], we get no fre-
quency information and only temporal information. The STFT is a tool to see both
at once.

Definition 7 The short-time Fourier transform localizes frequencies in time by creating a spectrogram, an image which shows which frequencies occur at which times:
$$X[n, \omega) = \sum_{m=-\infty}^{\infty} x[n+m]w[m]e^{-j\omega m}$$
The result is discrete on the temporal axis and continuous on the frequency axis. Computing the STFT requires a window function $w[n]$.

Essentially, we slide a window function around x[n] and compute the DTFT at
every time point.

2.2 Discrete STFT

Definition 8 The Discrete STFT is the discrete version of the STFT:
$$X[r, k] = \sum_{m=0}^{L-1} x[rR+m]w[m]W_N^{km}$$
Our window has length $L$, $R$ is the hop size (how far we shift the window between frames), and $N \ge L$ is the DFT length we are taking.

Just like before, we take our window and slide it around the signal, computing DFTs at every time point. If $N > L$, then we are essentially computing a zero-padded DFT. The DSTFT produces a spectrogram which we can display digitally.
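A minimal DSTFT sketch (the Hann window, hop size $R$, and DFT length $N$ are illustrative choices):

```python
import numpy as np

def dstft(x, L=256, R=128, N=512):
    """Discrete STFT: hop the window by R, take zero-padded N-point DFTs."""
    w = np.hanning(L)                             # length-L window w[m]
    frames = [np.fft.fft(x[r:r + L] * w, n=N)     # X[r, k]; zero-padded since N > L
              for r in range(0, len(x) - L + 1, R)]
    return np.array(frames)                       # shape (num_frames, N)

x = np.sin(2 * np.pi * 0.05 * np.arange(4000))
spectrogram = np.abs(dstft(x)) ** 2               # power spectrogram to display
```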

Definition 9 The inverse DSTFT is given by
$$x[rR+m]w_L[m] = \frac{1}{N}\sum_{k=0}^{N-1} X[r, k]W_N^{-km}$$

As long as the window is never 0 and the windows don't overlap ($R = L$), dividing each recovered frame by the window gives back the signal: for $rL \le n \le (r+1)L - 1$,
$$x[n] = \frac{x[n]\,w_L[n-rL]}{w_L[n-rL]}.$$

2.3 Time-Frequency Uncertainty

When we compute the spectrogram of a signal, we can think of each coefficient as "tiling" the time-frequency plane. If we consider the normal $N$ point DFT, each DFT coefficient is supported by $N$ points in the time domain. Since the DFT samples the DTFT, it divides the range $[0, 2\pi]$ into $N$ segments of width $\frac{2\pi}{N}$. Each coefficient represents a section of this space, leading to a tiling like fig. 3 (for a 5 point DFT). In the DSTFT, each coefficient is computed using $L$ points of the original signal. Each coefficient still represents an interval of width $\frac{2\pi}{N}$ on the frequency axis, leading to a tiling like fig. 4. What these tilings show us is that because we have discretized time and frequency, there is some uncertainty regarding which times and frequencies each coefficient represents.
We can formalize this idea by considering a general transform. All transforms are really an inner product with a set of basis functions:
$$T_x(\gamma) = \langle x(t), \phi_\gamma(t)\rangle = \int_{-\infty}^{\infty} x(t)\phi_\gamma^*(t)\,dt.$$

[Figure 3: Time-frequency tiling of a 5 point DFT: each coefficient spans all $N$ time samples and a frequency band of width $\frac{2\pi}{5}$]

[Figure 4: Time-frequency tiling of the DSTFT: each coefficient spans $L$ time samples and a frequency band of width $\frac{2\pi}{N}$]

For each $\gamma$, $T_x(\gamma)$ is the projection of the signal onto the basis vector $\phi_\gamma(t)$. We can use Parseval's relationship to see that
$$T_x(\gamma) = \langle x(t), \phi_\gamma(t)\rangle = \int_{-\infty}^{\infty} x(t)\phi_\gamma^*(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\Omega)\Phi_\gamma^*(j\Omega)\,d\Omega = \frac{1}{2\pi}\langle X(j\Omega), \Phi_\gamma(j\Omega)\rangle.$$

This means that we can think of our transform not only as a projection of the signal
onto a new basis, but also as a projection of the spectrum of our signal onto the
spectrum of our basis function. Remember that projection essentially asks ”How
much of a signal can be explained by the basis”. We can formalize this by looking

at the signal in a statistical sense and treating it as a probability distribution:
$$m_t = \int_{-\infty}^{\infty} t\,|\psi(t)|^2\,dt \qquad m_\Omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} \Omega\,|\Psi(j\Omega)|^2\,d\Omega$$
$$\sigma_t^2 = \int_{-\infty}^{\infty} (t-m_t)^2|\psi(t)|^2\,dt \qquad \sigma_\Omega^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty} (\Omega-m_\Omega)^2|\Psi(j\Omega)|^2\,d\Omega$$

$m_t$ and $m_\Omega$ are the means of the signal and the spectrum. $\sigma_t^2$ and $\sigma_\Omega^2$ are the variances. Together, they localize where our signal "lives" in the time-frequency plane. The uncertainty principle says
$$\sigma_t\sigma_\Omega \ge \frac{1}{2}.$$
This means there is nothing we can do to get completely accurate time resolution and frequency resolution at once, and any decision we make will lead to a tradeoff between them.

2.4 Wavelets

While the STFT gives us a better picture of a signal than a full-length DFT, one of its shortcomings is that each coefficient is supported by the same amount of time and frequency. Low frequencies don't change as much as high frequencies do, so a lower frequency needs more time support to be resolved properly, whereas a fast signal requires less time support.

Definition 10 The Wavelet transform finds coefficients which tile the time-frequency spectrum with different time and frequency supports using a mother and a father wavelet:
$$Wf(u, s) = \int_{-\infty}^{\infty} f(t)\,\frac{1}{\sqrt{s}}\Psi^*\left(\frac{t-u}{s}\right)dt$$

The Wavelet transform essentially makes all of the boxes in fig. 4 different sizes.

Definition 11 The mother wavelet is a scaled bandpass filter $\Psi(t)$ used as the kernel of the wavelet transform. It must have the following properties:
$$\int_{-\infty}^{\infty}|\Psi(t)|^2\,dt = 1 \qquad \int_{-\infty}^{\infty}\Psi(t)\,dt = 0$$

We need an infinite number of functions to fully represent all frequencies properly, but at a certain level, we don't care about our ability to resolve them better, so we stop scaling and use a low frequency function $\Phi(t)$ to "plug" the remaining bandwidth.

Definition 12 The father wavelet is a low frequency function Φ(t) used to “plug” the
remaining bandwidth not covered by the mother wavelet.

2.4.1 Discrete Wavelet Transform

In discrete time, the wavelet transform becomes
$$d_{s,u} = \sum_{n=0}^{N-1} x[n]\Psi_{s,u}[n] \qquad a_{s,u} = \sum_{n=0}^{N-1} x[n]\Phi_{s,u}[n]$$

The $d$ coefficients are the detail coefficients and are computed using the mother wavelet; they capture higher frequency information. The $a$ coefficients are the approximation coefficients, computed using the father wavelet; they represent lower frequency information. The time-frequency tiling for the DWT looks like fig. 5. Notice how each wavelet coefficient is supported by a different amount of time and frequency. We can choose different mother and father wavelets to describe our signals depending on how we want to tile the time-frequency plane.

[Figure 5: Time-frequency tiling of wavelets: the detail coefficients $d_{0,0},\ldots,d_{0,3}$ occupy short time spans at high frequencies, $d_{1,0}, d_{1,1}$ and $d_{2,0}$ occupy progressively longer spans at lower frequencies, and $a_{2,0}$ plugs the lowest band]
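A single level of the DWT using the Haar pair, the simplest choice of mother/father wavelet (a sketch; recursing on the approximation coefficients produces the coarser tiles of fig. 5):

```python
import numpy as np

def haar_dwt_level(x):
    """One DWT level with the Haar wavelet: returns (approximation, detail),
    each half the length of x (len(x) assumed even)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    a = (even + odd) / np.sqrt(2)   # father wavelet (lowpass): approximation
    d = (even - odd) / np.sqrt(2)   # mother wavelet (highpass): detail
    return a, d

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt_level(x)            # recurse on `a` for more levels
```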

3 Sampling

3.1 Ideal Sampling

In order to work with continuous signals using a computer, we need to sample them. This means recording the value at particular points in time. During uniform sampling, we take samples at an even sampling period $T_s$ (abbreviated $T$ below), so $x[n] = x_c(nT)$ (where $x_c$ is our continuous signal). This is done by passing the signal through an Analog-to-Digital converter. From there, we can do discrete time processing and reconstruct our signal by passing it through a Digital-to-Analog converter with reconstruction period $T_r$.

[Figure 6: Uniform sampling system: $x(t)$ passes through an ADC with period $T_s$ to give $x_d[n]$, which is filtered by $H_d(e^{j\omega})$ to give $y[n]$; a DAC with period $T_r$ then produces $y(t)$]

We mathematically model sampling as multiplication by an impulse train. Notice that if we were to take a signal $x(t)$ and multiply it by an impulse train, then we would get a series of impulses equal to $x(t)$ at the sampling points and 0 everywhere else. We call this signal $x_p(t)$:
$$p(t) = \sum_{k=-\infty}^{\infty}\delta(t-kT)$$
$$x_p(t) = x(t)p(t) = \sum_{k=-\infty}^{\infty} x(t)\delta(t-kT)$$

In the Fourier domain,
$$X_p(j\Omega) = \frac{1}{2\pi}X(j\Omega) * P(j\Omega), \qquad P(j\Omega) = \frac{2\pi}{T}\sum_{k=-\infty}^{\infty}\delta(\Omega - k\Omega_s)$$
$$\therefore X_p(j\Omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(j\theta)P(j(\Omega-\theta))\,d\theta = \frac{1}{T}\sum_{k=-\infty}^{\infty} X(j(\Omega - k\Omega_s))$$

What this tells us is that the Fourier Transform of our sampled signal is a series of copies of $X(j\Omega)$, each centered at $k\Omega_s$ where $\Omega_s = \frac{2\pi}{T}$. This is a good model because we can equivalently write the CTFT of the impulse train sampled signal as
$$X_p(j\Omega) = \int_{-\infty}^{\infty}\sum_{k=-\infty}^{\infty} x(t)\delta(t-kT)\,e^{-j\Omega t}\,dt = \sum_{k=-\infty}^{\infty} x(kT)e^{-jkT\Omega}.$$
Notice that this is just the DTFT of $x[n] = x(nT)$ if we set $\omega = \Omega T$.


∞ ∞  

X
−jωn 1 X ω 2π
X(e ) = x(nT )e = Xp (jΩ)|Ω= Tω = X −k
n=−∞
T k=−∞ T Ts

This means that the DTFT of our signal is just a bunch of shifted copies, and the
frequency axis is scaled so Ωs → 2π.

14
3.1.1 Nyquist Theorem

To analyze this further, we will stay in continuous time. Let's say that our original signal has the Fourier Transform shown in fig. 7; notice the signal is band-limited by $\Omega_M$. There are two major cases: $\Omega_s > 2\Omega_M$ and $\Omega_s < 2\Omega_M$.

[Figure 7: Example of the spectrum of a bandlimited signal: $X(j\Omega)$ has height 1 and is supported on $[-\Omega_M, \Omega_M]$]

Case One: $\Omega_s > 2\Omega_M$. As shown in fig. 8, the shifted copies of the original $X(j\Omega)$ (shown in blue in the figure) do not overlap with each other or with the original copy. If we wanted to recover the original signal, we could simply apply a low pass filter to isolate the unshifted copy of $X(j\Omega)$ and then take the inverse Fourier Transform.

[Figure 8: When $\Omega_s > 2\Omega_M$: $X_p(j\Omega)$ has height $\frac{1}{T}$, with nonoverlapping copies of the spectrum centered at $0$ and $\pm\Omega_s$]
Case Two: $\Omega_s < 2\Omega_M$. Notice how in fig. 9, the shifted copies overlap with the original $X(j\Omega)$. This means in our sampled signal, the higher frequency information is bleeding in with the lower frequency information. This phenomenon is known as aliasing. When aliasing occurs, we cannot simply apply a low pass filter to isolate the unshifted copy of $X(j\Omega)$.

[Figure 9: When $\Omega_s < 2\Omega_M$, the copies of the spectrum centered at $0$ and $\pm\Omega_s$ overlap]

When $\Omega_s = 2\Omega_M$, our ability to reconstruct the original signal depends on the shape of its Fourier Transform. As long as $X_p(jk\Omega_M)$ equals $X(j\Omega_M)$ and $X(-j\Omega_M)$, then we can apply an LPF because we can isolate the original $X(j\Omega)$ and take its inverse Fourier Transform.

Remember that an ideal low pass filter is a rectangle in the frequency domain and a sinc in the time domain. Thus if we let
$$X_r(j\Omega) = X_p(j\Omega)\cdot\begin{cases} T & |\Omega| < \frac{\Omega_s}{2} \\ 0 & \text{else} \end{cases}$$
then our reconstructed signal will be
$$x_r(t) = x_p(t) * \mathrm{sinc}\left(\frac{t}{T}\right) = \sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{sinc}\left(\frac{t-nT}{T}\right).$$
This is why we call reconstructing a signal from its samples "sinc interpolation." This leads us to formulate the Nyquist Theorem.

Theorem 4 (Nyquist Theorem) Suppose a continuous signal $x$ is bandlimited by $\Omega_M$ and we sample it at a rate $\Omega_s > 2\Omega_M$; then the signal $x_r(t)$ reconstructed by sinc interpolation is exactly $x(t)$.
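A sketch of sinc interpolation in NumPy (reconstructing a 20 Hz tone sampled at 100 Hz on a dense interior time grid; the tolerance allows for truncating the infinite sum to a finite record):

```python
import numpy as np

def sinc_interp(samples, T, t):
    """x_r(t) = sum_n x[n] sinc((t - nT)/T), evaluated at the times t."""
    n = np.arange(len(samples))
    # np.sinc(u) = sin(pi u)/(pi u), matching the reconstruction formula.
    return np.sinc((t[:, None] - n[None, :] * T) / T) @ samples

T = 0.01                                                 # 100 Hz sampling rate
samples = np.cos(2 * np.pi * 20 * np.arange(200) * T)    # 20 Hz tone < Nyquist
t = np.linspace(0.5, 1.5, 500)                           # interior of the record
x_rec = sinc_interp(samples, T, t)
assert np.allclose(x_rec, np.cos(2 * np.pi * 20 * t), atol=0.05)
```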

3.1.2 Discrete Time Processing of a Continuous Time Signal

As long as the DT system we apply is LTI, the overall CT system will be linear too, but it will not necessarily be time invariant because sampling inherently depends on the signal's timing. Suppose we want to find the overall CT transfer function (with $\omega = \Omega T$) of a system like the one depicted in fig. 6:
$$Y_d(e^{j\omega}) = H_d(e^{j\omega})X_d(e^{j\omega}) = H_d(e^{j\omega})X_p\left(j\frac{\omega}{T}\right)$$
$$Y_p(j\Omega) = Y_d(e^{j\Omega T}) = H_d(e^{j\Omega T})X_p(j\Omega)$$
$$Y(j\Omega) = Y_p(j\Omega)\cdot\begin{cases} T & |\Omega| < \frac{\Omega_s}{2} \\ 0 & |\Omega| \ge \frac{\Omega_s}{2}\end{cases} = \begin{cases} T H_d(e^{j\Omega T})X_p(j\Omega) & |\Omega| < \frac{\Omega_s}{2} \\ 0 & |\Omega| \ge \frac{\Omega_s}{2}\end{cases}$$
Assuming the Nyquist criterion is met,
$$X_p(j\Omega) = \frac{1}{T}X(j\Omega)$$
$$\therefore Y(j\Omega) = \begin{cases} H_d(e^{j\Omega T})X(j\Omega) & |\Omega| < \frac{\Omega_s}{2} \\ 0 & |\Omega| \ge \frac{\Omega_s}{2}\end{cases} \qquad \therefore H_{system}(j\Omega) = \begin{cases} H_d(e^{j\Omega T}) & |\Omega| < \frac{\Omega_s}{2} \\ 0 & |\Omega| \ge \frac{\Omega_s}{2}\end{cases}$$
This shows us that as long as the Nyquist theorem holds, we can process continuous signals with a discrete time LTI system and still have the result be LTI.
3.1.3 Continuous Time Processing of Discrete Time Signals

While not used in practice, it can be useful to model a discrete time transfer function in terms of continuous time processing (e.g. a half-sample delay).

[Figure 10: Continuous time processing of a discrete time signal: $x_d[n]$ is converted by a DAC (period $T_r$) to $x(t)$, processed by $H(j\Omega)$, and resampled by an ADC (period $T_s$) to give $y_d[n]$]

Similar to the analysis of DT processing of a CT signal, we can write the discrete transfer function in terms of the continuous one. Our continuous signal will be bandlimited after reconstruction:
$$X(j\Omega) = \begin{cases} T X_d(e^{j\omega})\big|_{\omega=\Omega T} & |\Omega| \le \frac{\Omega_s}{2} \\ 0 & \text{else}\end{cases}$$
This means our processed signal $Y(j\Omega) = H(j\Omega)X(j\Omega)$ is also bandlimited, so we can say that
$$Y_d(e^{j\omega}) = H(j\Omega)\big|_{\Omega=\frac{\omega}{T}}\,X_d(e^{j\omega}).$$

3.1.4 Downsampling

When we downsample a signal by a factor of $M$, we create a new signal $y[n] = x[nM]$ by taking every $M$th sample. What this means conceptually is that we are reconstructing the continuous signal and then sampling it at a slower rate $MT$, where $T$ was the original sampling period. If $x_c$ is the original continuous time signal and $x_d$ is the sampled signal, then the downsampled signal $y[n]$ will be
$$y[n] = x[nM] = x_c(nMT) \implies Y(e^{j\omega}) = \frac{1}{MT}\sum_{k=-\infty}^{\infty} X_c\left(j\left(\frac{\omega}{MT} - k\frac{2\pi}{MT}\right)\right).$$

If we re-index and let $k = Mp + m$ for $m \in [0, M-1]$, $p \in \mathbb{Z}$,
$$Y(e^{j\omega}) = \frac{1}{M}\sum_{m=0}^{M-1} X_d\left(e^{j\frac{\omega - 2\pi m}{M}}\right).$$

What this means is that to obtain the new DTFT, we need to scale the frequency axis so $\frac{\pi}{M} \to \pi$. To prevent aliasing when this happens, we include an LPF before the downsampling step.

[Figure 11: Downsampling: $x[n]$ passes through an LPF with $\omega_c = \frac{\pi}{M}$ and then $\downarrow M$, giving $y[n] = x[nM]$]

3.1.5 Upsampling

When we upsample a signal by a factor of $L$, we are interpolating between samples. Conceptually, this means we are reconstructing the original continuous time signal and resampling it at a faster rate than before. First we place zeros in between samples, effectively expanding our signal:
$$x_e[n] = \begin{cases} x\left[\frac{n}{L}\right] & n = 0, \pm L, \pm 2L, \ldots \\ 0 & \text{else}\end{cases}$$
$$X_e(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x_e[n]e^{-j\omega n} = \sum_{m=-\infty}^{\infty} x[m]e^{-j\omega mL} = X\left(e^{j\omega L}\right)$$

Then we interpolate by convolving with a sinc:
$$y[n] = x_e[n] * \mathrm{sinc}\left(\frac{n}{L}\right) = \sum_{k=-\infty}^{\infty} x[k]\,\mathrm{sinc}\left(\frac{n-kL}{L}\right)$$

In the frequency domain, this looks like compressing the frequency axis so $\pi \to \frac{\pi}{L}$ and then applying a low pass filter. The gain of $L$ is used to scale the spectrum so it is identical to what we would get by sampling the continuous signal at a rate of $\frac{T}{L}$.

[Figure 12: Upsampling: $x[n]$ passes through $\uparrow L$ and then an LPF with $\omega_c = \frac{\pi}{L}$ and gain $L$, giving $y[n]$]

3.2 Multi-Rate Signal Processing

In order to resample a signal to a rate $T' = \frac{MT}{L}$, where $T$ is the original sampling period, we can upsample by $L$ and then downsample by $M$. Notice that we only need one LPF to take care of both anti-aliasing and interpolation.

[Figure 13: Resampling: $x[n]$ passes through $\uparrow L$, an LPF with $\omega_c = \min\left(\frac{\pi}{M}, \frac{\pi}{L}\right)$, and then $\downarrow M$, giving $y[n]$]
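In practice, this whole chain is available as a single polyphase routine; a sketch using scipy.signal.resample_poly (the rate change of $L = 2$, $M = 3$ is an arbitrary example):

```python
import numpy as np
from scipy.signal import resample_poly

x = np.cos(2 * np.pi * 0.02 * np.arange(600))
# Upsample by L = 2, apply the combined LPF, then downsample by M = 3.
y = resample_poly(x, up=2, down=3)
print(len(x), len(y))                 # 600 -> 400 samples
```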

3.2.1 Exchanging Filter Order During Resampling

Notice that resampling with a very small rate change wastes a lot of computation. For example, resampling with $T' = 1.01T$ would upsample by 100 and then throw away most of those samples when we downsample. Thus it would be useful to exchange the order of operations when resampling to save computation.

[Figure 14: Interchanging an upsampling operation: $\uparrow L$ followed by $H(z)$ is equivalent to $H(z^{\frac{1}{L}})$ followed by $\uparrow L$]

During upsampling, we convolve our filter with a bunch of zeros caused by the expansion. Convolution with zeros is unnecessary work, so instead we could convolve with a compressed version of the filter before expanding; the results will be the same as long as $H(z^{\frac{1}{L}})$ is a rational function. During downsampling, we do a convolution and then throw away most of the results. It would be much more efficient to instead compute only the quantities we need. This is accomplished by downsampling first and then convolving. Just like before, the results are only going to be the same if $H(z^{\frac{1}{M}})$ is a rational function.

[Figure 15: Interchanging a downsampling operation: $H(z)$ followed by $\downarrow M$ is equivalent to $\downarrow M$ followed by $H(z^{\frac{1}{M}})$]

3.2.2 Polyphase Decomposition

The problem with interchanging filters is that it is not always possible; most filters are not compressible. However, we can get around this issue and still get the efficiency gains of interchanging filter orders by taking a polyphase decomposition of our filters. First notice that $h[n]$ can be written as a sum of compressible filters:
$$h[n] = \sum_{k=0}^{M-1} h_k[n-k]$$

This means if we let $e_k[n] = h_k[nM]$, we can utilize the linearity of convolution to build a bank of filters. Now each of our filters is compressible, so we can switch the order of downsampling and filtering while maintaining the same output. For any filter, we can then compute only what we need, so the result is correct and efficiently obtained; a sketch follows the figures below.

[Figure 16: Example of decomposing a filter with $M = 2$: $h[n]$ is split into the subfilters $h_1[n]$ and $h_2[n]$]

[Figure 17: A filter bank: $x[n]$ feeds a chain of $z^{-1}$ delays into the branches $E_0(z^M), E_1(z^M), \ldots, E_{M-1}(z^M)$, whose outputs are summed and then downsampled by $M$ to give $y[n]$]

[Figure 18: The same filter bank with the downsampling done first: each delayed branch is downsampled by $M$ and then filtered by $E_k(z)$ before the sum]
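A sketch of polyphase decimation, checked against filtering at the full rate and then discarding samples (the helper polyphase_decimate and its branch indexing are my own construction):

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """Compute y[m] = (x * h)[mM] with M short subfilters e_k[q] = h[qM + k],
    each convolved with a delayed, downsampled branch of x at the low rate."""
    out_len = (len(x) + len(h) - 1 + M - 1) // M   # kept samples of x * h
    y = np.zeros(out_len)
    for k in range(M):
        ek = h[k::M]                               # k-th polyphase component
        # Branch x_k[m] = x[mM - k] (zero where mM - k < 0).
        xk = x[0::M] if k == 0 else np.concatenate(([0.0], x[M - k::M]))
        branch = np.convolve(xk, ek)[:out_len]     # short convolution per branch
        y[:len(branch)] += branch
    return y

x, h, M = np.random.randn(1000), np.random.randn(33), 4
direct = np.convolve(x, h)[::M]                    # high-rate filter, then discard
assert np.allclose(polyphase_decimate(x, h, M), direct)
```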

3.3 Practical Sampling (ADC)

Unfortunately, ideal analog to digital conversion is not possible for a variety of reasons. The first is that not all signals are bandlimited (or there may be noise outside of the bandwidth). Moreover, computers only have finite precision, so we cannot represent the full range of values that a continuous signal might take on with a finite number of bits per sample. The solution to the first issue is to include an "anti-aliasing" filter before the sampler. The solution to the second issue is to quantize.

[Figure 19: Sampling with quantization: $x_c(t)$ passes through an anti-aliasing LPF, a C/D converter with period $T_s$, and a quantizer, producing $x_d[n]$]

However, sharp analog filters are difficult to implement in practice. To deal with this, we could make the anti-aliasing filter wider, but this would add noise and interference. If we keep the cutoff frequency the same, then we could alter part of the signal because our filter is not ideal. A better solution is to do the processing in discrete time, where we have more control: we sample higher than the Nyquist rate and then downsample to the required rate.

[Figure 20: A practical sampling system with quantization and anti-aliasing: $x_c(t)$ passes through a wide analog LPF, a C/D converter with period $\frac{T_s}{M}$, a digital LPF with $\omega_c = \frac{\pi}{M}$, a downsampler $\downarrow M$, and a quantizer, producing $x_d[n]$]

3.3.1 Quantization

If we have a dynamic range of $X_m$ (i.e. $2X_m$ is the length of the range of values we can represent), then our step between quantized values is $\Delta = 2^{-B}X_m$, assuming we are representing our data as 2's complement numbers with $B$ bits. We model the error caused by quantization as additive noise: our quantized signal $\hat{x}[n]$ is described by
$$\hat{x}[n] = x[n] + e[n], \qquad -\frac{\Delta}{2} \le e[n] \le \frac{\Delta}{2}$$
We do this under the following assumptions:

1. e[n] is produced by a stationary random process

2. e[n] is not correlated with x[n]

3. e[n] is white noise (e[n] is not correlated with e[m])

4. $e[n] \sim U\left[-\frac{\Delta}{2}, \frac{\Delta}{2}\right]$
For rapidly changing signals with small $\Delta$, these assumptions hold, and they are useful in modeling quantization error. Since $\Delta = 2^{-B}X_m$,
$$\sigma_e^2 = \frac{\Delta^2}{12} = \frac{2^{-2B}X_m^2}{12}.$$
This means our Signal to Noise Ratio for quantization is
$$SNR_Q = 10\log_{10}\left(\frac{\sigma_x^2}{\sigma_e^2}\right) = 6.02B + 10.8 - 20\log_{10}\left(\frac{X_m}{\sigma_x}\right).$$

What this tells us is that every new bit we add gives us 6dB in improvement. It also
tells us that we need to adapt the range of quantization to the RMS amplitude of
the signal. This means there is a tradeoff between clipping and quantization noise.
When we oversample our signal, we can further limit the effects of quantization noise because the noise will be spread out over more frequencies, and the LPF will eliminate the noise outside the signal bandwidth. This makes $\frac{\sigma_e^2}{M}$ the new noise variance (if we oversample by $M$). Thus we can modify the $SNR_Q$ equation:
$$SNR_Q = 6.02B + 10.8 - 20\log_{10}\left(\frac{X_m}{\sigma_x}\right) + 10\log_{10} M.$$
This shows that doubling $M$ yields a 3dB improvement (equivalent to 0.5 more bits).
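A quick simulation of the additive noise model (a sketch; uniform rounding to a grid with step $\Delta = 2^{-B}X_m$ reproduces the ~6 dB-per-bit rule):

```python
import numpy as np

def quantize(x, B, Xm=1.0):
    """Round to a uniform grid with step Delta = 2**-B * Xm."""
    delta = Xm * 2.0 ** (-B)
    return delta * np.round(x / delta)

rng = np.random.default_rng(0)
x = 0.1 * rng.standard_normal(100_000)      # sigma_x well below Xm (no clipping)
for B in (8, 10, 12):
    e = quantize(x, B) - x                  # quantization error signal
    snr = 10 * np.log10(np.var(x) / np.var(e))
    print(B, round(snr, 1))                 # grows by roughly 6 dB per added bit
```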

3.4 Practical Reconstruction (DAC)

In the ideal case, we reconstruct signals by converting them to impulses and then convolving with a sinc. However, impulses require lots of power to generate, and sincs are infinitely long, so it is impractical to design an analog system to do this. Instead, we use an interpolation like zero-order hold to produce pulses and then filter with a reconstruction filter.

[Figure 21: Practical reconstruction: $x[n]$ passes through a zero-order hold $H_0(j\Omega)$ and then a reconstruction filter $H_r(j\Omega)$, producing $x_r(t)$]

$$X_r(j\Omega) = H_r(j\Omega)\,\underbrace{Te^{-j\Omega\frac{T}{2}}\,\mathrm{sinc}\left(\frac{\Omega}{\Omega_s}\right)}_{\text{Zero Order Hold}}\,\underbrace{\frac{1}{T}\sum_{k=-\infty}^{\infty} X(j(\Omega - k\Omega_s))}_{\text{Sampled Signal}}$$
We design $H_r(j\Omega)$ such that $H_r(j\Omega)H_0(j\Omega)$ is approximately an LPF.
4 Filtering

4.1 Transform Analysis of LTI Systems

LTI filters are characterized by their impulse response. The two broad categories of LTI systems are those with finite impulse responses (FIR) and those with infinite impulse responses (IIR). LTI systems are frequently characterized by linear constant-coefficient difference equations (LCCDEs), which look as follows:
$$\sum_{k=0}^{N} a_k y[n-k] = \sum_{k=0}^{M} b_k x[n-k].$$

Definition 13 The system function $H(z)$ is the z-transform of the impulse response of the system. For LCCDEs, it is a ratio of polynomials in $z^{-1}$:
$$H(z) = \frac{\sum_{k=0}^{M} b_k z^{-k}}{\sum_{k=0}^{N} a_k z^{-k}} = \frac{b_0\prod_{k=1}^{M}(1-c_k z^{-1})}{a_0\prod_{k=1}^{N}(1-d_k z^{-1})}$$
We call the $c_k$ (the roots of the numerator) the zeros of the system and the $d_k$ (the roots of the denominator) the poles of the system.

Definition 14 The Magnitude Response $|H(e^{j\omega})|$ describes how the system will scale a complex exponential:
$$|H(e^{j\omega})| = \frac{|b_0|\prod_{k=1}^{M}|1-c_k e^{-j\omega}|}{|a_0|\prod_{k=1}^{N}|1-d_k e^{-j\omega}|}$$

Definition 15 The Phase Response $\arg[H(e^{j\omega})]$ describes how the system will shift the phase of a complex exponential:
$$\arg[H(e^{j\omega})] = \arg[b_0] + \sum_{k=1}^{M}\arg[1-c_k e^{-j\omega}] - \arg[a_0] - \sum_{k=1}^{N}\arg[1-d_k e^{-j\omega}]$$

Definition 16 The Group Delay $\mathrm{grd}[H(e^{j\omega})]$ tells us how much a complex exponential will be delayed:
$$\mathrm{grd}[H(e^{j\omega})] = -\frac{d}{d\omega}\arg[H(e^{j\omega})] = \sum_{k=1}^{M}\mathrm{grd}[1-c_k e^{-j\omega}] - \sum_{k=1}^{N}\mathrm{grd}[1-d_k e^{-j\omega}]$$
We can systematically analyze these quantities by drawing a vector from each $d_k$ or $c_k$ to the point $e^{j\omega}$ on the unit circle and analyzing each factor individually. For example, if we look at one pole in the magnitude response,
$$|1-d_k e^{-j\omega}| = |e^{j\omega}-d_k| = |v_k|.$$
In general, the effects of poles and zeros on each of these quantities are described by the following table.

         Magnitude Response   Phase Response   Group Delay
Poles    Increase             Phase Lag        Increase
Zeros    Decrease             Phase Advance    Decrease

These effects are larger when $c_k$ or $d_k$ are close to the unit circle (i.e. $|c_k|, |d_k| \approx 1$).

4.2 All Pass Systems

Definition 17 All pass systems are those where $|H(e^{j\omega})| = k$, where $k$ is some constant gain:
$$H(z) = k\prod_{i=1}^{M_r}\frac{z^{-1}-d_i}{1-d_i z^{-1}}\prod_{i=1}^{M_c}\frac{(z^{-1}-e_i^*)(z^{-1}-e_i)}{(1-e_i z^{-1})(1-e_i^* z^{-1})}$$
In the Z-transform, each real pole $d_i$ is paired with a reciprocal real zero $\frac{1}{d_i}$, and each complex pole $e_i$ is paired with the conjugate reciprocal zero $\frac{1}{e_i^*}$.

Theorem 5 If an all-pass system is stable and causal, then $\mathrm{grd}[H(e^{j\omega})] > 0$ and $\arg[H(e^{j\omega})] \le 0$ (i.e. it imposes a phase lag).

4.3 Minimum Phase Systems


Definition 18 A stable and causal system $H(z)$ whose inverse $\frac{1}{H(z)}$ is also stable and causal is called a Minimum Phase System.

What this means is that all poles and zeros must be inside the unit circle, and the region of convergence is right sided. Minimum phase systems are called minimum phase because, of all $H(z)$ with the same magnitude response, a minimum phase system has the minimum phase and the minimum group delay.

Theorem 6 Any stable and causal system can be decomposed into a minimum phase system and an all-pass system:
$$H(z) = H_{\min}(z)H_{ap}(z)$$

This is useful because if a signal undergoes a distortion, we can at least undo the minimum phase part of it (since $H_{\min}$ has a guaranteed stable, causal inverse).

4.4 Generalized Linear Phase Systems

Definition 19 A linear phase system is one with constant group delay:
$$H(e^{j\omega}) = A(e^{j\omega})e^{-j\alpha\omega} \implies \mathrm{grd}[H(e^{j\omega})] = \alpha$$
Note that $A(e^{j\omega})$ is a real function.

Definition 20 A generalized linear phase system has frequency response given by
$$H(e^{j\omega}) = A(e^{j\omega})e^{j(\beta-\alpha\omega)} \implies \mathrm{grd}[H(e^{j\omega})] = \alpha$$

If we limit ourselves to using FIR filters, then a GLP system must have either even
or odd symmetry, meaning for some M

h[n] = h[M − n] or h[n] = −h[M − n].

This restricts us to 4 different filter types.

          Symmetry   M      Filter Types   Notes
Type I    Even       Even   All
Type II   Even       Odd    Low Pass       $H(e^{j\pi}) = 0$
Type III  Odd        Even   Bandpass       $H(e^{j0}) = H(e^{j\pi}) = 0$
Type IV   Odd        Odd    High Pass      $H(e^{j0}) = 0$

Because of their symmetry, FIR GLP systems are limited in where their zeros can be:
$$\text{Type I, II: } H(z) = z^{-M}H(z^{-1}) \qquad \text{Type III, IV: } H(z) = -z^{-M}H(z^{-1})$$
In other words, if $a = re^{j\theta}$ is a zero, then $\frac{1}{a^*}$ is too. We can decompose GLP systems into a minimum phase, a maximum phase, and a unit circle system.

4.5 Filter Design

The idea of filter design is to take a desired frequency response and design a filter which has that frequency response. Some frequency responses can only be described by IIR systems, which are impractical for real applications, so we make various tradeoffs when we design FIR filters to implement in our systems. We also like our filters to be causal because that makes them usable in real-time systems. An $M$th order causal filter has $M+1$ coefficients.

Definition 21 The time-bandwidth product describes how sinc-like a filter looks:
$$TBW = (M+1)\frac{\omega_c}{\pi}$$

The TBW is also the number of zero-crossings in the impulse response (including the zero crossings at the end of the filter). To generate a high pass filter, we can design a low pass filter and then modulate it:
$$h_{hp}[n] = (-1)^n h_{lp}[n].$$
We can do the same for a bandpass filter:
$$h_{bp}[n] = 2h_{lp}[n]\cos(\omega_0 n).$$

4.5.1 Windowing Method

One way to generate a filter which matches a desired frequency response is through windowing.

1. Choose a desired frequency response (often non-causal and IIR).

2. Window the impulse response.

3. Shift (delay) the impulse response to make it causal.

The length of the window impacts the transition width (how quickly the filter transitions between bands); a longer window means a smaller width. The window type impacts the ripples in the frequency response: the window's sidelobes determine the ripple magnitudes.
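A sketch of the window method for a lowpass design (the order $M = 50$, cutoff $0.2\pi$, and Hamming window are illustrative choices):

```python
import numpy as np

M = 50                                  # filter order -> M + 1 = 51 taps
wc = 0.2 * np.pi                        # desired cutoff
n = np.arange(M + 1)
# Ideal lowpass impulse response sin(wc m)/(pi m), delayed by M/2 for causality.
h_ideal = (wc / np.pi) * np.sinc(wc * (n - M / 2) / np.pi)
h = h_ideal * np.hamming(M + 1)         # window sets the ripples/sidelobes
H = np.fft.fft(h, 1024)                 # inspect |H| for ripple and transition width
```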

4.5.2 Optimal Filter Design

With optimal filter design, we set up constraints to find an $H_d(e^{j\omega})$ which approximates $H(e^{j\omega})$ based on our optimization requirements. In general, we have some regions $W_c \subseteq [0, \pi]$ that we care about and other regions that we don't care about. We can first design a noncausal filter $\tilde{H}(e^{j\omega})$ and then shift it to make it causal. We do this by sampling and discretizing the frequency response at frequencies $\omega_k$, where $-\pi \le \omega_1 \le \ldots \le \omega_P \le \pi$. We choose $P$ to be sufficiently big and make sure the $\omega_k \in W_c$ (the region we care about). In a least squares setup, we can solve
$$\operatorname*{argmin}_{\tilde{h}}\ \|A\tilde{h} - \vec{b}\|^2, \qquad A = \begin{bmatrix} e^{-j\omega_1(-N)} & \cdots & e^{-j\omega_1 N} \\ \vdots & \ddots & \vdots \\ e^{-j\omega_P(-N)} & \cdots & e^{-j\omega_P N}\end{bmatrix}, \qquad \vec{b} = \begin{bmatrix} H(e^{j\omega_1}) \\ \vdots \\ H(e^{j\omega_P})\end{bmatrix}$$

Other possible optimizations are Weighted Least Squares or Chebyshev design:


$$\text{WLS: } \min \int_{-\pi}^{\pi} W(\omega)\left|H(e^{j\omega}) - H_d(e^{j\omega})\right|^2 d\omega$$
$$\text{Chebyshev: } \min \max_{\omega\in W_c}\left|H(e^{j\omega}) - H_d(e^{j\omega})\right|$$

Another optimization technique is the min-max ripple design, where we try to control the deviations of the filter from the desired response. We can set up a linear program to do this for us. For example, if we were designing a low pass filter, we could write the LP
$$\begin{aligned}\min\ & \delta \\ \text{s.t. } & 1-\delta \le H_d(e^{j\omega_k}) \le 1+\delta & 0 \le \omega_k \le \omega_c \\ & -\delta \le H_d(e^{j\omega_k}) \le \delta & \omega_c \le \omega_k \le \pi \\ & \delta > 0\end{aligned}$$

