
Indian Institute of Technology Bombay

Summer of Science

Signal Processing

Mentee: Jayesh Verma

Mentor: Richie Shambharkar

Academic Year 2023/24


SoS Midterm Report on Signal Processing

Week 1: Complex Analysis, Basic Properties of Signals, and Systems

1 Complex Analysis
1.1 Introduction to Complex Analysis
Complex analysis studies functions of complex variables, focusing on analyticity,
where functions satisfy the Cauchy-Riemann equations:

∂u/∂x = ∂v/∂y,   ∂u/∂y = −∂v/∂x
Key results include the Cauchy Integral Theorem:
∮_γ f(z) dz = 0

and the Cauchy Integral Formula:


f(a) = (1/2πi) ∮_γ f(z)/(z − a) dz

Residue calculus uses these principles to evaluate complex integrals, particularly via residues at poles. The field is vital in physics and engineering for evaluating real integrals and series expansions.

1.2 Residue Theorem


The residue theorem is a fundamental result in complex analysis, useful for evalu-
ating integrals of analytic functions around closed contours. The theorem states:
∮_γ f(z) dz = 2πi Σ_{k=1}^{n} Res(f, a_k)    (1)

where f(z) is a meromorphic function within and on a simple closed contour γ, and Res(f, a_k) is the residue of f at a_k.

Figure 1: Residue Theorem

1.2.1 Example
Consider the integral:

∮_{|z−i|=1} e^z/(z² + 1) dz    (2)

The poles of the integrand are at z = ±i; only z = i lies inside the contour |z − i| = 1. (The larger contour |z| = 2 would enclose both poles.) The residue at z = i is:

Res(e^z/(z² + 1), i) = e^i/(2i)    (3)

Thus,

∮_{|z−i|=1} e^z/(z² + 1) dz = 2πi · e^i/(2i) = πe^i    (4)

This example illustrates how the residue theorem simplifies the evaluation of complex integrals.
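As a quick numerical sanity check (a Python/NumPy sketch, not part of the original report), the integral around a circle |z − i| = 1 enclosing only the pole at z = i can be evaluated directly from its parametrization and compared against the residue-theorem answer πe^i:

```python
import numpy as np

# Evaluate the contour integral of e^z/(z^2+1) around |z - i| = 1,
# which encloses only the pole at z = i, via z(θ) = i + e^{iθ}.
theta = np.linspace(0.0, 2 * np.pi, 20001)[:-1]   # periodic grid on [0, 2π)
dtheta = 2 * np.pi / 20000
z = 1j + np.exp(1j * theta)
dz = 1j * np.exp(1j * theta)                      # dz/dθ
integral = np.sum(np.exp(z) / (z**2 + 1) * dz) * dtheta

expected = np.pi * np.exp(1j)    # residue theorem: 2πi · e^i/(2i) = π e^i
print(abs(integral - expected))  # essentially zero
```
The rectangle rule converges extremely fast here because the integrand is smooth and periodic in θ.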

2 Basic Properties of Signals


2.1 Signal Classification
Signals are functions that convey information about the behavior of a system. They
can be classified as:

2.1.1 Continuous-Time Signals


Defined for every time t. Examples include:

• x(t) = cos(2πt)

• x(t) = e−t u(t), where u(t) is the unit step function.

Figure 2: Continuous and Discrete signals

2.1.2 Discrete-Time Signals


Defined only at discrete times n. Examples include:
• x[n] = δ[n]

• x[n] = u[n], where δ[n] is the Kronecker delta function and u[n] is the discrete
unit step function.

2.2 Prototypical Signals


• Impulse Signal (δ(t) or δ[n]): Zero everywhere except at the origin; the continuous-time impulse is idealized with unit area, while the discrete-time impulse simply equals one at n = 0.

• Step Signal (u(t) or u[n]): Zero for negative time and one for positive time.

• Exponential Signal: Continuous-time e^{at} or discrete-time a^n, where a is a complex constant.
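These prototypical signals are easy to generate programmatically. A minimal Python/NumPy sketch (the helper names `delta` and `u` are our own, not library functions):

```python
import numpy as np

def delta(n):
    """Discrete impulse δ[n]: 1 at n = 0, 0 elsewhere."""
    return np.where(n == 0, 1, 0)

def u(n):
    """Discrete unit step u[n]: 1 for n >= 0, 0 for n < 0."""
    return np.where(n >= 0, 1, 0)

n = np.arange(-2, 3)
print(delta(n))       # [0 0 1 0 0]
print(u(n))           # [0 0 1 1 1]
print(0.5**n * u(n))  # an exponential signal a^n u[n] with a = 0.5
```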

3 Systems and Their Properties


System Properties in Signal Processing
1. Linearity
A system S is linear if it satisfies both additivity and homogeneity (scaling).

• Additivity:
S[x1 (t) + x2 (t)] = S[x1 (t)] + S[x2 (t)]

• Homogeneity:
S[a · x(t)] = a · S[x(t)]

• Combined Linearity:
S[a · x1 (t) + b · x2 (t)] = a · S[x1 (t)] + b · S[x2 (t)]

Figure 3: Linearity

2. Time-Invariance
A system S is time-invariant if a time shift in the input signal results in an identical
time shift in the output signal.
• For an input x(t) producing output y(t):
S[x(t)] = y(t)

• For a time shift t0:
S[x(t − t0)] = y(t − t0)

3. Causality
A system S is causal if the output at any time t depends only on the input at the
current or past times, not future times.
• Mathematically:
y(t) = S[x(t)] depends only on x(t′ ), for t′ ≤ t

4. Stability
A system S is stable if bounded inputs produce bounded outputs. This is also known
as BIBO (Bounded Input, Bounded Output) stability.
• If x(t) is bounded, i.e.:
|x(t)| ≤ M for all t
• Then the output y(t) must also be bounded, i.e.:
|y(t)| ≤ N for all t

Figure 4: Time-Invariance

5. Memory
A system S has memory if the output at any time t depends on past and/or future
values of the input. It is memoryless if the output at any time t depends only on
the input at that same time.

• Memoryless system:
y(t) = f (x(t))

• System with memory:

y(t) = S[x(τ )], for τ ≤ t (causal) or τ ≥ t (non-causal)
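The linearity test can be run numerically (a Python/NumPy sketch; the two candidate systems are illustrative examples of our own, not from the report): a constant-gain amplifier passes the combined-linearity check, while a squaring system fails it.

```python
import numpy as np

t = np.linspace(0, 1, 100)
x1, x2 = np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)
a, b = 2.0, -3.0

S_scale = lambda x: 3 * x    # candidate 1: amplifier, S[x] = 3x
S_square = lambda x: x**2    # candidate 2: squarer, S[x] = x^2

# Combined linearity: does S[a x1 + b x2] equal a S[x1] + b S[x2]?
lin_ok = np.allclose(S_scale(a * x1 + b * x2),
                     a * S_scale(x1) + b * S_scale(x2))
sq_ok = np.allclose(S_square(a * x1 + b * x2),
                    a * S_square(x1) + b * S_square(x2))
print(lin_ok, sq_ok)  # True False
```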

Week 2: Linear Time-Invariant (LTI) Systems and Convolutions
4 Linear Time-Invariant Systems
4.1 Introduction
Linear Time-Invariant (LTI) systems are pivotal in signal processing. Their behavior
is fully characterized by their impulse response.

4.2 Impulse Response


The impulse response h(t) of a linear time-invariant (LTI) system characterizes the
system’s output when presented with a Dirac delta function δ(t) as input:

h(t) = S[δ(t)]

For any input x(t), the output y(t) can be obtained by convolving x(t) with the
impulse response h(t):
y(t) = (x ∗ h)(t) = ∫_{−∞}^{∞} x(τ)h(t − τ) dτ

Figure 5: LTI System

In the discrete-time domain, the impulse response h[n] is defined similarly, where
the system’s output to a discrete delta function δ[n] is:
h[n] = S[δ[n]]
The output y[n] for a discrete input x[n] is given by the discrete convolution:

y[n] = (x ∗ h)[n] = Σ_{k=−∞}^{∞} x[k]h[n − k]

The impulse response provides a complete characterization of the LTI system, allowing prediction of the system's response to any arbitrary input x(t) or x[n]. The system can be analyzed in the frequency domain by taking the Fourier transform of the impulse response h(t) to obtain the frequency response H(f):

H(f) = F{h(t)} = ∫_{−∞}^{∞} h(t)e^{−j2πft} dt

For discrete-time systems, the discrete-time Fourier transform (DTFT) of h[n] yields:

H(e^{jω}) = Σ_{n=−∞}^{∞} h[n]e^{−jωn}

The impulse response thus serves as a foundational tool in analyzing and design-
ing LTI systems.

4.2.1 Example
Consider a system with impulse response h(t) = e^{−t}u(t) and input x(t) = u(t). For t ≥ 0, the output is:

y(t) = ∫_0^t e^{−τ} dτ = 1 − e^{−t}    (5)
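This continuous-time result can be checked by discretizing the convolution integral (a Python/NumPy sketch under our own discretization, step dt = 0.001):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 5, dt)
h = np.exp(-t)               # h(t) = e^{-t} u(t), sampled for t >= 0
x = np.ones_like(t)          # x(t) = u(t), sampled for t >= 0
y = np.convolve(x, h)[:len(t)] * dt   # Riemann-sum approximation of x * h

expected = 1 - np.exp(-t)    # the analytical output from Eq. (5)
print(np.max(np.abs(y - expected)))   # small discretization error
```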

Figure 6: Impulse Response

Equivalently, in the frequency domain the output satisfies Y(f) = X(f)H(f), where X(f) and H(f) are the Fourier transforms of x(t) and h(t), respectively.

4.3 Convolution and Its Properties


Convolution is a fundamental operation in signal processing, defined for continuous-
time signals as:
(y ∗ h)(t) = ∫_{−∞}^{∞} y(τ)h(t − τ) dτ

For discrete-time signals, convolution is given by:



(y ∗ h)[n] = Σ_{k=−∞}^{∞} y[k]h[n − k]

Figure 7: Calculation of convolution

4.4 Properties of Convolution
1. Commutativity
Convolution is commutative, meaning the order of the signals does not affect the
result:
(y ∗ h)(t) = (h ∗ y)(t)
(y ∗ h)[n] = (h ∗ y)[n]

2. Associativity
Convolution is associative, allowing grouping of convolutions in any order:
(y ∗ (h ∗ g))(t) = ((y ∗ h) ∗ g)(t)
(y ∗ (h ∗ g))[n] = ((y ∗ h) ∗ g)[n]

3. Distributivity
Convolution is distributive over addition, so the convolution of a signal with the
sum of two signals is the sum of the convolutions:
y(t) ∗ (h(t) + g(t)) = (y ∗ h)(t) + (y ∗ g)(t)
y[n] ∗ (h[n] + g[n]) = (y ∗ h)[n] + (y ∗ g)[n]

4. Identity Element
The delta function δ(t) in continuous time and δ[n] in discrete time acts as the
identity element for convolution:
y(t) ∗ δ(t) = y(t)
y[n] ∗ δ[n] = y[n]

5. Time Invariance (Shift Property)
Convolution commutes with time shifts: delaying either one of the signals delays the result by the same amount:
y(t − t0) ∗ h(t) = (y ∗ h)(t − t0)
y[n − n0] ∗ h[n] = (y ∗ h)[n − n0]

6. Convolution in Frequency Domain


Convolution in time domain corresponds to multiplication in the frequency domain.
If Y (f ) and H(f ) are the Fourier transforms of y(t) and h(t) respectively, then:
F{y(t) ∗ h(t)} = Y (f )H(f )
F{y[n] ∗ h[n]} = Y (ejω )H(ejω )
These properties make convolution a powerful tool in the analysis and processing
of signals in both continuous and discrete domains.
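These properties are easy to verify numerically on short finite sequences (a Python/NumPy sketch with arbitrary example signals of our own):

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, -1.0, 0.0])   # padded to the same length as g
g = np.array([2.0, 0.0, 1.0])
d = np.array([1.0])              # the discrete identity element δ[n]

# Commutativity, associativity, distributivity, identity:
assert np.allclose(np.convolve(y, h), np.convolve(h, y))
assert np.allclose(np.convolve(y, np.convolve(h, g)),
                   np.convolve(np.convolve(y, h), g))
assert np.allclose(np.convolve(y, h + g),
                   np.convolve(y, h) + np.convolve(y, g))
assert np.allclose(np.convolve(y, d), y)
print("commutativity, associativity, distributivity, identity all hold")
```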

Figure 8: More on convolution

4.5 Applications of Convolution


Convolution is used in various applications, including filtering, system analysis, and
signal reconstruction. For example, in image processing, convolution with a kernel
can blur or sharpen an image.

Week 3: Fourier Analysis of Signals and Systems


5 Fourier Series
5.1 Introduction
The Fourier series allows representation of a periodic signal as a sum of sinusoidal
functions. A periodic signal x(t) with period T can be expressed as:

x(t) = Σ_{k=−∞}^{∞} c_k e^{jkω0t}    (6)

where ω0 = 2π/T is the fundamental angular frequency, and c_k are the Fourier coefficients given by:

c_k = (1/T) ∫_0^T x(t)e^{−jkω0t} dt    (7)

5.1.1 Example
For a square wave of period T and unit amplitude, equal to sgn(t) on (−T/2, T/2), the Fourier coefficients are:

c_k = (1/T) ∫_{−T/2}^{T/2} sgn(t)e^{−jkω0t} dt = −2j/(πk) for odd k, and 0 for even k    (8)

Thus, the Fourier series is:

x(t) = (4/π) Σ_{k=1,3,5,...} (1/k) sin(kω0t)    (9)
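Summing the first few odd harmonics reproduces the square wave, as a Python/NumPy sketch shows (T = 1 is assumed; the residual ripple near the jumps is the Gibbs phenomenon):

```python
import numpy as np

T = 1.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 1000, endpoint=False)

x = np.zeros_like(t)
for k in range(1, 200, 2):       # odd harmonics k = 1, 3, 5, ..., 199
    x += (4 / np.pi) * np.sin(k * w0 * t) / k

# Away from the discontinuities, the partial sum is close to ±1:
print(x[250])   # t = 0.25, where the square wave equals +1
print(x[750])   # t = 0.75, where the square wave equals -1
```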

Figure 9: Fourier Series

Figure 10: Fourier Transform, Fourier Series, DTFT, DFT

6 Fourier Transform
6.1 Introduction
The Fourier transform generalizes the Fourier series to non-periodic signals, providing a representation in the frequency domain. The Fourier transform of a continuous-time signal x(t) is:

X(f) = ∫_{−∞}^{∞} x(t)e^{−j2πft} dt    (10)

The inverse Fourier transform is:

x(t) = ∫_{−∞}^{∞} X(f)e^{j2πft} df    (11)

6.1.1 Example
For x(t) = e^{−at}u(t) with a > 0, the Fourier transform is:

X(f) = ∫_0^∞ e^{−at}e^{−j2πft} dt = 1/(a + j2πf)    (12)
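Equation (12) can be checked numerically (a Python/NumPy sketch; a = 2 and f = 1.5 are arbitrary test values of our own):

```python
import numpy as np

a = 2.0
dt = 1e-4
t = np.arange(0, 20, dt)          # e^{-2t} is negligible beyond t = 20
x = np.exp(-a * t)                # x(t) = e^{-at} u(t), sampled on t >= 0

f = 1.5
X_num = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt
X_exact = 1 / (a + 2j * np.pi * f)
print(abs(X_num - X_exact))       # small discretization error
```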

6.2 Properties of the Fourier Transform
• Linearity: F{ax1 (t) + bx2 (t)} = aX1 (f ) + bX2 (f )

• Time Shifting: F{x(t − t0 )} = X(f )e−j2πf t0

• Frequency Shifting: F{x(t)ej2πf0 t } = X(f − f0 )

• Scaling: F{x(at)} = (1/|a|) X(f/a)

• Convolution: F{x(t) ∗ h(t)} = X(f )H(f )

7 Applications of Fourier Analysis


7.1 Signal Filtering
In signal processing, filters are used to remove unwanted components from a signal.
The frequency response of a filter is used to design it. For example, a low-pass filter
allows frequencies below a cutoff frequency to pass and attenuates higher frequencies.

7.1.1 Example

A simple RC low-pass filter has a transfer function:

H(f) = 1/(1 + j2πfRC)    (13)

This filter attenuates high-frequency signals, making it useful in audio processing to remove high-frequency noise.
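A short sketch of this transfer function in Python (the component values R = 1 kΩ and C = 1 µF are our own example, giving a cutoff near 159 Hz):

```python
import numpy as np

R, C = 1e3, 1e-6                  # 1 kΩ, 1 µF -> cutoff fc ≈ 159 Hz
fc = 1 / (2 * np.pi * R * C)

def H(f):
    """RC low-pass transfer function H(f) = 1 / (1 + j 2π f RC)."""
    return 1 / (1 + 2j * np.pi * f * R * C)

print(abs(H(0)))         # 1.0: DC passes unchanged
print(abs(H(fc)))        # ≈ 0.707: the -3 dB point at the cutoff
print(abs(H(100 * fc)))  # ≈ 0.01: strong attenuation far above cutoff
```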

7.2 Modulation
Modulation involves varying a carrier signal to transmit information. The Fourier
transform helps analyze the spectrum of modulated signals. For amplitude modu-
lation (AM), a signal x(t) modulates a carrier cos(2πfc t):

s(t) = x(t) cos(2πfc t) (14)

The Fourier transform of s(t) shows the signal components at fc and −fc .

7.2.1 Example

If x(t) is a low-frequency signal, the spectrum S(f ) of the modulated signal s(t) will
have components around fc and −fc , making it suitable for transmission over long
distances.
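The sideband structure can be seen directly with an FFT (a Python/NumPy sketch; the carrier fc = 1000 Hz, message tone fm = 50 Hz, and sampling rate fs = 8000 Hz are our own example values):

```python
import numpy as np

fs = 8000                          # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)        # one second of signal
fc, fm = 1000, 50                  # carrier and message frequencies, Hz
s = np.cos(2 * np.pi * fm * t) * np.cos(2 * np.pi * fc * t)

S = np.abs(np.fft.rfft(s)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peaks = freqs[S > 0.1]
print(peaks)                       # [950. 1050.]: sidebands at fc ± fm
```
Because cos A · cos B = ½[cos(A − B) + cos(A + B)], all the energy sits at fc − fm and fc + fm.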

Week 4: Laplace Transforms and System Analysis
8 Laplace Transform
8.1 Introduction
The Laplace transform is a powerful tool for analyzing linear time-invariant systems.
It extends the Fourier transform to complex frequencies and is defined as:
X(s) = L{x(t)} = ∫_0^∞ x(t)e^{−st} dt    (15)

where s = σ + jω is a complex variable.

8.1.1 Example
For x(t) = e^{−at}u(t) with a > 0, the Laplace transform is:

X(s) = ∫_0^∞ e^{−at}e^{−st} dt = 1/(s + a),  Re(s) > −a    (16)

8.2 Properties of the Laplace Transform


• Linearity: L{ax1 (t) + bx2 (t)} = aX1 (s) + bX2 (s)
• Time Shifting: L{x(t − t0 )u(t − t0 )} = e−st0 X(s)
• Frequency Shifting: L{eat x(t)} = X(s − a)
• Differentiation: L{x′ (t)} = sX(s) − x(0)
• Integration: L{∫_0^t x(τ) dτ} = X(s)/s

9 System Analysis Using Laplace Transform


9.1 Transfer Function
The transfer function H(s) of a system is the Laplace transform of its impulse
response h(t):
H(s) = L{h(t)} (17)
For a system described by a linear differential equation, the transfer function pro-
vides a straightforward way to analyze the system’s behavior.

9.1.1 Example
Consider a system described by the differential equation:
d²y(t)/dt² + 3 dy(t)/dt + 2y(t) = x(t)    (18)
Taking the Laplace transform of both sides and solving for Y (s):
s2 Y (s) + 3sY (s) + 2Y (s) = X(s) (19)

Y(s) = X(s)/(s² + 3s + 2) = X(s)/((s + 1)(s + 2))    (20)
Thus, the transfer function is:

H(s) = 1/((s + 1)(s + 2))    (21)
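By partial fractions, 1/((s + 1)(s + 2)) = 1/(s + 1) − 1/(s + 2), so the impulse response is h(t) = e^{−t} − e^{−2t}. A Python/NumPy sketch can confirm this by evaluating the Laplace integral of h(t) at a test point (s = 0.5 is an arbitrary choice of ours in the region of convergence):

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 30, dt)
h = np.exp(-t) - np.exp(-2 * t)   # candidate impulse response

s = 0.5                           # test point with Re(s) > -1
H_num = np.sum(h * np.exp(-s * t)) * dt
H_exact = 1 / ((s + 1) * (s + 2))
print(abs(H_num - H_exact))       # small numerical error
```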

9.2 Poles and Zeros


The poles of a transfer function H(s) are the values of s that make the denominator
zero, and the zeros are the values that make the numerator zero. Poles and zeros
provide insight into the stability and frequency response of the system.

9.2.1 Example
For the transfer function H(s) = 1/((s + 1)(s + 2)), the poles are at s = −1 and s = −2. There are no zeros.

10 Stability and Frequency Response


10.1 Stability
A system is stable if all the poles of its transfer function have negative real parts.
For example, the system with H(s) = 1/((s + 1)(s + 2)) is stable because both poles have negative real parts.

10.2 Frequency Response


The frequency response of a system is obtained by evaluating the transfer function
H(s) on the imaginary axis s = jω:

H(jω) = H(s)|_{s=jω}    (22)

This provides the system’s response to sinusoidal inputs at different frequencies.

10.2.1 Example
For H(s) = 1/((s + 1)(s + 2)), the frequency response is:

H(jω) = 1/((jω + 1)(jω + 2))    (23)

This can be analyzed to understand the system’s behavior at various frequencies.

Week 5: Sampling and Discrete-Time Signals
11 Sampling Theorem
11.1 Introduction
The sampling theorem states that a continuous-time signal x(t) can be completely
represented by its samples if it is band-limited and the sampling rate is at least twice
the maximum frequency of the signal (Nyquist rate).

11.1.1 Example
Consider a signal x(t) with a maximum frequency of 500 Hz. According to the
sampling theorem, the sampling rate should be at least 1000 samples per second
(Hz).

11.2 Aliasing
Aliasing occurs when a signal is undersampled, causing different frequency compo-
nents to become indistinguishable. To avoid aliasing, a low-pass filter (anti-aliasing
filter) is used before sampling.

Figure 11: Low-pass filter (used as an anti-aliasing filter)

11.2.1 Example
If a 500 Hz signal is sampled at 800 Hz, aliasing will occur, and the sampled signal
will not accurately represent the original signal.
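This aliasing can be demonstrated directly (a Python/NumPy sketch): the samples of a 500 Hz tone taken at 800 Hz coincide exactly with those of a 300 Hz tone, since 800 − 500 = 300.

```python
import numpy as np

fs = 800
n = np.arange(32)
x_500 = np.cos(2 * np.pi * 500 * n / fs)   # the undersampled 500 Hz signal
x_300 = np.cos(2 * np.pi * 300 * n / fs)   # its alias at 300 Hz

print(np.max(np.abs(x_500 - x_300)))       # ~0: sample-for-sample identical
```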

12 Discrete-Time Signals and Systems


12.1 Discrete-Time Signals
A discrete-time signal is a sequence of values obtained by sampling a continuous-time
signal. It is denoted as x[n], where n is an integer.

12.1.1 Example

The discrete-time signal obtained by sampling x(t) = sin(2π·500t) at 2000 Hz is x[n] = sin(2π·500·nTs) = sin(πn/2), where Ts = 1/2000 s is the sampling period. (Sampling at exactly 1000 Hz, the Nyquist rate, would give x[n] = sin(πn) = 0 for every n, so a rate strictly greater than twice the band-edge frequency is needed to capture a sinusoid at the band edge.)

12.2 Discrete-Time Systems


Discrete-time systems process discrete-time signals. They are described by difference
equations. For example, a simple discrete-time system is:

y[n] = x[n] + 0.5y[n − 1] (24)

12.2.1 Example

For x[n] = δ[n] (unit impulse), the output y[n] is:

y[0] = 1, y[1] = 0.5, y[2] = 0.25, ... (25)
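Iterating the difference equation directly reproduces this geometric impulse response (a Python/NumPy sketch, assuming the system starts at rest):

```python
import numpy as np

N = 8
x = np.zeros(N)
x[0] = 1.0                        # x[n] = δ[n]
y = np.zeros(N)
y[0] = x[0]                       # initial rest: y[-1] = 0
for n in range(1, N):
    y[n] = x[n] + 0.5 * y[n - 1]  # the difference equation, Eq. (24)

print(y)   # [1. 0.5 0.25 0.125 ...] = 0.5^n
```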

13 Z-Transform
13.1 Introduction
The Z-transform is a powerful tool for analyzing discrete-time signals and systems.
It is defined as:
X(z) = Z{x[n]} = Σ_{n=−∞}^{∞} x[n]z^{−n}    (26)

The inverse Z-transform is:


x[n] = Z −1 {X(z)} (27)

13.1.1 Example

For x[n] = a^n u[n], the Z-transform is:

X(z) = Σ_{n=0}^{∞} a^n z^{−n} = 1/(1 − az^{−1}),  |z| > |a|    (28)
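A quick numerical check of Eq. (28) (a Python/NumPy sketch; a = 0.5 and z = 2 are arbitrary test values of ours inside the region of convergence |z| > |a|):

```python
import numpy as np

a, z = 0.5, 2.0
n = np.arange(0, 200)              # truncate the rapidly converging sum
X_num = np.sum(a**n * z**(-n))     # partial sum of Σ a^n z^{-n}
X_exact = 1 / (1 - a / z)          # equivalently 1/(1 - a z^{-1})
print(abs(X_num - X_exact))        # ~0: the geometric series converges
```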

13.2 Properties of the Z-Transform


• Linearity: Z{ax1 [n] + bx2 [n]} = aX1 (z) + bX2 (z)

• Time Shifting: Z{x[n − k]} = z −k X(z)

• Frequency Shifting (scaling in z): Z{a^n x[n]} = X(z/a)

• Convolution: Z{x[n] ∗ h[n]} = X(z)H(z)

14 System Analysis Using Z-Transform
14.1 Transfer Function
The transfer function H(z) of a discrete-time system is the Z-transform of its impulse
response h[n]:
H(z) = Z{h[n]} (29)
For a system described by a linear difference equation, the transfer function provides
a straightforward way to analyze the system’s behavior.

14.1.1 Example
Consider a system described by the difference equation:
y[n] − 0.5y[n − 1] = x[n] (30)
Taking the Z-transform of both sides and solving for Y (z):
Y (z) − 0.5z −1 Y (z) = X(z) (31)
Y(z) = X(z)/(1 − 0.5z^{−1}) = X(z)·z/(z − 0.5)    (32)
Thus, the transfer function is:
H(z) = z/(z − 0.5)    (33)

14.2 Poles and Zeros


The poles of a transfer function H(z) are the values of z that make the denominator
zero, and the zeros are the values that make the numerator zero. Poles and zeros
provide insight into the stability and frequency response of the system.

14.2.1 Example
For the transfer function H(z) = z/(z − 0.5), there is a zero at z = 0 and a pole at z = 0.5.

15 Stability and Frequency Response


15.1 Stability
A discrete-time system is stable if all the poles of its transfer function lie inside the unit circle in the z-plane. For example, the system with H(z) = z/(z − 0.5) is stable because the pole at z = 0.5 lies inside the unit circle.

15.2 Frequency Response


The frequency response of a discrete-time system is obtained by evaluating the
transfer function H(z) on the unit circle z = ejω :

H(e^{jω}) = H(z)|_{z=e^{jω}}    (34)
This provides the system’s response to sinusoidal inputs at different frequencies.

15.2.1 Example
For H(z) = z/(z − 0.5), the frequency response is:

H(e^{jω}) = e^{jω}/(e^{jω} − 0.5)    (35)
This can be analyzed to understand the system’s behavior at various frequencies.
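Evaluating Eq. (35) at a few frequencies (a Python/NumPy sketch) shows the system is a mild low-pass: the gain is 2 at ω = 0 and falls to 2/3 at ω = π.

```python
import numpy as np

def H(w):
    """Frequency response H(e^{jω}) = e^{jω} / (e^{jω} - 0.5)."""
    ejw = np.exp(1j * w)
    return ejw / (ejw - 0.5)

print(abs(H(0.0)))     # 2.0: DC gain
print(abs(H(np.pi)))   # ≈ 0.667: gain at the highest discrete frequency
```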

