AP5152 - Advanced Digital Signal Processing
Question Bank
Prepared By
Mr. M. SELVARAJ, Assistant Professor / ECE
UNIT I - DISCRETE RANDOM SIGNAL PROCESSING
Wide sense stationary process – Ergodic process – Mean – Variance - Auto-correlation and Auto-correlation matrix
- Properties - Wiener-Khinchin relation - Power spectral density - Filtering of random processes, Spectral
Factorization Theorem, Finite Data records, Simulation of uniformly distributed/Gaussian distributed white
noise – Simulation of Sine wave mixed with Additive White Gaussian Noise.
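A minimal simulation sketch for the last two syllabus topics (Python/NumPy; the sample count, tone frequency, sampling rate, and noise level below are assumed illustrative values, not prescribed ones):

import numpy as np

rng = np.random.default_rng(0)
N = 1000                                  # number of samples (assumed)

# Uniformly distributed white noise on [-0.5, 0.5)
u = rng.uniform(-0.5, 0.5, N)

# Gaussian distributed white noise, zero mean and unit variance
g = rng.normal(0.0, 1.0, N)

# Sine wave mixed with additive white Gaussian noise (AWGN)
f0, fs, sigma_v = 50.0, 1000.0, 0.5       # tone frequency, sampling rate, noise std (assumed)
n = np.arange(N)
x = np.sin(2 * np.pi * f0 * n / fs) + sigma_v * rng.normal(size=N)

# Sample mean, variance and biased autocorrelation estimate of x
mean_x = x.mean()
var_x = x.var()
r_x = np.correlate(x - mean_x, x - mean_x, mode="full") / N

The sample mean, variance, and autocorrelation estimates at the end connect the simulated data back to the WSS and ergodicity quantities listed in the syllabus above.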
PART A (2 Marks)
Q.No Questions BT Level Competence
1 How will you find the Maximum Likelihood estimate BTL 1 Remembering
2 Give the names of special random processes. BTL 1 Remembering
3 What is meant by mean square error? BTL 1 Remembering
4 Compare AR and ARMA BTL 1 Remembering
5 Write the error criterion for LMS algorithm BTL 1 Remembering
6 Draw the spectrum obtained from AR signal modeling. BTL 1 Remembering
7 Illustrate the maximum likelihood criterion BTL 2 Understanding
8 Demonstrate the properties of the Wiener filter. BTL 2 Understanding
9 Relate the AR and MA processes in terms of their expressions. BTL 2 Understanding
10 Name any two applications of the AR model BTL 2 Understanding
11 How does the LMS algorithm reduce noise BTL 3 Applying
12 Examine the necessity of estimation and prediction. BTL 3 Applying
13 Build the discrete Wiener-Hopf equation for signal modeling. BTL 3 Applying
14 Inspect the properties of maximum likelihood criterion BTL 4 Analyzing
15 Illustrate the discrete Wiener-Hopf equations BTL 4 Analyzing
16 Examine the efficiency of a signal estimator BTL 4 Analyzing
17 How can an optimum filter be used in noise cancellation BTL 5 Evaluating
18 Conclude the Yule-Walker method of signal estimation. BTL 5 Evaluating
19 Deduce autoregressive spectrum estimation using the covariance method BTL 6 Creating
20 Design spectrum estimation using the autoregressive moving average model. BTL 6 Creating
PART B (13 Marks)
1 Derive the expressions for the power spectral density and autocorrelation of an ARMA process. (13) BTL 1 Remembering
2 How can the FIR Wiener filter be used for filtering and prediction? (13) BTL 1 Remembering
3 Derive the Wiener-Hopf equations and the minimum mean square error for the FIR Wiener filter (a numerical sketch of these equations follows this part). (13) BTL 1 Remembering
4 Derive the Wiener-Hopf equations and the minimum mean square error for a non-causal Wiener filter. (13) BTL 1 Remembering
5 i) Outline the Wiener-Hopf equations and the minimum mean square error for a causal Wiener filter. (7)
ii) How will you relate the maximum likelihood criterion with the LMS error criterion? (6) BTL 2 Understanding
6 i) Demonstrate AR signal modeling with an example. (10)
ii) State what is meant by mean square error. (3) BTL 2 Understanding
7 Construct the Wiener deconvolution for a stationary process. (13) BTL 2 Understanding
8 i) Inspect the efficiency of the estimator. (7)
ii) Give the difference between parametric and non-parametric estimation with a brief note. (6) BTL 3 Applying
9 Compare and contrast the maximum likelihood criterion with the LMS error criterion. (13) BTL 3 Applying
10 Analyze the concept of noise cancellation using the Wiener filter and explain with an example. (13) BTL 4 Analyzing
11 Examine the working of various signal modeling techniques. (13) BTL 4 Analyzing
12 State and explain in detail parametric estimation using the Yule-Walker method. (13) BTL 4 Analyzing
13 i) Evaluate the ARMA random process. (7)
ii) Explain Widrow's LMS algorithm with a neat diagram. (6) BTL 5 Evaluating
14 Estimate an unknown constant x from measurements that are corrupted by uncorrelated, zero-mean white noise with variance σv². (13) BTL 6 Creating
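Questions 2-4 above revolve around the Wiener-Hopf normal equations R_x w = r_dx and the minimum mean square error ξ_min = r_d(0) - r_dx^T w. A minimal numerical sketch (Python/NumPy; the correlation values are assumed placeholders, not taken from any specific problem):

import numpy as np
from scipy.linalg import toeplitz

r_x = np.array([2.0, 1.0, 0.5])        # r_x(0), r_x(1), r_x(2) of the input (assumed)
r_dx = np.array([1.0, 0.6, 0.3])       # cross-correlation E{d(n) x(n-k)} (assumed)
r_d0 = 1.5                             # r_d(0), power of the desired signal (assumed)

R = toeplitz(r_x)                      # Toeplitz autocorrelation matrix
w = np.linalg.solve(R, r_dx)           # FIR Wiener filter coefficients from R w = r_dx
mmse = r_d0 - r_dx @ w                 # minimum mean square error
print(w, mmse)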
PART – C (15 Marks)
1 Obtain the Yule-Walker equations for autoregressive signal modeling BTL 5 Evaluating
2 The estimated autocorrelation sequence of a random process x(n) for lags k = 0, 1, 2, 3, 4 is
rx(0) = 2, rx(1) = 1, rx(2) = 1, rx(3) = 0.5, rx(4) = 0.
Estimate the power spectrum in each of the following cases (a Yule-Walker sketch follows this part):
i) x(n) is an AR(2) process (4)
ii) x(n) is an MA(2) process (4)
iii) x(n) is an ARMA(1,1) process (4)
iv) x(n) contains a single sinusoid in white noise (3) BTL 6 Creating
3 Consider an MA(q) process that is generated by the difference equation, where w(n) is zero-mean white noise with variance σw².
i) Find the unit sample response of the filter that generates y(n) from w(n). (7)
ii) Find the autocorrelation and power spectrum of y(n). (8) BTL 6 Creating
4 Discuss in detail the various special random processes. (15) BTL 5 Evaluating
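A numerical sketch for case (i) of Question 2 above, solving the Yule-Walker equations for an AR(2) model from the quoted autocorrelations rx(0)=2, rx(1)=1, rx(2)=1 (Python/NumPy; the sign convention x(n) = a1 x(n-1) + a2 x(n-2) + w(n) is an assumption of this sketch):

import numpy as np
from scipy.linalg import toeplitz

r = np.array([2.0, 1.0, 1.0])              # r_x(0), r_x(1), r_x(2) from Question 2

R = toeplitz(r[:2])                        # [[r(0), r(1)], [r(1), r(0)]]
a = np.linalg.solve(R, r[1:3])             # AR(2) coefficients a1, a2 (Yule-Walker equations)
sigma_w2 = r[0] - a @ r[1:3]               # driving white-noise variance (augmented equation)

# AR(2) power spectrum estimate: P(w) = sigma_w2 / |1 - a1 e^{-jw} - a2 e^{-j2w}|^2
w_grid = np.linspace(0.0, np.pi, 512)
den = np.abs(1.0 - a[0] * np.exp(-1j * w_grid) - a[1] * np.exp(-2j * w_grid)) ** 2
P_ar2 = sigma_w2 / den

With these autocorrelation values the sketch gives a1 = a2 = 1/3 and sigma_w2 = 4/3.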
PART A (2 Marks)
Q.No Questions BT Level Competence
1 Why are FIR filters used in adaptive filter applications BTL 1 Remembering
2 What is adaptive noise cancellation BTL 1 Remembering
3 Define misadjustment of adaptive filter BTL 1 Remembering
4 State the properties of Widrow-Hopf LMS adaptive algorithm BTL 1 Remembering
5 Recall the step size of LMS adaptive filter BTL 1 Remembering
6 Express the LMS adaptive algorithm. State its properties BTL 1 Remembering
7 Demonstrate the need for adaptive filters BTL 2 Understanding
8 Outline about channel equalization BTL 2 Understanding
9 Show how to avoid echoes in long distance telephone circuits. BTL 2 Understanding
10 List some applications of Adaptive filters BTL 2 Understanding
11 Illustrate the principle behind LMS algorithm BTL 3 Applying
12 Identify the advantages of FIR adaptive filters BTL 3 Applying
13 Why is LMS normally preferred over RLS BTL 3 Applying
14 Relate the order of the filter with the step size in LMS adaptive filter BTL 4 Analyzing
15 Write the difference between LMS algorithm and RLS algorithm BTL 4 Analyzing
16 Express the principle of steepest descent adaptive FIR filter BTL 4 Analyzing
17 Explain the advantage of normalized LMS over the LMS adaptive filter (a brief update-rule sketch follows Part A) BTL 5 Evaluating
18 Deduce the error function of the exponentially weighted RLS BTL 5 Evaluating
19 Develop the error function of the sliding-window RLS BTL 6 Creating
20 Build the time constant for the steepest descent FIR adaptive filter BTL 6 Creating
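For Questions 13 and 17 above, the essential difference between LMS and normalized LMS is that NLMS divides the step size by the instantaneous tap-input energy, so convergence is far less sensitive to the input power level. A minimal sketch of the two update rules (Python/NumPy; mu, beta, and eps are assumed illustrative values):

import numpy as np

def lms_update(w, xi, d_n, mu=0.01):
    # Widrow-Hoff LMS: w(n+1) = w(n) + mu * e(n) * x(n)
    e_n = d_n - w @ xi
    return w + mu * e_n * xi, e_n

def nlms_update(w, xi, d_n, beta=0.5, eps=1e-6):
    # Normalized LMS: effective step size beta / (eps + ||x(n)||^2)
    e_n = d_n - w @ xi
    return w + (beta / (eps + xi @ xi)) * e_n * xi, e_n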
PART B (13 Marks)
1 i) What is a direct form FIR adaptive filter? (7)
ii) Derive the weight vector update equation of the LMS algorithm. (6) BTL 1 Remembering
2 Discuss adaptive noise cancellation using the LMS algorithm (a runnable sketch follows Part B). (13) BTL 1 Remembering
3 i) Define adaptive echo cancellation. (7)
ii) Explain adaptive channel equalization. (6) BTL 1 Remembering
4 Obtain the Widrow-Hoff LMS adaptation algorithm. (13) BTL 1 Remembering
5 Explain steepest descent algorithm for FIR adaptive filter. (13) BTL 2 Understanding
6 Outline the sliding window RLS algorithm. (13) BTL 2 Understanding
7 Demonstrate the RLS algorithm with the exponentially weighted factor and write down the update equations for the tap-weight vector. (13) BTL 2 Understanding
8 Develop normalized LMS algorithm and the convergence. (13) BTL 3 Applying
9 Develop the first order adaptive filter and explain the LMS adaptation algorithm in detail. (13) BTL 3 Applying
10 i) Give an analytical note on the adaptive filter. (6)
ii) Discuss the minimum MSE criterion used to develop an adaptive FIR filter. (7) BTL 4 Analyzing
11 i) Illustrate the principle of operation of an adaptive filter used as a noise canceller in reconstruction. (7)
ii) Compare and contrast the exponentially weighted RLS with the sliding-window RLS. (6) BTL 4 Analyzing
12 Deduce the Widrow-Hoff LMS adaptive algorithm with its properties. (13) BTL 4 Analyzing
13 Discuss in detail the steps involved in the design of adaptive filters based on the steepest descent method. (13) BTL 5 Evaluating
14 Build an adaptive channel equalizer using the steepest descent principle. (13) BTL 6 Creating
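A runnable sketch of adaptive noise cancellation with a direct-form FIR filter and the Widrow-Hoff LMS update, as asked in Questions 1, 2, and 4 above (Python/NumPy; the signal model, filter order, noise path, and step size are assumed illustrative choices):

import numpy as np

rng = np.random.default_rng(1)
N, p, mu = 5000, 4, 0.01                         # samples, filter order, step size (assumed)

n = np.arange(N)
s = np.sin(2 * np.pi * 0.05 * n)                 # desired clean signal (assumed)
v = 0.5 * rng.normal(size=N)                     # noise reaching the primary sensor
d = s + v                                        # primary input: signal + noise
x = np.convolve(v, [1.0, 0.5, 0.25], mode="full")[:N]   # correlated reference noise (assumed path)

w = np.zeros(p)                                  # adaptive filter weights
e = np.zeros(N)                                  # error signal
for i in range(p, N):
    xi = x[i - p + 1:i + 1][::-1]                # current tap-input vector
    y = w @ xi                                   # filter output (estimate of the noise v(n))
    e[i] = d[i] - y                              # LMS error = enhanced signal estimate
    w = w + mu * e[i] * xi                       # Widrow-Hoff weight update

After convergence, e(n) approximates the clean signal s(n), since the filter output tracks the correlated noise component of d(n).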
PART – C (15 Marks)
1 Analyze the performance of adaptive channel equalization and adaptive echo cancellation. (15) BTL 4 Analyzing
2 Suppose that the input to an FIR LMS adaptive filter is a first order AR process with autocorrelation rx(k) = c·α^|k|, where c > 0 and 0 < α < 1. Suppose the step size µ is µ = 1/(5λmax).
(i) How does the rate of convergence of the LMS algorithm depend upon the value of α? (4)
(ii) What effect does the value of c have on the rate of convergence? (4)
(iii) How does the rate of convergence of the LMS algorithm depend upon the desired signal d(n)? (7) BTL 5 Evaluating
3 The first three autocorrelations of a process x(n) are rx(0) = 1, rx(1) = 0.5, rx(2) = 0.5. Design a two-coefficient LMS adaptive linear predictor for x(n) that has a misadjustment M = 0.05 and find the steady-state mean-square error. (15) BTL 6 Creating
4 Explain in detail and derive the expressions for the exponentially weighted RLS (the recursion is sketched below). (15) BTL 5 Evaluating
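A sketch of the exponentially weighted RLS recursion referenced in Question 4 (Python/NumPy; lam is the forgetting factor and delta the inverse-correlation initializer, both assumed illustrative values):

import numpy as np

def ewrls(x, d, p=4, lam=0.99, delta=100.0):
    w = np.zeros(p)                         # tap-weight vector
    P = delta * np.eye(p)                   # inverse of the exponentially weighted autocorrelation matrix
    e = np.zeros(len(x))
    for i in range(p, len(x)):
        xi = x[i - p + 1:i + 1][::-1]       # tap-input vector
        k = P @ xi / (lam + xi @ P @ xi)    # gain vector
        e[i] = d[i] - w @ xi                # a priori error
        w = w + k * e[i]                    # weight update
        P = (P - np.outer(k, xi @ P)) / lam # update of the inverse correlation matrix
    return w, e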