TestExercise 3.ipynb - Colab

import pandas as pd
import numpy as np

df = pd.read_excel("TestExercise-3.xlsx")

df.head()

   OBS     INTRATE  INFL     PROD       UNEMPL   COMMPRI    PCE      PERSINC  HOUST
0  1960:1  3.99     1.24095  10.03653   3.41845    7.95262  5.70962  1.68419  -11.88896
1  1960:2  3.97     1.41379   6.96248   3.46575   -8.55856  5.06452  1.33094   -9.83803
2  1960:3  3.84     1.51881   4.49681   2.71993  -16.83599  5.55733  0.89195  -31.54321
3  1960:4  3.92     1.93237   1.50624   2.79820   -5.03145  7.77351  0.67636  -18.93082
4  1960:5  3.85     1.82507  -0.11398   1.72552  -12.44240  4.39179  0.33667  -15.15354

a.) Part a: general-to-specific elimination

# Regress INTRATE on all seven explanatory variables, with a constant term
y = df["INTRATE"]

# Build X: a constant column plus every column except INTRATE and OBS
X = pd.DataFrame(np.ones(len(df)), columns=['const'])
X = pd.concat([X, df.drop(["INTRATE", "OBS"], axis=1)], axis=1)

X.head()

   const  INFL     PROD       UNEMPL   COMMPRI    PCE      PERSINC  HOUST
0  1.0    1.24095  10.03653   3.41845    7.95262  5.70962  1.68419  -11.88896
1  1.0    1.41379   6.96248   3.46575   -8.55856  5.06452  1.33094   -9.83803
2  1.0    1.51881   4.49681   2.71993  -16.83599  5.55733  0.89195  -31.54321
3  1.0    1.93237   1.50624   2.79820   -5.03145  7.77351  0.67636  -18.93082
4  1.0    1.82507  -0.11398   1.72552  -12.44240  4.39179  0.33667  -15.15354

# Fit the regression with the statsmodels OLS estimator
# (X already contains the constant column, so no sm.add_constant is needed)
import statsmodels.api as sm

est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.639
Model: OLS Adj. R-squared: 0.635
Method: Least Squares F-statistic: 164.5
Date: Wed, 17 Jul 2024 Prob (F-statistic): 1.64e-139
Time: 04:36:45 Log-Likelihood: -1449.2
No. Observations: 660 AIC: 2914.
Df Residuals: 652 BIC: 2950.
Df Model: 7
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.2212 0.245 -0.903 0.367 -0.702 0.260
INFL 0.6961 0.062 11.185 0.000 0.574 0.818
PROD -0.0577 0.040 -1.447 0.148 -0.136 0.021
UNEMPL 0.1025 0.097 1.059 0.290 -0.088 0.292
COMMPRI -0.0055 0.003 -1.857 0.064 -0.011 0.000
PCE 0.3444 0.069 4.958 0.000 0.208 0.481
PERSINC 0.2470 0.061 4.077 0.000 0.128 0.366
HOUST -0.0194 0.005 -4.155 0.000 -0.029 -0.010
==============================================================================
Omnibus: 28.142 Durbin-Watson: 0.101
Prob(Omnibus): 0.000 Jarque-Bera (JB): 41.034
Skew: 0.365 Prob(JB): 1.23e-09
Kurtosis: 3.980 Cond. No. 102.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
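
Aside: the next variable to drop can be read off the fitted results programmatically; a minimal sketch using the est object from the cell above:

# Largest p-value among the regressors, excluding the constant;
# for the full model this points at UNEMPL (p of about 0.290)
pvals = est.pvalues.drop('const')
print(pvals.idxmax(), pvals.max())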

# Drop UNEMPL, which has the highest p-value among the regressors, and refit
X = X.drop(["UNEMPL"], axis=1)
est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.638
Model: OLS Adj. R-squared: 0.635
Method: Least Squares F-statistic: 191.7
Date: Wed, 17 Jul 2024 Prob (F-statistic): 1.99e-140
Time: 04:36:59 Log-Likelihood: -1449.7
No. Observations: 660 AIC: 2913.
Df Residuals: 653 BIC: 2945.
Df Model: 6
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.2909 0.236 -1.232 0.218 -0.754 0.173
INFL 0.6933 0.062 11.150 0.000 0.571 0.815
PROD -0.0255 0.026 -0.989 0.323 -0.076 0.025
COMMPRI -0.0065 0.003 -2.308 0.021 -0.012 -0.001
PCE 0.3686 0.066 5.618 0.000 0.240 0.497
PERSINC 0.2516 0.060 4.162 0.000 0.133 0.370
HOUST -0.0210 0.004 -4.760 0.000 -0.030 -0.012
==============================================================================
Omnibus: 21.820 Durbin-Watson: 0.104
Prob(Omnibus): 0.000 Jarque-Bera (JB): 30.851
Skew: 0.303 Prob(JB): 2.00e-07
Kurtosis: 3.868 Cond. No. 97.1
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

# Similarly, drop PROD, now the highest p-value (0.323), and refit
X = X.drop(["PROD"], axis=1)
est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.637
Model: OLS Adj. R-squared: 0.635
Method: Least Squares F-statistic: 229.9
Date: Wed, 17 Jul 2024 Prob (F-statistic): 2.03e-141
Time: 04:37:04 Log-Likelihood: -1450.2
No. Observations: 660 AIC: 2912.
Df Residuals: 654 BIC: 2939.
Df Model: 5
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.2401 0.230 -1.042 0.298 -0.692 0.212
INFL 0.7175 0.057 12.555 0.000 0.605 0.830
COMMPRI -0.0075 0.003 -2.841 0.005 -0.013 -0.002
PCE 0.3405 0.059 5.756 0.000 0.224 0.457
PERSINC 0.2402 0.059 4.048 0.000 0.124 0.357
HOUST -0.0205 0.004 -4.678 0.000 -0.029 -0.012
==============================================================================
Omnibus: 23.848 Durbin-Watson: 0.100
Prob(Omnibus): 0.000 Jarque-Bera (JB): 31.255
Skew: 0.354 Prob(JB): 1.63e-07
Kurtosis: 3.797 Cond. No. 94.1
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

# Finally, drop COMMPRI as well
# (note: its p-value of 0.005 is below 0.05, so this step goes beyond a strict 5% rule)
X = X.drop(["COMMPRI"], axis=1)
est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results
==============================================================================
Dep. Variable: INTRATE R-squared: 0.633
Model: OLS Adj. R-squared: 0.631
Method: Least Squares F-statistic: 282.3
Date: Wed, 17 Jul 2024 Prob (F-statistic): 6.18e-141
Time: 04:37:10 Log-Likelihood: -1454.3
No. Observations: 660 AIC: 2919.
Df Residuals: 655 BIC: 2941.
Df Model: 4
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.2136 0.231 -0.923 0.356 -0.668 0.241
INFL 0.7448 0.057 13.149 0.000 0.634 0.856
PCE 0.3110 0.059 5.311 0.000 0.196 0.426
PERSINC 0.2569 0.059 4.327 0.000 0.140 0.373
HOUST -0.0215 0.004 -4.893 0.000 -0.030 -0.013
==============================================================================
Omnibus: 27.399 Durbin-Watson: 0.100
Prob(Omnibus): 0.000 Jarque-Bera (JB): 33.853
Skew: 0.416 Prob(JB): 4.46e-08
Kurtosis: 3.733 Cond. No. 62.7
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
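
The manual elimination above can be automated; a minimal sketch, assuming a 5% significance threshold. With alpha = 0.05 the loop would stop before the final step, since COMMPRI's p-value (0.005) is already below the threshold, so the last manual drop is a judgment call rather than part of the rule.

# Hypothetical helper: general-to-specific (backward) elimination by p-value
def backward_eliminate(y, X, alpha=0.05):
    X = X.copy()
    while True:
        est = sm.OLS(y, X).fit()
        pvals = est.pvalues.drop('const')  # never drop the intercept
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return est  # every remaining regressor is significant
        X = X.drop(columns=[worst])

# est_backward = backward_eliminate(y, X)  # with X as built at the start of part a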

b.) Part b: specific-to-general selection

# Start with just a constant
X = pd.DataFrame(np.ones(len(df)), columns=['const'])
est = sm.OLS(y, X).fit()
print(est.summary())

# Add INFL
X['INFL'] = df['INFL']
est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.000
Model: OLS Adj. R-squared: 0.000
Method: Least Squares F-statistic: nan
Date: Wed, 17 Jul 2024 Prob (F-statistic): nan
Time: 04:46:43 Log-Likelihood: -1784.9
No. Observations: 660 AIC: 3572.
Df Residuals: 659 BIC: 3576.
Df Model: 0
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 5.3476 0.141 37.958 0.000 5.071 5.624
==============================================================================
Omnibus: 90.501 Durbin-Watson: 0.022
Prob(Omnibus): 0.000 Jarque-Bera (JB): 141.949
Skew: 0.900 Prob(JB): 1.50e-31
Kurtosis: 4.388 Cond. No. 1.00
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
OLS Regression Results
==============================================================================
Dep. Variable: INTRATE R-squared: 0.560
Model: OLS Adj. R-squared: 0.559
Method: Least Squares F-statistic: 836.6
Date: Wed, 17 Jul 2024 Prob (F-statistic): 2.47e-119
Time: 04:46:43 Log-Likelihood: -1514.2
No. Observations: 660 AIC: 3032.
Df Residuals: 658 BIC: 3041.
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 1.6421 0.159 10.352 0.000 1.331 1.954
INFL 0.9453 0.033 28.925 0.000 0.881 1.010
==============================================================================

Omnibus: 5.019 Durbin-Watson: 0.063
Prob(Omnibus): 0.081 Jarque-Bera (JB): 4.841
Skew: 0.193 Prob(JB): 0.0889
Kurtosis: 3.166 Cond. No. 8.46
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

# INFL is highly significant, so keep it; next add PROD
X['PROD'] = df['PROD']
est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.575
Model: OLS Adj. R-squared: 0.573
Method: Least Squares F-statistic: 443.9
Date: Wed, 17 Jul 2024 Prob (F-statistic): 1.06e-122
Time: 04:46:45 Log-Likelihood: -1502.8
No. Observations: 660 AIC: 3012.
Df Residuals: 657 BIC: 3025.
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 1.2489 0.176 7.088 0.000 0.903 1.595
INFL 0.9750 0.033 29.785 0.000 0.911 1.039
PROD 0.0947 0.020 4.805 0.000 0.056 0.133
==============================================================================
Omnibus: 12.297 Durbin-Watson: 0.065
Prob(Omnibus): 0.002 Jarque-Bera (JB): 12.444
Skew: 0.326 Prob(JB): 0.00199
Kurtosis: 3.168 Cond. No. 11.9
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

# PROD is also significant, so keep it; next add UNEMPL
X['UNEMPL'] = df['UNEMPL']
est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.595
Model: OLS Adj. R-squared: 0.593
Method: Least Squares F-statistic: 321.7
Date: Wed, 17 Jul 2024 Prob (F-statistic): 2.16e-128
Time: 04:46:46 Log-Likelihood: -1486.4
No. Observations: 660 AIC: 2981.
Df Residuals: 656 BIC: 2999.
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 1.2041 0.172 6.994 0.000 0.866 1.542
INFL 0.8913 0.035 25.404 0.000 0.822 0.960
PROD -0.0799 0.036 -2.230 0.026 -0.150 -0.010
UNEMPL 0.4978 0.086 5.780 0.000 0.329 0.667
==============================================================================
Omnibus: 47.572 Durbin-Watson: 0.068
Prob(Omnibus): 0.000 Jarque-Bera (JB): 65.048
Skew: 0.580 Prob(JB): 7.50e-15
Kurtosis: 4.009 Cond. No. 12.8
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

# UNEMPL is significant, and PROD's p-value (0.026) is still below the 0.05 threshold, so keep both
# Next, add COMMPRI
X['COMMPRI'] = df['COMMPRI']
est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.598

Model: OLS Adj. R-squared: 0.595
Method: Least Squares F-statistic: 243.3
Date: Wed, 17 Jul 2024 Prob (F-statistic): 6.16e-128
Time: 04:46:46 Log-Likelihood: -1484.5
No. Observations: 660 AIC: 2979.
Df Residuals: 655 BIC: 3001.
Df Model: 4
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 1.1943 0.172 6.949 0.000 0.857 1.532
INFL 0.9009 0.035 25.487 0.000 0.831 0.970
PROD -0.0415 0.041 -1.018 0.309 -0.121 0.039
UNEMPL 0.4349 0.092 4.742 0.000 0.255 0.615
COMMPRI -0.0061 0.003 -1.965 0.050 -0.012 -4.01e-06
==============================================================================
Omnibus: 42.015 Durbin-Watson: 0.066
Prob(Omnibus): 0.000 Jarque-Bera (JB): 56.210
Skew: 0.536 Prob(JB): 6.22e-13
Kurtosis: 3.946 Cond. No. 65.7
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

# Drop PROD given its high p-value (0.309) and add PCE
X['PCE'] = df['PCE']
X = X.drop(["PROD"], axis=1)
est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.615
Model: OLS Adj. R-squared: 0.613
Method: Least Squares F-statistic: 262.0
Date: Wed, 17 Jul 2024 Prob (F-statistic): 2.53e-134
Time: 04:46:47 Log-Likelihood: -1469.6
No. Observations: 660 AIC: 2949.
Df Residuals: 655 BIC: 2972.
Df Model: 4
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.1872 0.240 0.781 0.435 -0.283 0.658
INFL 0.7297 0.046 16.010 0.000 0.640 0.819
UNEMPL 0.1391 0.060 2.329 0.020 0.022 0.256
COMMPRI -0.0100 0.003 -3.704 0.000 -0.015 -0.005
PCE 0.3064 0.055 5.588 0.000 0.199 0.414
==============================================================================
Omnibus: 18.195 Durbin-Watson: 0.074
Prob(Omnibus): 0.000 Jarque-Bera (JB): 20.267
Skew: 0.347 Prob(JB): 3.97e-05
Kurtosis: 3.504 Cond. No. 94.0
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

# Next, add PERSINC
X['PERSINC'] = df['PERSINC']
est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.626
Model: OLS Adj. R-squared: 0.623
Method: Least Squares F-statistic: 219.1
Date: Wed, 17 Jul 2024 Prob (F-statistic): 4.01e-137
Time: 04:46:47 Log-Likelihood: -1460.2
No. Observations: 660 AIC: 2932.
Df Residuals: 654 BIC: 2959.
Df Model: 5
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.0635 0.238 0.267 0.790 -0.404 0.531
INFL 0.8534 0.053 16.040 0.000 0.749 0.958
UNEMPL 0.0787 0.061 1.300 0.194 -0.040 0.198
COMMPRI -0.0086 0.003 -3.217 0.001 -0.014 -0.003

PCE 0.1855 0.061 3.049 0.002 0.066 0.305
PERSINC 0.2658 0.061 4.349 0.000 0.146 0.386
==============================================================================
Omnibus: 12.857 Durbin-Watson: 0.083
Prob(Omnibus): 0.002 Jarque-Bera (JB): 13.800
Skew: 0.286 Prob(JB): 0.00101
Kurtosis: 3.418 Cond. No. 94.5
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

# Drop UNEMPL given its p-value (0.194) and add HOUST
X = X.drop(["UNEMPL"], axis=1)
X['HOUST'] = df['HOUST']
est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.637
Model: OLS Adj. R-squared: 0.635
Method: Least Squares F-statistic: 229.9
Date: Wed, 17 Jul 2024 Prob (F-statistic): 2.03e-141
Time: 04:46:47 Log-Likelihood: -1450.2
No. Observations: 660 AIC: 2912.
Df Residuals: 654 BIC: 2939.
Df Model: 5
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.2401 0.230 -1.042 0.298 -0.692 0.212
INFL 0.7175 0.057 12.555 0.000 0.605 0.830
COMMPRI -0.0075 0.003 -2.841 0.005 -0.013 -0.002
PCE 0.3405 0.059 5.756 0.000 0.224 0.457
PERSINC 0.2402 0.059 4.048 0.000 0.124 0.357
HOUST -0.0205 0.004 -4.678 0.000 -0.029 -0.012
==============================================================================
Omnibus: 23.848 Durbin-Watson: 0.100
Prob(Omnibus): 0.000 Jarque-Bera (JB): 31.255
Skew: 0.354 Prob(JB): 1.63e-07
Kurtosis: 3.797 Cond. No. 94.1
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
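
The sequential procedure in part b can likewise be written as a loop; a sketch, assuming the same candidate ordering and a 5% threshold. It mirrors the manual steps (add one variable, then prune anything that lost significance) without reproducing every judgment call made above.

# Hypothetical helper: specific-to-general (forward) selection with pruning
def forward_select(y, df, candidates, alpha=0.05):
    X = pd.DataFrame(np.ones(len(df)), columns=['const'])
    for var in candidates:
        X[var] = df[var]  # tentatively add the next candidate
        est = sm.OLS(y, X).fit()
        pvals = est.pvalues.drop('const')
        X = X.drop(columns=list(pvals[pvals > alpha].index))  # prune insignificant terms
    return sm.OLS(y, X).fit()

# est_forward = forward_select(y, df,
#     ['INFL', 'PROD', 'UNEMPL', 'COMMPRI', 'PCE', 'PERSINC', 'HOUST'])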

b.) The two procedures end at different models: general-to-specific (part a) keeps INFL, PCE, PERSINC and HOUST, while specific-to-general (part b) additionally retains COMMPRI.

c.) Part c: comparing the part-a model with the Taylor rule

# Refit the model from part a, keeping only the INFL, PCE, PERSINC and HOUST columns (plus a constant)
y = df["INTRATE"]
X = pd.DataFrame(np.ones(len(df)), columns=['const'])
X = pd.concat([X, df.drop(["INTRATE"], axis=1)], axis=1)
X = X.drop(["OBS", "PROD", "UNEMPL", "COMMPRI"], axis=1)

est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.633
Model: OLS Adj. R-squared: 0.631
Method: Least Squares F-statistic: 282.3
Date: Wed, 17 Jul 2024 Prob (F-statistic): 6.18e-141
Time: 04:51:52 Log-Likelihood: -1454.3
No. Observations: 660 AIC: 2919.
Df Residuals: 655 BIC: 2941.
Df Model: 4
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -0.2136 0.231 -0.923 0.356 -0.668 0.241

INFL 0.7448 0.057 13.149 0.000 0.634 0.856
PCE 0.3110 0.059 5.311 0.000 0.196 0.426
PERSINC 0.2569 0.059 4.327 0.000 0.140 0.373
HOUST -0.0215 0.004 -4.893 0.000 -0.030 -0.013
==============================================================================
Omnibus: 27.399 Durbin-Watson: 0.100
Prob(Omnibus): 0.000 Jarque-Bera (JB): 33.853
Skew: 0.416 Prob(JB): 4.46e-08
Kurtosis: 3.733 Cond. No. 62.7
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

# Now fit the Taylor rule, where X is only a constant plus INFL and PROD
X = pd.DataFrame(np.ones(len(df)), columns=['const'])
X['INFL'] = df['INFL']
X['PROD'] = df['PROD']
est = sm.OLS(y, X).fit()
print(est.summary())

OLS Regression Results


==============================================================================
Dep. Variable: INTRATE R-squared: 0.575
Model: OLS Adj. R-squared: 0.573
Method: Least Squares F-statistic: 443.9
Date: Wed, 17 Jul 2024 Prob (F-statistic): 1.06e-122
Time: 04:52:57 Log-Likelihood: -1502.8
No. Observations: 660 AIC: 3012.
Df Residuals: 657 BIC: 3025.
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 1.2489 0.176 7.088 0.000 0.903 1.595
INFL 0.9750 0.033 29.785 0.000 0.911 1.039
PROD 0.0947 0.020 4.805 0.000 0.056 0.133
==============================================================================
Omnibus: 12.297 Durbin-Watson: 0.065
Prob(Omnibus): 0.002 Jarque-Bera (JB): 12.444
Skew: 0.326 Prob(JB): 0.00199
Kurtosis: 3.168 Cond. No. 11.9
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

c.) The part-a model fits better than the Taylor rule: its R-squared is higher (0.633 vs 0.575) and its AIC (2919 vs 3012) and BIC (2941 vs 3025) are both lower.
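
These figures can also be pulled from the fitted results directly; a small sketch, assuming the two fits above were kept under the hypothetical names est_a and est_taylor:

# est_a: the part-a refit; est_taylor: the Taylor-rule fit (hypothetical names)
for name, m in [('part a', est_a), ('Taylor rule', est_taylor)]:
    print(f"{name}: R2 = {m.rsquared:.3f}  AIC = {m.aic:.1f}  BIC = {m.bic:.1f}")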

d.) Part d: diagnostic tests

# RESET test (applied to the Taylor-rule fit held in est)
from statsmodels.stats.diagnostic import linear_reset
print("______________________________________________")
reset_test = linear_reset(est, power=2)
print(reset_test)
print("______________________________________________")

# Chow break test via dummy interactions, with the break placed at observation 20
from statsmodels.formula.api import ols
from statsmodels.stats.api import anova_lm
df['dummy'] = (df.index >= 20).astype(int)
model1 = ols('INTRATE ~ INFL + PROD', data=df).fit()
model2 = ols('INTRATE ~ INFL + PROD + dummy + INFL:dummy + PROD:dummy', data=df).fit()
chow_test = anova_lm(model1, model2)
print(chow_test)
print("______________________________________________")

# ADF unit-root checks on the pre- and post-break subsamples
from statsmodels.tsa.stattools import adfuller
pre_break = df['INTRATE'][:20]
post_break = df['INTRATE'][20:]
adf_pre = adfuller(pre_break)
adf_post = adfuller(post_break)
print('Pre-break ADF statistic:', adf_pre[0])
print('Post-break ADF statistic:', adf_post[0])
print("______________________________________________")

# Jarque-Bera normality test on the residuals
from statsmodels.stats.stattools import jarque_bera
jb_test = jarque_bera(est.resid)
print(jb_test)
print("______________________________________________")

______________________________________________
<Wald test (chi2): statistic=2.5371195394336548, p-value=0.11119747865833446, df_denom=1>
______________________________________________
df_resid ssr df_diff ss_diff F Pr(>F)
0 657.0 3671.394806 0.0 NaN NaN NaN
1 654.0 3669.745555 3.0 1.649251 0.097973 0.961138
______________________________________________
Pre-break ADF statistic: -0.89272514653163
Post-break ADF statistic: -2.603062206099372
______________________________________________
(12.444043308438582, 0.0019852277136483674, 0.32571817051428936, 3.1677538653763313)
______________________________________________
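
The dummy-interaction comparison above places the break at observation 20, an arbitrary choice here. For reference, a classical Chow break test can also be computed directly from split-sample sums of squared residuals; a minimal sketch, with the break index left as a parameter:

import scipy.stats as sps

def chow_test(y, X, break_idx):
    # pooled SSR versus the sum of SSRs from the two subsamples
    k = X.shape[1]
    ssr_pooled = sm.OLS(y, X).fit().ssr
    ssr_1 = sm.OLS(y[:break_idx], X[:break_idx]).fit().ssr
    ssr_2 = sm.OLS(y[break_idx:], X[break_idx:]).fit().ssr
    n = len(y)
    F = ((ssr_pooled - ssr_1 - ssr_2) / k) / ((ssr_1 + ssr_2) / (n - 2 * k))
    return F, sps.f.sf(F, k, n - 2 * k)  # F statistic and p-value

# F, p = chow_test(y, X, break_idx=20)  # same break point as the dummy version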
