/r/econometrics


Econometric analysis, newly published papers, and anything to do with the field.

Do not request paid-for help.

Do not ask someone to do your homework.

If requesting help with a homework/quiz question, demonstrate what you have managed to achieve by yourself.

/r/econometrics

28,080 Subscribers

4

DSGE econometrics help

Hi guys! I am trying to learn DSGE modelling and apply it to real data. For learning purposes, I am implementing the canonical RBC model. So far I have gotten as far as deriving the dynamic equations and the state-space representation. However, I can't see how you move from there to the IRFs using observed data. I understand the implementation and the code, but I want to understand the econometrics behind it.

Can you please suggest some good sources, or maybe guide me through this? Your help is very much appreciated.
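For concreteness, here is the mechanical part as I understand it (toy matrices, not my actual RBC solution): once the model is solved into a linear transition x_t = A x_{t-1} + B eps_t, the impulse response at horizon h is just A^h B. What I am missing is the econometrics of estimating A and B from observed data (I gather it is maximum likelihood via the Kalman filter, or Bayesian methods), which is what I would like sources on.

```python
import numpy as np

# Toy state-space transition, NOT a calibrated RBC model:
# x_t = A x_{t-1} + B eps_t
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
B = np.array([[1.0],
              [0.5]])

def irf(A, B, horizon):
    """Response of the states to a unit shock: at horizon h it is A^h B."""
    out = []
    Ah = np.eye(A.shape[0])
    for _ in range(horizon):
        out.append(Ah @ B)   # response at this horizon
        Ah = A @ Ah          # advance to A^(h+1)
    return np.array(out)     # shape (horizon, n_states, n_shocks)

paths = irf(A, B, 20)
```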

2 Comments
2024/10/17
22:14 UTC

1

IPUMS Data Help

Working on a research paper. Struggling with finding the data I need.

I want to see if there is a correlation between the amount of welfare a person receives and the length of time it takes them to re-enter the workforce.

Both of these variables seem to exist, but not in the same data set. The ACS has the welfare data and the CPS has the unemployment-duration data.

I cannot combine these as they likely do not use the same people. Does anyone have any ideas? I’ve tried the department of labor but am running into a similar problem, in addition to the data being a nightmare to decode.

Any help is appreciated!

5 Comments
2024/10/17
21:25 UTC

12

How do I self-study econometrics given some background in statistics?

Hey everyone!

I recently obtained a bachelor's degree in a statistics-heavy program, but I decided that I didn't want to pursue a career directly related to my degree.

For some context, I took three semesters of mathematical statistics, followed by a regression analysis course and a time series analysis course. I also took two (introductory) courses in micro+macro economics, but that was three years ago. For what it's worth, I don't live in the Americas or in Europe.

I'm really interested in going for a career that heavily involves econometrics but I'm struggling to find the starting point.

Of course, I'll need more domain knowledge in economics itself first but how much economics should I know before I start with econometrics? What sources do you recommend? Do you have any tips?

3 Comments
2024/10/17
21:07 UTC

9

Silly question about the difference between time series and classical linear regression

What is the difference between time series regression and standard regression? In exercises using the classical linear model, we often use time series data, such as in the simple CAPM example where we analyze stock returns and market returns using daily data. Why, then, isn't this considered time series regression?

4 Comments
2024/10/17
10:49 UTC

3

Job prospects for an econometrics PhD graduate with no work experience?

Beyond a 3-month internship during undergrad

5 Comments
2024/10/16
14:39 UTC

2

Does this formula make sense?

I was tasked with writing a scientific article about the dynamics of economic gravitational pull. After reading a lot of articles, as a dumb student I couldn't understand everything, but I came up with a somewhat simplified version of the gravity model. Basically, to calculate the economic gravitational pull between two countries, I take ln(trade flow between them), add ln(GDP of country 1, bln $) times the Armington elasticity of country 1, add ln(GDP of country 2, bln $) times the Armington elasticity of country 2, then subtract ln(distance between the two countries in km). So the formula is roughly:

EGP = ln(ΣTF) + ln(GDP1)*AE1 + ln(GDP2)*AE2 - ln(dist)

In my head it makes sense, but I was wondering how it looks to professionals. Thank you.
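To be concrete, here is the formula as a function (the numbers below are made up just to show the calculation, not real trade data):

```python
import math

def egp(trade_flow, gdp1, gdp2, ae1, ae2, dist_km):
    # EGP = ln(TF) + ln(GDP1)*AE1 + ln(GDP2)*AE2 - ln(dist)
    return (math.log(trade_flow)
            + math.log(gdp1) * ae1
            + math.log(gdp2) * ae2
            - math.log(dist_km))

# Made-up inputs, for illustration only
value = egp(trade_flow=120.0, gdp1=1500.0, gdp2=800.0,
            ae1=0.8, ae2=0.9, dist_km=2000.0)
```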

2 Comments
2024/10/16
02:39 UTC

1

Panel Model - Stationarity issue, help

Last semester I wrote my BA project, and did really well. My guidance counselor has since asked me if I want to co-write a continuation of the project with him, which of course I would love to do.

We have begun the process (though I won't be paid yet), and I am immediately confronted with doubts about my ability to do this, but I will just try to push through as I usually do, since it is a great opportunity for me.

The problem I am looking at right now is that of stationarity in a panel model with time dummies (and fixed effects). The model is derived from economic theory, the CES production function, which posits a simple relationship between the capital share and the capital/output ratio, i.e. (sorry for notation):

ln(cap_share_{it}) = c_i + d_t - \phi ln(K/Y)_{it} + \epsilon_{it}

The problem I have is that since I have a macropanel with T>N, I know the estimator relies more heavily on the time-series asymptotics, and as such non-stationarity is a problem. I find the variables to be of mixed order of integration (depending on the sample), I(1) and I(0), and I don't think I can simply difference only the I(1) variable without losing phi. What should I do?

TLDR: How important is stationarity when using a macropanel, i.e. T>N? How do I alleviate the problem when the variables are integrated of different orders, so there is no cointegration? And I can't just difference the I(1) variable, since I believe it would change the economic meaning of the coefficient I am interested in.

0 Comments
2024/10/15
17:48 UTC

7

Help with applying time series analysis please!

Suppose I have spend data for 3 years for a big customer base, where the customers have received a certain treatment X in March and April of every year. There are other treatments that affect the customers' spend as well; these can happen throughout the year or in certain months. I want to isolate the impact of treatment X alone this year, i.e., the impact that X on its own has had on customers' spend behaviour in March and April 2024. What is the best way to go about this? The data I have is the monthly spend of each customer for all three years.

Here's my approach (but I feel like I'm heading in the wrong direction here):

Use time series analysis to forecast the March and April spend in 2024, and subtract it from the actual spend this year to get the marginal impact of treatment X. However, the problem is that treatment X has had previous iterations in the past two years as well, and I'm not sure how that would affect the forecast.

Is there any other angle in which I can approach this problem? Any methods/techniques I could look into? All suggestions are welcome, thank you for reading!
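To illustrate the kind of thing I'm after (simulated data, not my real spend data): if spend is regressed on month-of-year dummies plus a treatment indicator, the seasonal level gets absorbed and the treatment coefficient picks up the treatment-specific bump. Note this toy setup has X only in year 3, which dodges exactly the prior-iterations problem I described:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy monthly data, 3 years: a seasonal pattern plus a treatment that
# adds 5.0 to spend in March/April of year 3 only (simulated data).
month = np.tile(np.arange(12), 3)
year = np.repeat(np.arange(3), 12)
treat = ((year == 2) & np.isin(month, [2, 3])).astype(float)
spend = 100 + 3 * np.sin(2 * np.pi * month / 12) + 5.0 * treat \
        + rng.normal(0, 0.1, 36)

# OLS of spend on treatment + a full set of month-of-year dummies
# (the dummies span the intercept, so none is added separately).
X = np.column_stack([treat] + [(month == m).astype(float) for m in range(12)])
beta, *_ = np.linalg.lstsq(X, spend, rcond=None)
effect = beta[0]  # should recover roughly 5.0
```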

0 Comments
2024/10/15
17:40 UTC

2

A modeler should do a Ph.D. to become strong in Econometrics

13 Comments
2024/10/15
11:00 UTC

29

I built a simple econometrics model. Can anyone guide me on how I can take it further from here?

I built a simple econometrics model to understand relationship between housing price index and major macro-economic indicators.

The factors (independent variables) I took initially were CPI, Unemployment Rate, Real GDP Growth Rate, Nominal GDP, Mortgage Rate, Real Disposable Income, House Supply, Permits for New Houses, and Population, all from FRED via an API.

I started by taking the log of the target variable (Housing Price Index) as well as of Nominal GDP, Real Disposable Income, House Supply, etc. (basically the variables that were not expressed as a "rate"), so that I can interpret the model in terms of elasticities.

I was facing the problem that Real GDP Growth Rate and Nominal GDP are not available every month.

  1. So initially I ran a basic OLS model under 3 ways of handling the missing GDP values: removing months without GDP, making it a quarterly model (i.e., averaging index values within each quarter), and filling missing GDP by linear interpolation.
    1. Based on AIC/BIC (about -1300 for interpolation vs. -400 for the other methods), I decided to go with the interpolation method. The quarterly model had a Durbin-Watson statistic of 0.543 vs. 0.224 for interpolation, favoring the quarterly model, but I chose interpolation nevertheless, giving higher priority to AIC/BIC.
  2. Next, I checked for multicollinearity using VIF scores and found that log Nominal GDP, log Real Disposable Income, and Population had very high VIFs (> 200).
    1. I removed Nominal GDP and Real Disposable Income, as I felt CPI and Real GDP Growth were enough to explain them.
    2. I did not remove Population, as I felt dropping it would drop a major part of the story.
  3. Next, I ran the Breusch-Pagan test to check for heteroscedasticity and got a very low p-value, indicating heteroscedasticity.
    1. I ran a GLS model to correct it, but none of the values changed, for reasons I could not understand.
    2. I ran a weighted GLS model; marginal improvements were seen.
  4. Next, I decided to test for autocorrelation. I ran ACF/PACF plots and diagnosed an AR(1) pattern.
    1. Therefore, I created a new variable, the one-period lag of log HPI (log HPI.shift(1)), and added it to the model as a regressor.

    2. I ran the model, but got too-perfect results: an R-squared of 1.0, with AIC/BIC jumping to -3000 from -800.

    3. Many coefficients changed completely.

This leads to my questions:

  1. In 1.1, was I wrong to go with the interpolation method instead of the quarterly analysis?

  2. How could I have approached multi-collinearity differently?

  3. How could I have handled heteroscedasticity better?

  4. Was I wrong in creating a lagged Housing Price variable? Should I have ignored the autocorrelation?

  5. Was there anything else I could have done better, like creating an instrumental variable? Or introducing new variables from the FRED dataset?

Looking forward to your suggestions and comments.

14 Comments
2024/10/15
01:13 UTC

3

OLS Sampling Error

Hi everyone,

Could someone please help me show that the OLS sampling error is (b - β) = (X'X)^{-1} X'ε?

Been trying to find it for a while but can't seem to get a direct answer! Thanks in advance :)
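Edit: writing out the substitution in case it helps someone else searching. Starting from b = (X'X)^{-1}X'y and plugging in the model y = Xβ + ε:

```latex
b = (X'X)^{-1}X'y
  = (X'X)^{-1}X'(X\beta + \varepsilon)
  = (X'X)^{-1}(X'X)\beta + (X'X)^{-1}X'\varepsilon
  = \beta + (X'X)^{-1}X'\varepsilon
```

Subtracting β from both sides gives (b - β) = (X'X)^{-1}X'ε.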

1 Comment
2024/10/14
15:14 UTC

3

PSM-DID Help

I am writing my undergrad thesis on credit access and its effect on welfare. The data I use, however, isn't a panel but a repeated cross-section that doesn't track the same households. It has a dummy variable for whether a household has taken out a loan, and categorical ones for the source of the loan.

To control for the non-random process of taking out and being granted a loan, we exploit the fact that the presence and coverage of banks and non-bank financial institutions have grown in between 2019 and 2022. Since we are talking about the "expansion of financial access", how should we define what a "treated" and an "untreated" observation is?

I would think that a treated household would be one that did not take out a loan in 2019 but did in 2022. While the control would be the households that took out loans in both years. However, I find it difficult to operationalize as the dataset doesn't track the same households.

As far as I understand it, the dependent variable in the logit regression for the PSM should then be the propensity to be "treated", and not the propensity to take out a loan. But if I follow the former, then all "treated" observations would be 2022 loan-takers, regardless of whether a matching household took out a loan in 2019.

Should I do PSM on the 2019 data first and then find a match in the 2022, and only then should I define what a treatment is? Should I do PSM for the combined data?

TIA!

7 Comments
2024/10/14
10:46 UTC

1

County-by-month and month-by-year fixed effects question

I’m a master’s in economics student and for my thesis my advisor says I should use county-month and month-year fixed effects rather than county and month fixed effects. I understand two-way fixed effects decently well, but never learned about this case, and when I google these types of fixed effects there is literally no information on them.

Could someone please help me understand county-by-month and month-by-year fixed effects? Are there any resources I could learn more about this? I would greatly appreciate any help here as I am lost
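From what I have been able to piece together (please correct me if this is wrong): these are just dummies for each interacted pair, so county-by-month means one fixed effect per (county, calendar-month) cell, and month-by-year one per (calendar month, year) cell. A toy construction in pandas with made-up counties and dates:

```python
import pandas as pd

# Made-up panel, for illustration only
df = pd.DataFrame({
    "county": ["A", "A", "B", "B", "A", "B"],
    "date": pd.to_datetime(["2020-01-15", "2020-02-15", "2020-01-15",
                            "2020-02-15", "2021-01-15", "2021-01-15"]),
})
df["month"] = df["date"].dt.month
df["year"] = df["date"].dt.year

# County-by-month FE: absorbs county-specific seasonality.
df["county_month"] = df["county"] + "_" + df["month"].astype(str)
# Month-by-year FE: absorbs shocks common to all counties in a given month.
df["month_year"] = df["month"].astype(str) + "_" + df["year"].astype(str)

dummies = pd.get_dummies(df[["county_month", "month_year"]])
```

In practice you would rarely build the dummies by hand; regression packages take the interaction directly, e.g. C(county):C(month) in a statsmodels formula or i.county#i.month in Stata.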

4 Comments
2024/10/14
01:55 UTC

16

What are some simple projects I can do to establish an amateur-level understanding of econometrics?

Basically, can you recommend me any datasets from Kaggle or any other platform?

I have a data science background and I would love to explore econometrics. What's the "Titanic" dataset equivalent for econometrics, i.e., datasets that would help me understand econometrics comprehensively?

7 Comments
2024/10/13
04:23 UTC

5

Any blogdown websites that post study results using econometrics?

Hi

Does anyone know websites that post about their studies/research using statistical or econometric methods, created with R blogdown? Or just websites that post about studies/research based on econometrics/statistics, not necessarily created with blogdown.

Thanks in advance!

1 Comment
2024/10/12
18:58 UTC

2

Code for Variance Ratio Test

What do you think about this code to test the variance ratio from Lo and MacKinlay (1988)? I copied it from this link: https://mingze-gao.com/posts/lomackinlay1988/

The issue is that I have already tried some other ways, like this YouTube video, and I never get the same results with the same dataset: https://www.youtube.com/watch?v=LZHQdcaC964&t=53s

Please, would appreciate some help!

CODE:

import numpy as np
import pandas as pd
from scipy import stats
from scipy.stats import norm

def estimate_python(data, k_vals=[2, 4, 8, 16]):
    results = []
    prices = data['Price'].to_numpy(dtype=np.float64)
    log_prices = np.log(prices)
    rets = np.diff(log_prices)
    T = len(rets)
    mu = np.mean(rets)
    var_1 = np.var(rets, ddof=1, dtype=np.float64)

    # Some other stats
    descriptive_stats = {
        'Mean': mu,
        'Median': np.median(rets),
        'Maximum': np.max(rets),
        'Minimum': np.min(rets),
        'Std. Dev.': np.std(rets),
        'Skewness': stats.skew(rets),
        'Kurtosis': stats.kurtosis(rets),
        'Jarque-Bera': stats.jarque_bera(rets)[0],
        'Observations': T,
    }

    for k in k_vals:
        # k-period overlapping returns
        rets_k = (log_prices - np.roll(log_prices, k))[k:]
        m = k * (T - k + 1) * (1 - k / T)
        var_k = 1 / m * np.sum(np.square(rets_k - k * mu))

        # Variance ratio
        vr = var_k / var_1

        # Phi1: asymptotic variance under homoskedasticity
        phi1 = 2 * (2 * k - 1) * (k - 1) / (3 * k * T)
        z_phi1 = (vr - 1) / np.sqrt(phi1)

        # Calculate p-value for two-tailed test
        p_value = 2 * (1 - norm.cdf(abs(z_phi1)))

        # Store the results in a list
        results.append({
            'k': k,
            'Variance Ratio': vr,
            'z-Stat': z_phi1,
            'p-Value': p_value,
        })

    # Convert results to pandas DataFrames
    results_df = pd.DataFrame(results)
    descriptive_df = pd.DataFrame([descriptive_stats])

    return results_df, descriptive_df

0 Comments
2024/10/12
14:10 UTC

6

Converting Spot Exchange rates to annualised returns

Hey everyone. I'm doing a project which requires converting monthly spot exchange rates to annualised returns. Trying to figure out how to go about this and code it in R. Any ideas? Thanks
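To show what I mean (Python with made-up numbers; the same lines translate almost one-for-one to R): with monthly spot rates S_t, the log version annualises as 12 times the mean monthly log return, and the simple version compounds the total change up to a yearly rate:

```python
import numpy as np

# Made-up monthly spot rates (13 observations = 12 monthly returns)
spot = np.array([1.10, 1.12, 1.11, 1.15, 1.14, 1.16, 1.18,
                 1.17, 1.19, 1.20, 1.22, 1.21, 1.25])

monthly_log = np.diff(np.log(spot))
ann_log = 12 * monthly_log.mean()     # annualised log return
# Simple return, compounded to an annual rate
ann_simple = (spot[-1] / spot[0]) ** (12 / (len(spot) - 1)) - 1
```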

7 Comments
2024/10/11
22:07 UTC

6

Data processing

Hey guys,

This is my first post, so please forgive me for any (spelling) mistakes. I'm currently studying for a Master's degree in Economics and am doing my semester abroad. Here we have to write a term paper over the course of the semester, which in itself is "new" for me; in Germany, we only had exams or assignments at the end of the semester.

The term paper itself wouldn't be a big problem if it weren't for the empirical part. Our lecturer has given us a data set that we are supposed to use to confirm or refute the theory we had previously worked out. My problem is that although I took Statistics 1 through 3, we never learnt any practical application. This means I don't know how R, Stata or Python could help me analyse the data.

As I still have three weeks until the exam, I wanted to ask you whether I still have enough time to learn one of the three languages, and if so, which one you would recommend. Is there an online course, slide set or similar for this?

Thank you in advance

6 Comments
2024/10/11
11:31 UTC

3

Looking for suggestion

Guys, I have been looking for a topic for my PhD in management and economics where I can use advanced econometric techniques like DID or RDD. Any suggestions I could explore, or platforms where I can find them?

4 Comments
2024/10/10
10:40 UTC

2

Control sector for Diff-in-Diff

Hi, I am doing diff-in-diff and my treated companies are in the gas industry. I would like to know if I have to select a somewhat similar sector (like the oil industry) for my control group, or if I can select a sector that is not very similar (something like fishing). Thank you!

4 Comments
2024/10/10
04:17 UTC

11

Hi, taking my first econometrics course

Hi, I'm a 4th-semester student and soon I will be taking the first of my two econometrics courses. Besides linear algebra and statistics, can anyone give me some tips or "life hacks" to get a good grade? Thanks

8 Comments
2024/10/10
01:10 UTC

2

HELP TO DEFINE A FRAMEWORK

Hey, guys, I need some help! I'm an Electrical Engineering major pursuing a Master’s and have been working as a Data Scientist for almost 3 years. In my Master’s thesis, I want to use Causal Inference/Econometrics to analyze how Covid-19 impacted Non-Technical Losses in the energy sector.

With that in mind, what model could I use to analyze this? I have a time series dataset of Non-Technical Losses and can gather more data about Covid-19 and other relevant datasets. What I want to do is identify the impact of Covid-19 in a time series dataset with observational data of Non-Technical Losses of Energy.

6 Comments
2024/10/09
18:43 UTC

3

Help me with endogeneity issue

I'm working with panel data where the variables are group-level indicators of performance. Put simply, the predictor is a group-level aggregated quantity (e.g., the average reputation of members), time-varying over several periods, and the predicted variable is group performance. I have reason to believe that the predictor is not strictly exogenous, since at times the group is constituted with the aim of making it perform well. However, a "part" of the predictor is exogenous: it arises when a group member suddenly exits the group in one of the periods (death or some other reason, which is strictly exogenous).

So, for identification, I am thinking of splitting the predictor into two components in my dataset. The first is the group-level (reputation) measure assuming no exogenous shock, i.e., no group member has left the group. The second component would be delta(predictor) ONLY when there is an exogenous shock (death or some other reason); this delta(predictor) would be negative if the exiting group member has an above-average reputation, and positive if the exiting member has a below-average reputation. In any case, the second component would be the exogenous component of the predictor, and its coefficient should ideally be significant when testing the proposed hypothesis.

Now, to slightly complicate matters, I am using Cox regression (the predicted variable is a duration) with time-varying covariates, but that is beside the point, since the essential question I have for you all is whether my strategy makes sense.

1 Comment
2024/10/08
16:53 UTC

1

R package for system GMM

Hey!
I want to apply a system GMM in R (panel data and multiple endogenous variables).
I think fixest does not do it.

Is pdynmc a good option?

What would you suggest?

0 Comments
2024/10/08
13:45 UTC

9

Testing b_1 + b_2 = 1 in a regression

Hi all,

Recently, I was asked, given the linear regression Y = b_0 + b_1X_1 + b_2X_2 + e, how we would test the hypothesis b_1 + b_2 = 1 using a t test.

Here is my approach:

Let g = b_1 + b_2. Then we have y = b_0 + (g - b_2)X_1 + b_2X_2 + e = b_0 + gX_1 + b_2(X_2 - X_1) + e.

Thus, we can just test the null hypothesis that g = 1 compared to the alternative that g is not 1. So we construct a test statistic: t = (g - 1) / s.e.(g)

However, the problem hinted that I may need to redefine the dependent variable, which I do not do, nor do I understand why it is necessary. In general, I do not understand reparameterization, and was hoping someone could explain.
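Edit: a quick simulated check (toy data) that convinced me the reparameterisation is just a change of variables: regressing y - X_1 on X_1 and (X_2 - X_1) makes the coefficient on X_1 equal to g - 1 = b_1 + b_2 - 1, so the printed t-stat on X_1 is directly the test of g = 1. I think that is what the hint about redefining the dependent variable means.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
X1 = rng.normal(size=n)
X2 = rng.normal(size=n)
y = 2.0 + 0.6 * X1 + 0.3 * X2 + rng.normal(scale=0.5, size=n)

ones = np.ones(n)
# Original regression: y on (1, X1, X2)
b = np.linalg.lstsq(np.column_stack([ones, X1, X2]), y, rcond=None)[0]
# Reparameterised: (y - X1) on (1, X1, X2 - X1);
# the coefficient on X1 is now g - 1
c = np.linalg.lstsq(np.column_stack([ones, X1, X2 - X1]), y - X1,
                    rcond=None)[0]
# c[1] equals b[1] + b[2] - 1 up to floating point
```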

1 Comment
2024/10/08
01:36 UTC

3

LU decomposition, Matlab translation to R

Hello everyone,

 

In my job as a macroeconomist, I am building a structural vector autoregressive model.

I am translating the Matlab code of the paper "Narrative Sign Restrictions" by Antolin-Diaz and Rubio-Ramirez (2018) to R, so that I can use it along with other functions I am comfortable with.

I have a matrix, N'*N, to decompose. In Matlab, its determinant is Inf and the decomposition works. In R, the determinant is 0, and the decomposition, logically, fails, since the matrix is singular.

The problem comes up at this point of the code:

 

Dfx = NumericalDerivative(FF,XX);   % m x n matrix
Dhx = NumericalDerivative(HH,XX);   % (n-k) x n matrix
N = Dfx*perp(Dhx');                 % perp(Dhx') - n x k matrix
ve = 0.5*LogAbsDet(N'*N);

LogAbsDet computes the log of the absolute value of the determinant of a square matrix using an LU decomposition.

Its first line is:

[~,U,~] = lu(X);

In Matlab, the determinant of N'*N is "Inf". This isn't a problem, however: the LU decomposition still runs, and it provides the U matrix I need to progress.

In R, the determinant of N'*N is 0. Hence, when running my version of the code in R, I get an error stating that the LU decomposition fails because the matrix is singular.

 

Here is my R version of the problematic section:

  Dfx <- NumericalDerivative(FF, XX)          # m x n matrix

  Dhx <- NumericalDerivative(HH, XX)      # (n-k) x n matrix

  N <- Dfx %*% perp(t(Dhx))             # perp(t(Dhx)) - n x k matrix

  ve <- 0.5 * LogAbsDet(t(N) %*% N)

 

All the functions present here are my reproductions of the paper's Matlab code.

This section is part of a function named "LogVolumeElement", which itself works properly in another portion of the code. Hence, my suspicion is that the LU decomposition in R behaves differently from Matlab's when faced with zero-determinant matrices.

In R, I have tried the functions:

lu.decomposition(), from package "matrixcalc"

lu(), from package "Matrix"

Would you know where the problem could originate, and how I could fix it?

For now, the only idea I have is to call this Matlab function directly from R, since Mathworks doesn't let me see how their lu() function is implemented.
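Edit: one workaround I am considering (a sketch, not tested against the paper's full code) is to skip the explicit determinant entirely. numpy's slogdet computes log|det| from the LU factorisation without ever forming the determinant, which is exactly what LogAbsDet wants; in R, determinant(X, logarithm = TRUE) plays the same role. The toy below shows det overflowing to Inf (the Matlab symptom) while the log-determinant stays finite:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gram matrix at an extreme scale: det() overflows to Inf, while
# slogdet stays finite because it works on the log scale throughout.
N = rng.normal(size=(400, 5)) * 1e80
G = N.T @ N

naive_det = np.linalg.det(G)            # overflows to inf
sign, logabsdet = np.linalg.slogdet(G)  # finite log|det|, sign = +1
```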

3 Comments
2024/10/07
13:15 UTC

1

Econometrics Masters final project

Hi, I'm gathering ideas for my econometrics Master's final project. Please share ANYTHING that comes to mind.

You can use whatever model you want, analyse whatever you want.

Thank you in advance!

1 Comment
2024/10/07
11:03 UTC

3

Suggest YouTube tutorials for understanding data collection and manipulation

Hello. As you all already know, to do an econometric analysis we first need to gather the data and make it ready for use, and that's where my problem comes in: I never understood how exactly we manipulate the data. Every analysis I have done was based on professors giving us the data; we were never put in the position of gathering it ourselves. Does anyone know any good YouTube tutorials or seminars on this? I have searched, but I am not in a position to tell the good ones from the bad ones.

2 Comments
2024/10/07
09:58 UTC

5

Control group for difference-in-difference-in-differences

Hi, I am working on a triple diff and I need help choosing my control group. I need to select countries that will act as non-treatment ones. I tried to collect data for countries that have quite a few similarities with the treatment country, but the data for them was very limited. Is it important for the control group to be similar to the treatment country when doing triple diff, or is it fine if I select some countries for which I could argue a bit why I chose them?

Thank you!

2 Comments
2024/10/06
23:33 UTC
