Econometrics applies regression analysis to economics, adding methods developed for specific problems. This article documents the distinct aspects of econometrics.

Econometrics is concerned with the measurement of economic relations; estimating marginal effects is the goal of much empirical research in economics. Examples of economic relations include those between earnings and education or work experience; expenditure on a commodity and household income; the price of a good or service and its attributes; a firm's output and its inputs of labor, capital, and materials; and the inflation and unemployment rates.

A regression model is often referred to as a reduced-form regression if the focus is on predicting the outcome given the predictors rather than on a causal interpretation of the model parameters. Models for causal inference include simultaneous equations models, the potential outcomes model, and others.

## Linear Regression Model

Linear regression model (LM):

$$Y \mid \mathbf{X} \sim \text{Normal}(\mathbf{X}' \beta, \sigma^2)$$

Terminology:

• $Y$: outcome, dependent variable, regressand, variable to be explained, left-hand side variable;
• $\mathbf{X}$: predictor/covariate, independent variable, regressor, explanatory variable, right-hand side variable;
• $\beta$: regression coefficient, commonly interpreted as:
  • Marginal effect: $\beta_k = \frac{\partial y}{\partial x_k}$ ($y$ on $x_k$).
  • Semi-elasticity (percentage growth): $\beta_k = \frac{1}{y} \frac{\partial y}{\partial x_k}$ ($\ln y$ on $x_k$).
  • Effect of a proportional change: $\beta_k = \frac{\partial y}{\partial x_k} x_k$ ($y$ on $\ln x_k$).
  • Elasticity: $\beta_k = \frac{1}{y} \frac{\partial y}{\partial x_k} x_k$ ($\ln y$ on $\ln x_k$); constant elasticity corresponds to a power function: $y = c x_k^{\beta_k}$.
• $u$: residual, error term, disturbance (not explicit in the formula above).

If there are multiple outcomes, the model is called a general linear model. If there is only one predictor, the model is called simple linear regression, to distinguish it from (multiple) linear regression. The residual collects omitted or unobservable variables (latent variables). The model often includes a constant/intercept term.

While many observables in the social sciences have heavy-tailed distributions, the distributions of their logarithms are typically well behaved. For example, in econometrics, monetary variables such as earnings are often log-transformed. The transformation can also improve homogeneity of variances (an alternative technique is weighted least squares). But the interpretations of the coefficients in the two specifications are not the same.
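The log-log specification above can be sketched with a small simulation (the data-generating process and all numbers below are invented for illustration): regressing $\ln y$ on $\ln x$ recovers the elasticity as the slope.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented constant-elasticity DGP: y = e^1 * x^0.6 * e^u, so the elasticity is 0.6
n = 1000
x = rng.lognormal(mean=3.0, sigma=0.5, size=n)  # heavy-tailed level variable; ln(x) is normal
u = rng.normal(scale=0.2, size=n)
y = np.exp(1.0) * x ** 0.6 * np.exp(u)

# OLS of ln(y) on a constant and ln(x): the slope estimates the elasticity
X = np.column_stack([np.ones(n), np.log(x)])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
print(beta)  # roughly [1.0, 0.6]
```

Note that $x$ itself is heavy-tailed (lognormal) while $\ln x$ is well behaved, echoing the point above.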

Model assumptions explained:

1. Linearity and random sampling: $Y_i = \mathbf{X}_i' \beta + u_i$, with $(Y_i, \mathbf{X}_i)$ iid.
2. No perfect multicollinearity (sample matrix has full column rank): $\nexists \lambda \ne 0 : \mathbf{X}' \lambda = 0$.
3. Exogeneity (mean independence): $\mathbb{E}[u|\mathbf{X}] = 0$
• A weaker form: $\mathbb{E}[u] = 0$ and $\mathbb{E}[u \mathbf{X}] = 0$.
4. Spherical residuals: $\text{Var}(u|\mathbf{X}) = \sigma^2 I$
• Homoskedasticity: $\text{Var}(u_i|\mathbf{X}) = \text{Var}(u_j|\mathbf{X})$
• No autocorrelation or intraclass correlation: $\text{Cov}(u_i, u_j|\mathbf{X}) = 0$ for $i \ne j$
5. Normal residuals: $u|\mathbf{X} \sim \text{Normal}$

Multicollinearity (also collinearity) refers to the presence of highly correlated subsets of predictors. Perfect multicollinearity means that a subset of the predictors is linearly dependent.
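Perfect multicollinearity can be detected by checking whether the design matrix has full column rank; a minimal numpy sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# The last column is an exact linear combination of the others: x3 = x1 + x2
X = np.column_stack([np.ones(n), x1, x2, x1 + x2])

# Full column rank fails: the rank is 3 while X has 4 columns
print(np.linalg.matrix_rank(X))  # 3
```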

Violations of the Exogeneity Assumption that still satisfy the weaker condition $\mathbb{E}(u_i|\mathbf{X}_i) = 0$ only occur with dependent data structures (e.g., lagged outcomes in time series). Moreover, it is often hard to assess whether the Exogeneity Assumption is satisfied, even if the model does not explicitly imply a violation.

The Spherical Residuals Assumption is generally too strong, because data in economics and many other fields commonly have heteroskedasticity, autocorrelation (aka serial correlation in time-series data), or intraclass correlation (in clustered data).
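When homoskedasticity fails, a common remedy is the heteroskedasticity-robust (White/HC0) covariance estimator, $(\mathbf{X}'\mathbf{X})^{-1} \mathbf{X}' \operatorname{diag}(\hat u_i^2) \mathbf{X} (\mathbf{X}'\mathbf{X})^{-1}$. A minimal sketch, with an invented heteroskedastic data-generating process:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
u = rng.normal(size=n) * (0.5 + np.abs(x))  # error variance grows with |x|
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Classical (spherical) variance estimate: s^2 (X'X)^{-1}
s2 = resid @ resid / (n - X.shape[1])
se_classical = np.sqrt(np.diag(s2 * XtX_inv))

# White (HC0) sandwich estimate: (X'X)^{-1} X' diag(resid_i^2) X (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
print(se_classical, se_robust)
```

Under this DGP the error variance rises with $|x|$, so the robust standard error for the slope exceeds the classical one.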

For most purposes, the Conditional Normality Assumption is optional: it is not needed for unbiasedness or for asymptotic inference.

## Model Assessment

In macroeconomics, R-squared is often very high; 0.8 or higher is not unusual. In microeconomics, it is typically very low, with 0.1 not unusual. One reason might be that the number of observations in macroeconometrics is often much smaller (e.g., 25 OECD countries) than in microeconometrics (e.g., 10,000 households), while the number of predictors is not that different.

Viewing regression as an approximation of the regression function (a conditional mean), R-squared does not have to be close to one for the model to be justified, in contrast to the practice in physics and mechanics experiments. This is because we only model the average effect, rather than aim to eliminate the error term.
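A quick illustration of a "micro-style" regression on invented data: R-squared is low because the idiosyncratic error dominates, yet the average (marginal) effect is precisely estimated.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented DGP: a real but modest marginal effect (0.3) plus a large error term
n = 10_000
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(scale=1.0, size=n)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# R-squared = 1 - SSR / SST
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(r2, beta[1])  # low R-squared, yet the slope is close to 0.3
```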

## Other Topics

Maximum likelihood estimation (MLE; e.g., for discrete-choice/classification models), generalized method of moments (GMM).

Asymptotic Inference: Asymptotic distribution of regression estimators.

Observational data (case-control sampling) vs. experimental data (design of experiments, DOE).

Model and estimator are in some way analogous to simulation and estimation of random processes.

### Use of Monte Carlo methods in econometrics

• combine an econometric model with available uncertain information on the model parameters;
• highlight the fragility of structural models to uncertain specification;
• select regression models according to the types of prior density specification.

[@Chib1996]

• posterior density of model parameters: the seemingly unrelated regression model;
• correlated model parameters or errors: parameters generated by a Markov process (state-space models); errors generated by the stationary AR(p) process;
• data augmentation: augment the parameter space with latent data of censored observations: Tobit model, binary probit models;
• Bayesian inference: posterior moments, standard errors, marginal density functions; model adequacy: "incomplete model" [@Geweke2010]; modal estimates; likelihood surface; maximum likelihood estimates with diffuse priors;

Special models: models with parameter constraints; models with structural breaks at random points; models with censored and discrete data; models with Markov switching;

Monte Carlo studies: fixed in repeated samples; numerical integration;
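A "fixed in repeated samples" Monte Carlo study can be sketched as follows (all parameter values invented): hold the design matrix fixed, redraw the errors, and compare the sampling distribution of the OLS slope to its theoretical mean and standard deviation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Fixed design: x is drawn once and held constant across replications
n, reps = 100, 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([1.0, 2.0])

slopes = np.empty(reps)
for r in range(reps):
    y = X @ beta_true + rng.normal(size=n)  # redraw only the errors
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    slopes[r] = b[1]

# Theory: E[b1] = 2, sd(b1) = sigma / sqrt(sum (x_i - xbar)^2) with sigma = 1
print(slopes.mean(), slopes.std())
```

The simulated mean and standard deviation of the slope should match the theoretical values closely, illustrating the exact finite-sample distribution under conditional normality.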