Lecture 2: Properties of OLS & Towards Inference

Computational Statistics FS2026 · Week 2 · Dr. Jonathan Koh, ETH Zürich · Last updated: February 28, 2026

This lecture builds on the linear model foundations from Week 1. The main goals are to understand the statistical properties of the least squares estimator (unbiasedness, variance, optimality), prove the Gauss–Markov theorem, extend optimality to UMVU under Gaussian errors, and take the first steps towards inference (hypothesis testing and confidence intervals) in the linear model. The lecture also features a group game to build intuition about sampling variability in regression.

1. Recap from Week 1

Last week covered the statistical learning problem, the distinction between prediction and inference, and the difference between an estimate (a fixed number computed from observed data) and an estimator (a random variable, since it depends on the random response $Y$). We set up the linear model $Y = X\beta + \varepsilon$, derived the least squares estimate $\hat{\beta} = (X^\top X)^{-1}X^\top y$, explored the geometry of the projection matrix $P = X(X^\top X)^{-1}X^\top$ (which is idempotent), and saw that the fitted values are $\hat{y} = Py$ while the residuals are $r = (I - P)y$.

2. Group Game & Takeaways

The lecture began with a hands-on group game (~20 minutes): groups of about 10 students collected data (e.g. height, shoe size, hand length) and fitted linear models to it. The exercise was designed to build intuition about sampling variability: each group fits the same model to a different sample, so the resulting coefficient estimates differ from group to group.

3. Properties of the Least Squares Estimator

Now we move from the estimate (lowercase $y$, fixed data) to the estimator (uppercase $Y$, random). The least squares estimator is $\hat{\beta} = (X^\top X)^{-1}X^\top Y$. Since $Y$ is random, $\hat{\beta}$ is also random — and we want to characterise its distribution.

3.1 Unbiasedness

Under the linear model $Y = X\beta + \varepsilon$ with $E[\varepsilon] = 0$, the proof of unbiasedness is straightforward:

$$E[\hat{\beta}] = E[(X^\top X)^{-1}X^\top Y] = (X^\top X)^{-1}X^\top E[X\beta + \varepsilon] = (X^\top X)^{-1}X^\top X\beta = \beta.$$

In words: on average, across many hypothetical datasets drawn from the same model, the OLS estimator hits the true parameter value. There is no systematic over- or under-estimation.
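This can be seen empirically; a minimal simulation sketch (the design matrix, coefficients and noise level below are arbitrary choices, not lecture data):

```python
import numpy as np

# Monte Carlo check of unbiasedness: draw many datasets from the same linear
# model and average the OLS estimates across replications.
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # intercept + one predictor
beta_true = np.array([1.0, 2.0])

n_rep = 5000
estimates = np.empty((n_rep, 2))
for r in range(n_rep):
    y = X @ beta_true + rng.standard_normal(n)            # sigma = 1
    estimates[r] = np.linalg.solve(X.T @ X, X.T @ y)      # OLS via normal equations

beta_bar = estimates.mean(axis=0)                         # close to beta_true
```

The average of the 5000 estimates lands very close to the true $\beta$, while any single estimate can be noticeably off.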

3.2 Covariance of $\hat{\beta}$

We also need to know how much the estimator varies from sample to sample. The key derivation (shown step-by-step on the slides) proceeds by substituting $Y = X\beta + \varepsilon$ and using $\text{Cov}(\varepsilon) = \sigma^2 I$:

$$\text{Cov}(\hat{\beta}) = \text{Cov}\{\beta + (X^\top X)^{-1}X^\top \varepsilon\} = (X^\top X)^{-1}X^\top \cdot \sigma^2 I \cdot X(X^\top X)^{-1} = \sigma^2(X^\top X)^{-1},$$

where the constant $\beta$ drops out of the covariance in the first step.

Key Properties (Moments of OLS)

Under the linear model $Y = X\beta + \varepsilon$, $E[\varepsilon]=0$, $\text{Cov}(\varepsilon) = \sigma^2 I$:

  • $E[\hat{\beta}] = \beta$  (unbiased)
  • $\text{Cov}(\hat{\beta}) = \sigma^2(X^\top X)^{-1}$
  • $\text{Cov}(\hat{Y}) = \sigma^2 P$,   $\text{Cov}(\tilde{r}) = \sigma^2(I - P)$
  • $\text{Var}(\tilde{r}_i) = \sigma^2(1 - P_{ii})$ — note that the residuals are not uncorrelated!
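The covariance formula can be checked by simulation; a sketch with an arbitrary design and $\sigma^2 = 4$ (all values below are illustrative):

```python
import numpy as np

# Compare the empirical covariance of beta_hat across many simulated
# datasets with the theoretical sigma^2 (X^T X)^{-1}.
rng = np.random.default_rng(1)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
beta = np.array([0.5, -1.0, 2.0])
sigma2 = 4.0

XtX_inv = np.linalg.inv(X.T @ X)
theory = sigma2 * XtX_inv                                  # Cov(beta_hat)

n_rep = 20000
B = np.empty((n_rep, p))
for r in range(n_rep):
    y = X @ beta + np.sqrt(sigma2) * rng.standard_normal(n)
    B[r] = XtX_inv @ X.T @ y                               # OLS estimate
empirical = np.cov(B, rowvar=False)                        # sample covariance
```

The sample covariance of the replicated estimates matches $\sigma^2(X^\top X)^{-1}$ entry by entry, up to Monte Carlo noise.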

3.3 Estimating $\sigma^2$

The error variance $\sigma^2$ is unknown and must be estimated. The natural estimator uses the residuals:

$$\hat{\sigma}^2 = \frac{1}{n-p}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2.$$

Why divide by $n - p$ instead of $n$? Because fitting $p$ parameters "uses up" $p$ degrees of freedom. Formally, we can show $E[\hat{\sigma}^2] = \sigma^2$ by using the fact that $\sum \text{Var}(\tilde{r}_i) = \sigma^2(n - \text{tr}(P)) = \sigma^2(n-p)$. So dividing by $n-p$ makes the estimator unbiased.
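A small simulation makes the bias of the $1/n$ divisor visible; the setup below is illustrative:

```python
import numpy as np

# Dividing the residual sum of squares by n - p gives an unbiased estimate
# of sigma^2, while dividing by n is biased towards zero.
rng = np.random.default_rng(2)
n, p = 30, 4
X = np.column_stack([np.ones(n), rng.standard_normal((n, 3))])
beta = np.array([1.0, 2.0, 3.0, 4.0])
sigma2 = 2.0
P = X @ np.linalg.solve(X.T @ X, X.T)           # projection (hat) matrix

est_np, est_n = [], []
for _ in range(20000):
    y = X @ beta + np.sqrt(sigma2) * rng.standard_normal(n)
    rss = np.sum((y - P @ y) ** 2)              # residual sum of squares
    est_np.append(rss / (n - p))                # divide by n - p: unbiased
    est_n.append(rss / n)                       # divide by n: too small on average
mean_np, mean_n = np.mean(est_np), np.mean(est_n)
```

The $1/(n-p)$ version averages to $\sigma^2$; the $1/n$ version averages to $\sigma^2 (n-p)/n$, i.e. systematically too small.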

4. Optimality: The Gauss–Markov Theorem

We now ask: is OLS the "best" we can do? The answer is yes — but only within a specific class of estimators and under specific assumptions.

Gauss–Markov Assumptions

Consider the linear model $Y = X\beta + \varepsilon$ with $\text{rank}(X) = p$, and assume:

  1. $E[\varepsilon] = 0$  (errors have mean zero)
  2. $\text{Cov}(\varepsilon) = \sigma^2 I$  (errors are uncorrelated with constant variance)

Theorem — Gauss–Markov

Under the Gauss–Markov assumptions, $\hat{\beta} = (X^\top X)^{-1}X^\top Y$ is the Best Linear Unbiased Estimator (BLUE) of $\beta$. That is, for any other linear unbiased estimator $\tilde{\beta}$ of $\beta$,

$$\text{Cov}(\tilde{\beta}) - \text{Cov}(\hat{\beta}) \text{ is positive semidefinite.}$$

4.1 Proof Sketch

The proof is elegant and worth understanding in full. Here's the idea:

Any linear estimator can be written as $\tilde{\beta} = AY$ for some matrix $A \in \mathbb{R}^{p \times n}$. For it to be unbiased for all $\beta$, we need $AX = I_p$. Now decompose $A = (X^\top X)^{-1}X^\top + B$ for some matrix $B$. The unbiasedness condition then forces $BX = 0$.

Computing the covariance:

$$\text{Cov}(\tilde{\beta}) = \sigma^2 AA^\top = \sigma^2 BB^\top + \sigma^2(X^\top X)^{-1},$$

where the cross terms vanish because $BX = 0$. Since $BB^\top$ is always positive semidefinite, we get $\text{Cov}(\tilde{\beta}) - \text{Cov}(\hat{\beta}) = \sigma^2 BB^\top \succeq 0$. Equality holds only when $B = 0$, i.e., when the estimator is OLS.
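The mechanics of the proof can be replayed numerically: pick any $B$ with $BX = 0$, form the competing estimator, and verify that the excess covariance is exactly $\sigma^2 BB^\top$ (all matrices below are arbitrary illustrative choices):

```python
import numpy as np

# Build a competing linear unbiased estimator A = (X^T X)^{-1} X^T + B with
# B X = 0, and check that its covariance exceeds the OLS covariance by the
# positive semidefinite sigma^2 B B^T.
rng = np.random.default_rng(3)
n, p = 25, 3
X = rng.standard_normal((n, p))
sigma2 = 1.0

XtX_inv = np.linalg.inv(X.T @ X)
A_ols = XtX_inv @ X.T
P = X @ A_ols                                        # projection onto col(X)
B = rng.standard_normal((p, n)) @ (np.eye(n) - P)    # guarantees B X = 0
A = A_ols + B                                        # still unbiased: A X = I

cov_ols = sigma2 * XtX_inv
cov_alt = sigma2 * A @ A.T
diff = cov_alt - cov_ols                             # equals sigma^2 B B^T
eigs = np.linalg.eigvalsh(diff)                      # all non-negative: PSD
```

The eigenvalues of the difference are non-negative, confirming $\text{Cov}(\tilde{\beta}) \succeq \text{Cov}(\hat{\beta})$ for this (and any) choice of $B$.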

4.2 Limitations of Gauss–Markov

Important Caveat

The Gauss–Markov theorem has clear limitations:

  • It assumes the linear model is correctly specified.
  • It assumes errors are uncorrelated with constant variance $\sigma^2$.
  • It only considers linear and unbiased estimators.

The key insight: we can relax the unbiasedness requirement! By allowing a little bias, we can sometimes achieve much lower overall error (mean squared error). This is the idea behind Ridge and LASSO regression, which will be covered later in the course.

4.3 EduApp Example: Exponential Errors

Example — Are Gauss–Markov assumptions satisfied with $\varepsilon_i \sim \text{Exp}(\lambda)$?

Consider $Y = \beta_0 + \beta_1 X_1 + \varepsilon$ where $\varepsilon \sim \text{Exp}(\lambda)$. At first glance this seems problematic since $E[\varepsilon] = 1/\lambda \neq 0$. But we can rewrite:

$$Y = \underbrace{(\beta_0 + 1/\lambda)}_{\tilde{\beta}_0} + \beta_1 X_1 + \underbrace{(\varepsilon - 1/\lambda)}_{\tilde{\varepsilon}}.$$

Now $E[\tilde{\varepsilon}] = 0$ and $\text{Var}(\tilde{\varepsilon}) = 1/\lambda^2$, and the errors are uncorrelated. So the Gauss–Markov assumptions are satisfied (with a shifted intercept), and OLS is still BLUE. The non-zero mean of the original error simply gets absorbed into the intercept.
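A quick simulation confirms this: the slope is estimated without systematic error, while the intercept estimate centres on $\beta_0 + 1/\lambda$ (parameter values below are illustrative):

```python
import numpy as np

# Fit OLS repeatedly with Exp(lambda) errors: the slope stays unbiased and
# the intercept absorbs the error mean 1/lambda.
rng = np.random.default_rng(4)
n, lam = 200, 2.0
beta0, beta1 = 1.0, 3.0
x = rng.uniform(0, 5, n)
X = np.column_stack([np.ones(n), x])

fits = np.array([
    np.linalg.lstsq(X, beta0 + beta1 * x + rng.exponential(1 / lam, n), rcond=None)[0]
    for _ in range(4000)
])
mean_intercept, mean_slope = fits.mean(axis=0)  # beta0 + 1/lam and beta1
```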

5. The Gaussian Linear Model & UMVU

Can we do better than just "best among linear unbiased"? Yes — if we're willing to make a stronger distributional assumption.

5.1 Gaussian Errors: OLS = MLE

Suppose we strengthen our assumptions to $\varepsilon \sim N(0, \sigma^2 I)$, i.e., the errors are jointly normal. The likelihood function is then proportional to

$$\exp\!\left(-\frac{1}{2\sigma^2}(y - X\beta)^\top(y - X\beta)\right).$$

Maximising this likelihood with respect to $\beta$ is equivalent to minimising $\|y - X\beta\|^2$ — which is exactly the least squares criterion. So under Gaussian errors, OLS and maximum likelihood estimation give the same answer.
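Equivalently, the OLS solution satisfies the score equation $X^\top(y - X\hat{\beta}) = 0$, which is exactly the first-order condition of the Gaussian log-likelihood; a minimal numerical check on simulated data:

```python
import numpy as np

# The Gaussian log-likelihood is maximised where its gradient in beta,
# the score X^T (y - X beta) / sigma^2, vanishes; the OLS solution
# satisfies exactly this normal equation.
rng = np.random.default_rng(7)
n, p = 30, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS = MLE under Gaussian errors
score = X.T @ (y - X @ beta_hat)                  # zero at the maximiser
```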

5.2 UMVU via Cramér–Rao

Theorem — UMVU Optimality

Under the Gaussian linear model, $\hat{\beta}$ is a uniformly minimum variance unbiased (UMVU) estimator of $\beta$. This means it has the smallest variance among all unbiased estimators — not just linear ones.

The proof relies on the Cramér–Rao lower bound. The Fisher information matrix is $I(\beta) = \frac{1}{\sigma^2}X^\top X$, and the Cramér–Rao bound says that no unbiased estimator can have covariance smaller than $I(\beta)^{-1} = \sigma^2(X^\top X)^{-1}$. Since $\text{Cov}(\hat{\beta})$ equals exactly this bound, $\hat{\beta}$ is UMVU.
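For completeness, the Fisher information computation is short; differentiating the Gaussian log-likelihood (with $\sigma^2$ treated as known) twice in $\beta$:

$$\ell(\beta) = -\frac{1}{2\sigma^2}\|y - X\beta\|^2 + \text{const}, \qquad \frac{\partial \ell}{\partial \beta} = \frac{1}{\sigma^2}X^\top(y - X\beta), \qquad -\frac{\partial^2 \ell}{\partial \beta\,\partial \beta^\top} = \frac{1}{\sigma^2}X^\top X.$$

Since the negative Hessian does not depend on the data, $I(\beta) = \frac{1}{\sigma^2}X^\top X$ without taking any expectation.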

Hierarchy of Optimality

Think of this as a progression of increasingly strong results:

  • Gauss–Markov assumptions only: $\hat{\beta}$ is BLUE (best among linear & unbiased).
  • Gauss–Markov + Gaussian errors: $\hat{\beta}$ is UMVU (best among all unbiased), and also the MLE.

However, even UMVU does not mean universally optimal — biased estimators (Ridge/LASSO) can have lower MSE.

6. A Step Towards Inference

With the distributional results in hand, we can now do inference — constructing tests and confidence intervals for the regression coefficients.

6.1 Distributions under Gaussian Errors

Key Distributional Results

Under $\varepsilon \sim N(0, \sigma^2 I)$, the following hold, and $\hat{\beta}$ and $\hat{\sigma}^2$ are independent:

  • $\hat{\beta} \sim N_p\!\big(\beta,\; \sigma^2(X^\top X)^{-1}\big)$
  • $\hat{\sigma}^2 \sim \frac{\sigma^2}{n-p}\,\chi^2_{n-p}$

These results are the foundation for all classical inference in the linear model: $t$-tests, $F$-tests, confidence intervals, and prediction intervals.

6.2 Testing Individual Coefficients: The $t$-Test

To test whether the $j$-th predictor is relevant, we test $H_{0,j}: \beta_j = 0$ against $H_{A,j}: \beta_j \neq 0$. Under $H_0$:

$$T_j = \frac{\hat{\beta}_j}{\sqrt{\hat{\sigma}^2 \, (X^\top X)^{-1}_{jj}}} \;\sim\; t_{n-p}.$$

The denominator is the estimated standard error of $\hat{\beta}_j$. If $|T_j|$ is large (or equivalently the $p$-value is small), we reject $H_0$ and conclude the predictor has a significant effect — conditional on all other predictors being in the model.
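Under $H_{0,j}$ the statistic $T_j$ really does follow a $t_{n-p}$ law; a simulation sketch checking its first two moments (mean $0$, variance $(n-p)/(n-p-2)$), with an arbitrary design:

```python
import numpy as np

# Simulate under H_0: beta_1 = 0 and check that the t statistic has the
# moments of a t_{n-p} distribution.
rng = np.random.default_rng(6)
n, p = 20, 2
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
df = n - p

T = np.empty(20000)
for r in range(20000):
    y = 1.0 + rng.standard_normal(n)            # the slope is truly zero
    bhat = XtX_inv @ X.T @ y
    sigma2_hat = np.sum((y - X @ bhat) ** 2) / df
    T[r] = bhat[1] / np.sqrt(sigma2_hat * XtX_inv[1, 1])
t_mean, t_var = T.mean(), T.var()               # close to 0 and df/(df-2)
```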

Interpreting Individual $t$-Tests

An individual $t$-test for $H_{0,j}$ quantifies the effect of the $j$-th predictor after having subtracted the linear effect of all other predictors. This means it's possible for all individual $t$-tests to be non-significant even when the predictors collectively have a strong effect — especially when predictors are correlated.

6.3 Confidence Intervals

From the $t$-distribution result, we can construct a two-sided confidence interval for $\beta_j$:

$$\hat{\beta}_j \;\pm\; \sqrt{\hat{\sigma}^2(X^\top X)^{-1}_{jj}} \cdot t_{n-p;\,1-\alpha/2},$$

which covers the true $\beta_j$ with probability $1 - \alpha$.
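The coverage claim can be verified by Monte Carlo; the sketch below hard-codes the table value $t_{10;\,0.975} \approx 2.2281$ for $n - p = 10$ (the rest of the setup is illustrative):

```python
import numpy as np

# Check that the 95% confidence interval for the slope covers the true
# value in about 95% of repeated samples.
rng = np.random.default_rng(5)
n, p = 12, 2
x = np.linspace(0, 1, n)
X = np.column_stack([np.ones(n), x])
beta = np.array([0.0, 1.0])
XtX_inv = np.linalg.inv(X.T @ X)
t_crit = 2.2281                                  # t_{10; 0.975} from tables

covered = 0
n_rep = 20000
for _ in range(n_rep):
    y = X @ beta + rng.standard_normal(n)
    bhat = XtX_inv @ X.T @ y
    rss = np.sum((y - X @ bhat) ** 2)
    se = np.sqrt(rss / (n - p) * XtX_inv[1, 1])  # std. error of the slope
    covered += abs(bhat[1] - beta[1]) <= t_crit * se
coverage = covered / n_rep                       # close to 0.95
```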

6.4 Practical Considerations

The normality assumption on $\varepsilon_i$ is often not exactly satisfied in practice. However, thanks to the central limit theorem, for large sample size $n$, the distributional results above remain approximately valid. For strongly non-Gaussian errors, robust methods may be preferable (though not covered in this course).

7. Reading R Output: Body Fat Example

The lecture included an example with body fat data ($n = 247$ men, $p = 13$ quantitative predictors) to practice reading R's summary(lm(...)) output. The key columns are:

| Column | What it shows | Formula |
|---|---|---|
| Estimate | $\hat{\beta}_j$ | $(X^\top X)^{-1}X^\top y$ |
| Std. Error | Estimated std. dev. of $\hat{\beta}_j$ | $\sqrt{\hat{\sigma}^2 (X^\top X)^{-1}_{jj}}$ |
| t value | Test statistic for $H_0: \beta_j = 0$ | Estimate / Std. Error |
| Pr(>\|t\|) | Two-sided $p$-value | From $t_{n-p}$ distribution |

The output also reports the residual standard error $\hat{\sigma} = \sqrt{\hat{\sigma}^2}$ with $n-p$ degrees of freedom, the $R^2$ and adjusted $R^2$ values, and the $F$-statistic for the global null hypothesis that all regression coefficients (except the intercept) are zero.
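The first three columns can be reproduced by hand from the formulas in the table; the sketch below uses simulated data rather than the body fat dataset:

```python
import numpy as np

# Assemble the Estimate, Std. Error and t value columns of summary(lm(...))
# from the closed-form expressions (simulated data, not the body fat data).
rng = np.random.default_rng(8)
n = 100
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
y = X @ np.array([2.0, 1.0, 0.0]) + rng.standard_normal(n)
p = X.shape[1]

XtX_inv = np.linalg.inv(X.T @ X)
estimate = XtX_inv @ X.T @ y                        # "Estimate" column
sigma2_hat = np.sum((y - X @ estimate) ** 2) / (n - p)
std_error = np.sqrt(sigma2_hat * np.diag(XtX_inv))  # "Std. Error" column
t_value = estimate / std_error                      # "t value" column
```

The Pr(>|t|) column would follow by evaluating the $t_{n-p}$ tail probability at each $|T_j|$.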

8. Summary: Levels of Assumptions & Guarantees

| Assumptions | What OLS achieves | Class of comparison |
|---|---|---|
| $E[\varepsilon]=0$, $\text{Cov}(\varepsilon)=\sigma^2 I$ (Gauss–Markov) | BLUE | Linear & unbiased estimators |
| Gauss–Markov + $\varepsilon \sim N(0,\sigma^2 I)$ | UMVU = MLE | All unbiased estimators |
| Relaxing unbiasedness | Ridge/LASSO can beat OLS in MSE | All estimators (incl. biased) |

Key Takeaways