TL;DR: you can accurately estimate the hypothetical performance of multivariate linear regression and classification models trained on infinite data using surprisingly little data, even when the number of samples ($n$) is less than the number of features or dimensions ($d$).
This week in lab meeting we discussed ‘Estimating learnability in the sublinear data-regime’ by Kong and Valiant, published at NeurIPS 2018. The key idea is that, with clever statistical methods, you can estimate the hypothetical performance of a model trained on infinite data using only a small amount of training data. The authors provide methods to do so for multivariate linear regression and classification.
For the multivariate regression case, the authors seek to estimate the explained variance, as quantified by:

$$1 - \frac{\mathbb{E}\left[\left(y - \beta^\top x\right)^2\right]}{\mathrm{Var}(y)},$$

where $y$ is the (scalar) output, $x$ is the ($d \times 1$) vector of regressors, and $\beta$ is the optimal least squares weight vector.
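To make this concrete, here is a minimal simulation sketch I put together (my own, not from the paper). For a well-specified linear model $y = \beta^\top x + \varepsilon$ with noise independent of $x$, the quantity above reduces to $\beta^\top \mathrm{Cov}(x)\,\beta \,/\, \mathrm{Var}(y)$; the variable names (`beta`, `Sigma`, `noise_var`) are mine.

```python
# Minimal sketch (assumed generative model, not code from the paper):
# for y = beta^T x + eps with eps independent of x, the explained variance is
# beta^T Cov(x) beta / (beta^T Cov(x) beta + Var(eps)).
import numpy as np

rng = np.random.default_rng(0)
d = 50
beta = rng.normal(size=d) / np.sqrt(d)   # optimal least-squares weight vector
Sigma = np.eye(d)                        # Cov(x); identity here for simplicity
noise_var = 1.0                          # Var(eps)

signal_var = beta @ Sigma @ beta         # variance explained by the linear model
explained_variance = signal_var / (signal_var + noise_var)
print(explained_variance)                # the quantity the estimators below target
```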
In the linear regression setting, an accurate estimator of this quantity already exists (but has a key limitation):

$$1 - \frac{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{\beta}^\top x_i\right)^2}{\widehat{\mathrm{Var}}(y)},$$

where $\hat{\beta}$ is the least squares solution estimated on the same $n$ samples of $(x, y)$ that appear in the sum above. It’s important to note that with finite data, the estimated $\hat{\beta}$ is not the true $\beta$, so your model wouldn’t actually achieve this performance on held-out data.
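As a concrete sketch (my own illustration, not code from the paper), the naive estimator simply fits least squares on the $n$ samples and plugs the in-sample residuals into the formula above:

```python
# Naive in-sample estimator (illustrative sketch): fit least squares on the
# same n samples and compute 1 - mean squared residual / variance of y.
import numpy as np

def naive_explained_variance(X, y):
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
    residuals = y - X @ beta_hat
    return 1.0 - np.mean(residuals**2) / np.var(y)

# It behaves well when n >> d:
rng = np.random.default_rng(0)
n, d = 2000, 50
beta = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = X @ beta + rng.normal(size=n)
print(naive_explained_variance(X, y))  # close to beta @ beta / (beta @ beta + 1)
```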
The key problem with this simple but effective estimator is that it stops working when you have fewer samples than features ($n < d$). In that regime the matrix of regressors (the design matrix $X$) has more columns than rows, so there is no unique least-squares solution, and even if you regularize to pick one, you can fit the training data arbitrarily well, leaving no residuals on which to estimate performance.
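A quick self-contained demonstration of this failure mode (again my own illustration): with pure noise and $n < d$, the least-squares fit interpolates the training data, so the in-sample estimate comes out near 1 even though the true explained variance is 0.

```python
# Failure of the in-sample estimate when n < d (illustrative sketch):
# least squares can interpolate the training data, so residuals vanish.
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 200                          # fewer samples than features
X = rng.normal(size=(n, d))
y = rng.normal(size=n)                  # pure noise: true explained variance is 0
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # minimum-norm solution
residuals = y - X @ beta_hat
print(1.0 - np.mean(residuals**2) / np.var(y))     # ~1.0, wildly optimistic
```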
Kong and Valiant provide a solution to this problem by estimating performance not on the basis of model predictions but directly from the covariance between your regressors and regressand. To get a flavor for this approach, I will give a short alternative derivation of their estimator in the case where $\mathrm{Cov}(x) = I$, $\mathbb{E}[x] = 0$, $\mathbb{E}[y] = 0$. First note that if I choose one observation of $x$ and $y$, then:

$$\mathbb{E}[x\,y] = \beta,$$

because $\mathbb{E}[x\,y] = \mathrm{Cov}(x, y) = \mathrm{Cov}(x)\,\beta = \beta$. And with two independent observations of $x$ and $y$, I can have:

$$\mathbb{E}\left[y_1 y_2\, x_1^\top x_2\right] = \mathbb{E}[x_1 y_1]^\top\,\mathbb{E}[x_2 y_2] = \beta^\top\beta = \|\beta\|^2,$$

so $y_1 y_2\, x_1^\top x_2$ is an unbiased estimate of the explained variance $\mathrm{Var}(\beta^\top x) = \|\beta\|^2$, obtained without ever fitting a model.
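Here is a short sketch of that pairwise numerator as code (my own rendering of the identity-covariance derivation above, not the paper’s general estimator; the function name is made up):

```python
# Pairwise estimator of ||beta||^2 under Cov(x)=I, E[x]=0, E[y]=0 (sketch):
# average y_i y_j x_i^T x_j over distinct pairs i != j; no model fitting needed.
import numpy as np

def norm_beta_sq_estimate(X, y):
    n = len(y)
    Xy = X * y[:, None]                 # row i is x_i * y_i
    total = Xy.sum(axis=0)
    # sum over all ordered pairs, minus the diagonal terms i == j
    pair_sum = total @ total - np.sum(Xy * Xy)
    return pair_sum / (n * (n - 1))     # unbiased for ||beta||^2
```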
Then, to complete the estimate, divide by an unbiased estimator of the variance of $y$ (the sample variance will work), and you have an estimate of the explained-variance fraction, $\|\beta\|^2 / \mathrm{Var}(y)$, with an unbiased numerator and denominator.
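And a self-contained usage sketch (again mine, under the same $\mathrm{Cov}(x) = I$, zero-mean assumptions): divide the pairwise estimate of $\|\beta\|^2$ by the sample variance of $y$ and compare to the ground truth, even with $n < d$.

```python
# Full sketch of the identity-covariance estimator in the sublinear regime.
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 400                              # n < d: the naive estimator fails here
beta = rng.normal(size=d) / np.sqrt(d)       # true weights, ||beta||^2 near 1
X = rng.normal(size=(n, d))                  # Cov(x) = I, E[x] = 0
y = X @ beta + rng.normal(size=n)            # Var(y) = ||beta||^2 + 1

Xy = X * y[:, None]
total = Xy.sum(axis=0)
norm_beta_sq = (total @ total - np.sum(Xy * Xy)) / (n * (n - 1))

estimate = norm_beta_sq / np.var(y, ddof=1)  # unbiased numerator and denominator
truth = (beta @ beta) / (beta @ beta + 1.0)
print(estimate, truth)                       # in the same ballpark despite n < d
```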
Kong and Valiant go beyond the restrictive case I describe above to arbitrary covariance, and they provide an excellent introduction reviewing prior work in this area (note that the solution to the identity-covariance case was first given by Lee H. Dicker, “Variance estimation in high-dimensional linear models,” Biometrika, 2014).