Term | Description |
---|---|
ŷ | fitted value |
x_{k} | k^{th} term. Each term can be a single predictor, a polynomial term, or an interaction term. |
b_{k} | estimate of k^{th} regression coefficient |
The standard error of the fitted value in a regression model with one predictor is:
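The equation image did not survive extraction; a standard reconstruction of the single-predictor form, consistent with the terms defined in the table below, is:

```latex
s_{\hat{y}} = \sqrt{s^{2}\left(\frac{1}{n} + \frac{(x_{0} - \bar{x})^{2}}{\sum_{i=1}^{n}(x_{i} - \bar{x})^{2}}\right)}
```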
The standard error of the fitted value in a regression model with more than one predictor is:
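The multi-predictor equation image is also missing; the standard matrix form, using the design matrix and predictor vector defined below, is:

```latex
s_{\hat{y}} = \sqrt{s^{2}\,\mathbf{x}_{0}'\,(\mathbf{X}'\mathbf{X})^{-1}\,\mathbf{x}_{0}}
```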
For weighted regression, include the weight matrix in the equation:
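The weighted version replaces X'X with X'WX; a standard reconstruction of the missing equation is:

```latex
s_{\hat{y}} = \sqrt{s^{2}\,\mathbf{x}_{0}'\,(\mathbf{X}'\mathbf{W}\mathbf{X})^{-1}\,\mathbf{x}_{0}}
```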
When you use a test data set or K-fold cross-validation, the formulas are the same: the value of s^{2}, the design matrix, and the weight matrix are all from the training data.
Term | Description |
---|---|
s^{2} | mean square error |
n | number of observations |
x_{0} | new value of the predictor |
x̄ | mean of the predictor |
x_{i} | i^{th} predictor value |
x_{0} | vector of values that produce the fitted values, one for each column in the design matrix, beginning with a 1 for the constant term |
x'_{0} | transpose of the new vector of predictor values |
X | design matrix |
W | weight matrix |
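The interval equation image is missing; the standard form of the confidence interval for the fitted value, using the terms defined above, is:

```latex
\hat{y} \pm t_{v}\sqrt{s^{2}\,\mathbf{x}_{0}'\,(\mathbf{X}'\mathbf{X})^{-1}\,\mathbf{x}_{0}}
```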
For weighted regression, the formula includes the weights:
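A standard reconstruction of the missing weighted interval, with the weight matrix inserted as in the standard error formula, is:

```latex
\hat{y} \pm t_{v}\sqrt{s^{2}\,\mathbf{x}_{0}'\,(\mathbf{X}'\mathbf{W}\mathbf{X})^{-1}\,\mathbf{x}_{0}}
```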
where t_{v} is the 1–α/2 quantile of the t distribution with v degrees of freedom for a two-sided interval. For a one-sided bound, t_{v} is the 1–α quantile of the t distribution with v degrees of freedom.
When you use a test data set or k-fold cross-validation, the degrees of freedom and the mean square error are from the training data set.
Term | Description |
---|---|
ŷ | fitted value |
t_{v} | quantile from the t distribution |
v | degrees of freedom |
s^{2} | mean square error |
h_{i} | leverage for the i^{th} observation |
w_{i} | weight for the i^{th} observation |
Term | Description |
---|---|
y_{i} | i^{th} observed response value |
ŷ_{i} | i^{th} fitted value for the response |
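The residual equation image is missing; from the terms in the table above, the standard definition is:

```latex
e_{i} = y_{i} - \hat{y}_{i}
```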
Standardized residuals are also called "internally Studentized residuals."
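The formula image is missing; the standard definition of the internally Studentized (standardized) residual, consistent with the terms defined below, is:

```latex
\text{std. res.}_{i} = \frac{e_{i}}{\sqrt{s^{2}\,(1 - h_{i})}}
```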
Term | Description |
---|---|
e_{i} | i^{th} residual |
h_{i} | i^{th} diagonal element of X(X'X)^{–1}X' |
s^{2} | mean square error |
X | design matrix |
X' | transpose of the design matrix |
For weighted regression, the formula includes the weight:
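The weighted equation image is missing; a standard reconstruction, with the i^{th} weight scaling the residual variance, is:

```latex
\text{std. res.}_{i} = \frac{e_{i}}{\sqrt{s^{2}\,(1 - h_{i})\,/\,w_{i}}}
```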
Term | Description |
---|---|
e_{i} | i^{th} residual in the validation data set |
h_{i} | leverage for the i^{th} validation row |
s^{2} | mean square error for the training data set |
w_{i} | weight for the i^{th} observation in the validation data set |
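The formula image for validation rows is missing. A plausible reconstruction, assuming the usual out-of-sample prediction-error variance (the validation observation does not enter the fit, so its leverage adds to, rather than subtracts from, the variance), is:

```latex
\text{std. res.}_{i} = \frac{e_{i}}{\sqrt{s^{2}\left(1/w_{i} + h_{i}\right)}}
```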
Deleted residuals are also called "externally Studentized residuals." The formula is:
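The equation image is missing; the standard form of the deleted (externally Studentized) residual, using the terms defined in the table below, is:

```latex
t_{i} = \frac{e_{i}}{s_{(i)}\sqrt{1 - h_{i}}}
```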
Another presentation of this formula is:
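The alternative equation image is also missing; substituting the usual expression for s_{(i)}^{2} in terms of SSE gives the standard computational form:

```latex
t_{i} = e_{i}\sqrt{\frac{n - p - 1}{\mathrm{SSE}\,(1 - h_{i}) - e_{i}^{2}}}
```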
The model that estimates the i^{th} deleted residual omits the i^{th} observation from the data set, so that observation cannot influence its own estimate. Each deleted residual has a Student's t-distribution with n – p – 1 degrees of freedom.
Term | Description |
---|---|
e_{i} | i^{th} residual |
s_{(i)}^{2} | mean square error calculated without the i^{th} observation |
h_{i} | i^{th} diagonal element of X(X'X)^{–1}X' |
n | number of observations |
p | number of terms, including the constant |
SSE | sum of squares for error |