The standard error of the difference (SE) estimates the variability of the difference between the test mean and the reference mean that you would obtain if you took repeated samples from the same populations. The standard error of the difference estimates the variability between samples, whereas the standard deviation measures the variability within a single sample.
For example, suppose you have a difference between the sample test mean and the sample reference mean of −0.12122 units. The test sample of 10 data values has a standard deviation of 0.58064. The reference sample of 9 data values has a standard deviation of 0.26138. The standard error of the difference equals the square root of the sum (0.58064²/10 + 0.26138²/9), or 0.20324. If you collected multiple random samples of the same size from the same populations, the standard deviation of the differences between the sample means would be approximately 0.20324.
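You can check this arithmetic in a few lines of Python. This is a minimal sketch, assuming the unpooled (Welch-style) form of the standard error that the calculation above uses; the variable names are illustrative.

```python
import math

# Summary statistics from the example above
n_test, s_test = 10, 0.58064  # test sample: size and standard deviation
n_ref, s_ref = 9, 0.26138     # reference sample: size and standard deviation

# Unpooled standard error of the difference between two sample means:
# sqrt(s_test^2 / n_test + s_ref^2 / n_ref)
se_diff = math.sqrt(s_test**2 / n_test + s_ref**2 / n_ref)
print(f"SE of difference: {se_diff:.5f}")  # SE of difference: 0.20324
```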
Use the standard error of the difference to determine how precisely the difference between the sample means estimates the difference between the mean of the test population and the mean of the reference population.
Lower values of the standard error indicate a more precise estimate. Usually, a larger standard deviation results in a larger standard error of the difference and a less precise estimate. A larger sample size results in a smaller standard error of the difference and a more precise estimate.
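A short sketch can make the sample-size relationship concrete. Assuming the same unpooled formula as above, quadrupling both sample sizes while holding the standard deviations fixed halves the standard error, because precision improves with the square root of the sample size.

```python
import math

def se_difference(s_test: float, n_test: int, s_ref: float, n_ref: int) -> float:
    # Unpooled standard error of the difference between two sample means
    return math.sqrt(s_test**2 / n_test + s_ref**2 / n_ref)

print(se_difference(0.58064, 10, 0.26138, 9))   # ~0.20324 (example above)
print(se_difference(0.58064, 40, 0.26138, 36))  # ~0.10162 (half the SE)
```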
Minitab uses the standard error of the difference to calculate the test statistics (t-values).
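As an illustration only, a plain two-sample Welch t-test built from the same summary statistics divides the observed difference by this standard error, which is the same role the standard error plays in Minitab's calculation. The means below are hypothetical, chosen so that their difference matches the −0.12122 in the example; the text gives only the difference, not the individual means.

```python
from scipy import stats

# Hypothetical means whose difference is -0.12122, as in the example above
mean_test, mean_ref = 9.87878, 10.0

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=mean_test, std1=0.58064, nobs1=10,
    mean2=mean_ref,  std2=0.26138, nobs2=9,
    equal_var=False,  # Welch: unpooled SE, matching the calculation above
)
print(f"t = {t_stat:.4f}")  # t = -0.5964, i.e., -0.12122 / 0.20324
```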