The sample size (N) is the total number of observations in the sample.
The sample size affects the confidence interval and the power of the test. Usually, a larger sample results in a narrower confidence interval and gives the test more power.
For more information on power in equivalence tests, go to Power for equivalence tests.
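The effect of sample size on the confidence interval can be sketched numerically. The snippet below uses the large-sample normal approximation, in which the half-width of a 95% confidence interval for a mean is 1.96 × s / √n; the standard deviation s = 1.43 is a hypothetical value chosen for illustration, not a value Minitab produces here.

```python
import math

# Hypothetical sample standard deviation (for illustration only).
s = 1.43

# Approximate 95% CI half-width for the mean: 1.96 * s / sqrt(n).
# As n grows, the half-width shrinks, so the interval narrows.
for n in (30, 100, 312):
    half_width = 1.96 * s / math.sqrt(n)
    print(f"n = {n:3d}: 95% CI half-width is about {half_width:.3f}")
```

Running the loop shows the half-width falling as n increases, which is the sense in which a larger sample yields a narrower, more precise interval.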
The mean summarizes the values in each sample with a single value that identifies the center of the data. The mean is calculated as the arithmetic average of the data, which is the sum of all the observations divided by the number of observations.
The mean of the test sample is an estimate of the mean of the test population. The mean of the reference sample is an estimate of the mean of the reference population. Therefore, the difference (or the ratio) between the sample means provides an estimate of the difference (or ratio) between the means of the test population and the reference population.
Because the estimate is based on sample data and not on entire populations, you cannot be certain that it equals the difference (or ratio) of the population means. To assess the precision of the estimate for the populations, you can use a confidence interval.
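The estimates described above can be sketched in a few lines. The data values below are hypothetical; the point is only that each sample mean is the sum of its observations divided by their count, and that the difference and ratio of the two sample means estimate the corresponding population quantities.

```python
from statistics import mean

# Hypothetical measurements from the test and reference samples.
test_sample = [10.2, 9.8, 10.5, 10.1, 9.9]
reference_sample = [10.0, 10.3, 9.7, 10.4, 10.0]

# Arithmetic average: sum of observations divided by the number of observations.
test_mean = mean(test_sample)
reference_mean = mean(reference_sample)

# Point estimates of the population difference and ratio.
difference = test_mean - reference_mean
ratio = test_mean / reference_mean
```

Because these are sample-based point estimates, a confidence interval around them is what conveys their precision.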
The standard deviation (StDev) is the most common measure of dispersion, or how much the data vary relative to the mean. Variation that is random or natural to a process is often referred to as noise.
The standard deviation uses the same units as the data. The symbol σ (sigma) is often used to represent the standard deviation of a population. The letter s is used to represent the standard deviation of a sample.
Use the standard deviation to determine how spread out the data are from the mean.
The standard deviation of the sample data is an estimate of the population standard deviation. Higher values indicate more variation or "noise" in the data. The standard deviation is used to calculate the confidence interval and the p-value. A higher value results in a wider confidence interval and lower statistical power.
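A small sketch makes the "spread around the mean" idea concrete. The data values are hypothetical; note that the sample standard deviation uses an n − 1 denominator and comes out in the same units as the data.

```python
from statistics import mean, stdev

# Hypothetical measurements (same units as the data, e.g. days).
data = [3.1, 4.7, 2.9, 3.8, 4.2, 3.5]

center = mean(data)   # the single value the observations vary around
s = stdev(data)       # sample standard deviation (n - 1 denominator)
```

A larger value of s would indicate more variation, or noise, in the data, which in turn widens the confidence interval and lowers the power of the test.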
The standard error of the mean (SE Mean) estimates the variability between sample means that you would obtain if you took repeated samples from the same population. Whereas the standard error of the mean estimates the variability between samples, the standard deviation measures the variability within a single sample.
For example, you have a mean delivery time of 3.80 days, with a standard deviation of 1.43 days, from a random sample of 312 delivery times. These numbers yield a standard error of the mean of 0.08 days (1.43 divided by the square root of 312). If you took multiple random samples of the same size, from the same population, the standard deviation of those different sample means would be around 0.08 days.
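The worked example above can be reproduced directly: the standard error of the mean is the sample standard deviation divided by the square root of the sample size.

```python
import math

# Values from the delivery-time example: s = 1.43 days, n = 312 deliveries.
s = 1.43
n = 312

se_mean = s / math.sqrt(n)
print(round(se_mean, 2))  # 0.08
```

This is the 0.08 days quoted above: the typical amount by which the means of repeated same-size samples would vary.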
Use the standard error of the mean to determine how precisely the sample mean estimates the population mean.
A smaller value of the standard error of the mean indicates a more precise estimate of the population mean. Usually, a larger standard deviation results in a larger standard error of the mean and a less precise estimate of the population mean. A larger sample size results in a smaller standard error of the mean and a more precise estimate of the population mean.
Minitab uses the standard error of the mean to calculate the confidence interval.