A boxplot provides a graphical summary of the distribution of a sample. The boxplot shows the shape, central tendency, and variability of the data.
Use a boxplot to examine the spread of the data and to identify any potential outliers. Boxplots are best when the sample size is greater than 20.
Examine the spread of your data to determine whether your data appear to be skewed. When data are skewed, the majority of the data are located on the high or low side of the graph. Often, skewness is easiest to detect with a histogram or boxplot.
Outliers, which are data values that are far away from other data values, can strongly affect the results of your analysis. Often, outliers are easiest to identify on a boxplot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (also called special causes). Then, repeat the analysis. For more information, go to Identifying outliers.
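As a rough illustration outside of Minitab, the following Python sketch draws a boxplot of a small, made-up sample with matplotlib; the variable name and values are hypothetical.

```python
# Minimal sketch, not Minitab output: draw a boxplot of one hypothetical sample.
import matplotlib.pyplot as plt

delivery_times = [3.2, 4.1, 2.8, 5.0, 3.7, 4.4, 3.9, 2.5, 6.8, 3.3]  # made-up data

fig, ax = plt.subplots()
ax.boxplot(delivery_times)              # box spans the IQR; points beyond the whiskers are potential outliers
ax.set_ylabel("Delivery time (days)")
plt.show()
```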
A histogram divides sample values into many intervals and represents the frequency of data values in each interval with a bar.
Use a histogram to assess the shape and spread of the data. Histograms are best when the sample size is greater than 20.
You can use a histogram of the data overlaid with a normal curve to examine the normality of your data. A normal distribution is symmetric and bell-shaped, as indicated by the curve. It is often difficult to evaluate normality with small samples. A probability plot is best for determining the distribution fit.
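For readers working outside of Minitab, a histogram with a fitted normal curve can be sketched as follows in Python; the simulated sample and the use of scipy's `norm` are assumptions for illustration only.

```python
# Sketch: overlay a fitted normal curve on a histogram to eyeball normality.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

rng = np.random.default_rng(1)
sample = rng.normal(loc=10, scale=2, size=200)      # simulated data for illustration

fig, ax = plt.subplots()
ax.hist(sample, bins=20, density=True, alpha=0.6)   # histogram scaled as a density
xs = np.linspace(sample.min(), sample.max(), 200)
ax.plot(xs, norm.pdf(xs, loc=sample.mean(), scale=sample.std(ddof=1)))  # fitted normal curve
ax.set_xlabel("Data value")
plt.show()
```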
Outliers, which are data values that are far away from other data values, can strongly affect the results of your analysis. Often, outliers are easiest to identify on a boxplot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (also called special causes). Then, repeat the analysis. For more information, go to Identifying outliers.
Multi-modal data have multiple peaks, also called modes. Multi-modal data often indicate that important variables are not yet accounted for.
If you have additional information that allows you to classify the observations into groups, you can create a group variable with this information. Then, you can create the graph with groups to determine whether the group variable accounts for the peaks in the data.
An individual value plot displays the individual values in the sample. Each circle represents one observation. An individual value plot is especially useful when you have relatively few observations and when you also need to assess the effect of each observation.
Use an individual value plot to examine the spread of the data and to identify any potential outliers. Individual value plots are best when the sample size is less than 50.
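A rough Python equivalent of an individual value plot (not Minitab's graph) simply plots every observation as its own point.

```python
# Sketch: plot each observation individually so single values and potential outliers are visible.
import matplotlib.pyplot as plt

data = [13, 17, 18, 19, 12, 10, 7, 9, 14]   # example values used later in this section

fig, ax = plt.subplots()
ax.plot([1] * len(data), data, "o", alpha=0.7)   # one column of individual points
ax.set_xticks([1])
ax.set_xticklabels(["Sample"])
ax.set_ylabel("Data value")
plt.show()
```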
Examine the spread of your data to determine whether your data appear to be skewed. When data are skewed, the majority of the data are located on the high or low side of the graph. Often, skewness is easiest to detect with a histogram or boxplot.
Outliers, which are data values that are far away from other data values, can strongly affect the results of your analysis. Often, outliers are easiest to identify on a boxplot.
Try to identify the cause of any outliers. Correct any data-entry errors or measurement errors. Consider removing data values for abnormal, one-time events (also called special causes). Then, repeat the analysis. For more information, go to Identifying outliers.
Quartiles are the three values that divide a sample of ordered data into four equal parts: the first quartile at 25% (Q1), the second quartile at 50% (Q2, also called the median), and the third quartile at 75% (Q3).
The first quartile is the 25th percentile and indicates that 25% of the data are less than or equal to this value.
The interquartile range (IQR) is the distance between the first quartile (Q1) and the third quartile (Q3). 50% of the data are within this range.
Use the interquartile range to describe the spread of the data. As the spread of the data increases, the IQR becomes larger.
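As a sketch of the calculation (not Minitab's exact algorithm; quartile interpolation rules differ between packages), the quartiles and IQR can be computed with numpy:

```python
# Sketch: compute Q1, Q2 (median), Q3, and the IQR for a small sample.
import numpy as np

data = np.array([13, 17, 18, 19, 12, 10, 7, 9, 14])   # example values used later in this section

q1, q2, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1
print(q1, q2, q3, iqr)   # interpolation rules vary by package, so Minitab may report slightly different quartiles
```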
The maximum is the largest data value.
In these data, the maximum is 19.
13, 17, 18, 19, 12, 10, 7, 9, 14
Use the maximum to identify a possible outlier or a data-entry error. One of the simplest ways to assess the spread of your data is to compare the minimum and maximum. If the maximum value is very high, even when you consider the center, the spread, and the shape of the data, investigate the cause of the extreme value.
The median is the midpoint of the data set: half of the observations are above this value and half are below it. The median is determined by ranking the observations and finding the observation at position (N + 1) / 2 in the ranked order. If the number of observations N is even, the median is the average of the observations ranked at positions N / 2 and (N / 2) + 1.
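The ranking rule above can be written out directly; this is a plain-Python sketch of the same rule, not Minitab's code:

```python
# Worked sketch of the rule: rank the observations, take the value at position (N + 1) / 2,
# or average the values at positions N / 2 and (N / 2) + 1 when N is even.
def median(values):
    ranked = sorted(values)
    n = len(ranked)
    if n % 2 == 1:
        return ranked[(n + 1) // 2 - 1]     # odd N: single middle value (ranks are 1-based, indexes 0-based)
    lower = ranked[n // 2 - 1]              # even N: average the two middle values
    upper = ranked[n // 2]
    return (lower + upper) / 2

print(median([13, 17, 18, 19, 12, 10, 7, 9, 14]))   # 13 (odd N)
print(median([1, 2, 3, 4]))                          # 2.5 (even N)
```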
The minimum is the smallest data value.
In these data, the minimum is 7.
13, 17, 18, 19, 12, 10, 7, 9, 14
Use the minimum to identify a possible outlier or a data-entry error. One of the simplest ways to assess the spread of your data is to compare the minimum and maximum. If the minimum value is very low, even when you consider the center, the spread, and the shape of the data, investigate the cause of the extreme value.
The range is the difference between the largest and smallest data values in the sample. The range represents the interval that contains all the data values.
Use the range to understand the amount of dispersion in the data. A large range value indicates greater dispersion in the data. A small range value indicates that there is less dispersion in the data. Because the range is calculated using only two data values, it is more useful with small data sets.
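Using the nine data values shown above, a quick Python check confirms the minimum, maximum, and range:

```python
# Sketch: minimum, maximum, and range for the example data.
data = [13, 17, 18, 19, 12, 10, 7, 9, 14]

print(min(data))              # 7
print(max(data))              # 19
print(max(data) - min(data))  # range = 19 - 7 = 12
```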
Quartiles are the three values that divide a sample of ordered data into four equal parts: the first quartile at 25% (Q1), the second quartile at 50% (Q2, also called the median), and the third quartile at 75% (Q3).
The third quartile is the 75th percentile and indicates that 75% of the data are less than or equal to this value.
The mean is the average of the data, which is the sum of all the observations divided by the number of observations.
Use the mean to describe the sample with a single value that represents the center of the data. Many statistical analyses use the mean as a standard measure of the center of the distribution of the data.
The standard error of the mean (SE Mean) estimates the variability between sample means that you would obtain if you took repeated samples from the same population. Whereas the standard error of the mean estimates the variability between samples, the standard deviation measures the variability within a single sample.
For example, you have a mean delivery time of 3.80 days, with a standard deviation of 1.43 days, from a random sample of 312 delivery times. These numbers yield a standard error of the mean of 0.08 days (1.43 divided by the square root of 312). If you took multiple random samples of the same size, from the same population, the standard deviation of those different sample means would be around 0.08 days.
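The arithmetic in this example can be reproduced directly; the sketch below simply divides the standard deviation by the square root of the sample size.

```python
# Sketch: SE Mean = standard deviation / sqrt(sample size), using the delivery-time example.
import math

std_dev = 1.43
n = 312
se_mean = std_dev / math.sqrt(n)
print(round(se_mean, 2))   # 0.08
```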
Use the standard error of the mean to determine how precisely the sample mean estimates the population mean.
A smaller value of the standard error of the mean indicates a more precise estimate of the population mean. Usually, a larger standard deviation results in a larger standard error of the mean and a less precise estimate of the population mean. A larger sample size results in a smaller standard error of the mean and a more precise estimate of the population mean.
Minitab uses the standard error of the mean to calculate the confidence interval.
The trimmed mean is the mean of the data after the highest 5% and the lowest 5% of the values are removed.
Use the trimmed mean to eliminate the impact of very large or very small values on the mean. When the data contain outliers, the trimmed mean may be a better measure of central tendency than the mean.
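Outside of Minitab, a 5% trimmed mean can be sketched with scipy; the sample below is simulated and includes one deliberate extreme value, and scipy may round the number of trimmed values differently than Minitab.

```python
# Sketch: compare the ordinary mean with a 5% trimmed mean on data containing an outlier.
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(7)
data = np.append(rng.normal(10, 1, size=19), 60)   # 19 typical values plus one extreme value

print(data.mean())               # ordinary mean is pulled toward the outlier
print(trim_mean(data, 0.05))     # removes the lowest 5% and highest 5% (1 value from each end here)
```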
The cumulative count (CumN) is a running total of the number of observations in successive groups, as in the following example.
Grade Level | Count | CumN | Calculation |
---|---|---|---|
1 | 49 | 49 | 49 |
2 | 58 | 107 | 49 + 58 |
3 | 52 | 159 | 49 + 58 + 52 |
4 | 60 | 219 | 49 + 58 + 52 + 60 |
5 | 48 | 267 | 49 + 58 + 52 + 60 + 48 |
6 | 55 | 322 | 49 + 58 + 52 + 60 + 48 + 55 |
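The CumN column is just a running total of the counts; a one-line numpy sketch reproduces it.

```python
# Sketch: reproduce the CumN column as a cumulative sum of the group counts.
import numpy as np

counts = [49, 58, 52, 60, 48, 55]
print(np.cumsum(counts).tolist())   # [49, 107, 159, 219, 267, 322]
```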
The number of missing values in the sample. The number of missing values refers to cells that contain the missing value symbol *.
Total count | N | N* |
---|---|---|
149 | 141 | 8 |
The number of non-missing values in the sample.
Total count | N | N* |
---|---|---|
149 | 141 | 8 |
The total number of observations in the column. The total count is the sum of the number of nonmissing values (N) and the number of missing values (N*).
Total count | N | N* |
---|---|---|
149 | 141 | 8 |
The cumulative percent is the cumulative sum of the percentages for each group of the By variable. In the following example, the By variable has four groups: Line 1, Line 2, Line 3, and Line 4.
Group (by variable) | Percent | CumPct |
---|---|---|
Line 1 | 16 | 16 |
Line 2 | 20 | 36 |
Line 3 | 36 | 72 |
Line 4 | 28 | 100 |
The percent of observations in each group of the By variable. In the following example, there are four groups: Line 1, Line 2, Line 3, and Line 4.
Group (by variable) | Percent |
---|---|
Line 1 | 16 |
Line 2 | 20 |
Line 3 | 36 |
Line 4 | 28 |
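Both Percent and CumPct can be sketched from the group counts; the counts below are hypothetical values chosen only so the percentages match the tables above.

```python
# Sketch: Percent = 100 * group count / total count; CumPct = running total of Percent.
import numpy as np

counts = np.array([40, 50, 90, 70])          # Line 1 .. Line 4 (hypothetical counts, total = 250)
percent = 100 * counts / counts.sum()
cum_pct = np.cumsum(percent)

print(percent.tolist())    # [16.0, 20.0, 36.0, 28.0]
print(cum_pct.tolist())    # [16.0, 36.0, 72.0, 100.0]
```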
Kurtosis indicates how the tails of a distribution differ from the normal distribution.
Skewness is the extent to which the data are not symmetrical.
The coefficient of variation (CoefVar) is a measure of spread that describes the variation in the data relative to the mean. The coefficient of variation is adjusted so that the values are on a unitless scale. Because of this adjustment, you can use the coefficient of variation instead of the standard deviation to compare the variation in data that have different units or that have very different means.
The larger the coefficient of variation, the greater the spread in the data.
For example, suppose one process fills large containers (mean fill of 16 cups, standard deviation of 0.4 cups) and another fills small containers (mean fill of 1 cup, standard deviation of 0.08 cups). Although the large container has the larger standard deviation, the small container has the larger coefficient of variation and therefore the greater relative variation.
Large container | Small container |
---|---|
CoefVar = 100 * 0.4 cups / 16 cups = 2.5 | CoefVar = 100 * 0.08 cups / 1 cup = 8 |
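The calculation in the table is simply 100 times the standard deviation divided by the mean, as this short sketch shows.

```python
# Sketch: coefficient of variation = 100 * standard deviation / mean.
def coef_var(std_dev, mean):
    return 100 * std_dev / mean

print(coef_var(0.4, 16))    # 2.5  (large container)
print(coef_var(0.08, 1))    # 8.0  (small container)
```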
The standard deviation is the most common measure of dispersion, or how spread out the data are about the mean. The symbol σ (sigma) is often used to represent the standard deviation of a population, while s is used to represent the standard deviation of a sample. Variation that is random or natural to a process is often referred to as noise.
Because the standard deviation is in the same units as the data, it is usually easier to interpret than the variance.
Use the standard deviation to determine how spread out the data are from the mean. A higher standard deviation value indicates greater spread in the data. A good rule of thumb for a normal distribution is that approximately 68% of the values fall within one standard deviation of the mean, 95% of the values fall within two standard deviations, and 99.7% of the values fall within three standard deviations.
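The 68-95-99.7 rule of thumb can be checked on simulated normal data; this sketch uses numpy and is not tied to any particular data set.

```python
# Sketch: fraction of simulated normal values within 1, 2, and 3 standard deviations of the mean.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0, scale=1, size=100_000)
mean, sd = x.mean(), x.std(ddof=1)

for k in (1, 2, 3):
    within = np.mean(np.abs(x - mean) <= k * sd)
    print(f"within {k} standard deviation(s): {within:.3f}")   # roughly 0.683, 0.954, 0.997
```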
The variance measures how spread out the data are about their mean. The variance is equal to the standard deviation squared.
The greater the variance, the greater the spread in the data.
Because variance (σ²) is a squared quantity, its units are also squared, which may make the variance difficult to use in practice. The standard deviation is usually easier to interpret because it is in the same units as the data. For example, a sample of waiting times at a bus stop may have a mean of 15 minutes and a variance of 9 minutes². Because the variance is not in the same units as the data, the variance is often displayed with its square root, the standard deviation. A variance of 9 minutes² is equivalent to a standard deviation of 3 minutes.
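The bus-stop example amounts to taking a square root:

```python
# Sketch: a variance of 9 minutes squared corresponds to a standard deviation of 3 minutes.
import math

variance = 9.0                # minutes squared
print(math.sqrt(variance))    # 3.0 minutes
```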
The mode is the value that occurs most frequently in a set of observations. Minitab also displays how many data points equal the mode.
The mean and median require a calculation, but the mode is determined by counting the number of times each value occurs in a data set.
The mode can be used with mean and median to provide an overall characterization of your data distribution. The mode can also be used to identify problems in your data.
For example, a distribution that has more than one mode may identify that your sample includes data from two populations. If the data contain two modes, the distribution is bimodal. If the data contain more than two modes, the distribution is multi-modal.
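Outside of Minitab, the mode(s) and the number of data points equal to the mode can be sketched with the Python standard library; the sample values here are made up.

```python
# Sketch: find all modes and count how many observations equal each mode.
from collections import Counter
from statistics import multimode

data = [4, 7, 7, 5, 7, 3, 5, 5, 2]   # hypothetical sample

modes = multimode(data)                    # [7, 5]: two modes, so this sample is bimodal
counts = Counter(data)
print(modes, [counts[m] for m in modes])   # each mode occurs 3 times
```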
The MSSD is the mean of the squared successive differences. The MSSD is an estimate of the variance. One possible use of the MSSD is to test whether a sequence of observations is random. In quality control, a possible use of the MSSD is to estimate the variance when the subgroup size is 1.
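As a hedged sketch (check Minitab's documentation for its exact formula), one common convention for the MSSD as a variance estimate divides the mean of the squared successive differences by 2:

```python
# Sketch of an MSSD-style variance estimate: sum of squared successive differences / (2 * (n - 1)).
import numpy as np

def mssd(x):
    x = np.asarray(x, dtype=float)
    diffs = np.diff(x)                           # successive differences x[i+1] - x[i]
    return np.sum(diffs**2) / (2 * (len(x) - 1))

rng = np.random.default_rng(3)
sample = rng.normal(0, 2, size=1000)             # independent data with variance 4
print(mssd(sample), sample.var(ddof=1))          # both should be near 4
```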
The sum is the total of all the data values. The sum is also used in statistical calculations, such as the mean and standard deviation.
The uncorrected sum of squares is calculated by squaring each value in the column and then summing those squared values. For example, if the column contains x₁, x₂, ..., xₙ, the uncorrected sum of squares is x₁² + x₂² + ... + xₙ². Unlike the corrected sum of squares, the uncorrected sum of squares does not adjust for the mean: the data values are squared without first subtracting the mean.
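The contrast with the corrected sum of squares is easy to see in a short sketch using the example data values from earlier in this section.

```python
# Sketch: uncorrected sum of squares (no mean subtraction) vs. corrected sum of squares.
import numpy as np

x = np.array([13, 17, 18, 19, 12, 10, 7, 9, 14], dtype=float)

uncorrected_ss = np.sum(x**2)              # square each value, then sum
corrected_ss = np.sum((x - x.mean())**2)   # subtract the mean first, then square and sum
print(uncorrected_ss, corrected_ss)
```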