Suppose you want to generate various benchmark statistics for your product, but you do not have the complexity data available.

The total opportunities count is 657,092. The total DPMO is 1357.5, which translates to an overall Z.ST of 4.498. Now, consider Component 16, which has the lowest Z.ST (that is, the worst capability). Suppose that, because of an upcoming scheduled downtime for the process that makes Component 16, you produce 100 times as many units of Component 16, and also observe 100 times as many defects.
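These figures are consistent with the common convention of converting DPMO to short-term Z by adding a 1.5-sigma shift to the normal quantile, i.e. Z.ST = Φ⁻¹(1 − DPMO/10⁶) + 1.5. As a quick check (the 1.5 shift is an assumption inferred from the numbers, not stated in the text), this sketch reproduces the reported Z.ST:

```python
from statistics import NormalDist

def z_st(dpmo: float, shift: float = 1.5) -> float:
    """Short-term Z from DPMO, assuming the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift

print(round(z_st(1357.5), 3))  # -> 4.498, matching the reported total Z.ST
```

The same formula also reproduces the other Z.ST values quoted later (3.911 from DPMO 7952.5, and approximately 4.511 from DPMO 1300.7), which is why the 1.5-shift convention is a reasonable assumption here.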

The total opportunity count was not affected much: it went from 657,092 to 734,609. However, total DPMO jumped from 1357.5 to 7952.5 (almost 6 times as high), and total Z.ST dropped from 4.498 to 3.911, a dramatic reduction of nearly 0.6 sigma. All of these changes are the result of increasing the production of Component 16, not of any actual decay in capability.
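The text does not give the per-component figures for Component 16, but they can be back-solved from the before/after totals: about 783 opportunities and 50 defects. Treating those back-solved figures as an assumption, the following sketch reproduces the shift in the totals:

```python
from statistics import NormalDist

def dpmo(defects: float, opportunities: float) -> float:
    return 1e6 * defects / opportunities

def z_st(d: float) -> float:
    """Short-term Z, assuming the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - d / 1e6) + 1.5

# Baseline totals from the text; 892 total defects is implied by DPMO 1357.5.
total_opps, total_defects = 657_092, 892
# Component 16 figures back-solved from the before/after totals (assumption).
c16_opps, c16_defects = 783, 50

# Produce 100x as many Component 16 units and observe 100x as many defects,
# which adds 99 more copies of its opportunities and defects to the totals.
new_opps = total_opps + 99 * c16_opps        # 734,609
new_defects = total_defects + 99 * c16_defects

new_dpmo = dpmo(new_defects, new_opps)
print(new_opps, round(new_dpmo, 1), round(z_st(new_dpmo), 3))
# -> 734609 7952.5 3.911
```

The half-sigma-plus drop in Z.ST falls out of pure arithmetic here; nothing about the process itself changed.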

Here is the same analysis with complexity data.

Now, the total DPMO is 1300.7, with a total Z.ST of 4.511. Remember, these values differ slightly from the original values because you used the complexity data to adjust the unit counts and the defect counts. Now, you increase the production of Component 16 as before, but with the complexity data in place.

You can see that the only differences are in the observed units and observed defects for Component 16. Using complexity data completely removed the effect of the disproportionate production and sampling of Component 16.
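The text does not spell out the adjustment formula, but the invariance it describes follows from any rollup that weights each component by its complexity (opportunities per unit) rather than by observed production volume: scaling a component's units and defects by the same factor leaves its defect rate unchanged, so the weighted total is unchanged. This is a hypothetical sketch with made-up component figures, not the product's actual formula:

```python
# Hypothetical rollup: weight each component by its complexity
# (opportunities per unit) rather than by observed production volume.
def weighted_dpmo(components) -> float:
    """components: list of (complexity, observed_defects, observed_opportunities)."""
    num = sum(c * d / o for c, d, o in components)  # complexity-weighted defect rate
    den = sum(c for c, _, _ in components)
    return 1e6 * num / den

# Made-up components; the second mimics a low-capability part like Component 16.
base   = [(10, 20, 40_000), (8, 50, 783)]
# 100x production (and 100x defects) for the second component.
scaled = [(10, 20, 40_000), (8, 5_000, 78_300)]

# Each defect rate d/o is unchanged by the scaling, so the rollup is identical.
print(weighted_dpmo(base) == weighted_dpmo(scaled))  # -> True
```

By contrast, the unweighted rollup used in the first analysis pools raw defects over raw opportunities, so a component that is overproduced (and oversampled) drags the total toward its own capability.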