Before you do a hypothesis test, choose a significance level (denoted α, or alpha) for the test. Use the significance level to judge whether the test results are statistically significant. The significance level also determines the probability of Type I error that is inherent in the test.
If the probability of obtaining the observed results when the null hypothesis is true (the p-value) is less than α, the usual interpretation is that the results did not occur by chance. Formally, α is the maximum acceptable level of risk for rejecting a true null hypothesis (Type I error) and is expressed as a probability ranging between 0 and 1. The smaller the significance level, the less likely you are to make a Type I error, but the more likely you are to make a Type II error (failing to reject a null hypothesis that is actually false). Therefore, choose an alpha that balances these opposing risks of error based on their practical consequences in your specific situation.
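The link between α and the Type I error rate can be checked by simulation: when the null hypothesis is true, a test run at significance level α rejects it in roughly an α fraction of repetitions. The sketch below is a hypothetical illustration using only the Python standard library, with a one-sample z-test (known standard deviation) as the example test; the sample size, trial count, and seed are arbitrary choices.

```python
import math
import random
import statistics

def one_sample_z_test(sample, mu0, sigma):
    """Return the two-sided p-value of a z-test with known sigma."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (sigma / math.sqrt(n))
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def type_i_error_rate(alpha, trials=20000, n=30, seed=1):
    """Estimate how often a true null hypothesis is rejected at level alpha."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        # Draw from the null distribution: the mean really is 0.
        sample = [rng.gauss(0, 1) for _ in range(n)]
        if one_sample_z_test(sample, mu0=0, sigma=1) < alpha:
            rejections += 1
    return rejections / trials

print(type_i_error_rate(0.10))  # close to 0.10
print(type_i_error_rate(0.01))  # close to 0.01
```

The estimated rejection rate tracks whatever α you choose, which is exactly why a smaller α makes a false rejection less likely.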
Choose a larger alpha, such as 0.10, to be more certain that you will not miss detecting a difference that might exist.
For example, an engine manufacturer wants to compare the stability of new ball bearings with the current ones. If the new ball bearings are less stable, the consequences for customers could be disastrous. Therefore, the manufacturer chooses an α of 0.10 to be more certain of detecting any possible difference in stability.
Choose a smaller alpha, such as 0.01, to be more certain that you will only detect a difference that really does exist.
For example, a pharmaceutical company wants to be very certain before making an advertising claim that its new product significantly reduces symptoms. The company chooses an even smaller α of 0.001 to be confident that any significant reduction in symptoms it detects actually exists.
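The two examples above differ only in where the rejection threshold sits: the same p-value can lead to opposite conclusions under the two alphas. A minimal sketch of that decision rule, with a purely illustrative p-value of 0.03:

```python
def decide(p_value, alpha):
    """Apply the standard decision rule: reject H0 when p < alpha."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

p = 0.03  # hypothetical p-value from some hypothesis test

# The engine manufacturer's permissive threshold flags the difference...
print(decide(p, alpha=0.10))   # → reject H0

# ...while the pharmaceutical company's strict threshold does not.
print(decide(p, alpha=0.001))  # → fail to reject H0
```

Neither conclusion is wrong; each reflects which error the organization considers more costly.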