Statistical testing, power and sample size
The group Testing consists of three modules: Power and Sample Size, Tests, and Contingency Table.
The modules in the group Power and Sample Size compute the power of a test, the required sample size, and the minimal difference of parameters that the test can detect. The tests support both the normal and the binomial distribution. The inputs are the significance level (type I error) α, the type of the test (one-sided or two-sided), and the theoretical (expected, specified) value of the distribution parameter. This parameter is the mean for the normal distribution, or the probability for the binomial distribution. Further, it is necessary to specify two of the following three quantities: the sample size, the expected sample statistic, and the power of the test 1 – β (where β is the type II error). Available tests:
One-sample test of the normal mean; two-sample test of normal means; one-sample test of a binomial proportion; two-sample test of binomial proportions.
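To illustrate how these inputs combine, the power of the simplest case, a one-sample test of the normal mean with known standard deviation (a z-test), can be approximated as sketched below. This is only an illustrative sketch under the known-σ assumption, not the module's own computation; the function name and the example numbers are ours.

```python
from scipy.stats import norm

def z_test_power(n, alpha, delta, sigma, two_sided=True):
    """Approximate power of a one-sample z-test for the normal mean.

    n     : sample size
    alpha : significance level (type I error)
    delta : true difference between the actual and the hypothesised mean
    sigma : known (assumed) standard deviation
    """
    shift = abs(delta) * n ** 0.5 / sigma          # effect expressed in standard-error units
    if two_sided:
        z_crit = norm.ppf(1 - alpha / 2)
        # probability of the test statistic falling beyond either critical bound
        return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)
    z_crit = norm.ppf(1 - alpha)
    return norm.cdf(shift - z_crit)

# e.g. n = 25 observations, alpha = 0.05, detecting a shift of half a standard deviation
print(round(z_test_power(25, 0.05, 0.5, 1.0), 3))   # roughly 0.70
```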
A type I error occurs when we reject H0 even though it holds; the probability (or risk) of this error is α. Similarly, a type II error occurs when we accept H0 even though it does not hold; its probability is β. Obviously, the sample size N, α, β, and the difference Δx between the real and the estimated parameter are interdependent. If we want both α and β to be low, for example, we have to collect more data. If Δx is large, fewer data are needed. If only a small data set is available and a small Δx is expected, the test will have lower "reliability" in terms of high α and β, and so on.

All methods of Power and Sample Size offer both a one-sided and a two-sided option. The one-sided option means that we test only "greater" (or only "less") and do not take the other possibility into account. A two-sided test does not distinguish between "greater" and "less". The one-sided option always tests x > μ in the one-sample normal test, x2 > x1 in the two-sample normal test, PA > P0 in the one-sample binomial proportion test, and P2 > P1 in the two-sample binomial proportion test.
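The interdependence of N, α, β and Δx can be illustrated with the common textbook approximation n ≈ ((z_α + z_β)·σ/Δx)² for the normal mean with known σ. Again, this is only a hedged sketch, not the module's implementation; the function name and the example values are ours.

```python
import math
from scipy.stats import norm

def z_test_sample_size(alpha, beta, delta, sigma, two_sided=True):
    """Approximate sample size for a one-sample z-test on the normal mean.

    alpha : significance level (type I error)
    beta  : type II error (power = 1 - beta)
    delta : smallest difference (Delta-x) we want to detect
    sigma : known (assumed) standard deviation
    """
    z_alpha = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    z_beta = norm.ppf(1 - beta)
    return math.ceil(((z_alpha + z_beta) * sigma / abs(delta)) ** 2)

# Lower risks (alpha, beta) or a smaller detectable difference both demand more data:
for alpha, beta, delta in [(0.05, 0.20, 1.0),
                           (0.01, 0.05, 1.0),    # stricter alpha and beta -> larger N
                           (0.05, 0.20, 0.5)]:   # smaller difference      -> larger N
    print(alpha, beta, delta, "->", z_test_sample_size(alpha, beta, delta, sigma=2.0))
```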