The type I error, alpha, occurs when one rejects the null hypothesis when it is true. It is set a priori for each test and is usually 5%. The type II error, beta, is less studied but is of great importance: it represents the probability that one does not reject the null hypothesis when it is false. We cannot fix it up front, but based on the other parameters of the model we can try to minimize it. The power of a test is calculated as 1 - beta and represents the probability that we reject the null hypothesis when it is false. We therefore wish to maximize the power of the test.

XLSTAT calculates the power (and beta) when the other parameters are known. For a given power, it also allows you to calculate the sample size necessary to reach that power. Statistical power calculations are usually done before the experiment is conducted; the main application of power calculations is to estimate the number of observations necessary to properly conduct an experiment.

Calculations for the Statistical Power of tests comparing correlations

XLSTAT allows you to compare a correlation to 0, a correlation to a constant, and two correlations. The power of a test is usually obtained by using the associated non-central distribution; for some of these cases an approximation is used instead to compute the power.

Statistical Power for comparing one correlation to 0

The method used is an exact method based on the non-central Student distribution. The alternative hypothesis in this case is:

Ha: r ≠ 0

The non-centrality parameter used is the following:

NCP = √(r² / (1 - r²)) * √N

The quantity r² / (1 - r²) is called the effect size.

Statistical Power for comparing one correlation to a constant

The alternative hypothesis in this case is:

Ha: r ≠ r0

The power calculation is done using an approximation by the normal distribution. We use the Fisher Z-transformation:

Zr = ½ ln((1 + r) / (1 - r))

The power is then found using the area under the curve of the normal distribution to the left of Zp:

Zp = Q * √(N - 3) - Zreq

where Q = |Zr - Zr0| is the difference between the transformed correlations and Zreq is the quantile of the normal distribution for alpha.

Statistical Power for comparing two correlations

The alternative hypothesis in this case is:

Ha: r1 - r2 ≠ 0

The power is then found using the area under the curve of the normal distribution to the left of Zp:

Zp = Q * √((N' - 3) / 2) - Zreq

where Q = |Zr1 - Zr2|, Zreq is the quantile of the normal distribution for alpha, and N' - 3 is the harmonic mean of N1 - 3 and N2 - 3:

N' = 2(N1 - 3)(N2 - 3) / ((N1 - 3) + (N2 - 3)) + 3

Calculating sample size for a correlation comparison test

To calculate the number of observations required for a given power, XLSTAT uses an algorithm that searches for the root of a function: it finds the sample size at which the power, viewed as a function of N, reaches the target value.
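As a sketch of the exact method for comparing one correlation to 0, the snippet below computes two-sided power from the non-centrality parameter NCP = √(r²/(1-r²))·√N using SciPy's non-central Student distribution. The function name, the use of N - 2 degrees of freedom, and the reliance on `scipy.stats.nct` are illustrative assumptions, not XLSTAT's actual implementation.

```python
import math
from scipy.stats import nct, t as t_dist

def power_r_vs_zero(r, n, alpha=0.05):
    """Two-sided power for H0: r = 0 vs Ha: r != 0 (illustrative sketch).

    Uses NCP = sqrt(r^2 / (1 - r^2)) * sqrt(N) and the non-central
    Student distribution; N - 2 degrees of freedom is an assumption.
    """
    df = n - 2
    ncp = math.sqrt(r**2 / (1 - r**2)) * math.sqrt(n)
    t_crit = t_dist.ppf(1 - alpha / 2, df)  # two-sided critical value
    # Probability that the test statistic lands in the rejection region
    # when the true correlation is r.
    return 1 - nct.cdf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)
```

As expected, the power grows with the sample size for a fixed correlation.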
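The normal-approximation steps for comparing one correlation to a constant can be sketched with the Python standard library alone. The function names are hypothetical, and Q is taken as the absolute difference of the Fisher-transformed correlations:

```python
import math
from statistics import NormalDist

def fisher_z(r):
    # Fisher Z-transformation: Zr = 0.5 * ln((1 + r) / (1 - r))
    return 0.5 * math.log((1 + r) / (1 - r))

def power_r_vs_constant(r, r0, n, alpha=0.05):
    # Effect size Q: absolute difference of the transformed correlations
    q = abs(fisher_z(r) - fisher_z(r0))
    z_req = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided quantile for alpha
    # Power is the area under the normal curve to the left of Zp
    zp = q * math.sqrt(n - 3) - z_req
    return NormalDist().cdf(zp)
```

For example, with r = 0.5, r0 = 0.2, N = 100 and alpha = 0.05, this gives a power of roughly 0.93.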
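The two-correlation case differs only in the effective sample size N', where N' - 3 is the harmonic mean of N1 - 3 and N2 - 3. A minimal sketch under the same assumptions (hypothetical function names, Q as the absolute difference of transformed correlations):

```python
import math
from statistics import NormalDist

def fisher_z(r):
    # Fisher Z-transformation: Zr = 0.5 * ln((1 + r) / (1 - r))
    return 0.5 * math.log((1 + r) / (1 - r))

def power_two_correlations(r1, r2, n1, n2, alpha=0.05):
    # Q: absolute difference of the Fisher-transformed correlations
    q = abs(fisher_z(r1) - fisher_z(r2))
    # N' - 3 is the harmonic mean of (N1 - 3) and (N2 - 3)
    n_prime = 2 * (n1 - 3) * (n2 - 3) / ((n1 - 3) + (n2 - 3)) + 3
    z_req = NormalDist().inv_cdf(1 - alpha / 2)
    # Power is the area under the normal curve to the left of Zp
    zp = q * math.sqrt((n_prime - 3) / 2) - z_req
    return NormalDist().cdf(zp)
```

The result is symmetric in the two correlations, and with equal group sizes N' reduces to the common sample size.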
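The sample-size search can be illustrated with a simple bisection on the power function (here the one-correlation-versus-constant approximation). XLSTAT's actual root-finding algorithm is not specified in the text, so this is only a sketch under the stated assumptions, and the search bounds are arbitrary:

```python
import math
from statistics import NormalDist

def fisher_z(r):
    # Fisher Z-transformation: Zr = 0.5 * ln((1 + r) / (1 - r))
    return 0.5 * math.log((1 + r) / (1 - r))

def power_r_vs_constant(r, r0, n, alpha=0.05):
    # Normal-approximation power for Ha: r != r0
    q = abs(fisher_z(r) - fisher_z(r0))
    z_req = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(q * math.sqrt(n - 3) - z_req)

def required_sample_size(r, r0, target_power=0.8, alpha=0.05):
    # Power is increasing in N, so bisection locates the smallest N whose
    # power reaches the target, i.e. a root of power(N) - target_power.
    lo, hi = 4, 10**6
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if power_r_vs_constant(r, r0, mid, alpha) < target_power:
            lo = mid
        else:
            hi = mid
    return hi
```

With r = 0.5, r0 = 0.2 and a target power of 0.8, this yields N = 69 under the approximation above.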