
[patch #5583] NPAR TESTS


From: Jason H Stover
Subject: [patch #5583] NPAR TESTS
Date: Sat, 16 Dec 2006 22:27:01 +0000
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.8) Gecko/20061025 Firefox/1.5.0.8

Update of patch #5583 (project pspp):

                  Status:   Ready For Test/Review => Works For Me           
             Assigned to:                 jstover => jmd                    

    _______________________________________________________

Follow-up Comment #11:

The patch works for me. 

> >   Pr ({less than or equal to 10 males} \cup {more than 10 males})
> >
> >   ...which is 1.0 for a binomial random variable with 20 trials
> >   and null hypothesis success probability of 0.5.
>
> Isn't it also 1.0 for ANY hypothesis ??

No. Here is one example: suppose the null hypothesis probability is 0.9 and
the observed proportion is 0.5 with 20 trials. If the null hypothesis were
that p >= 0.9, then the p-value would be Pr (X <= 10) = 7.14e-6. If the null
hypothesis is that p = 0.9, then the two-sided p-value is more difficult to
compute. For any symmetric distribution, the two-sided p-value is computed as
Pr (X <= m - c) + Pr (X >= m + c), where m is the expected value of X under
the null hypothesis and either m - c or m + c is the observed test statistic.
But the distribution we are testing is not symmetric (if p = 0.9). In this
case, the two-sided p-value would have to be expressed this way:

Pr (X <= test statistic) + Pr (X >= m + c)

where m is the expected value and c is chosen to make those two probabilities
equal. But for the asymmetric, discrete binomial distribution with p = 0.9,
the expected value of X is 18, and there is no value of c that makes
Pr (X >= 18 + c) equal to Pr (X <= 10). We have now run into a problem with
p-values that Bayesians are always kvetching about: why are we using
probabilities of extreme, unobserved data to falsify this null hypothesis? We
would of course conclude that p is not 0.9, but the aforementioned criticism
is still valid. At this point we should just think about cloning the other
software, since the computations we are talking about don't make practical
sense to a human who wants to compute a two-sided p-value of a test of
"p = 0.9". p-values have their place, but there are examples of data (such as
this case) in which their flaws show and can't be fixed. So my advice is just
to imitate the other software.
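
Just to make the numbers above concrete, here is a quick sketch (not part of
the patch; it assumes gsl >= 1.8) that reproduces them:

#include <stdio.h>
#include <gsl/gsl_cdf.h>

int
main (void)
{
  const unsigned int n = 20;   /* trials */
  const double p = 0.9;        /* null hypothesis success probability */

  /* One-sided p-value for the null hypothesis p >= 0.9 when we
     observe 10 successes. */
  printf ("Pr (X <= 10) = %g\n", gsl_cdf_binomial_P (10, p, n));

  /* The expected value under the null hypothesis is n * p = 18.
     gsl_cdf_binomial_Q (k, p, n) is Pr (X > k), so Pr (X >= 18 + c)
     is Q (17 + c, p, n).  Neither possible upper tail comes close
     to matching Pr (X <= 10). */
  printf ("Pr (X >= 19) = %g\n", gsl_cdf_binomial_Q (18, p, n));
  printf ("Pr (X >= 20) = %g\n", gsl_cdf_binomial_Q (19, p, n));

  return 0;
}

Compiled with gcc foo.c -lgsl -lgslcblas -lm, that prints about 7.14e-06 for
the lower tail and roughly 0.39 and 0.12 for the two possible upper tails.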

The optimization you mention below is probably not necessary, except in
extreme cases. Even then I don't think it would be needed, since gsl uses a
VERY precise approximation of the gamma function to compute that probability.
If you want to use it, try gsl_cdf_binomial_Q along with gsl_cdf_binomial_P.
These functions are in gsl as of version 1.8, so the copies in gslextras are
no longer needed.
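
For what it's worth, the only subtlety in using them together is that
gsl_cdf_binomial_Q (k, p, n) is Pr (X > k), not Pr (X >= k). A sketch (the
helper names are mine, not anything in the patch):

#include <gsl/gsl_cdf.h>

/* Pr (X <= k) for X ~ Binomial (n, p). */
static double
lower_tail (unsigned int k, double p, unsigned int n)
{
  return gsl_cdf_binomial_P (k, p, n);
}

/* Pr (X >= k) for X ~ Binomial (n, p); note the k - 1 to make the
   upper tail inclusive. */
static double
upper_tail (unsigned int k, double p, unsigned int n)
{
  return k == 0 ? 1.0 : gsl_cdf_binomial_Q (k - 1, p, n);
}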

I would use the exact result until the sample size grows large enough to
cause overflows. I'm guessing that other software reports its result as
asymptotic because doing so is a legacy from the days of single-precision
arithmetic and expensive flops. They probably just never changed the
documentation, only the code.
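
To illustrate, here is a throwaway comparison (again just a sketch) of the
exact lower tail against the continuity-corrected normal approximation that
the asymptotic result presumably uses:

#include <stdio.h>
#include <math.h>
#include <gsl/gsl_cdf.h>

int
main (void)
{
  const unsigned int n = 20, k = 10;
  const double p = 0.5;
  const double mean = n * p;
  const double sd = sqrt (n * p * (1 - p));

  /* Exact binomial lower tail vs. the usual normal approximation
     with continuity correction. */
  printf ("exact:  Pr (X <= %u) = %g\n", k,
          gsl_cdf_binomial_P (k, p, n));
  printf ("normal: Pr (X <= %u) = %g\n", k,
          gsl_cdf_ugaussian_P ((k + 0.5 - mean) / sd));

  return 0;
}

The exact value costs essentially the same to compute, so there is no reason
to prefer the approximation.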


    _______________________________________________________

Reply to this item at:

  <http://savannah.gnu.org/patch/?5583>

_______________________________________________
  Message sent via/by Savannah
  http://savannah.gnu.org/