Re: R-alpha: R 0.12 alpha: problem with qt()



To: Bill Venables <bill@stats.ox.ac.uk>
Subject: Re: R-alpha: R 0.12 alpha: problem with qt()
From: Peter Dalgaard BSA <pd@kubism.ku.dk>
Date: 06 Nov 1996 20:20:09 +0100
In-Reply-To: Bill Venables's message of Wed, 6 Nov 1996 18:05:15 GMT
Message-Id: <x2pw1rdo9i.fsf@bush.kubism.ku.dk>

Bill Venables <bill@stats.ox.ac.uk> writes:
> OK this is a side issue, but I can't help myself.

>  >   Re. packages, I agree that it's a bit premature. Not so much
>  >   that we're not out of alpha yet, but as long as we still
>  >   find bugs like the t.test confidence interval booboo, we
>  >   probably shouldn't encourage too many of the clueless to use
>  >   it for real data analysis.
> 
> No package is ever safe for "real data analysis" to be done by
> the "clueless", Peter.  The great problem we face in statistics
> is the widespread and pernicious belief that data analysis
> nowadays is merely a matter of using good software and pressing
> the right buttons.

You didn't really expect me to disagree with that, did you? However,
the one thing that people, even statisticians, never suspect is the
calculations. I think that we need to provide a reasonable level of
safety in that area - at least systematic checks against results from
other packages. This can't be done yet, because people are still
meddling with the language itself, and some methods that we'd like to
have simply aren't coded.

(I'll admit that I phrased it a bit inelegantly. That often happens
when my mind is partly elsewhere.)

Back to the qbeta problem. I plopped in an 
fprintf(stderr, "%20.10g\n", adj); 
just before the g = g / three; line.

Without optimization (qbeta.c alone compiled unoptimized, everything
else optimized), I get:

> qbeta(.99,1,10)
        0.4642109272
        0.1547369757
        0.1039328151
       -0.3265547945
       -0.1088515982
      -0.03628386605
       -0.1508825143
      -0.05029417144
[... what slow convergence for a Newton search? ...]
     -7.25789957e-16
    -3.991844763e-15
    -1.330614921e-15
     9.677199426e-16
    -2.782194835e-15
    -9.273982783e-16
     9.677199426e-16
     3.225733142e-16
    -3.628949785e-16
    -1.209649928e-16
[1] 0.3690427
> 


With optimization it goes into an endless loop; I never saw the start
of it, but when Ctrl-C'ed it shows

...
    -3.628949785e-16
    -1.209649928e-16
    -3.628949785e-16
    -1.209649928e-16
    -3.628949785e-16
    -1.209649928e-16
    -3.628949785e-16
    -1.209649928e-16
    -3.628949785e-16
    -1.209649928e-16
 

So what seems to happen is that instruction reordering and so forth
creates a slight inaccuracy somewhere, so that the code can no longer
meet the termination condition. Or what do I know? I still don't quite
get what the routine is supposed to do.

-- 
   O__  ---- Peter Dalgaard             Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics     2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark      Ph: (+45) 35327918
~~~~~~~~~~ - (p.dalgaard@biostat.ku.dk)             FAX: (+45) 35327907
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
r-testers mailing list -- To (un)subscribe, send "subscribe" or
"unsubscribe" (in the body, not the subject!) to:
r-testers-request@stat.math.ethz.ch
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-