R-alpha: Re: S-PLUS on NeXTSTEP

Gregory R. Warnes (warnes@biostat.washington.edu)
Sat, 4 Jan 1997 08:24:31 -0800 ()

From: "Gregory R. Warnes" <warnes@biostat.washington.edu>
To: "Steven M. Boker" <sboker@calliope.psych.nd.edu>
Subject: R-alpha: Re: S-PLUS on NeXTSTEP
In-Reply-To: <199701040031.TAA19610@calliope.psych.nd.edu>

On Fri, 3 Jan 1997, Steven M. Boker wrote:

> Dear Greg-
> I looked at R about a year ago and it wasn't able to fill my needs
> at that time.  Perhaps I should give it another look.  How does it
> do with large datasets (N > 500,000 and variables > 100)?

I haven't used it with very large datasets yet, although I will be doing
so soon.  R is generally much better than S-PLUS at using the memory it
has allocated, since R does much better garbage collection.  It does so
well that the default memory size is a mere 2 Meg, which is clearly too
small for the dataset you mention.  This amount can be changed, either on
the command line or in the source code, to an arbitrary (?) size.
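For illustration, raising the limit at startup might look something like
the following (the flag names here are an assumption based on early R
builds, where -v set the vector heap in Mb and -n the number of cons
cells; check the startup options of your own build, e.g. via R --help):

    # hypothetical invocation: ask for a 64 Mb vector heap and
    # one million cons cells instead of the small defaults
    R -v 64 -n 1000000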

I'm forwarding this message to the r-testers group, in hopes that someone
has already used R for large datasets successfully and can comment.


    Gregory R. Warnes          | It is high time that the ideal of success
warnes@biostat.washington.edu  |  be replaced by the ideal of service.
                               |                       Albert Einstein

r-testers mailing list -- For info or help, send "info" or "help",
To [un]subscribe, send "[un]subscribe"
(in the "body", not the subject !)  To: r-testers-request@stat.math.ethz.ch