Re: [R] naive question

From: Tony Plate <tplate_at_blackmesacapital.com>
Date: Thu 01 Jul 2004 - 03:23:37 EST

To be clear, there's a lot more to I/O than the functions read.table() and scan() -- I was commenting only on those, and no inference should be made about other aspects of S-plus I/O from those comments!

I suspect that what has happened is that memory size, CPU speed, and I/O speed have evolved at different rates, so code in read.table() (in both R and S-plus) that used to be acceptable is now showing its limitations: on a readily available computer, it can take half an hour to read in the largest data table that can be comfortably handled. I'm speculating, but 10 years ago, on a readily available computer, did it take half an hour to read in the largest data table that could be comfortably handled in S-plus or R? People who encounter this now are surprised and disappointed, and IMHO somewhat justifiably so. The fact that R is an open-source volunteer project suggests that the time is ripe for one of those disappointed people to fix the matter and contribute a read.table.fast() function!
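For what it's worth, much of the speedup such a function would aim for is already available by giving read.table() more information up front, so it can preallocate storage and skip type-guessing. The sketch below is only an illustration of that idea under those assumptions -- read.table.fast() is a made-up name, not an existing function, and the example file name and column types are invented:

    ## Hypothetical wrapper (name and defaults are assumptions, not an
    ## existing function). colClasses, nrows, and comment.char are
    ## standard read.table() arguments; supplying them avoids type
    ## guessing and repeated buffer growth when reading large files.
    read.table.fast <- function(file, nrows, colClasses, sep = "", ...) {
      read.table(file, sep = sep, nrows = nrows,
                 colClasses = colClasses, comment.char = "", ...)
    }

    ## Example use (file name and column types are made up):
    ## d <- read.table.fast("big.dat", nrows = 1e6,
    ##                      colClasses = c("numeric", "numeric", "character"))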

At Wednesday 10:08 AM 6/30/2004, Igor Rivin wrote:

>Thank you! It's interesting about S-Plus, since they apparently try to
>support work with much larger data sets by writing everything out to disk
>(thus getting around the, e.g., address space limitations, I guess), so it
>is a little surprising that they did not tweak the I/O more...
>
> Thanks again,
>
> Igor
>



R-help@stat.math.ethz.ch mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Received on Thu Jul 01 03:28:18 2004
