Re: [R] memory limits in R loading a dataset and using the package tree

From: Weiwei Shi <>
Date: Fri 05 Jan 2007 - 20:11:37 GMT

IMHO, R is not well suited to really large-scale data mining, especially when the algorithm is complicated. Some alternatives:

1. Sample your data; you often do not need that many records, and the accuracy may already be good enough with fewer.
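A minimal sketch of option 1, assuming a headered comma-separated file (the file name "mydata.csv" is hypothetical). It still scans the file once as text, but avoids building the full data frame; specifying colClasses in read.table would save further memory.

```r
## Read a random sample of n rows from a delimited file without
## keeping the full data.frame in memory. Assumes the first line
## is a header when header = TRUE.
sample_rows <- function(file, n, sep = ",", header = TRUE) {
  all  <- readLines(file)                 # one pass over the raw text
  body <- if (header) all[-1] else all
  keep <- sample(body, n)                 # sample raw lines, not parsed rows
  txt  <- if (header) c(all[1], keep) else keep
  read.table(textConnection(txt), sep = sep, header = header)
}

## e.g.  small <- sample_rows("mydata.csv", 100000)
```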

2. Find an alternative (commercial software) to do the job if you really need to load everything.

3. Write a wrapper function that repeatedly samples your data, loads each sample into R, and builds a model, until you have n models. Then you can combine them by meta-learning, or by simple majority vote if your problem is classification.
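A rough sketch of option 3, combining by majority vote. The data frame `d` with a factor response `y`, and the sample size, are assumptions; rpart stands in here for whatever classifier you use (the original poster's `tree` would work the same way).

```r
library(rpart)

## Build n models, each on a random sample of 'size' rows, and
## return a prediction function that takes a majority vote.
majority_ensemble <- function(d, n = 10, size = 10000) {
  models <- lapply(seq_len(n), function(i) {
    s <- d[sample(nrow(d), size), ]       # fresh sample per model
    rpart(y ~ ., data = s)
  })
  function(newdata) {
    ## one column of class labels per model
    votes <- sapply(models, function(m)
      as.character(predict(m, newdata, type = "class")))
    ## per-row majority vote across models
    apply(votes, 1, function(v) names(which.max(table(v))))
  }
}
```

In practice you would sample each chunk straight from the file (as in option 1) rather than from an in-memory `d`, so that no single object ever holds the full dataset.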

HTH,

On 1/4/07, domenico pestalozzi <> wrote:
> I think this question has been discussed in other threads, but I can't find
> exactly what I want.
> I'm working in Windows XP with 2GB of memory and a Pentium 4 at 3.00 GHz.
> I need to work with large datasets, generally from 300,000 to 800,000
> records (depending on the project), with about 300 variables
> (...though a dataset of 800,000 records may not be "large" in your
> opinion...). Because we are deciding whether R will be the official software
> in our company, I'd like to know whether the feasibility of using R with
> these datasets depends only on the characteristics of the machine (memory
> and processor).
> If so, we can upgrade the machine (for example, how much memory do you
> recommend?).
> For example, I have a dataset of 200,000 records and 211 variables, but I
> can't load it because R stops working: I monitored the loading
> procedure (read.table in R) with the Windows task manager, and R is
> blocked when the paging file reaches 1.10 GB.
> After this I tried a sample of 100,000 records, and I can correctly load
> the dataset. I'd then like to use the package tree, but after a few seconds
> (I call tree(variable1 ~ ., myDataset)) I get the message "Reached
> total allocation of 1014Mb".
> I'd appreciate your opinions and suggestions, considering that I could
> upgrade my computer's memory.
> pestalozzi
> ______________________________________________
> mailing list
> PLEASE do read the posting guide
> and provide commented, minimal, self-contained, reproducible code.

Weiwei Shi, Ph.D.
Research Scientist
GeneGO, Inc.

"Did you always know?"
"No, I did not. But I believed..."
---Matrix III

Received on Sat Jan 06 12:10:35 2007

Archive maintained by Robert King, hosted by the discipline of statistics at the University of Newcastle, Australia.
Archive generated by hypermail 2.1.8, at Sat 06 Jan 2007 - 01:30:26 GMT.
