Re: [R] Reading in large file in pieces

From: Sean Davis <sdavis2_at_mail.nih.gov>
Date: Sat 24 Dec 2005 - 04:17:08 EST

On 12/23/05 2:41 AM, "Prof Brian Ripley" <ripley@stats.ox.ac.uk> wrote:

> On Thu, 22 Dec 2005, Sean Davis wrote:
> 

>> I have a large file (millions of lines) and would like to read it in pieces.
>> The file is logically separated into little modules, but these modules do
>> not have a common size, so I have to scan the file to know where they are.
>> They are independent, so I don't have to read one at the end to interpret
>> one at the beginning. Is there a way to read and parse one line at a
>> time on the fly, and to do so quickly, or do I need to read, say, 100k
>> lines at a time and then work with those? Only a small piece of each
>> module will remain in memory after each module is parsed.
>>
>> My direct question is: Is there a fast way to parse one line at a time
>> looking for breaks between "modules", or am I better off taking large but
>> manageable chunks from the file and parsing each chunk all at once?
> 
> On any reasonable OS (you have not told us yours), it will make no
> difference as the file reads will be buffered.  Assuming you are doing
> something like opening a connection and calling readLines(n=1), of course.

Thanks. That is indeed the answer, and you are correct that it is quite fast on Mac OS X 10.4.4. Most importantly, it reduces my program's memory usage by roughly an order of magnitude.
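A minimal sketch of the pattern described above: open the connection once and read one line at a time, letting the OS buffer the underlying file reads. The file name, the "//" module delimiter, and the parse.module() helper are hypothetical stand-ins for whatever the real format uses.

    con <- file("bigfile.txt", open = "r")    # file name is illustrative
    module <- character(0)
    results <- list()
    repeat {
        line <- readLines(con, n = 1)
        if (length(line) == 0) {              # end of file
            if (length(module) > 0) {
                results[[length(results) + 1]] <- parse.module(module)
            }
            break
        }
        if (line == "//") {                   # hypothetical module delimiter
            results[[length(results) + 1]] <- parse.module(module)
            module <- character(0)            # keep only the parsed summary
        } else {
            module <- c(module, line)         # accumulate the current module
        }
    }
    close(con)

Since each call touches only one line, memory stays proportional to a single module rather than to the whole file.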

Sean
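For comparison, the chunked alternative raised in the original question might look something like the sketch below: read a large block of lines per call, split out the complete modules, and carry any partial module over to the next chunk. The same hypothetical "//" delimiter and parse.module() hook are assumed.

    con <- file("bigfile.txt", open = "r")
    carry <- character(0)                     # partial module spanning chunks
    repeat {
        chunk <- readLines(con, n = 100000)
        if (length(chunk) == 0) break
        lines <- c(carry, chunk)
        breaks <- which(lines == "//")        # hypothetical delimiter positions
        if (length(breaks) == 0) {
            carry <- lines                    # no complete module yet
            next
        }
        ## everything up to the last delimiter is a set of complete modules
        complete <- lines[seq_len(max(breaks))]
        ids <- cumsum(complete == "//")       # 0 for 1st module, 1 for 2nd, ...
        mods <- split(complete[complete != "//"], ids[complete != "//"])
        ## lapply(mods, parse.module)         # parse each complete module
        if (max(breaks) == length(lines)) {
            carry <- character(0)
        } else {
            carry <- lines[(max(breaks) + 1):length(lines)]
        }
    }
    ## if the file does not end in "//", 'carry' still holds the final module
    close(con)

As the reply above notes, OS-level buffering means this buys little over the one-line loop; it mainly trades code complexity for fewer readLines() calls.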



R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
