Re: [R] Suggestion for big files [was: Re: A comment about R:]

From: François Pinard <>
Date: Mon 09 Jan 2006 - 06:25:08 EST

[Martin Maechler]

> FrPi> Suppose the file (or tape) holds N records (N is not known
> FrPi> in advance), from which we want a sample of M records at
> FrPi> most. [...] If the algorithm is carefully designed, when
> FrPi> the last (N'th) record of the file will have been processed
> FrPi> this way, we may then have M records randomly selected from
> FrPi> N records, in such a way that each of the N records had an
> FrPi> equal probability to end up in the selection of M records. I
> FrPi> can seek out the details if needed.
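The scheme described above is what is now usually called reservoir sampling. The classic version (Waterman's "Algorithm R", as given in Knuth) can be sketched as follows; this is my own illustration in Python for concision, not code taken from SPSS or R:

```python
import random

def reservoir_sample(records, m):
    """Keep a uniform random sample of at most m records from a stream
    whose length N is not known in advance (classic Algorithm R)."""
    reservoir = []
    for t, record in enumerate(records, start=1):
        if t <= m:
            # The first m records fill the reservoir directly.
            reservoir.append(record)
        else:
            # Record t is kept with probability m/t; if kept, it evicts
            # a uniformly chosen current member of the reservoir.
            j = random.randrange(t)
            if j < m:
                reservoir[j] = record
    return reservoir
```

One can verify by induction that after the N'th record is processed, every record has probability M/N of being in the reservoir, which is exactly the property claimed above.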

>[...] I'm also intrigued about the details of the algorithm you
>outline above.

I went through my old SPSS books and related references to find it for you, to no avail (though I confess I did not try very hard). I vaguely remember it was related to Spearman's correlation computation: I did find notes about the "severe memory limitation" of that computation, but nothing about the implemented workaround. I did find other sampling devices, but not the one I remember having read about many years ago.

On the other hand, Googling shows that this topic has been much studied, and that Vitter's algorithm Z seems to be popular nowadays (even if it is not the simplest), because it is more efficient than the others. Google found a copy of the paper:

Here is an implementation for Postgres:

yet I do not find it very readable -- but this is only an opinion: I'm rather demanding in the area of legibility, while many or most people are more courageous than I am! :-)
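The efficiency gain in Vitter's family of algorithms comes from computing how many records to skip outright, instead of drawing one random number per record. Algorithm Z itself is intricate; a later and simpler skip-based relative (Li's "Algorithm L", 1994) conveys the idea and may serve as a more legible sketch -- again my own Python illustration, not the Postgres code:

```python
import math
import random

def reservoir_sample_skips(records, m):
    """Uniform sample of at most m records from an iterator of unknown
    length, jumping over blocks of records rather than drawing one random
    number per record (Li's Algorithm L, a simpler relative of Vitter's Z)."""
    it = iter(records)
    reservoir = []
    for record in it:
        reservoir.append(record)
        if len(reservoir) == m:
            break
    if len(reservoir) < m:
        return reservoir  # the stream held fewer than m records
    # w tracks the largest "key" in the reservoir, in Algorithm L's terms.
    w = math.exp(math.log(random.random()) / m)
    while True:
        # Number of records to skip before the next accepted one is
        # geometrically distributed with parameter w.
        skip = math.floor(math.log(random.random()) / math.log(1.0 - w))
        try:
            for _ in range(skip):
                next(it)  # skipped records never touch the reservoir
            record = next(it)
        except StopIteration:
            return reservoir  # stream exhausted; reservoir is the sample
        reservoir[random.randrange(m)] = record
        w *= math.exp(math.log(random.random()) / m)
```

The point is that the expected number of random draws (and reservoir updates) grows only logarithmically in N once the reservoir is full, rather than linearly as in the per-record scheme.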

François Pinard

Received on Mon Jan 09 06:43:00 2006

This archive was generated by hypermail 2.1.8 : Fri 03 Mar 2006 - 03:41:57 EST