[tahoe-dev] Client/helper RAM usage

Brian Warner warner at lothar.com
Sat Jul 4 23:34:15 PDT 2009


On Fri, 3 Jul 2009 08:18:49 -0600
Shawn Willden <shawn-tahoe at willden.org> wrote:

> Are files being processed (FEC-encoded, etc.) loaded into RAM?  Is it
> possible to upload a 2 GiB file on a machine with only 512 MiB RAM?
> Is there any way to estimate the memory consumption that will be
> required?

Only in small pieces, yes, and yes, respectively :)

We've carefully designed both the mutable/immutable encoding format and
the immutable encode/decode algorithms to minimize the memory footprint
they need while operating. They only ever work on one segment at a time,
which (by default) is 128KiB. In general, the RAM footprint is a small
multiple of the segment size (maybe 3 or 4). We read in a segment's
worth of plaintext from disk, encrypt it (then throw out the
plaintext), encode it (then throw out the ciphertext), push it (then
throw out the shares). It works the same way on the download side.
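Just to illustrate the shape of it (this is only a sketch, not the
actual Tahoe code, and the encrypt/encode/push function names here are
made up), the immutable upload loop is conceptually something like:

    SEGSIZE = 128*1024   # default maximum segment size

    def upload_immutable(f, encrypt_segment, encode_segment, push_shares):
        # Hold at most one segment's worth of plaintext, ciphertext,
        # and shares at a time, so peak RAM stays at a small multiple
        # of SEGSIZE no matter how large the file is.
        while True:
            plaintext = f.read(SEGSIZE)
            if not plaintext:
                break
            ciphertext = encrypt_segment(plaintext)
            del plaintext               # drop the plaintext buffer
            shares = encode_segment(ciphertext)
            del ciphertext              # drop the ciphertext buffer
            push_shares(shares)
            del shares                  # drop the shares before looping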

The mutable publish/retrieve code does not yet do this: because we're
still operating under "SDMF" rules, we only process one segment per
mutable file. The maximum size limit for mutable files was recently
removed, which means the maximum segment size limit was removed
(segsize=filesize). So if you actually upload a 1GB *mutable* file, then
you'll need 3 or 4 GB of RAM. When we implement "MDMF", we'll change the
mutable code to stream segments out the same way that immutable files
do, and then switch back to a 128KiB segsize.
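So, as a rough rule of thumb (my own back-of-the-envelope helper, not
anything that exists in Tahoe), the per-upload working memory looks
like:

    def estimate_working_ram(filesize, mutable=False,
                             segsize=128*1024, copies=4):
        # Immutable files are processed one segment at a time; SDMF
        # mutable files are a single segment, so segsize == filesize.
        if mutable:
            segsize = filesize
        return copies * segsize   # bytes

    estimate_working_ram(2*1024**3)                # immutable: ~512 KiB
    estimate_working_ram(2*1024**3, mutable=True)  # SDMF mutable: ~8 GiB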

There's a considerable static footprint, of course (this is Python,
after all). The buildbot performance page has a graph of this footprint
over time. On a 32-bit machine, it's something like 40MB.

> I'm thinking about putting a client or maybe a helper on a Linux
> Virtual Server that I have, but it has only 512 MiB of RAM, and that
> is a hard figure.  There is no swap.

I'd expect that to work just fine.
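Quick sanity check for the 2 GiB immutable case, using the rough
numbers above (the real figures will vary a bit):

    static = 40 * 2**20         # ~40 MB baseline Python footprint
    working = 4 * 128 * 2**10   # a few copies of a 128 KiB segment
    total = static + working    # ~40.5 MiB, comfortably under 512 MiB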

cheers,
 -Brian

