#1045 closed defect

Memory leak during massive file upload — at Initial Version

Reported by: francois Owned by: nobody
Priority: critical Milestone: 1.8.1
Component: code Version: 1.6.1
Keywords: performance reliability upload download memory sftp unfinished-business Cc: francois@…, zooko
Launchpad Bug:

Description

Today, I copied about 12'000 files for a total size of about 52 GB into Tahoe with the SFTP frontend.

Here's what top has to say about the Tahoe process after this operation.

 2765 francois  20   0 2059m 1.5g 2472 D    2 75.4 527:08.83 python                    

I will update this ticket as soon as I can gather more details.

David-Sarah Hopwood proposed running the same test a second time via the WAPI, to help narrow down where the leak is.
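A minimal sketch of such a WAPI-based repetition, assuming the gateway's default web port (http://127.0.0.1:3456) and the PUT /uri upload endpoint; the paths, missing error handling, and whole-file reads are illustrative only:

{{{
#!python
import os
import sys
import urllib.request

# Assumed gateway URL; adjust to your node's webport.
GATEWAY = "http://127.0.0.1:3456"

def upload_via_wapi(path):
    """PUT one file to /uri as an immutable file; returns the file cap."""
    with open(path, "rb") as f:
        data = f.read()  # whole-file read; fine for a sketch, not for huge files
    req = urllib.request.Request(GATEWAY + "/uri", data=data, method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("ascii")

if __name__ == "__main__":
    # Walk a directory tree and upload every file, mimicking the SFTP copy.
    root = sys.argv[1]
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            print(upload_via_wapi(os.path.join(dirpath, name)))
}}}

Running this against the same 12'000-file tree while watching the node's memory would show whether the growth follows the SFTP frontend specifically or the upload path in general.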

Here is what Brian Warner proposed on IRC:

Keep track of the process size vs. time, with munin or a script that saves values, and then graph them with gnuplot or something (a minimal polling sketch follows below).
I think Tahoe's /stats WAPI will give you process-memory-size info.
The idea is to do some operation repeatedly, measure the process-space change while that's running, then switch to some other operation and measure that slope, and look for differences.
'cp' to an SFTP-mounted FUSE-ish thing vs. 'tahoe cp' might be a good comparison.
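As a rough sketch of that polling idea, the loop below samples the node's statistics page once a minute and appends a timestamp plus the reported memory figure to a file for later plotting. The URL, the t=json query, and the key name process.memory.rss are assumptions; check your node's /statistics page for the fields it actually exposes, and fall back to reading RSS from ps or /proc if no memory figure is there.

{{{
#!python
import json
import time
import urllib.request

# Assumed statistics endpoint and key; verify against your node's /statistics page.
STATS_URL = "http://127.0.0.1:3456/statistics?t=json"
MEMORY_KEY = "process.memory.rss"   # hypothetical key name

def sample():
    # Fetch the JSON stats and pull out the one value we care about.
    with urllib.request.urlopen(STATS_URL) as resp:
        stats = json.load(resp).get("stats", {})
    return stats.get(MEMORY_KEY)

if __name__ == "__main__":
    with open("tahoe-memory.dat", "a") as out:
        while True:
            out.write("%d %s\n" % (time.time(), sample()))
            out.flush()
            time.sleep(60)
}}}

The resulting file can be graphed with gnuplot (plot 'tahoe-memory.dat' using 1:2 with lines); compare the slope while the SFTP copy is running against the slope during a plain 'tahoe cp' of the same tree.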

Change History (0)
