Opened at 2010-05-18T01:14:15Z
Last modified at 2010-10-31T03:01:41Z
#1045 closed defect
Memory leak during massive file upload — at Version 2
Reported by: | francois | Owned by: | somebody |
---|---|---|---|
Priority: | critical | Milestone: | 1.8.1 |
Component: | code | Version: | 1.6.1 |
Keywords: | performance reliability upload download memory sftp unfinished-business | Cc: | francois@…, zooko |
Launchpad Bug: | | | |
Description (last modified by davidsarah)
Today, I copied about 12,000 files, about 52 GB in total, into Tahoe through the SFTP frontend.
Here's what top reports for the Tahoe process after this operation:

     PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    2765 francois  20   0 2059m 1.5g 2472 D    2 75.4 527:08.83 python
I will update this ticket as soon as I can gather more details.
David-Sarah Hopwood proposed running the same test a second time via the wapi, to help narrow down where the leak is.
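A repeat run through the wapi could be scripted in a few lines. This is only a sketch, assuming a node whose web port is at the default 127.0.0.1:3456 and using the wapi's unlinked-upload endpoint (PUT /uri, which responds with the new file's cap); the `testdata` directory and the script itself are made up for illustration:

```python
import os
import urllib.request

NODE = "http://127.0.0.1:3456"  # assumed default web port

def upload(path):
    # PUT the file body to /uri; the response body is the new file's cap.
    with open(path, "rb") as f:
        body = f.read()
    req = urllib.request.Request(NODE + "/uri", data=body, method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("ascii")

if __name__ == "__main__":
    for name in sorted(os.listdir("testdata")):  # hypothetical test corpus
        print(name, upload(os.path.join("testdata", name)))
```

If memory stays flat during such a run but grows during the SFTP run, the leak is more likely in the SFTP frontend than in the upload pipeline itself.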
Here is what Brian Warner proposed on IRC:

- keep track of the process size vs. time, with munin or a script that saves values and then graphs them with gnuplot or something (a sketch of such a script follows below)
- I think tahoe's /stats WAPI will give you process-memory-size info
- the idea is to do some operation repeatedly, measure the process-space change while that's running, then switch to some other operation and measure that slope, and look for differences
- 'cp' to an SFTP-mounted FUSE-ish thing vs. 'tahoe cp' might be a good comparison
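Such a sampling script does not need to depend on Tahoe at all. The following is a minimal sketch, assuming a Linux host (it reads VmRSS from /proc/<pid>/status); the script name, polling interval, and output filename are all invented for illustration:

```python
#!/usr/bin/env python
"""memwatch.py (hypothetical): append timestamped RSS samples for one
process to a data file that gnuplot can plot directly."""
import sys
import time

def rss_kib(pid):
    # VmRSS in /proc/<pid>/status is reported in kB (Linux-specific).
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("no VmRSS found for pid %d" % pid)

def watch(pid, interval=10, outfile="tahoe-rss.dat"):
    with open(outfile, "a") as out:
        while True:
            out.write("%d %d\n" % (time.time(), rss_kib(pid)))
            out.flush()  # keep the file plottable while sampling continues
            time.sleep(interval)

if __name__ == "__main__":
    watch(int(sys.argv[1]))
```

Run it against the node's PID while one kind of operation is in progress, then again during the other, and compare the slopes, e.g. in gnuplot with `plot "tahoe-rss.dat" using 1:2 with lines`.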
Change History (2)
comment:1 Changed at 2010-05-18T01:24:48Z by davidsarah
- Component changed from unknown to code
- Description modified (diff)
- Keywords memory sftp added
- Milestone changed from 1.8.0 to undecided
- Owner changed from nobody to somebody
comment:2 Changed at 2010-05-18T01:26:05Z by davidsarah
- Description modified (diff)