Opened at 2011-08-28T22:28:07Z
Last modified at 2013-08-31T00:54:34Z
#1513 new defect
memory usage in MDMF publish
Reported by: | warner | Owned by: | |
---|---|---|---|
Priority: | major | Milestone: | eventually |
Component: | code-mutable | Version: | 1.9.0a1 |
Keywords: | mutable mdmf memory-leak performance docs | Cc: | |
Launchpad Bug: | | | |
Description
I did a 'tahoe put --mutable --mutable-type=mdmf foo' of a 210MB file. The client process swelled to 1.15GB RSS, making my entire system pretty unresponsive. The publish eventually succeeded, and memory usage returned to normal.
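For anyone reproducing this, here's roughly how I watched the memory: poll the node's RSS while the publish runs. This sketch uses `psutil` and takes the node's PID on the command line, both just my choices; `ps` or `top` work equally well.

```python
# Reproduction sketch: print a process's RSS until it exits.
# psutil is an assumption here; any process monitor would do.
import sys
import time
import psutil

def watch_rss(pid: int, interval: float = 1.0) -> None:
    proc = psutil.Process(pid)
    peak = 0
    while True:
        try:
            rss = proc.memory_info().rss
        except psutil.NoSuchProcess:
            break  # the node exited
        peak = max(peak, rss)
        print(f"rss={rss / 2**20:6.0f} MiB   peak={peak / 2**20:6.0f} MiB")
        time.sleep(interval)

if __name__ == "__main__":
    watch_rss(int(sys.argv[1]))
```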
I'm guessing that either there's a design problem where the publish tries to upload all segments in parallel, or there's a bug in the Pipeline code that leaves all shares in memory at the same time. (For what it's worth, 210MB times the default 10/3 share expansion is about 700MB, so 1.15GB is roughly what you'd see if every encoded share, plus a transient copy or two, were resident at once.)
Since MDMF is supposed to make it possible to work with large files, I think its memory usage should be similar to CHK uploads: capped at a small constant times the segsize.
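To make that expectation concrete, here's a minimal asyncio sketch of the bounded-depth pattern I have in mind. This is not Tahoe's actual Publish or Pipeline code; `send_segment`, `read_segment`, and `depth=2` are all made up for illustration.

```python
# Sketch of the two memory behaviors (not Tahoe code).
import asyncio

SEGSIZE = 128 * 1024  # default max segment size

async def send_segment(seg: bytes) -> None:
    # stand-in for writing one segment's shares to the storage servers
    await asyncio.sleep(0.01)

async def publish_all_at_once(read_segment, nsegs):
    # suspected failure mode: every segment buffer is alive at once, so
    # peak memory grows with the whole (expanded) file, not with segsize
    segs = [read_segment(i) for i in range(nsegs)]
    await asyncio.gather(*(send_segment(s) for s in segs))

async def publish_pipelined(read_segment, nsegs, depth=2):
    # desired behavior: at most `depth` segments in flight, so peak
    # memory stays near depth * SEGSIZE regardless of file size
    sem = asyncio.Semaphore(depth)

    async def one(i):
        try:
            await send_segment(read_segment(i))  # read lazily
        finally:
            sem.release()

    tasks = []
    for i in range(nsegs):
        await sem.acquire()
        tasks.append(asyncio.create_task(one(i)))
    await asyncio.gather(*tasks)
```

A small depth still lets the writer overlap encoding one segment with the network round-trip of the previous one, which is the property that keeps CHK upload memory flat.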
It would be nice to fix this for 1.9, but since MDMF is still experimental, I'm willing to ship 1.9 without the fix.