#1863 closed defect (invalid)

Work-Around for > ~150MB files on Least Authority TLoS3

Reported by: nejucomo Owned by: davidsarah
Priority: normal Milestone: undecided
Component: unknown Version: 1.9.2
Keywords: lae usability large workaround Cc:
Launchpad Bug:

Description

Background:

TLoS3, aka Tahoe-LAFS on S3, currently has a deficiency where large files (> ~150 MB) fail to upload. There is a ticket on their support site explaining that the fix is already implemented but not yet deployed.

Until the fix is deployed, users with large files need a work-around. This ticket collects folk wisdom about work-arounds. Please add recipes you have successfully used in the comments.
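One generic recipe is to split a large file into chunks below the ~150 MB limit, upload the chunks individually, and reassemble them after download. A minimal sketch using GNU coreutils (`split`/`cat`); the chunk size, file names, and alias are illustrative, and the commented `tahoe put` loop is one assumed way to upload the pieces:

```shell
# Demo uses a 10 MB file and 4 MB chunks to keep things small;
# for the real problem you would use a chunk size under ~150 MB.
dd if=/dev/urandom of=big_stuff bs=1M count=10 status=none

# Split into numbered chunks: big_stuff.part.00, big_stuff.part.01, ...
split -b 4M -d big_stuff big_stuff.part.

# Each chunk could then be uploaded individually, e.g. (illustrative):
#   for part in big_stuff.part.*; do tahoe put "$part" big_file_backup.1:"$part"; done

# After downloading the chunks, reassemble in name order and verify:
cat big_stuff.part.* > big_stuff.rejoined
cmp big_stuff big_stuff.rejoined && echo "reassembly OK"
```

`split -d` writes numerically suffixed chunks so a plain shell glob concatenates them back in the right order.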

Change History (5)

comment:1 Changed at 2012-11-18T07:48:35Z by nejucomo

This "workaround" ticket may be unnecessary:

I just attempted to set up a demo so that I could develop and paste a find-and-split-based work-around, but it appears the 200 MB file uploaded successfully:

$ find . -type f -print0 | xargs -0 ls -lh
-rw-r--r-- 1 n n  10M Nov 17 20:26 ./a/other_stuff
-rw-r--r-- 1 n n  42M Nov 17 20:26 ./a/stuff
-rw-r--r-- 1 n n 200M Nov 17 20:27 ./b/big_stuff
-rw-r--r-- 1 n n  17M Nov 17 20:26 ./b/more_stuff

$ tahoe create-alias big_file_backup.1
Alias 'big_file_backup.1' created

$ tahoe backup . big_file_backup.1:
 4 files uploaded (0 reused), 0 files skipped, 3 directories created (0 reused), 0 directories skipped
 backup done, elapsed time: 0:51:20

$ tahoe ls -l big_file_backup.1:Latest/b
-r-- 209715200 Nov 17 20:27  big_stuff
-r--  17825792 Nov 17 20:26 more_stuff

comment:2 Changed at 2012-11-18T13:02:38Z by zooko

Fortunately LeastAuthority.com now has graphs of memory usage on all customer storage servers! So I can go look at that and see what effect your 200 MB upload had on your storage server. (The problem with large uploads is all about RAM usage in the storage server.)

Unfortunately, I currently don't have the password to LeastAuthority.com's graphs, so I'll have to wait until another member of the LeastAuthority.com team wakes up. ☺

comment:3 Changed at 2012-11-19T23:19:12Z by nejucomo

I'm migrating away from the strategy of using trac tickets for "work-arounds" or "recipes": in the former case, the work-arounds can go on the original bug ticket; in the latter, there are no clear criteria for closing the ticket.

In both cases if there's a large enough need, the work-around/recipe should probably have a wiki page.

Therefore I propose closing this ticket after we link to the ticket outlining the original issue and fix.

comment:4 Changed at 2012-11-19T23:21:31Z by nejucomo

  • Resolution set to invalid
  • Status changed from new to closed

The relevant non-workaround tickets are #1786 and #1796. If I have better documentation on a work-around, I'll post it to one of those tickets.

comment:5 Changed at 2012-11-19T23:24:03Z by davidsarah

Related tickets:

  • #1638 (S3 backend: Upload of large files consumes memory > twice the size of the file). This is closed because the cloud backend fixes it, and the S3 backend will never be merged.
  • #1786 (cloud backend: limit memory usage)
  • #1796 (refuse to upload/download a mutable file if it cannot be done in the available memory)
  • #1819 (cloud backend: merge to trunk)
Note: See TracTickets for help on using tickets.