Opened at 2011-02-03T07:15:36Z
Last modified at 2011-10-11T01:52:14Z
#1354 new enhancement
compression (e.g. to efficiently store sparse files)
Reported by: | zooko | Owned by: | |
---|---|---|---|
Priority: | major | Milestone: | undecided |
Component: | code-encoding | Version: | 1.8.2 |
Keywords: | compression space-efficiency performance | Cc: | |
Launchpad Bug: | | |
Description
I was just re-reading the Fossil/Venti tech docs and realized that their method of omitting trailing zeroes from each block might be adapted to Tahoe-LAFS to efficiently store sparse files. (Tahoe-LAFS would do this per Tahoe-LAFS "segment" where Venti did it per Venti "block".)
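For concreteness, a minimal sketch of that per-segment trick (hypothetical helper names, not actual Tahoe-LAFS code): each stored segment keeps only the bytes up to the last nonzero byte, plus the original segment length so a reader can re-pad on the way out.

```python
# Hypothetical sketch of Venti-style trailing-zero trimming, applied per segment.

def trim_segment(segment):
    """Return (trimmed_bytes, original_length) with trailing zero bytes removed."""
    return segment.rstrip(b"\x00"), len(segment)

def restore_segment(trimmed, original_length):
    """Re-append the zero bytes that trim_segment() removed."""
    return trimmed + b"\x00" * (original_length - len(trimmed))

assert restore_segment(*trim_segment(b"data\x00\x00\x00\x00")) == b"data\x00\x00\x00\x00"
```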
This is of course a very specific form of compression and might be subsumed by more general compression instead.
On the other hand, compression -- either the specific "remove runs of zeroes" compression from Venti or more general compression -- is often a loss at this layer, since the biggest files are almost always incompressible (because they are already compressed).
Are sparse files common?
Change History (5)
comment:1 Changed at 2011-02-03T18:38:37Z by warner
Neat trick!
No, from what I've seen, sparse files are not very common. The only things that come to mind are coredumps and database files, and I suspect that most modern (cross-platform compatible) DB files are not sparse anymore.
It shouldn't be too hard to rig up a tool to test that claim: os.walk the tree, use stat to count the allocated blocks, and compare that against st_size/blocksize; if they're too far apart, you've probably got a sparse file (see the sketch below).
The question of compression is an interesting one. To retain our low alacrity, we'd want to compress each segment separately, which would then result in variable-sized segments, so we'd need a new share layout (with a start-of-each-block table). Compressing the whole file would let us squeeze it down further, of course, but you can't generally get random access that way. There may be some clever trick wherein we save a copy of the compressor's internal state between segments to allow both random access *and* good whole-file compression, but I'd be afraid of the complexity/fragility of that approach.
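A rough sketch of the survey tool suggested above, assuming a POSIX stat where st_blocks is counted in 512-byte units (the 0.5 slack threshold and function names are illustrative):

```python
# Walk a tree and flag files whose allocated blocks cover much less than st_size.
import os

def find_sparse_files(root, slack=0.5):
    """Yield (path, st_size, allocated_bytes) for files that look sparse."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue
            if st.st_size == 0 or not hasattr(st, "st_blocks"):
                continue
            allocated = st.st_blocks * 512  # st_blocks is in 512-byte units on POSIX
            if allocated < st.st_size * slack:
                yield path, st.st_size, allocated

for path, size, allocated in find_sparse_files("/var"):
    print("%s: size=%d allocated=%d" % (path, size, allocated))
```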
comment:2 Changed at 2011-02-04T04:52:41Z by davidsarah
Note that this leaks information about how much of each segment is sparse. You'd have to do compression before encryption to avoid that (but then how would range requests know which segments to get?)
comment:3 Changed at 2011-02-04T05:52:51Z by warner
Hm. You could compress data a chunk at a time, watching the output size until it grew above some min-segment-size threshold, then flush the compression stream and declare end-of-segment. Then start again with the remaining data, repeat. Now you've got a list of compressed segments and a table with the amount of plaintext that went into each one. You encrypt the table with the readcap and store it in each share. You also store (unencrypted) the table of ciphertext segment sizes. (unlike the uncompressed case, the plaintext-segment-size table will differ significantly from the ciphertext-segment-size table).
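A minimal sketch of that chunk-at-a-time scheme using Python's zlib, assuming a fresh compression stream per segment so each segment stays independently decompressible (the 1 MiB threshold and names are illustrative, not an actual share format):

```python
import zlib

def compress_into_segments(plaintext, min_segment_size=1024 * 1024, chunk_size=64 * 1024):
    """Return (segments, plaintext_sizes): compressed segments plus a table of
    how much plaintext went into each one."""
    segments, plaintext_sizes = [], []
    comp = zlib.compressobj()
    pending, consumed, emitted = [], 0, 0
    for start in range(0, len(plaintext), chunk_size):
        chunk = plaintext[start:start + chunk_size]
        out = comp.compress(chunk)
        pending.append(out)
        emitted += len(out)
        consumed += len(chunk)
        if emitted >= min_segment_size:
            # flush the stream and declare end-of-segment, then start a
            # fresh compressor for the remaining data
            pending.append(comp.flush())
            segments.append(b"".join(pending))
            plaintext_sizes.append(consumed)
            comp = zlib.compressobj()
            pending, consumed, emitted = [], 0, 0
    if consumed:
        # final (possibly short) segment
        pending.append(comp.flush())
        segments.append(b"".join(pending))
        plaintext_sizes.append(consumed)
    return segments, plaintext_sizes
```

A range read would then walk the plaintext_sizes table to decide which compressed segments to fetch and decompress.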
Alacrity would rise: you'd have to download the whole encrypted-segment-size table (which is O(filesize), although the multiplier is very small, something like 8 bytes per segment). There's probably a clever O(log(N)) scheme lurking in there somewhere, but I expect it'd involve adding roundtrips (you store multiple layers of offset tables: first you fetch the coarse one that tells you which parts of the next-finer-grained table you need to fetch, then the last table you fetch has actual segment offsets).
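One possible shape for that multi-layer lookup, sketched with hypothetical helpers (fetch_fine_table stands in for whatever extra roundtrip retrieves the finer-grained table):

```python
import bisect

def find_segment(coarse_offsets, fetch_fine_table, target_plaintext_offset):
    """coarse_offsets[i] is the plaintext offset where the i-th group of
    segments begins; fetch_fine_table(i) downloads just that group's
    per-segment start offsets (one extra roundtrip)."""
    group = bisect.bisect_right(coarse_offsets, target_plaintext_offset) - 1
    fine_offsets = fetch_fine_table(group)  # roundtrip for the finer table
    seg = bisect.bisect_right(fine_offsets, target_plaintext_offset) - 1
    return group, seg
```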
This scheme requires a compression library that either avoids deep pipelining or is willing to tell you how much more compressed output would be emitted if you did a flush() right now. I don't think most libraries have this property. You declare a segment to be finished as soon as you've emitted, say, 1MB of compressed data, then you tell the compressor to flush the pipeline and add whatever else it gives you to the segment. The concern is that you could wind up with a segment significantly larger than 1MB if the pipeline is deep.
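With Python's zlib specifically, one way to approximate "how much would a flush emit right now" is to flush a copy of the compressor and measure the result, leaving the real stream untouched; whether that is cheap enough in practice is an open question:

```python
import zlib

def pending_flush_size(comp):
    # Flush a copy so the real compressor's pipeline is not disturbed.
    return len(comp.copy().flush(zlib.Z_SYNC_FLUSH))

comp = zlib.compressobj()
comp.compress(b"some data " * 1000)
print(pending_flush_size(comp))  # bytes a sync flush would add right now
```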
comment:4 Changed at 2011-05-21T00:17:48Z by davidsarah
- Keywords compression space-efficiency performance added
comment:5 Changed at 2011-10-11T01:52:14Z by davidsarah
#994 is about supporting compression at the WAPI layer, rather than the storage layer. That approach has the advantage that you also get compression between the gateway and HTTP client, if the HTTP client supports it.
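For illustration only: under the #994 approach an HTTP client would just advertise gzip support to the gateway; the gateway address and filecap below are placeholders, and this presumes the gateway actually implemented HTTP-layer compression.

```python
import gzip
import urllib.request

# Hypothetical GET of a filecap from a local gateway, asking for gzip.
req = urllib.request.Request(
    "http://127.0.0.1:3456/uri/URI:CHK:...",  # placeholder gateway URL and cap
    headers={"Accept-Encoding": "gzip"})
resp = urllib.request.urlopen(req)
body = resp.read()
if resp.headers.get("Content-Encoding") == "gzip":
    body = gzip.decompress(body)
```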