= [wiki:Summit 2nd Summit] Day 3 =

10-Nov-2011, Mozilla SF. No video due to technical problems.

== Attendees ==

 * Zooko
 * David-Sarah
 * Zancas
 * Brian
 * Bill Frantz
 * Sam Stoller

== Goals ==

 * David-Sarah to present LAE's s3-backend branch
 * figure out a merge plan for s3-backend and accounting
 * figure out the leasedb crawler: detailed case analysis

== Topics ==

 * (one-person) Tahoe !InstallFest!
   * Successfully got Tahoe installed on Bill Frantz's laptop. Excitement ensued over the laptop's lack of XCode and gcc. Eggs were built and installation proceeded smoothly.
 * (one-person) LAE Customer Setup!
   * Successfully (eventually) got Brian signed up as an LAE customer. Kind of neat to include a PGP fingerprint on a signup form. Succeeded in uploading a (small) test file.
 * quick review of LAE's s3-backend branch (davidsarah)
   * Brian was hesitant about the apparent complexity, but this was mitigated by the pre-existing messy complexity of the non-pluggable-backend code.
   * All parties are looking forward to ripping out the lease code and replacing it with whatever comes out of Brian's Accounting project.
   * The S3 backend currently has memory-footprint problems with large shares.
   * The branch also includes extra cleanups: {{{twisted.python.filepath}}}-ification, whitespace cleanups, {{{Interface}}} de-parameterization, and client-side comment fixes. Hopefully these will turn into separate patches which can be landed independently.
 * discussion of mutable-file CAP stuff
   * Proposal to use two-phase commit and locking to improve mutable-file data preservation.
   * The general idea: servers allow one client to lock the next version of a share (for some limited time), and clients obtain locks on all shares before committing to a change. If a client observes contention (i.e. it is refused a lock because a second client already grabbed it), it abandons all of its locks and backs off.
   * Additional idea: servers retain the previous version of a share until a third phase lets them delete it. Clients obtain locks and upload+commit v2; when all servers report success, the clients delete v1. Some failure cases result in a mix of (v1-only) and (v1+v2) servers. Repair would need to spot these and be able to roll back to v1. Servers would need to report all versions that had been committed. Readers could see both v1 and v2, and deliver v1 if v2 is not recoverable.
 * quick review of MDMF, bringing David-Sarah's diagram up to date
   * [attachment:day3-MDMF.jpg whiteboard]
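The locking proposal above can be sketched as toy Python. This is only an illustration of the three-phase idea discussed at the summit (lock all shares, commit v2 while retaining v1, then delete v1 once every server reports success); all class and method names here are hypothetical, not actual Tahoe-LAFS or LAE APIs:

```python
import time

class ShareLockServer:
    """Toy model of a storage server that grants a time-limited write lock
    on the next version of a share. Hypothetical names; a sketch of the
    summit proposal, not real Tahoe-LAFS code."""
    LOCK_TIMEOUT = 30  # seconds; locks are granted "for some limited time"

    def __init__(self):
        self.versions = {"v1": b"old data"}  # committed versions, kept until phase 3
        self._lock_holder = None
        self._lock_expiry = 0.0

    def acquire_lock(self, client_id, now=None):
        """Grant the lock unless another client holds an unexpired lock."""
        now = time.monotonic() if now is None else now
        if self._lock_holder is not None and now < self._lock_expiry:
            return self._lock_holder == client_id  # refused: contention
        self._lock_holder = client_id
        self._lock_expiry = now + self.LOCK_TIMEOUT
        return True

    def release_lock(self, client_id):
        if self._lock_holder == client_id:
            self._lock_holder = None

    def commit(self, client_id, data):
        """Phase 2: store v2 alongside v1 (v1 is retained, not overwritten)."""
        assert self._lock_holder == client_id
        self.versions["v2"] = data
        self.release_lock(client_id)

    def delete_old(self):
        """Phase 3: every server committed v2, so v1 may be dropped."""
        self.versions.pop("v1", None)


def update_share(servers, client_id, new_data):
    """Client side of the three-phase update with back-off on contention."""
    # Phase 1: lock every server; on contention, abandon all held locks.
    locked = []
    for s in servers:
        if s.acquire_lock(client_id):
            locked.append(s)
        else:
            for held in locked:
                held.release_lock(client_id)
            return False  # back off; caller retries later
    # Phase 2: commit the new version everywhere (v1 still retained).
    for s in servers:
        s.commit(client_id, new_data)
    # Phase 3: all servers reported success, so delete the old version.
    for s in servers:
        s.delete_old()
    return True
```

If the client crashes between phases 2 and 3, some servers hold (v1+v2) while others hold only v1, which is exactly the mixed state the notes say repair would have to detect and roll back.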