#265 closed enhancement (fixed)

Prevent uncoordinated writes locally.

Reported by: nejucomo Owned by: zooko
Priority: major Milestone: 1.1.0
Component: code-mutable Version: 0.7.0
Keywords: coordination consistency Cc: nejucomo, booker, warner, secorp
Launchpad Bug:

Description

Tahoe does not, in general, attempt to solve the problem of uncoordinated writes. However, it could make it hard for a single user, using a single node, to trigger uncoordinated writes, by detecting and serializing such writes locally.

Change History (8)

comment:1 Changed at 2008-03-27T05:21:13Z by zooko

  • Cc nejucomo booker warner secorp added
  • Milestone changed from undecided to 1.0.1
  • Owner set to zooko
  • Status changed from new to assigned

We just had a conversation with MikeB and Peter about how the allmydata.com 3.0 native Windows integration triggers some uncoordinated writes, because links of child directories and unlinks of any kind are issued without coordination.

It would be good to serialize those to prevent errors, but it would require a bit of restructuring of MikeB's Windows integration code.

On the way out to the taxi, Nathan reminded me of this idea -- that the Tahoe node could serialize those for you. We could probably do it in Tahoe more easily than Mike could do it in the Windows integration.

The first step, of course, would be to write unit tests that issue two successive updates through the WAPI to one mutable file. Using a Fake object for the mutable file, the tests would ensure that the first update does not complete before the second is issued, and that the WAPI layer does not issue the second update to the Fake mutable file object until the first update has finished.
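A minimal sketch of that test idea, in plain Python rather than Twisted: a fake mutable file records when each update starts and finishes, and the serializing layer must not issue the second update until the first one's completion callback has fired. All class and method names here are illustrative, not Tahoe's actual API.

```python
class FakeMutableFile:
    """Records update activity; completion is fired manually by the test."""
    def __init__(self):
        self.log = []
        self._pending = []

    def update(self, contents):
        self.log.append(("start", contents))
        done = []
        def finish():
            self.log.append(("finish", contents))
            for cb in done:
                cb()
        self._pending.append(finish)
        return done.append  # caller registers a completion callback

    def fire_next(self):
        self._pending.pop(0)()  # complete the oldest outstanding update

class SerializingWrapper:
    """Queue updates so only one is outstanding at a time."""
    def __init__(self, node):
        self.node = node
        self._busy = False
        self._queue = []

    def update(self, contents):
        if self._busy:
            self._queue.append(contents)  # second update waits its turn
            return
        self._busy = True
        self.node.update(contents)(self._on_done)

    def _on_done(self):
        self._busy = False
        if self._queue:
            self.update(self._queue.pop(0))

f = FakeMutableFile()
w = SerializingWrapper(f)
w.update("v1")
w.update("v2")          # queued: v1 has not finished yet
assert f.log == [("start", "v1")]
f.fire_next()           # v1 finishes; only now is v2 issued
assert f.log == [("start", "v1"), ("finish", "v1"), ("start", "v2")]
```

The key assertion is the first one: issuing the second update must not cause a second "start" entry while the first update is still outstanding.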

comment:2 Changed at 2008-03-27T16:31:49Z by zooko

Oh wait, even better: do this serialization in the MutableFileNode object itself, and have the wapi layer simply guarantee that whenever you make a call to a mutable file or directory, if a MutableFileNode object or Dirnode object for that file or directory is already in memory, it uses that object rather than creating a new one pointing to the same mutable file or directory.

comment:3 Changed at 2008-03-27T17:59:30Z by warner

Yeah, I've wondered if the Client should keep a weak-value-dict that maps read-cap to filenode, so it could make a promise that there will never be more than a single filenode for any given mutable file. Then the MutableFileNode.update method would need internal locking: only one update may be active at a time.

comment:4 Changed at 2008-04-24T23:25:39Z by warner

The MutableFileNode object now does internal serialization: if a single instance is asked to perform an operation while it's in the middle of another operation, the second will wait until the first has finished.

To make this useful for external webapi callers, we still need to implement the weak-value-dict scheme. Basically client.create_node_from_uri() should look up the URI in a weak-value-dict, and return the value if found. If not, it should create a new node, then add it to the weak-value-dict before returning it.
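The lookup scheme described above can be sketched with Python's `weakref.WeakValueDictionary`: one node object per mutable-file URI within a single client, with cache entries disappearing automatically once no one holds the node. The class and method names below are hypothetical stand-ins, not the actual Tahoe interfaces.

```python
import weakref

class MutableFileNode:
    def __init__(self, uri):
        self.uri = uri

class Client:
    def __init__(self):
        # maps URI -> node; an entry vanishes when its node is garbage-collected
        self._node_cache = weakref.WeakValueDictionary()

    def create_node_from_uri(self, uri):
        node = self._node_cache.get(uri)
        if node is None:
            node = MutableFileNode(uri)   # stand-in for real node creation
            self._node_cache[uri] = node
        return node

c = Client()
a = c.create_node_from_uri("URI:SSK:example")
b = c.create_node_from_uri("URI:SSK:example")
assert a is b   # same in-memory object, so per-node serialization is effective
```

Because both callers get the very same object, the internal serialization in that object covers every caller in the process.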

It will still be possible to get, say, a write-node and a read-node to the same file in the same process, but these won't collide with each other in bad ways (only annoying ones: a read could fail because the write-node modified the shares as the read was trying to fetch them).

comment:5 Changed at 2008-04-24T23:44:31Z by warner

  • Component changed from code-frontend to code-mutable

comment:6 Changed at 2008-04-24T23:46:07Z by warner

see also #391

comment:7 Changed at 2008-05-05T21:08:36Z by zooko

  • Milestone changed from 1.0.1 to 1.1.0

Milestone 1.0.1 deleted

comment:8 Changed at 2008-05-09T01:20:24Z by warner

  • Resolution set to fixed
  • Status changed from assigned to closed

The weak-value-dict is now present, added in 26187bfc8166a868.
