[tahoe-lafs-trac-stream] [tahoe-lafs] #1869: pluggable backends: serialize backend operations on a shareset

tahoe-lafs trac at tahoe-lafs.org
Wed Apr 24 00:08:18 UTC 2013


#1869: pluggable backends: serialize backend operations on a shareset
-------------------------+-------------------------------------------------
     Reporter:           |      Owner:
  davidsarah             |     Status:  new
         Type:  defect   |  Milestone:  1.11.0
     Priority:  normal   |    Version:  cloud-branch
    Component:  code-    |   Keywords:  cloud-backend storage shareset
  storage                |  cache design-review-needed
   Resolution:           |
Launchpad Bug:           |
-------------------------+-------------------------------------------------

Comment (by daira):

 Zooko asked for clarification on the problem this ticket is trying to
 solve.

 Consider a remote operation such as
 {{{RIStorageServer.slot_testv_and_readv_and_writev}}}. The specification
 of that operation says that the tests, reads, and writes are done
 atomically with respect to other remote operations. So for example if the
 operation succeeds, then the reads will always be consistent with the
 tests.

 Before pluggable backends, that was achieved by the implementation of
 {{{slot_testv_and_readv_and_writev}}} being synchronous. But for a cloud
 backend, the implementation ''cannot'' be synchronous because it must make
 HTTP[S] requests. Using a synchronous HTTP implementation would be a bad
 idea because the latency of HTTP (especially the worst-case latency) is
 much greater than the latency of local disk, and the reactor would be
 blocked for the whole sequence of requests, even though operations ''on
 other sharesets'' could safely proceed in parallel. So what happens is
 that the backend code creates a Deferred for the result of the operation
 and returns it immediately. Nothing prevents the callbacks of distinct
 operations on the same shareset from interleaving.
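 To make the hazard concrete, here is a minimal asyncio sketch (asyncio
 standing in for Twisted Deferreds; none of these names come from the
 Tahoe codebase) in which a read and a write on the same share are each
 split across two backend requests, and their steps interleave so that the
 read observes a half-written state:

```python
import asyncio

share = {"data": b"AAAA"}  # hypothetical share contents

async def read_share():
    # A read split across two backend requests (two awaits).
    first = share["data"][:2]
    await asyncio.sleep(0)          # e.g. first HTTP GET completes
    return first + share["data"][2:]

async def write_share(new):
    # A write split across two backend requests.
    share["data"] = share["data"][:2] + new[2:]
    await asyncio.sleep(0)          # e.g. first HTTP PUT completes
    share["data"] = new[:2] + share["data"][2:]

async def main():
    # Nothing serializes these two operations, so their steps interleave.
    observed, _ = await asyncio.gather(read_share(), write_share(b"BBBB"))
    return observed

observed = asyncio.run(main())
# The read can observe the half-written state b"AABB" -- neither the old
# contents b"AAAA" nor the completed new contents b"BBBB".
```

 With a synchronous implementation this could never happen, because the
 whole read or the whole write ran before the reactor did anything else.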

 While developing the cloud backend, we filed this ticket for the problem
 and then glossed over it. In practice, race conditions between concurrent
 operations on a shareset are rarely hit, especially when there is only
 one user and therefore few concurrent operations. Nevertheless, the
 implementation is clearly incorrect as it stands.

 Note that the problem isn't restricted to operations whose atomicity
 requirements were spelled out in the specification. There were also
 ''implicit'' atomicity assumptions between other operations that used to
 be implemented synchronously -- for example, that reads of mutable files
 cannot interleave with writes. Nor does the problem apply only to mutable
 sharesets: an immutable shareset also has mutable state (the bucket
 allocations made by {{{RIStorageServer.allocate_buckets}}}).
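 One way to restore these guarantees -- sketched here with asyncio in
 place of Twisted, and with invented names -- is to give each shareset its
 own lock (or pending-operation queue) keyed by storage index, so that
 operations on the same shareset run one at a time while operations on
 different sharesets still overlap:

```python
import asyncio
from collections import defaultdict

class SharesetSerializer:
    """Illustrative sketch (not the actual cloud-backend code): one
    lock per shareset, keyed by storage index."""

    def __init__(self):
        self._locks = defaultdict(asyncio.Lock)

    async def run(self, storage_index, operation):
        # Only one operation per shareset holds the lock at a time, so
        # its awaits (e.g. HTTP requests) cannot interleave with any
        # other operation on the same shareset.  Operations on other
        # sharesets use other locks and proceed concurrently.
        async with self._locks[storage_index]:
            return await operation()
```

 Here {{{operation}}} is a zero-argument callable returning an awaitable,
 e.g. {{{serializer.run(si, lambda: do_writev(...))}}}. The Twisted
 equivalent would chain each new operation's Deferred onto the tail of the
 previous one for that shareset.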

-- 
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1869#comment:5>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage


More information about the tahoe-lafs-trac-stream mailing list