Changes between Initial Version and Version 2 of Ticket #1187


Timestamp: 2010-08-27T00:25:04Z
Author: davidsarah
  • Ticket #1187 – Description

Line 5 (unmodified):
In principle, we can use the erasure coding to mitigate this bottleneck. If k were larger than it typically is now, we could download as many shares per segment from each server as its current effective bandwidth can support. Since each share would then be a smaller fraction of the segment size, the wasted time per segment would shrink in proportion.
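As an illustration of the bandwidth-proportional idea above, here is a minimal sketch (all names are hypothetical, not Tahoe-LAFS APIs) that splits the k shares of one segment among servers according to their measured effective bandwidths:

```python
def allocate_shares(bandwidths, k):
    """Split k shares for one segment among servers in proportion to
    each server's effective bandwidth, using the largest-remainder
    method so the counts sum exactly to k.

    bandwidths: dict mapping server id -> effective bytes/sec.
    Returns a dict mapping server id -> number of shares to fetch.
    """
    total = sum(bandwidths.values())
    # Ideal (fractional) share counts, proportional to bandwidth.
    ideal = {s: k * bw / total for s, bw in bandwidths.items()}
    alloc = {s: int(n) for s, n in ideal.items()}
    # Hand the leftover shares to the servers with the largest
    # fractional remainders, i.e. those most short-changed above.
    remaining = k - sum(alloc.values())
    by_remainder = sorted(ideal, key=lambda s: ideal[s] - alloc[s], reverse=True)
    for s in by_remainder[:remaining]:
        alloc[s] += 1
    return alloc

print(allocate_shares({"A": 100, "B": 50, "C": 50}, 8))
# → {'A': 4, 'B': 2, 'C': 2}
```

A server with twice the bandwidth is asked for twice the shares, so all servers finish a segment at roughly the same time and the per-segment idle tail shrinks.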

Line 7 (initial):
A further optimization would be to pipeline requests across consecutive segments. That is, when we're getting near to finishing the download of shares for a segment, we can also be downloading shares for the next segment (but ''only'' the next segment) from servers that would otherwise be idle. Note that this depends on downloading a variable number of shares from each server for each segment (otherwise, the servers that finish first on one segment would also typically finish first on the next segment, so we would either have unbounded memory usage or still have to leave the faster servers idle some of the time).
Line 7 (v2):
A further optimization would be to pipeline requests across consecutive segments. That is, when we're getting near to finishing the download of shares for a segment, we can also be downloading shares for the next segment (but ''only'' the next segment) from servers that would otherwise be idle. Note that this depends on downloading a variable number of shares from each server for each segment (otherwise, the servers that finish first on one segment would also typically finish first on the next segment, so we would either have unbounded memory usage or still have to leave the servers that finish first idle some of the time).
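The one-segment lookahead rule can be stated very compactly. The sketch below is a hypothetical helper, not actual Tahoe-LAFS code: an idle server may be given requests for the segment currently being assembled or for the next one, but never further ahead, which bounds buffered data to about two segments' worth of shares:

```python
def may_request(segment, current_segment):
    """True if a share request for `segment` may be issued while the
    downloader is still assembling `current_segment`. Allowing only
    one segment of lookahead keeps memory usage bounded."""
    return segment <= current_segment + 1

assert may_request(5, 5)      # the current segment: always allowed
assert may_request(6, 5)      # the next segment: pipelined
assert not may_request(7, 5)  # two ahead: would need unbounded memory
```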

Line 9 (unmodified):
It isn't entirely clear how large k would need to be for this approach to work. If it would need to be larger than the number of servers, then we would have to rethink the servers-of-happiness criterion. The current definition credits each server with at most one share, and it would have to be changed to credit each server for having multiple shares. I think this can still be given a fairly simple definition in the bipartite matching model.
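For reference, the bipartite matching model under the current one-share-per-server credit can be sketched as a maximum matching between servers and shares. This is illustrative code, not the actual Tahoe-LAFS implementation; the generalization discussed above would amount to raising each server's capacity above one:

```python
def happiness(shares_on_server):
    """Size of a maximum bipartite matching in which each server is
    credited with at most one share it holds, and each share counts
    for at most one server -- the current servers-of-happiness model.

    shares_on_server: dict mapping server id -> set of share numbers.
    """
    match = {}  # share number -> server credited with it

    def augment(server, seen):
        # Try to credit `server` with some share, reassigning earlier
        # credits along an augmenting path if necessary.
        for sh in shares_on_server[server]:
            if sh in seen:
                continue
            seen.add(sh)
            owner = match.get(sh)
            if owner is None or augment(owner, seen):
                match[sh] = server
                return True
        return False

    return sum(augment(s, set()) for s in shares_on_server)

print(happiness({"A": {0}, "B": {0}, "C": {1, 2}}))
# → 2  (A and B hold only the same share, so only one of them counts)
```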