[tahoe-lafs-trac-stream] [Tahoe-LAFS] #1765: gossip-introducer should forget about old nodes somehow

Tahoe-LAFS trac at tahoe-lafs.org
Thu Mar 31 14:34:28 UTC 2016


#1765: gossip-introducer should forget about old nodes somehow
--------------------------------+---------------------------------
     Reporter:  warner          |      Owner:  warner
         Type:  enhancement     |     Status:  new
     Priority:  normal          |  Milestone:  soon
    Component:  code-nodeadmin  |    Version:  1.9.1
   Resolution:                  |   Keywords:  gossip introduction
Launchpad Bug:                  |
--------------------------------+---------------------------------

New description:

 Just a note-to-self: when #68 gets working, and decentralized
 gossip-based introduction is implemented, we should make sure the
 announcements are:

 * 1: refreshed periodically
 * 2: dropped by clients when they become stale

 The idea is that a server that has left the grid permanently should
 eventually be forgotten by everyone else. Gossip never forgets
 (even if you forget it locally, you'll be reminded by your cohorts,
 and if you don't remember what you forgot, you'll fail to forget it
 again).

 The simplest way to accomplish this is to put a timestamp in the
 announcement, and to prune entries that are more than, say, a month
 old (but wait a few minutes after startup before pruning, so that if
 you leave your node offline for several months, it still has a chance
 to connect to somebody and fetch fresh announcements).
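
 A minimal sketch of that pruning rule, in Python (the names
 "announcements", MAX_AGE, and STARTUP_GRACE are illustrative
 assumptions, not the real introducer code):

 {{{
 import time

 # Illustrative constants: prune anything older than about a month, but
 # only after the node has been up long enough to hear fresh gossip.
 MAX_AGE = 30 * 24 * 60 * 60      # one month, in seconds
 STARTUP_GRACE = 10 * 60          # ten minutes after boot

 def prune_stale_announcements(announcements, node_start_time, now=None):
     # 'announcements' maps serverid -> dict with a "timestamp" field
     now = time.time() if now is None else now
     if now - node_start_time < STARTUP_GRACE:
         return announcements  # too early after startup to judge staleness
     return dict((serverid, ann)
                 for (serverid, ann) in announcements.items()
                 if now - ann["timestamp"] <= MAX_AGE)
 }}}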

 We aren't usually keen on timestamps, in particular on comparing
 times from different nodes (in this case, the announcement's
 timestamp plus one month versus the client's clock). But I think this
 would be a reasonable use of clocks. As of yesterday, the
 announcement record includes a timestamp, named "seqnum" (so named
 because I didn't want to make any claims about its use as a
 timestamp, but merely as a mostly-monotonically increasing number,
 used to decide when one announcement may replace another).

 Maybe I should rename that to "when" or "announcement-time"?
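
 However that field ends up being named, the replacement decision is
 easy to sketch; the record structure below is an assumption, only the
 "seqnum" field name comes from the announcement format:

 {{{
 def maybe_replace(current, incoming):
     # Keep whichever announcement carries the larger seqnum; on a tie,
     # keep the one we already have. seqnum is the mostly-monotonic
     # number (currently a timestamp) from the announcement record.
     if current is None or incoming["seqnum"] > current["seqnum"]:
         return incoming
     return current
 }}}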

 The Introducer Client still needs code to refresh its announcements
 periodically (once a week would be fine). Currently it only
 refreshes them at node boot, and we don't want live-and-connected
 nodes with good uptime to start being ignored merely because they
 weren't rebooted frequently enough.
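
 A sketch of that periodic refresh, assuming a Twisted-style node and
 a hypothetical publish_all_announcements() method (not the actual
 IntroducerClient API):

 {{{
 from twisted.internet import task

 REFRESH_INTERVAL = 7 * 24 * 60 * 60  # once a week, in seconds

 def start_announcement_refresher(introducer_client):
     # Re-publish everything we announce once a week, so long-running
     # nodes keep their entries fresh without needing a reboot.
     # publish_all_announcements() is a hypothetical method.
     loop = task.LoopingCall(introducer_client.publish_all_announcements)
     loop.start(REFRESH_INTERVAL, now=False)
     return loop
 }}}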

--

Comment (by daira):

 Something like this is being worked on in #467.

--
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1765#comment:10>
Tahoe-LAFS <https://Tahoe-LAFS.org>
secure decentralized storage

