[tahoe-lafs-trac-stream] [tahoe-lafs] #1657: Sneakernet grid scenario

tahoe-lafs trac at tahoe-lafs.org
Sun Mar 4 20:37:24 UTC 2012


#1657: Sneakernet grid scenario
-----------------------------+-----------------------
     Reporter:  amontero     |      Owner:  nobody
         Type:  enhancement  |     Status:  new
     Priority:  major        |  Milestone:  undecided
    Component:  unknown      |    Version:  1.8.3
   Resolution:               |   Keywords:
Launchpad Bug:               |
-----------------------------+-----------------------
Description changed by amontero:

New description:

 Hi all.

 I'm trying to set up a familynet/sneakernet grid.

 As I learn more about Tahoe-LAFS, I keep finding issues that stand between
 me and that goal. I've created this ticket to keep track of those issues
 and the relevant existing tickets. Later, as I get
 advice/comments/suggestions, I will spawn more detailed tickets as
 necessary and keep them tracked here as well.

 == Use case ==
 A family grid for reciprocally storing each member's personal files
 (mostly photos). I will be the sole admin of the grid, because the other
 grid members don't have the skills to manage it.

 As the grid admin, I created a root: dir, and under it I create each
 user's home dir as a subdirectory. Users store their picture backups in
 their home dirs via "tahoe backup". So, just keeping the root dir healthy
 across all members is enough to keep those backups safe.
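
 For illustration, the setup looks roughly like this (the alias and dir
 names are just examples, not the real ones):

 {{{
 # as the grid admin, create the root: alias once
 tahoe create-alias root
 # create each member's home dir under root:
 tahoe mkdir root:alice
 tahoe mkdir root:bob
 # each member then backs up into their own home dir
 tahoe backup ~/Pictures root:alice/pictures
 }}}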

 I set up an introducer on the public Internet and a local storage node at
 each grid member's home. An important point to note here is that, most of
 the time, when users do their backups their local node will be the only
 node present on the grid. So I lowered "shares.happy" to 1 and left
 "shares.needed" and "shares.total" at their defaults of 3 and 10. Hence
 the 'sneakernet' grid name.
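
 For reference, the relevant lines in each node's tahoe.cfg look roughly
 like this (only shares.happy departs from the defaults):

 {{{
 [client]
 shares.needed = 3
 # lowered from the default of 7 so that a single isolated node
 # can still accept an upload
 shares.happy = 1
 shares.total = 10
 }}}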

 I do the replication work manually when two nodes rendez-vous, which is
 the only time they have a direct (i.e. LAN) connection. For example, my
 brother keeps his node on an external USB drive and brings it with him to
 my home. My computer is a desktop, my dad's is a laptop, and so on.

 Members bring their "home" nodes, carrying their latest backups, to
 another member's node and run a repair, so that member's backups also get
 replicated in the exchange.


 == Issues ==
 Here are some issues and how I'm addressing them:

 1. Storage use: I don't want any single node to store a full set of
 shares, since that adds no safety to the grid and wastes space. I want
 each member to hold x+1 shares, where x is enough for a file to be
 readable from that node alone. Right now I accomplish this by fully
 repairing the grid on an isolated node and then pruning its stored shares
 down to the desired count with a script (it's dirty, but it works; see the
 pruning sketch after this list). Thinking about it, I've come to the
 conclusion that a 'hold-no-more-than-Z-shares' kind of setting for storage
 nodes would help me a lot. Ticket #711 would also be useful, as would
 #1340 and [https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1212#comment:35
 this comment on #1212].
 2. Repairing: Related to the above, I always have to ensure that no repair
 ends up with all shares on the same node. So before doing a repair between
 two nodes, I make sure each isolated node is 100% repaired (10/10) and all
 files are healthy. Then I 'prune' the stored shares down to 5, and only
 then do I run a two-node verify/repair (see the command sketch after this
 list). I know this is very inefficient, so any advice on how to improve it
 is welcome.
 3. Verification: I would like to put a deep-check --verify of the root
 verifycap in each node's crontab, but currently I can't because of #568,
 so I keep an eye on that ticket (the intended crontab line is sketched
 after this list).
 4. Verification: In my scenario, a "healthy" file should mean one that is
 readable from the local node alone, or something configurable along those
 lines. Related issues: #614, #1212.
 5. Verification caps: I also planned to ease the verification/repair
 process via the WUI by linking the root verifycap into each user's home
 dir, but the WUI gives me an error when I attempt it. I plan to use this
 as well to establish a "reciprocity list" for each user: if I grow the
 grid to include outsiders and I don't want them to hold certain users'
 home dirs, a "verifycaps" folder containing just the desired users' home
 verifycaps will do. In both the member and outsider cases, they only have
 to deep-check-repair their verifycaps dir.
 6. Helper: Another idea is a helper node that could "spool" shares until
 they have been pushed to at least X different nodes, or until a
 configurable expiration. Since the helper would be reachable by everyone,
 it would mitigate the isolation effect when doing backups. This could be
 useful for other use cases too, IMO.
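
 As mentioned in issue 1, the pruning is done by a script. A minimal sketch
 of that kind of script (not my exact one; it assumes the default storage
 layout NODEDIR/storage/shares/<prefix>/<storage index>/<share number>, and
 MAX_SHARES is just a placeholder):

 {{{
 #!/bin/sh
 # Prune every storage index down to at most MAX_SHARES share files.
 NODEDIR="${NODEDIR:-$HOME/.tahoe}"
 MAX_SHARES="${MAX_SHARES:-5}"

 for sidir in "$NODEDIR"/storage/shares/*/*/; do
     # skip partially-uploaded shares under storage/shares/incoming/
     case "$sidir" in */incoming/*) continue ;; esac
     # share files are named 0, 1, 2, ...; keep MAX_SHARES of them, drop the rest
     ls "$sidir" | sort -n | tail -n +"$((MAX_SHARES + 1))" | while read -r shnum; do
         rm -f "$sidir$shnum"
     done
 done
 }}}

 It should only be run on an isolated, fully repaired (10/10) node, as
 described in issue 2.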
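
 And the commands behind issues 2 and 3 are roughly these (root: is the
 alias from the use case above; the verifycap value and the crontab
 schedule are placeholders):

 {{{
 # issue 2: on each isolated node, repair everything to 10/10 before pruning
 tahoe deep-check --repair --add-lease root:
 # ...then prune, connect the two nodes on the LAN, and run the same
 # deep-check --repair again so the shares spread across both nodes

 # issue 3: what I would like each node's crontab to run once #568 is fixed
 # (a deep-check --verify against the bare root verifycap):
 # 0 3 * * 0   tahoe deep-check --verify $ROOT_VERIFYCAP
 }}}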


 I've also read a lot of tickets about rebalancing and server-distribution
 issues, but I doubt they fit my use case. And since I'm not a Python
 programmer, I think bite-sized, simpler issues are what will let me help
 test improvements and suggestions and get to a usable state soon.


 I'll keep adding issues as they come up. I know I'm trying to address too
 many issues in one single ticket, but I'm doing it to keep them organized
 in one place. I expect to get some starting tips or advice on improving my
 use case, and I will gladly open new tickets as needed to get into the
 details, referencing this one.
 Later, this ticket can serve as base documentation for anyone trying to
 set up the same scenario.

 Thanks in advance.

--

-- 
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1657#comment:6>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage

