Changes between Version 2 and Version 3 of Ticket #2123
Timestamp: 2013-11-30T21:09:59Z
Ticket #2123 – Description (v2 → v3, lines 18–22)

Line 18 (unmodified): With this setup, the process would be as follows:

Line 19 (unmodified): 1. A "tahoe backup" is run against a locally reachable storage node that is disconnected from the grid. h=1 achieves "always happy" successful uploads; k=1 is simply the simplest value, since no striping is desired. This step backs up files into the grid by placing one share in the local storage node. Backup done.

Line 20 (v2): 2. Later, another node comes online/becomes reachable. Either via cron job or a manual run, it is now time for the grid to achieve redundancy. We run a deep-repair operation from any node. With N=2 and only one share held by the most up-to-date backup node, the arriving node would receive another share for each file it did not previously know.

Line 20 (v3): 2. Later, another node comes online/becomes reachable. Either via cron job or a manual run, it is now time for the grid to achieve redundancy. No connectivity scheduling: we don't know when we'll see that node again. We run a deep-repair operation from any node. With N=2 and only one share held by the most up-to-date backup node, the arriving node would receive another share for each file it did not previously know. Replication done.

Line 22 (unmodified): === Current problem ===
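The k=1, h=1, N=2 scheme described above corresponds to the erasure-coding parameters in a Tahoe-LAFS client's `tahoe.cfg`. A minimal sketch of that configuration (the comments restate the ticket's reasoning; nothing here is from the ticket itself beyond the three values):

```ini
# tahoe.cfg on the client node -- encoding parameters for the two-node scheme
[client]
shares.needed = 1   ; k=1: no striping; any single share reconstructs the file
shares.happy = 1    ; h=1: upload succeeds ("always happy") with one reachable server
shares.total = 2    ; N=2: a second share can be placed when repair finds another node
```

Step 1 would then be a run such as `tahoe backup <localdir> <alias:>`, and step 2 a repair pass such as `tahoe deep-check --repair <alias:>` once the second node is reachable (the directory and alias arguments are placeholders, not from the ticket).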