[tahoe-dev] Why no error message when I have only one storage node?
Ludovic Courtès
ludo at gnu.org
Mon Aug 10 13:54:22 PDT 2009
Zooko Wilcox-O'Hearn <zooko at zooko.com> writes:
> On Monday, 2009-08-10, at 9:35, Ludovic Courtès wrote:
>
>> Zooko Wilcox-O'Hearn <zooko at zooko.com> writes:
>>
>>> On Monday, 2009-08-10, at 9:18, Ludovic Courtès wrote:
>>>
>>>> Having stored all shares on the user's node, will an eventual
>>>> "tahoe deep-check --repair" reshuffle shares to other servers that
>>>> have become available in the meantime?
>>>
>>> That would be ticket #699. It currently doesn't. Instead, the
>>> deep-check detects that all of the shares are downloadable and
>>> quits, happy that everything is right with the world.
>>
>> Thanks for the explanation.
>>
>> This issue is indeed critical for a backup use case.
>
> I agree with your opinion that this is a serious issue and that it
> would be really great if someone would fix it, but I'm not sure if I
> would call it "critical", since people can and do use the current
> version of Tahoe-LAFS for backup. I guess the way they work around
> this is some combination of (a) don't run a storage server on the
> computer which has the data that they want to back up, and (b) make
> sure that enough good storage servers are connected before doing a
> backup.
Hmm, I find it inconvenient, and even unsuitable for an "unattended
backup" scenario (where you don't want any manual intervention).
I think (b) is fragile: one has to check, before starting the backup,
whether enough storage servers are available, and if some of them
vanish during the backup, data may still silently end up being stored
locally.  I would expect such disappearances to be quite common in the
"friend net" scenario, where storage servers are not up and running
24/7.
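
For concreteness, here is roughly what such a pre-flight check could
look like.  This is only a sketch of workaround (b): the
count_connected_servers() helper, the directory names, and the
required-server threshold are all hypothetical and would have to be
adapted to your own gateway.

#!/usr/bin/env python
# Sketch of workaround (b): only start an unattended backup when enough
# storage servers appear to be connected.  How to obtain that count
# depends on your Tahoe-LAFS version (e.g. by inspecting the gateway's
# welcome page), so count_connected_servers() is left as a placeholder.

import subprocess
import sys

REQUIRED_SERVERS = 3            # how many servers we want before backing up
LOCAL_DIR = "/home/ludo/data"   # hypothetical directory to back up
REMOTE_DIR = "tahoe:backups"    # hypothetical backup destination

def count_connected_servers():
    """Return the number of storage servers the gateway is connected to.
    Placeholder: adapt this to your node, for instance by fetching and
    parsing the gateway's welcome page at http://127.0.0.1:3456/."""
    raise NotImplementedError("adapt to your Tahoe-LAFS gateway")

def main():
    if count_connected_servers() < REQUIRED_SERVERS:
        sys.exit("not enough storage servers connected; skipping backup")
    # Note: this only checks *before* the backup starts; servers that
    # vanish mid-backup can still cause shares to land on the local
    # node, which is exactly the fragility described above.
    subprocess.check_call(["tahoe", "backup", LOCAL_DIR, REMOTE_DIR])

if __name__ == "__main__":
    main()

Even with such a wrapper, the window between the check and the end of
the backup remains, which is why a real fix would be preferable.
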
> Pretty kloogey! Note that there is a clump of tickets related to
> #699. I think my personal favorite way to improve this would be for
> someone to fix #573.
Same for me.
<shameless plug>
In my dissertation and prototype implementation, the replication
strategy is essentially a function that takes two user-defined
predicates as its arguments. These predicates define a per-block (read
"per-share") and a per-server replication strategy. See Section 6.3.2.1
"Replication Strategies", page 170 of the PDF at
http://tel.archives-ouvertes.fr/tel-00196822/en/ .
</shameless plug>
This is essentially a functional approach to the "share targeting API"
discussed in #573.
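
To make that concrete, here is a rough Python sketch of a placement
routine driven by two such predicates.  It is neither Tahoe-LAFS code
nor the dissertation's actual interface; all names below are invented
for illustration.

# Share placement driven by two user-supplied predicates: one deciding,
# per share, whether it still needs another copy, and one deciding, per
# server, whether that server is an acceptable target for it.

def place_shares(shares, servers, share_needs_copy, server_accepts):
    """Return a list of (share, server) placement decisions.

    share_needs_copy(share, placements) -> bool
        per-share policy: does this share need another copy, given the
        placements chosen so far?
    server_accepts(server, share) -> bool
        per-server policy: may this share be stored on this server?
    """
    placements = []
    for share in shares:
        for server in servers:
            if not share_needs_copy(share, placements):
                break
            if server_accepts(server, share):
                placements.append((share, server))
    return placements

# Example policies: at most two copies per share, and never store a
# share on the node that produced it ("don't back up onto yourself").
def at_most_two_copies(share, placements):
    return sum(1 for s, _ in placements if s == share) < 2

def not_the_local_node(server, share):
    return server != "local"

placements = place_shares(["shareA", "shareB"],
                          ["local", "server1", "server2"],
                          at_most_two_copies, not_the_local_node)

The point is that both the per-share policy ("how many copies?") and
the per-server policy ("may this share go there?") become small,
user-supplied functions rather than behaviour baked into the storage
client, which is roughly the flexibility #573 asks for.
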
Thanks,
Ludo'.