"backup" behavior and corrupted file

Zooko Wilcox-O'Hearn zookog at gmail.com
Sat Jul 25 00:36:42 UTC 2015


Dear droki:

Awesome! This might be just the key we need to unlock this.

One possibility, if you're willing, would be to have your node connect to
our log-gatherer. Then all of the logs from your node would be transferred
to us (over Foolscap, and therefore encrypted).
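
If you want to try that, it should just be a one-line addition to your
node's tahoe.cfg (the FURL below is only a placeholder; we would send you
the real one):

    [node]
    log_gatherer.furl = pb://example@gatherer.example.org:3117/swissnum

Then restart the node (e.g. "tahoe restart") so it picks up the change and
starts offering its logs to the gatherer.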

Regards,

Zooko
On Jul 10, 2015 17:39, "droki" <droki at riseup.net> wrote:

> >>
> >> I'm having trouble running "tahoe backup" involving two different
> >> issues. First, my backup command keeps getting stuck on a single file.
> >> It just hangs on the file.
> >>
> >> In fact, this behavior isn't limited to "backup", when I run "tahoe
> >> check URI:CHK:..." on the URI in question I get the same result - it
> >> just hangs.
> >
> > We're currently diagnosing a similar issue that a couple of users are
> > having with the LeastAuthority.com S4 service. Are you using S4? If
> > so, then you're another user hitting this problem, and we could use your
> > help investigating it. If not, then this is surprising, because I
> > thought the bug was in the S4 backend, so if you're having this bug
> > *without* using the S4 backend, please let us know!
> >
>
> Hi Zooko, thanks for your response.
>
> I am not a LeastAuthority S4 customer, but I am using the S3 storage
> backend. Let me know if I can do anything to help diagnose the issue.
>
> I used 'tahoe debug dump-cap' against the URI:CHK to get the storage
> index and found the share that the URI refers to, but I don't know if
> there's anything helpful there. Here are the human-readable strings that
> 'cat 0 | strings' spat out:
>
> codec_name:3:crs,codec_params:8:5881-1-1,crypttext_hash:32:
> ,crypttext_root_hash:32:
> ,needed_shares:1:1,num_segments:1:1,segment_size:4:5881,share_root_hash:32:
> ,size:4:5881,tail_codec_params:8:5881-1-1,total_shares:1:1,
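>
> For reference, these were roughly the steps (the storage index and the
> paths below are placeholders, and assume the usual
> BASEDIR/storage/shares/ layout):
>
>     tahoe debug dump-cap URI:CHK:...  # prints the storage index, among other fields
>     cd ~/.tahoe/storage/shares/aa/aaaa.../  # directory named after the storage index
>     cat 0 | strings  # "0" is share number 0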
>
> Let me know what else I can do to help debug this.
>
> -droki
>
>
> >> This brings me to my second issue. I was trying to work around this
> >> problem and thought "I'll just make a whole new backup."  So I ran
> >> "tahoe backup" and specified a new directory as the destination. But I
> >> saw that tahoe was still skipping all the files that had previously been
> backed up, so it wasn't creating a new complete backup. Is this the
> >> intended behavior?
> >
> > Yes, it re-uses files that have already been uploaded to the Tahoe-LAFS
> > grid, and links to them from the newly created backup. Make sense?
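> >
> > (You can check this yourself: listing the same file under the old and
> > the new backup with --uri should show the identical read-cap. The alias
> > and paths here are just for illustration:
> >
> >     tahoe ls --uri tahoe:oldbackup/Latest/somefile.txt
> >     tahoe ls --uri tahoe:newbackup/Latest/somefile.txt
> >
> > If the two URIs match, the new backup links to the existing upload
> > rather than storing a second copy.)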
> >
> >> And, even when specifying a new destination, the backup command got
> >> stuck on the same URI.
> >
> > Same problem as above. If you're an S4 customer, please send email to
> > support at LeastAuthority.com. If not, or even if so, please reply to
> > this message on tahoe-dev. :-)
> >
> >> How can I create a totally new backup?
> >
> > I don't think it would help to re-upload files that are already
> > successfully in the grid, but if you want to, you could rm the backup
> > database; the next time you run "tahoe backup", it would re-upload them.
> >
> > (Then they would get de-duped on the server side, so it wouldn't use
> > any more server space after upload.)
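> >
> > If you do want to try that, the backupdb is a single SQLite file under
> > the node's private/ directory. A sketch, assuming the default basedir
> > (the source and destination here are just example paths):
> >
> >     rm ~/.tahoe/private/backupdb.sqlite
> >     tahoe backup ~/mystuff tahoe:mybackups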
> >
> >
> >>  What should I think of this bad URI?
> >
> > Our current hypothesis is that it is a bug in the S4 backend which is
> > somehow data-dependent or sticky, so that it applies only to certain
> > files but does so with 100% reproducibility once it gets triggered.
> > Daira is investigating right now, so stay tuned, or send us
> > information about which S4 account is yours and which of your files
> > hangs, to help us debug it.
> >
> > Thanks!
>
>