[tahoe-lafs-trac-stream] [tahoe-lafs] #1636: Unhandled error in Deferred while uploading (current darcs build)
tahoe-lafs
trac at tahoe-lafs.org
Wed Dec 14 21:08:06 UTC 2011
#1636: Unhandled error in Deferred while uploading (current darcs build)
------------------------+---------------------------
 Reporter:  killyourtv  |           Owner:  nobody
     Type:  defect      |          Status:  new
 Priority:  major       |       Milestone:  undecided
Component:  unknown     |         Version:  1.9.0
 Keywords:              |   Launchpad Bug:
------------------------+---------------------------
{{{
allmydata-tahoe: 1.9.0-r5387,
foolscap: 0.6.2,
pycryptopp: 0.5.29,
zfec: 1.4.22,
Twisted: 11.0.0,
Nevow: 0.10.0,
zope.interface: unknown,
python: 2.7.2+,
platform: Linux-debian_wheezy/sid-x86_64-64bit_ELF,
pyOpenSSL: 0.13,
simplejson: 2.3.0,
pycrypto: 2.4.1,
pyasn1: unknown,
mock: 0.7.2,
sqlite3: 2.6.0 [sqlite 3.7.9],
setuptools: 0.6c16dev3
}}}
...with patches from #1007, #1010, and #1628 applied.
When uploading files to the volunteer grid on I2P, I received the
following exception:
{{{
2011-12-14 13:34:49+0000 [-] Unhandled error in Deferred:
2011-12-14 13:34:49+0000 [-] Unhandled Error
Traceback (most recent call last):
  File "$INSTALL_LOCATION/tahoe-lafs/src/allmydata/mutable/retrieve.py", line 610, in _download_current_segment
    d = self._process_segment(self._current_segment)
  File "$INSTALL_LOCATION/tahoe-lafs/src/allmydata/mutable/retrieve.py", line 638, in _process_segment
    dl.addErrback(self._validation_or_decoding_failed, [reader])
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 308, in addErrback
    errbackKeywords=kw)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 286, in addCallbacks
    self._runCallbacks()
--- <exception caught here> ---
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 542, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "$INSTALL_LOCATION/tahoe-lafs/src/allmydata/mutable/retrieve.py", line 736, in _validation_or_decoding_failed
    self._mark_bad_share(reader.server, reader.shnum, reader, f)
  File "$INSTALL_LOCATION/tahoe-lafs/src/allmydata/mutable/retrieve.py", line 595, in _mark_bad_share
    self.notify_server_corruption(server, shnum, str(f.value))
  File "$INSTALL_LOCATION/tahoe-lafs/src/allmydata/mutable/retrieve.py", line 938, in notify_server_corruption
    rref.callRemoteOnly("advise_corrupt_share",
exceptions.AttributeError: 'NoneType' object has no attribute 'callRemoteOnly'
}}}
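The AttributeError at the end of the traceback suggests that the remote
reference for the server was None by the time notify_server_corruption
tried to send the corruption advisory, presumably because the connection
to that server had already been lost. A minimal sketch of a defensive
check, assuming a get_rref() accessor that returns None for disconnected
servers (an illustration, not necessarily the actual fix):
{{{
def notify_server_corruption(self, server, shnum, reason):
    # Assumption: get_rref() returns None once the connection to the
    # server has been lost.  With nobody left to advise, skip the
    # advisory rather than raising AttributeError.
    rref = server.get_rref()
    if rref is None:
        return
    rref.callRemoteOnly("advise_corrupt_share",
                        "mutable", self._storage_index, shnum, reason)
}}}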
I do not see any flog files from the time of the error (and it took place
when I was AFK).
At http://127.0.0.1:3456/status/ the failed upload is still listed in the
"Active Operations" section, even though the failure happened hours ago.
{{{
Mutable File Publish Status
Started: 13:34:51 14-Dec-2011
Storage Index: 7kkjs3zair343ujzz7ghzwx5si
Helper?: No
Current Size: 2972
Progress: 0.0%
Status: Pushing shares

Retrieve Results
Encoding: 3 of 10
Sharemap:
    0 -> Placed on [q5b52rmg], [bgahn3ft]
    4 -> Placed on [q5b52rmg]
    7 -> Placed on [5bqq6b7f]
    9 -> Placed on [jvgf7m73]
Timings:
    Total: ()
    Setup: 6.2ms
    Encrypting: 310us (9.59MBps)
    Encoding: 264us (11.25MBps)
    Packing Shares: ()
    RSA Signature: 6.7ms
    Pushing: ()
}}}
For what it's worth, other failed upload attempts also remain in the list
of "Active Operations", even though they failed many hours ago. I'm
retrying the batch upload; I'll try to generate the flog files if it
errors out while I'm nearby.
(Occasional upload failures on networks like I2P or Tor aren't unusual;
what is unusual is that the uploads are still shown as active long after
failing, which is not something I saw with 1.8.x.)
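That lingering is consistent with the traceback above: the publish's
Deferred chain died with an unhandled error, so whatever normally marks
the operation as finished never ran. In Twisted, the usual way to
guarantee that kind of cleanup is an addBoth() handler, which fires on
success and on failure alike. A sketch of the pattern (the status object
and its set_active() method are illustrative, not Tahoe's actual status
API):
{{{
from twisted.internet import defer

class FakeStatus:
    """Illustrative stand-in for a web-status entry (hypothetical)."""
    def __init__(self):
        self.active = True
    def set_active(self, value):
        self.active = value

def publish_with_cleanup(status, do_publish):
    d = defer.maybeDeferred(do_publish)
    def _done(result):
        # Runs whether the publish succeeded or failed, so the entry
        # never stays stuck under "Active Operations".
        status.set_active(False)
        return result  # pass a Failure through to later errbacks
    d.addBoth(_done)
    return d
}}}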
--
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1636>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage