[tahoe-lafs-trac-stream] [tahoe-lafs] #1689: assertion failure

tahoe-lafs trac at tahoe-lafs.org
Wed Mar 21 14:19:20 UTC 2012


#1689: assertion failure
-------------------------+-------------------------------------------------
     Reporter:  jg71     |      Owner:  nobody
         Type:  defect   |     Status:  new
     Priority:  major    |  Milestone:  undecided
    Component:  unknown  |    Version:  1.9.1
   Resolution:           |   Keywords:  tahoe-check verify regression
Launchpad Bug:           |  assertion error mutable
-------------------------+-------------------------------------------------
Description changed by jg71:

New description:

 Running {{{tahoe deep-check -v --repair --add-lease tahoe:}}} reports:
 {{{
 '<root>': healthy
 done: 1 objects checked
  pre-repair: 1 healthy, 0 unhealthy
  0 repairs attempted, 0 successful, 0 failed
  post-repair: 1 healthy, 0 unhealthy
 }}}
 This is the result when 9 out of 9 storage servers are available.
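
 For scripting the same check, a minimal illustrative helper could look like
 the sketch below (assuming the tahoe CLI is on PATH, the tahoe: alias is
 configured, and Python 2.6 as in the traceback further down; deep_check is a
 hypothetical name, not part of the Tahoe API):

{{{
#!/usr/bin/env python
# Illustrative sketch (not Tahoe source): run the same deep-check command and
# pull out the summary lines shown above. Uses only subprocess (Python 2.6).
import subprocess

def deep_check(alias="tahoe:"):
    cmd = ["tahoe", "deep-check", "-v", "--repair", "--add-lease", alias]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    out, _ = proc.communicate()
    # keep the "done: ..." line and the pre-/post-repair summary lines
    summary = [line for line in out.splitlines()
               if line.startswith("done:") or "repair" in line]
    return proc.returncode, summary

if __name__ == "__main__":
    rc, lines = deep_check()
    print "exit code:", rc
    for line in lines:
        print line
}}}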

 The error I have found is reproducible by stopping one storage server and
 running {{{tahoe deep-check -v --repair --add-lease tahoe:}}} once more,
 which then fails with:
 {{{
 ERROR: AssertionError()
 "[Failure instance: Traceback: <type 'exceptions.AssertionError'>: "
 /usr/lib64/python2.6/site-packages/allmydata/mutable/filenode.py:563:upload
 /usr/lib64/python2.6/site-packages/allmydata/mutable/filenode.py:661:_do_serialized
 /usr/lib64/python2.6/site-packages/twisted/internet/defer.py:298:addCallback
 /usr/lib64/python2.6/site-packages/twisted/internet/defer.py:287:addCallbacks
 --- <exception caught here> ---
 /usr/lib64/python2.6/site-packages/twisted/internet/defer.py:545:_runCallbacks
 /usr/lib64/python2.6/site-packages/allmydata/mutable/filenode.py:661:<lambda>
 /usr/lib64/python2.6/site-packages/allmydata/mutable/filenode.py:689:_upload
 /usr/lib64/python2.6/site-packages/allmydata/mutable/publish.py:404:publish
 }}}
 I can see from the second capture (attached flogtool.tail-2.txt) that tahoe
 tries to connect to exactly the storage server that was stopped:
 connectTCP to ('256.256.256.256', 66666)
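
 For context on the error format above: the CLI output is a Twisted Failure
 rendered as text. An assertion raised inside a Deferred callback chain is
 caught in defer.py's _runCallbacks and delivered to the errback as a
 Failure, which is why the defer.py frames surround the filenode.py ones.
 A minimal standalone illustration (illustrative only, not Tahoe code):

{{{
# Standalone sketch: an AssertionError raised inside a Deferred callback is
# caught by Twisted and handed to the errback as a Failure, producing the
# same "--- <exception caught here> ---" / _runCallbacks frames seen above.
from twisted.internet import defer

def callback_that_asserts(_result):
    # stand-in for the failing precondition assert reported at
    # mutable/filenode.py:563 (the real condition lives in the Tahoe source)
    assert False, "precondition not met"

def report(failure):
    failure.trap(AssertionError)
    print failure.getTraceback()

d = defer.succeed(None)              # fires synchronously, no reactor needed
d.addCallback(callback_that_asserts)
d.addErrback(report)
}}}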

 There were no incident report files.

 Versions used locally:

 Nevow: 0.10.0, foolscap: 0.6.3, setuptools: 0.6c11, Twisted: 11.1.0, zfec:
 1.4.22, zbase32: 1.1.3, pyOpenSSL: 0.13, simplejson: 2.3.2, mock: 0.7.2,
 argparse: 1.2.1, pycryptopp:
 0.6.0.1206569328141510525648634803928199668821045408958, pyutil: 1.8.4,
 zope.interface: 3.8.0, allmydata-tahoe: allmydata-tahoe-1.9.0-94-gcef646c,
 pycrypto: 2.5, pyasn1: 0.0.13

 All 9 storage servers use these versions:
 Nevow: 0.10.0, foolscap: 0.6.3, setuptools: 0.6c11, Twisted: 11.1.0, zfec:
 1.4.22, pycrypto: 2.4.1, zbase32: 1.1.3, pyOpenSSL: 0.13, simplejson:
 2.3.2, mock: 0.7.2, argparse: 1.2.1, pyutil: 1.8.4, zope.interface: 3.8.0,
 allmydata-tahoe: 1.9.1, pyasn1: 0.0.13, pycryptopp: 0.5.29

 Might this be related to [https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1648]?

--

-- 
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1689#comment:2>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage

