Custom Query (2551 matches)
Results (7 - 9 of 2551)
Ticket | Resolution | Summary | Owner | Reporter |
---|---|---|---|---|
#652 | invalid | "appname" in version shouldn't change keys of "tahoe --version" | somebody | warner |
Description |
When Zandr was merging the recent 1.3.0 release into his "tahoe-server" branch, he had to do a global search-and-replace of "allmydata-tahoe" with "tahoe-server". Unfortunately, this also affects the output of "tahoe --version": the key (i.e. the thing on the left side of the colon) changes. The relevant part of the output goes from:

    allmydata-tahoe: 1.3.0-r5678

to:

    tahoe-server: 1.3.0-r1234

Likewise, the version dictionary that "flogtool tail" retrieves sees a new key name. I seem to recall that some other places were affected that I'd prefer were not, but I don't remember which ones right now. This makes it difficult for external processes to reliably query Tahoe for its version string. I think that "tahoe --version" and the version data given to foolscap should use a fixed key (perhaps "tahoe") whose value combines the appname and the release number: "tahoe: allmydata-tahoe-r5678" or "tahoe: tahoe-server-r5678". |
|||
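The scheme proposed in #652, a fixed key whose value carries the rebrandable appname, can be sketched as follows. This is an illustrative sketch only; `format_version_line`, `appname`, and `version` are hypothetical names, not Tahoe's actual API.

```python
# Sketch of the fixed-key version line proposed above: the key stays
# "tahoe" regardless of branding, and the appname moves into the value,
# so external tools can always match on the same key.
def format_version_line(appname, version):
    """Return a version line with a fixed 'tahoe' key."""
    return "tahoe: %s-%s" % (appname, version)

print(format_version_line("allmydata-tahoe", "1.3.0-r5678"))
# tahoe: allmydata-tahoe-1.3.0-r5678
print(format_version_line("tahoe-server", "1.3.0-r1234"))
# tahoe: tahoe-server-1.3.0-r1234
```

With this shape, a rebranded build changes only the value; any external process parsing "tahoe --version" keeps working unmodified.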
#2129 | invalid | "bin/tahoe debug trial" runs installed code somewhere other than modified source files in src/ | amiller | amiller |
Description |
To reproduce:
Details:
|
|||
#1418 | invalid | "cannot convert float NaN to integer" in next_power_of_k, during upload via helper | rycee | rycee |
Description |
While performing a backup just a few minutes ago, I got the following exception:

    Traceback (most recent call last):
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/support/bin/tahoe", line 9, in <module>
        load_entry_point('allmydata-tahoe==1.8.2', 'console_scripts', 'tahoe')()
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/src/allmydata/scripts/runner.py", line 113, in run
        rc = runner(sys.argv[1:], install_node_control=install_node_control)
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/src/allmydata/scripts/runner.py", line 99, in runner
        rc = cli.dispatch[command](so)
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/src/allmydata/scripts/cli.py", line 540, in backup
        rc = tahoe_backup.backup(options)
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/src/allmydata/scripts/tahoe_backup.py", line 325, in backup
        return bu.run()
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/src/allmydata/scripts/tahoe_backup.py", line 118, in run
        new_backup_dircap = self.process(options.from_dir)
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/src/allmydata/scripts/tahoe_backup.py", line 188, in process
        childcap = self.process(childpath)
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/src/allmydata/scripts/tahoe_backup.py", line 188, in process
        childcap = self.process(childpath)
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/src/allmydata/scripts/tahoe_backup.py", line 188, in process
        childcap = self.process(childpath)
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/src/allmydata/scripts/tahoe_backup.py", line 188, in process
        childcap = self.process(childpath)
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/src/allmydata/scripts/tahoe_backup.py", line 194, in process
        childcap, metadata = self.upload(childpath)
      File "/home/rycee/bin/allmydata-tahoe-1.8.2/src/allmydata/scripts/tahoe_backup.py", line 305, in upload
        raise HTTPError("Error during file PUT", resp)
    allmydata.scripts.common_http.HTTPError: Error during file PUT: 500 Internal Server Error

The remote exception was:

    Traceback (most recent call last):
    Failure: foolscap.tokens.RemoteException: <RemoteException around '[CopiedFailure instance: Traceback from remote host -- Traceback (most recent call last):
      File "/home/rycee/allmydata-tahoe-1.8.2/support/lib/python2.6/site-packages/foolscap-0.6.1-py2.6.egg/foolscap/call.py", line 674, in _done
        self.request.complete(res)
      File "/home/rycee/allmydata-tahoe-1.8.2/support/lib/python2.6/site-packages/foolscap-0.6.1-py2.6.egg/foolscap/call.py", line 60, in complete
        self.deferred.callback(res)
      File "/home/rycee/allmydata-tahoe-1.8.2/support/lib/python2.6/site-packages/Twisted-10.2.0-py2.6-linux-i686.egg/twisted/internet/defer.py", line 361, in callback
        self._startRunCallbacks(result)
      File "/home/rycee/allmydata-tahoe-1.8.2/support/lib/python2.6/site-packages/Twisted-10.2.0-py2.6-linux-i686.egg/twisted/internet/defer.py", line 455, in _startRunCallbacks
        self._runCallbacks()
    --- <exception caught here> ---
      File "/home/rycee/allmydata-tahoe-1.8.2/support/lib/python2.6/site-packages/Twisted-10.2.0-py2.6-linux-i686.egg/twisted/internet/defer.py", line 542, in _runCallbacks
        current.result = callback(current.result, *args, **kw)
      File "/home/rycee/allmydata-tahoe-1.8.2/src/allmydata/immutable/upload.py", line 924, in locate_all_shareholders
        num_segments, n, k, desired)
      File "/home/rycee/allmydata-tahoe-1.8.2/src/allmydata/immutable/upload.py", line 225, in get_shareholders
        None)
      File "/home/rycee/allmydata-tahoe-1.8.2/src/allmydata/immutable/layout.py", line 88, in make_write_bucket_proxy
        num_share_hashes, uri_extension_size_max, nodeid)
      File "/home/rycee/allmydata-tahoe-1.8.2/src/allmydata/immutable/layout.py", line 108, in __init__
        effective_segments = mathutil.next_power_of_k(num_segments,2)
      File "/home/rycee/allmydata-tahoe-1.8.2/src/allmydata/util/mathutil.py", line 35, in next_power_of_k
        x = int(math.log(n, k) + 0.5)
    exceptions.ValueError: cannot convert float NaN to integer

I have previously performed successful backups with this precise setup; no configuration has been changed or servers restarted recently, and the storage node has ample available space. I tried running the backup several times, and the error occurs each time. However, on a second run the file on which the first run failed is reported as uploaded, and the second run fails on the next file. Running a check on the reported cap gives:

    $ tahoe check --verify --raw URI:CHK:2pqpbe...
    {
      "results": {
        "needs-rebalancing": false,
        "count-shares-expected": 3,
        "healthy": false,
        "count-unrecoverable-versions": 1,
        "count-shares-needed": 1,
        "sharemap": {},
        "count-recoverable-versions": 0,
        "servers-responding": [
          "5yea4my3w3frgp524lgthrb7rdd6frtr",
          "44g5kkgwulzrrrntdzci7jtt5rgt6nuo",
          "bzyf23mghgxycnr34pdkqdmybnevf4ks"
        ],
        "count-good-share-hosts": 0,
        "count-wrong-shares": 0,
        "count-shares-good": 0,
        "count-corrupt-shares": 3,
        "list-corrupt-shares": [
          ["bzyf23mghgxycnr34pdkqdmybnevf4ks", "barr7kgra3pst6icbtnmxrggca", 0],
          ["44g5kkgwulzrrrntdzci7jtt5rgt6nuo", "barr7kgra3pst6icbtnmxrggca", 1],
          ["5yea4my3w3frgp524lgthrb7rdd6frtr", "barr7kgra3pst6icbtnmxrggca", 2]
        ],
        "recoverable": false
      },
      "storage-index": "barr7kgra3pst6icbtnmxrggca",
      "summary": "Not Healthy: 0 shares (enc 1-of-3)"
    }

Without using --verify:

    $ tahoe check --raw URI:CHK:2pqp...
    {
      "results": {
        "needs-rebalancing": false,
        "count-shares-expected": 3,
        "healthy": true,
        "count-unrecoverable-versions": 0,
        "count-shares-needed": 1,
        "sharemap": {
          "0": ["bzyf23mghgxycnr34pdkqdmybnevf4ks"],
          "1": ["44g5kkgwulzrrrntdzci7jtt5rgt6nuo"],
          "2": ["5yea4my3w3frgp524lgthrb7rdd6frtr"]
        },
        "count-recoverable-versions": 1,
        "servers-responding": [
          "5yea4my3w3frgp524lgthrb7rdd6frtr",
          "44g5kkgwulzrrrntdzci7jtt5rgt6nuo",
          "bzyf23mghgxycnr34pdkqdmybnevf4ks"
        ],
        "count-good-share-hosts": 3,
        "count-wrong-shares": 0,
        "count-shares-good": 3,
        "count-corrupt-shares": 0,
        "list-corrupt-shares": [],
        "recoverable": true
      },
      "storage-index": "barr7kgra3pst6icbtnmxrggca",
      "summary": "Healthy"
    }

Both client and server nodes are running 1.8.2 on Debian stable. Sorry for the big pastes; if there is any other information you would like, please let me know. |
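The mathutil.py frame quoted in the traceback makes the failure mechanism easy to reproduce. The sketch below is a reconstruction: only the line `x = int(math.log(n, k) + 0.5)` appears in the ticket, and the rest of the function body is an assumption about how a `next_power_of_k` would use it. If `n` reaches this function as NaN, `math.log` propagates the NaN and `int()` then raises exactly the error reported.

```python
import math

def next_power_of_k(n, k):
    # Reconstructed sketch of a next-power-of-k helper; the int(math.log(...))
    # line is the one quoted in the traceback above, the rest is assumed.
    x = int(math.log(n, k) + 0.5)  # int(NaN) raises ValueError
    if k ** x < n:
        return k ** (x + 1)
    return k ** x

print(next_power_of_k(5, 2))  # 8: the next power of 2 at or above 5

# Hypothetical failing input -- the ticket does not say what num_segments
# actually was, only that a NaN reached the int() conversion.
try:
    next_power_of_k(float("nan"), 2)
except ValueError as e:
    print(e)  # cannot convert float NaN to integer
```

This suggests the bug is upstream of mathutil.py: some computation handed layout.py a `num_segments` that was already NaN (or a value from which `math.log` produced one), and the conversion merely surfaced it.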