#1991 assigned defect

cloud backend fails with DataUnavailable when uploading+downloading a 10 GB file

Reported by: daira Owned by: daira
Priority: normal Milestone: 1.15.0
Component: code-storage Version: 1.10.0
Keywords: DataUnavailable error large cloud-backend immutable upload download blocks-cloud-merge Cc:
Launchpad Bug:


$ bin/tahoe put ~/tahoe/grid/random azure:random
201 Created
$ bin/tahoe webopen azure:

The upload appeared to succeed, taking about 4 hours. (There were errors on some HTTP PUT requests but they were all successfully retried.) The file is listed in the directory. However:

$ time bin/tahoe get URI:CHK:[censored]:1:1:10000000000
Error during GET: 410 Gone
"NoSharesError: no shares could be found. Zero shares usually indicates a corrupt URI, or that no servers were connected, but it might also indicate severe corruption. You should perform a filecheck on this object to learn more.

The full error message is:
no shares (need 1). Last failure: [Failure instance: Traceback (failure with no frames): <class 'allmydata.immutable.downloader.share.DataUnavailable'>: need len=1160: [10008388708-10008388739],[10008388772-10008388803],
 but will never get it

real	0m4.025s
user	0m0.268s
sys	0m0.044s

The entry in the leasedb shares table is:

storage_index               shnum  prefix  backend_key  used_space   sharetype  state
ohcac6xn5ot7hxwfcstdeqcf4e  0      oh                   10016777607  0          1

(sharetype 0 is immutable; state 1 is STABLE).

The disk backend is capable of storing files this size.

Change History (9)

comment:1 Changed at 2013-05-28T13:37:12Z by daira

Note that the used_space in the cloud backend is 10016777607, but the requested ranges go up to 10025165837. (The ranges are data offsets so they exclude the 12-byte immutable header.) The used_space passed to the disk backend for an upload of the same share is 10025166838.

Next step is to determine whether the share got truncated as stored in the cloud, or whether it was stored correctly but the end can't be read.
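The size discrepancy above can be checked with some quick arithmetic. This is just a sketch using the constants quoted in this comment (the variable names are mine, not from the Tahoe-LAFS code):

```python
# Size accounting for the numbers in comment:1. All constants are copied
# from the ticket; only the arithmetic is new.
HEADER = 12  # immutable share header size, per the ticket

cloud_used_space = 10016777607      # leasedb used_space, cloud backend
disk_used_space = 10025166838       # used_space for the same share, disk backend
max_requested_offset = 10025165837  # highest data offset the downloader asked for

# Bytes apparently missing from the share as stored in the cloud:
deficit = disk_used_space - cloud_used_space
print(deficit)  # 8389231, i.e. roughly 8 MiB

# The requested range ends beyond what the cloud backend recorded as stored,
# which is consistent with either truncation or an unreadable tail:
print(max_requested_offset + HEADER > cloud_used_space)  # True
```

So roughly 8 MiB of the share is unaccounted for in the cloud backend relative to the disk backend.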

comment:2 follow-up: Changed at 2013-05-28T20:36:20Z by daira

This also happens with a 5 GB file, but not with a 1 GB file.

comment:3 in reply to: ↑ 2 Changed at 2013-05-28T20:41:40Z by daira

Replying to daira:

This also happens with a 5 GB file, but not with a 1 GB file.

... assuming it is deterministic, that is.

Oh, maybe it's a 2^32-byte (4 GiB) or 2^31-byte (2 GiB) threshold issue. It would still have to be something that is handled differently by the disk and cloud backends, though.
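The threshold hypothesis is at least consistent with the sizes reported so far. A quick check (a sketch; the sizes are the nominal 10 GB / 5 GB / 1 GB from these comments, assumed to be decimal bytes):

```python
# Test the 2^31 / 2^32 threshold hypothesis against the reported file sizes.
sizes = {
    "10 GB": 10_000_000_000,
    "5 GB": 5_000_000_000,
    "1 GB": 1_000_000_000,
}
for name, size in sizes.items():
    print(name, "exceeds 2^31:", size > 2**31, "exceeds 2^32:", size > 2**32)
# The 10 GB and 5 GB files (which fail) exceed both 2^31 and 2^32 bytes;
# the 1 GB file (which works) exceeds neither.
```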

comment:4 Changed at 2013-07-04T19:14:27Z by daira

  • Keywords blocks-cloud-merge added
  • Status changed from new to assigned

comment:5 Changed at 2013-07-22T20:50:23Z by daira

  • Milestone changed from undecided to 1.12.0

comment:6 Changed at 2016-03-22T05:02:25Z by warner

  • Milestone changed from 1.12.0 to 1.13.0

Milestone renamed

comment:7 Changed at 2016-06-28T18:17:14Z by warner

  • Milestone changed from 1.13.0 to 1.14.0

renaming milestone

comment:8 Changed at 2017-02-09T15:42:19Z by exarkun

I tested a 5 GiB upload against the 2237.cloud-backend-merge.0 branch and it succeeded. Was this fixed in the branch?

comment:9 Changed at 2020-06-30T14:45:13Z by exarkun

  • Milestone changed from 1.14.0 to 1.15.0

Moving open issues out of closed milestones.
