Changeset 32f8062 in trunk
- Timestamp: 2011-09-12T22:26:55Z (14 years ago)
- Branches: master
- Children: 48f56da
- Parents: 942c5e51
- Location: src/allmydata
- Files: 3 edited
src/allmydata/interfaces.py (r942c5e51 → r32f8062)

             exist previously will cause that share to be created.

    -        Each write vector is accompanied by a 'new_length' argument. If
    -        new_length is not None, use it to set the size of the container. This
    -        can be used to pre-allocate space for a series of upcoming writes, or
    -        truncate existing data. If the container is growing, new_length will
    -        be applied before datav. If the container is shrinking, it will be
    -        applied afterwards. If new_length==0, the share will be deleted.
    +        In Tahoe-LAFS v1.8.3 or later (except 1.9.0a1), if you send a write
    +        vector whose offset is beyond the end of the current data, the space
    +        between the end of the current data and the beginning of the write
    +        vector will be filled with zero bytes. In earlier versions the
    +        contents of this space was unspecified (and might end up containing
    +        secrets).
    +
    +        Each write vector is accompanied by a 'new_length' argument, which
    +        can be used to truncate the data. If new_length is not None and it is
    +        less than the current size of the data (after applying all write
    +        vectors), then the data will be truncated to new_length. If
    +        new_length==0, the share will be deleted.
    +
    +        In Tahoe-LAFS v1.8.2 and earlier, new_length could also be used to
    +        enlarge the file by sending a number larger than the size of the data
    +        after applying all write vectors. That behavior was not used, and as
    +        of Tahoe-LAFS v1.8.3 it no longer works and the new_length is ignored
    +        in that case.
    +
    +        If a storage client can rely on a server being of version v1.8.3 or
    +        later, it can extend the file efficiently by writing a single zero
    +        byte just before the new end-of-file. Otherwise it must explicitly
    +        write zeroes to all bytes between the old and new end-of-file. In any
    +        case it should avoid sending new_length larger than the size of the
    +        data after applying all write vectors.

             The read vector is used to extract data from all known shares,
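The new docstring text describes two client-side strategies for growing share data. A minimal sketch of that choice (not Tahoe-LAFS code; `extend_writev` and the `(offset, data)` write-vector tuples are hypothetical, and `server_fills_holes` would come from the server's advertised version dict):

```python
def extend_writev(current_size, new_size, server_fills_holes):
    """Return hypothetical (offset, data) write vectors that grow share
    data from current_size to new_size bytes of zeroes."""
    assert new_size > current_size
    if server_fills_holes:
        # A v1.8.3+ server zero-fills the gap, so writing a single zero
        # byte just before the new end-of-file is enough.
        return [(new_size - 1, b'\x00')]
    else:
        # An older server leaves the gap unspecified, so the client must
        # explicitly write every byte between the old and new end-of-file.
        return [(current_size, b'\x00' * (new_size - current_size))]
```

Note that neither strategy sends `new_length` larger than the resulting data size, matching the docstring's advice.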
src/allmydata/storage/mutable.py (r942c5e51 → r32f8062)

             num_extra_leases = self._read_num_extra_leases(f)
             f.seek(old_extra_lease_offset)
    -        extra_lease_data = f.read(4 + num_extra_leases * self.LEASE_SIZE)
    +        leases_size = 4 + num_extra_leases * self.LEASE_SIZE
    +        extra_lease_data = f.read(leases_size)
    +
    +        # Zero out the old lease info (in order to minimize the chance that
    +        # it could accidentally be exposed to a reader later, re #1528).
    +        f.seek(old_extra_lease_offset)
    +        f.write('\x00' * leases_size)
    +        f.flush()
    +
    +        # An interrupt here will corrupt the leases.
    +
             f.seek(new_extra_lease_offset)
             f.write(extra_lease_data)
    -        # an interrupt here will corrupt the leases, iff the move caused the
    -        # extra leases to overlap.
             self._write_extra_lease_offset(f, new_extra_lease_offset)
    …
             if offset+length >= data_length:
                 # They are expanding their data size.
    +
                 if self.DATA_OFFSET+offset+length > extra_lease_offset:
    +                # TODO: allow containers to shrink. For now, they remain
    +                # large.
    +
                     # Their new data won't fit in the current container, so we
                     # have to move the leases. With luck, they're expanding it
    …
                 # Their data now fits in the current container. We must write
                 # their new data and modify the recorded data size.
    +
    +            # Fill any newly exposed empty space with 0's.
    +            if offset > data_length:
    +                f.seek(self.DATA_OFFSET+data_length)
    +                f.write('\x00'*(offset - data_length))
    +                f.flush()
    +
                 new_data_length = offset+length
                 self._write_data_length(f, new_data_length)
    …
             self._write_share_data(f, offset, data)
             if new_length is not None:
    -            self._change_container_size(f, new_length)
    -            f.seek(self.DATA_LENGTH_OFFSET)
    -            f.write(struct.pack(">Q", new_length))
    +            cur_length = self._read_data_length(f)
    +            if new_length < cur_length:
    +                self._write_data_length(f, new_length)
    +            # TODO: if we're going to shrink the share file when the
    +            # share data has shrunk, then call
    +            # self._change_container_size() here.
             f.close()
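The hole-filling hunk added to `_write_share_data` can be illustrated on a plain buffer. This is a simplified sketch, not the real method: `DATA_OFFSET` is taken as 0 (in the actual share file the data region starts after a header), and `write_with_zero_fill` is a hypothetical standalone helper:

```python
import io

def write_with_zero_fill(f, data_length, offset, data):
    """Write `data` at `offset` in the data region, zero-filling any gap
    between the current end of data (data_length) and offset, as the
    patched _write_share_data does (re #1528). Returns the new data length."""
    if offset > data_length:
        # Fill any newly exposed empty space with 0's instead of leaving
        # it unspecified (which might expose stale secrets to readers).
        f.seek(data_length)
        f.write(b'\x00' * (offset - data_length))
    f.seek(offset)
    f.write(data)
    return max(data_length, offset + len(data))

buf = io.BytesIO()
buf.write(b'abc')                 # 3 bytes of existing share data
new_len = write_with_zero_fill(buf, 3, 6, b'XY')
# bytes 3-5 are now zeroes rather than unspecified garbage
```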
src/allmydata/storage/server.py (r942c5e51 → r32f8062)

                           "tolerates-immutable-read-overrun": True,
                           "delete-mutable-shares-with-zero-length-writev": True,
    +                      "fills-holes-with-zero-bytes": True,
                           "prevents-read-past-end-of-share-data": True,
                          },
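The new `fills-holes-with-zero-bytes` flag lets a client detect whether the single-zero-byte extension trick is safe. A sketch of such a check (the dict shape mirrors the fragment above; `server_fills_holes` is a hypothetical helper, and fetching the version dict from the server is elided):

```python
# Version dict as advertised by a v1.8.3+ storage server (shape taken
# from the server.py fragment above).
version = {
    "http://allmydata.org/tahoe/protocols/storage/v1": {
        "tolerates-immutable-read-overrun": True,
        "delete-mutable-shares-with-zero-length-writev": True,
        "fills-holes-with-zero-bytes": True,
        "prevents-read-past-end-of-share-data": True,
    },
}

def server_fills_holes(version_dict):
    v1 = version_dict.get(
        "http://allmydata.org/tahoe/protocols/storage/v1", {})
    # An absent flag means an older server: assume holes are NOT zeroed,
    # so the client must write the gap bytes explicitly.
    return bool(v1.get("fills-holes-with-zero-bytes", False))
```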