[tahoe-lafs-trac-stream] [Tahoe-LAFS] #2251: Database full or disk full error, stops backups

Tahoe-LAFS trac at tahoe-lafs.org
Mon Jun 30 22:07:44 UTC 2014


#2251: Database full or disk full error, stops backups
--------------------------+----------------------------------------
     Reporter:  CyberAxe  |      Owner:  daira
         Type:  defect    |     Status:  new
     Priority:  normal    |  Milestone:  undecided
    Component:  unknown   |    Version:  1.10.0
   Resolution:            |   Keywords:  database, full, file space
Launchpad Bug:            |
--------------------------+----------------------------------------

Comment (by Zancas):

 After the issue was reported to me, I began investigating the state of
 !CyberAxe's server.
 My investigation began at ~ 2014-06-27T19:45Z.

 I ssh'd into the server and ran:

 {{{
 Last login: Mon Apr  7 21:32:11 2014 from 184-96-237-182.hlrn.qwest.net
  ubuntu@ip-10-185-214-61:~$ df
  Filesystem     1K-blocks    Used Available Use% Mounted on
  /dev/xvda1       8256952 7838464         0 100% /
  udev              299036       8    299028   1% /dev
  tmpfs              60948     160     60788   1% /run
  none                5120       0      5120   0% /run/lock
  none              304720       0    304720   0% /run/shm
 }}}

 Here's how big the storageserver/storage databases were:
 {{{
 customer@ip-10-185-214-61:~/storageserver/storage$ ls -l ./*
 -rw------- 1 customer customer        0 May  1 09:05 ./accounting_crawler.state
 -rw------- 1 customer customer    16384 Apr 30 14:27 ./bucket_counter.state
 -rw-r--r-- 1 customer customer 21267456 Apr 14 05:16 ./leasedb.sqlite
 -rw-r--r-- 1 customer customer    32768 May 26 08:46 ./leasedb.sqlite-shm
 -rw-r--r-- 1 customer customer  1246104 May 25 08:55 ./leasedb.sqlite-wal
 }}}

 Here's what I pasted into IRC after checking disk usage with 'du':
 {{{
 <zancas> customer@ip-10-185-214-61:~$ du -sh {storageserver,introducer}/logs
 [14:01]
 <zancas> 6.1G    storageserver/logs
 <zancas> 20K     introducer/logs
 <zancas>
 <zooko> aha
 <zancas> customer@ip-10-185-214-61:~/storageserver/logs$ pwd
 <zancas> /home/customer/storageserver/logs
 <zancas> customer@ip-10-185-214-61:~/storageserver/logs$ ls -l twistd.log.999
 <zancas> -rw------- 1 customer customer 1000057 Apr 26 20:43 twistd.log.999
 }}}
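
 (For anyone retracing this: a command along the following lines lists the
 largest files under the log directory. The path comes from the paste above;
 the rest is an illustrative sketch, not a transcript from the server.)

 {{{
 # Sketch only -- not copied from the actual session.
 cd /home/customer/storageserver/logs
 du -sh ./* | sort -h | tail -n 20    # the 20 largest rotated log files
 df -h /                              # re-check free space on the root fs
 }}}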

 I tar'd and bzip2'd the log files, and scp'd the archive to my local
 dev machine.
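
 For the record, that archiving step was roughly of the following shape (an
 illustrative sketch only: the archive filename and the destination host
 'devbox' are placeholders, not the exact commands that were run):

 {{{
 # Sketch only -- the filenames and the 'devbox' host are assumptions.
 cd /home/customer/storageserver
 tar -cjf twistd-logs.tar.bz2 logs/      # tar + bzip2 the log directory
 scp twistd-logs.tar.bz2 devbox:/tmp/    # copy the archive off the server
 }}}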

 Here's a 'df' I ran on Saturday, before informing !CyberAxe that we had a
 fix to test:
 {{{
 customer@ip-10-185-214-61:~/storageserver/logs$ df -h
 Filesystem      Size  Used Avail Use% Mounted on
 /dev/xvda1      7.9G  1.5G  6.1G  19% /
 udev            293M  8.0K  293M   1% /dev
 tmpfs            60M  160K   60M   1% /run
 none            5.0M     0  5.0M   0% /run/lock
 none            298M     0  298M   0% /run/shm
 }}}

 Here's the result of a df run at about 2014-06-30T22:09:30Z:
 {{{
 customer@ip-10-185-214-61:~/storageserver/logs$ df -h
 Filesystem      Size  Used Avail Use% Mounted on
 /dev/xvda1      7.9G  1.5G  6.1G  20% /
 udev            293M  8.0K  293M   1% /dev
 tmpfs            60M  160K   60M   1% /run
 none            5.0M     0  5.0M   0% /run/lock
 none            298M     0  298M   0% /run/shm
 customer@ip-10-185-214-61:~/storageserver/logs$
 }}}

 If I understand correctly, the second df run above occurred *after*
 !CyberAxe attempted another backup, which failed as reported in ticket:2254.

--
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2251#comment:2>
Tahoe-LAFS <https://Tahoe-LAFS.org>
secure decentralized storage

