#1523 closed defect (duplicate)

MDMF upload via web-API uses much more memory in the gateway process than expected

Reported by: davidsarah
Owned by: davidsarah
Priority: major
Milestone: undecided
Component: code-frontend-web
Version: 1.9.0a1
Keywords: mdmf memory-leak tahoe-put performance
Cc:
Launchpad Bug:

Description (last modified by davidsarah)

Split from #113:

The web-API interface does not support streaming (#113, #320), so the gateway is expected to hold the whole file in memory in order to upload it. However, when using tahoe put to upload an MDMF file, the increase in the gateway process's memory usage is much larger than the file size. For example, when uploading a 191 MiB MDMF file in 1.9alpha using tahoe put --mutable --mutable-type=mdmf, the peak RSS of the gateway (which was also a storage server in that test) was over 1300 MiB. There is also a large memory leak of more than 700 MiB after the upload has finished.
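
For concreteness, the back-of-the-envelope arithmetic on those numbers looks like this (a sketch; the "expected" figure assumes the gateway needs roughly its idle footprint of ~44 MiB, measured in comment:4 below, plus one file-sized buffer):

# Back-of-the-envelope check of the figures above (all numbers approximate).
# Assumption: without streaming, the gateway should only need about one
# file-sized buffer on top of its idle footprint (~44 MiB, see comment:4).
MiB = 1024.0 * 1024

file_size = 200000000 / MiB        # the 'zeros' test file, ~190.7 MiB
idle_rss  = 44.0                   # MiB, gateway before the upload
expected_peak = idle_rss + file_size          # ~235 MiB
observed_peak = 1300.0                        # MiB (lower bound reported above)
observed_leak = 700.0                         # MiB still resident afterwards

print("expected peak ~%.0f MiB; observed >%.0f MiB (>%.1fx the file size)"
      % (expected_peak, observed_peak, observed_peak / file_size))
print("leak >%.0f MiB, i.e. >%.1fx the file size"
      % (observed_leak, observed_leak / file_size))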

I originally thought that the memory usage was larger when using the web-API than when updating the same file using SFTP, but that turned out to be wrong (I may have been misled by initially running the SFTP experiment without restarting the nodes).

Change History (17)

comment:1 Changed at 2011-09-02T15:54:16Z by davidsarah

  • Keywords performance added
  • Owner set to davidsarah
  • Status changed from new to assigned

comment:2 follow-up: ↓ 3 Changed at 2011-09-02T16:31:42Z by warner

I can imagine two problems here... which are you thinking of?

  • SFTP only uses SDMF so far (I think), so maybe MDMF uploads use more memory than SDMF, regardless of how the data gets to the gateway
  • the webapi path holds temporary data in different ways than SFTP does (in which case we'd be comparing SFTP-to-SDMF against webapi-to-SDMF)

comment:3 in reply to: ↑ 2 Changed at 2011-09-02T17:24:10Z by davidsarah

Replying to warner:

I can imagine two problems here... which are you thinking of?

  • SFTP only uses SDMF so far (I think), so maybe MDMF uploads use more memory than SDMF, regardless of how the data gets to the gateway

SFTP will update existing mutable files that are either SDMF or MDMF. In this case we're using SFTP to update an MDMF file, as a baseline for the memory that would be used if streaming were supported.

  • the webapi path holds temporary data in different ways than SFTP does (in which case we'd be comparing SFTP-to-SDMF against webapi-to-SDMF)

No, I'm comparing SFTP-to-MDMF against webapi-to-MDMF. We expect the memory usage for SDMF to be bad, because SDMF uses a single whole-file segment and there would be more than one segment-sized buffer in memory. For webapi-to-MDMF, on the other hand, we should be able to have only one file-sized buffer in memory even without supporting streaming, so the memory usage should only be worse than for SFTP-to-MDMF by approximately the file size.
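
To make that expectation concrete, here is a rough sketch of the buffer accounting (the 128 KiB MDMF segment size and the buffer counts here are illustrative assumptions, not measurements):

# Sketch of the expected accounting (illustrative assumptions: MDMF segments
# of 128 KiB, and a handful of in-flight segment-sized buffers in the
# publisher; neither number is taken from measurements).
MiB = 1024.0 * 1024
file_size = 200000000 / MiB             # ~190.7 MiB

# SDMF: the whole file is one segment, so every working copy of the data
# (plaintext, ciphertext, shares in flight) is file-sized.
sdmf_copies = 3                         # illustrative
print("SDMF expectation:            ~%.0f MiB over baseline" % (sdmf_copies * file_size))

# MDMF: segments are small, so the publish pipeline itself should only need
# a few segment-sized buffers...
segment = 128 / 1024.0                  # 128 KiB, in MiB
pipeline = 10 * segment
# ...plus, for the non-streaming webapi (#113/#320), one file-sized copy of
# the request body held by the gateway.
print("MDMF-via-webapi expectation: ~%.0f MiB over baseline" % (file_size + pipeline))
print("MDMF-via-webapi observed:    ~1340 MiB over baseline (see comment:4 below)")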

comment:4 Changed at 2011-09-02T18:16:59Z by davidsarah

I started an introducer, 4 storage servers, and a gateway; this time the gateway had storage disabled. The gateway's encoding parameters were k=3, happy=1, N=10. Initially the memory usage, as measured by ps -O rss,vsize -C tahoe (command paths snipped for readability), was:

  PID   RSS    VSZ S TTY          TIME COMMAND
16979 39900 163864 S ?        00:00:01 [...]/tahoe start ../grid/introducer
16989 35788 119252 S ?        00:00:00 [...]/tahoe start ../grid/server1
23864 35752 119028 S ?        00:00:00 [...]/tahoe start ../grid/server2
23898 35604 119432 S ?        00:00:00 [...]/tahoe start ../grid/server3
23919 35952 119576 S ?        00:00:00 [...]/tahoe start ../grid/server4
24326 43768 175908 S ?        00:00:00 [...]/tahoe start

I ran bin/tahoe put --mutable --mutable-type=mdmf zeros, where zeros is a file containing 200000000 zero bytes (190.7 MiB). The memory usage of the gateway initially climbed to 1384.5 MiB RSS:

  PID   RSS    VSZ S TTY          TIME COMMAND
16979 39896 163864 S ?        00:00:01 [...]/tahoe start ../grid/introducer
16989 36268 119700 S ?        00:00:00 [...]/tahoe start ../grid/server1
23864 36276 119720 S ?        00:00:00 [...]/tahoe start ../grid/server2
23898 36236 119916 S ?        00:00:00 [...]/tahoe start ../grid/server3
23919 36108 119728 S ?        00:00:00 [...]/tahoe start ../grid/server4
24326 1417760 1549184 R ?     00:00:14 [...]/tahoe start
26433  5064  28488 S pts/3    00:00:00 /usr/bin/python bin/tahoe put --mutable --mutable-type=mdmf zeros
26434 30280 100568 S pts/3    00:00:01 [...]/tahoe put --mutable --mutable-type=mdmf zeros

and then the memory usage of the storage servers climbed uniformly to about 117 MiB RSS each:

  PID   RSS    VSZ S TTY          TIME COMMAND
16979 39688 163864 S ?        00:00:01 [...]/tahoe start ../grid/introducer
16989 120040 203588 D ?       00:00:03 [...]/tahoe start ../grid/server1
23864 119952 203512 D ?       00:00:03 [...]/tahoe start ../grid/server2
23898 119924 203804 R ?       00:00:03 [...]/tahoe start ../grid/server3
23919 119796 203524 D ?       00:00:02 [...]/tahoe start ../grid/server4
24326 1417252 1549184 S ?     00:00:36 [...]/tahoe start
26433  5016  28488 S pts/3    00:00:00 /usr/bin/python bin/tahoe put --mutable --mutable-type=mdmf zeros
26434 30196 100568 S pts/3    00:00:01 [...]/tahoe put --mutable --mutable-type=mdmf zeros

and then, by the end of the command, more irregularly to a different amount on each server, while the gateway's usage dropped to about 746 MiB RSS:

  PID   RSS    VSZ S TTY          TIME COMMAND
16979 38984 163864 S ?        00:00:01 [...]/tahoe start ../grid/introducer
16989 127284 211508 S ?       00:00:06 [...]/tahoe start ../grid/server1
23864 165436 249888 S ?       00:00:05 [...]/tahoe start ../grid/server2
23898 204000 288408 S ?       00:00:09 [...]/tahoe start ../grid/server3
23919 203812 288128 S ?       00:00:09 [...]/tahoe start ../grid/server4
24326 763624 896564 S ?       00:01:10 [...]/tahoe start

There seems to be quite a severe memory leak, since these figures hadn't decreased 20 minutes later.
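
Aside for anyone reproducing this: the peaks above have to be caught by eye with repeated ps snapshots. On Linux the kernel records each process's peak RSS itself (VmHWM in /proc/<pid>/status), so a small helper along the following lines, which is my addition rather than part of the procedure above, can report it directly:

#!/usr/bin/env python
# Report each tahoe process's kernel-tracked peak RSS (VmHWM) alongside its
# current RSS. Linux-only sketch; matching on 'tahoe' in the command line is
# an assumption about the test setup above.
import glob, re

for status_path in glob.glob('/proc/[0-9]*/status'):
    pid = status_path.split('/')[2]
    try:
        status = open(status_path).read()
        cmdline = open('/proc/%s/cmdline' % pid).read()
    except IOError:
        continue                 # the process went away while we were looking
    if 'tahoe' not in cmdline:
        continue
    hwm = re.search(r'VmHWM:\s+(\d+) kB', status)
    rss = re.search(r'VmRSS:\s+(\d+) kB', status)
    if hwm and rss:
        print("pid %s  peak %7.1f MiB  current %7.1f MiB"
              % (pid, int(hwm.group(1)) / 1024.0, int(rss.group(1)) / 1024.0))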

comment:5 Changed at 2011-09-02T18:27:50Z by davidsarah

I restarted all the nodes, and repeated the experiment using bin/tahoe put --mutable --mutable-type=mdmf zeros <URI from previous run>, i.e. an update rather than an initial upload. The results were similar except that the peak RSS of the gateway process was 1322.5 MiB, and the final usages were:

  PID   RSS    VSZ S TTY          TIME COMMAND
28617 39340 163824 S ?        00:00:00 [...]/tahoe start ../grid/introducer
28627 202768 286752 S ?       00:00:05 [...]/tahoe start ../grid/server1
28637 204752 288772 S ?       00:00:05 [...]/tahoe start ../grid/server2
28647 289676 373704 S ?       00:00:07 [...]/tahoe start ../grid/server3
28657 204468 288544 S ?       00:00:07 [...]/tahoe start ../grid/server4
28667 700936 829076 S ?       00:01:05 [...]/tahoe start

(i.e. 684.5 MiB RSS for the gateway). Again these hadn't decreased several minutes later.

comment:6 Changed at 2011-09-02T18:46:10Z by davidsarah

I restarted all the nodes again, and logged in using sftp -P 8022 127.0.0.1. The memory usage at that point was:

  PID   RSS    VSZ S TTY          TIME COMMAND
29546 39984 163876 S ?        00:00:00 [...]/tahoe start ../grid/introducer
29566 35408 118816 S ?        00:00:00 [...]/tahoe start ../grid/server1
29593 35736 119120 S ?        00:00:00 [...]/tahoe start ../grid/server2
29604 35552 119144 S ?        00:00:00 [...]/tahoe start ../grid/server3
29614 35388 119048 S ?        00:00:00 [...]/tahoe start ../grid/server4
29624 44040 173988 S ?        00:00:00 [...]/tahoe start

I issued the SFTP client command put zeros /uri/<URI from first run>, i.e. updating the same MDMF file.

The results were almost identical to the first experiment (except that the gateway memory usage increase only started after the file had been completely received over SFTP). The peak RSS of the gateway was 1386 MiB, and the final usages were:

  PID   RSS    VSZ S TTY          TIME COMMAND
29546 38480 163876 S ?        00:00:00 [...]/tahoe start ../grid/introducer
29566 180072 266984 S ?       00:00:05 [...]/tahoe start ../grid/server1
29593 134600 221044 S ?       00:00:06 [...]/tahoe start ../grid/server2
29604 203136 288520 S ?       00:00:08 [...]/tahoe start ../grid/server3
29614 286520 373392 S ?       00:00:08 [...]/tahoe start ../grid/server4
29624 763504 896112 S ?       00:01:42 [...]/tahoe start

(i.e. 745.6 MiB RSS for the gateway). These also hadn't decreased several minutes later, or on closing the SFTP connection.

This doesn't support my contention in comment:3 and the original description that the web-API usage is higher.

Last edited at 2011-09-02T18:54:13Z by davidsarah

comment:7 Changed at 2011-09-02T18:50:57Z by davidsarah

  • Description modified
  • Summary changed from "MDMF upload via web-API uses much more memory in the gateway process than updating the same file via SFTP" to "MDMF upload via web-API uses much more memory in the gateway process than expected"

comment:8 Changed at 2011-09-02T19:07:53Z by davidsarah

  • Description modified

comment:9 follow-up: ↓ 10 Changed at 2011-09-02T19:23:28Z by davidsarah

When uploading an immutable file using bin/tahoe put zeros (again after restarting the nodes), the memory usage did not increase above:

  PID   RSS    VSZ S TTY          TIME COMMAND
 1073 39776 163820 S ?        00:00:00 [...]/tahoe start ../grid/introducer
 1088 40784 124072 S ?        00:00:04 [...]/tahoe start ../grid/server1
 1098 41840 125264 S ?        00:00:04 [...]/tahoe start ../grid/server2
 1108 40940 124396 S ?        00:00:04 [...]/tahoe start ../grid/server3
 1118 40928 124532 S ?        00:00:09 [...]/tahoe start ../grid/server4
 1129 47900 175688 S ?        00:00:56 [...]/tahoe start

(I tried again using data from /dev/urandom in case of some optimization involving zero memory pages, with the same results.)

I don't understand how the gateway avoided holding the file in memory. Did somebody fix #320 while I wasn't looking? :-)

comment:10 in reply to: ↑ 9 Changed at 2011-09-02T19:26:21Z by davidsarah

Replying to davidsarah:

I don't understand how the gateway avoided holding the file in memory. Did somebody fix #320 while I wasn't looking? :-)

Oh, it was holding the data in a temporary file.
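
For reference, the general technique is to spool the request body to disk above some size threshold instead of keeping it in RAM. The following is only a minimal stdlib illustration of that idea, not the code path the gateway actually uses:

# Minimal illustration of the spool-to-disk idea using the standard library;
# not the gateway's actual implementation. Small bodies stay in RAM, large
# ones are transparently rolled over to a temporary file on disk.
import tempfile

MAX_IN_MEMORY = 1024 * 1024          # 1 MiB threshold, an arbitrary choice

def receive_body(chunks):
    """Accumulate an uploaded body without holding all of it in RAM."""
    body = tempfile.SpooledTemporaryFile(max_size=MAX_IN_MEMORY)
    for chunk in chunks:             # e.g. data arriving from the network
        body.write(chunk)
    body.seek(0)
    return body                      # file-like; RSS stays small for big uploads

if __name__ == "__main__":
    body = receive_body(b"\0" * 65536 for _ in range(3200))   # ~200 MiB total
    print("read back %d bytes from the spooled body" % len(body.read(1024)))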

comment:11 Changed at 2011-09-02T19:49:05Z by davidsarah

For comparison, SDMF (still using 1.9alpha) has a peak gateway memory usage of 1314.5 MiB RSS; toward the end of the upload the usage was:

  PID   RSS    VSZ S TTY          TIME COMMAND
 3284 41252 165572 S ?        00:00:00 [...]/tahoe restart ../grid/introducer
 3297 100980 184612 S ?       00:00:04 [...]/tahoe restart ../grid/server1
 3310 219140 302992 S ?       00:00:06 [...]/tahoe restart ../grid/server2
 3320 226224 310220 S ?       00:00:06 [...]/tahoe restart ../grid/server3
 3332 165964 249720 S ?       00:00:04 [...]/tahoe restart ../grid/server4
 3344 1085276 1213212 R ?     00:01:01 [...]/tahoe restart
 3364  4776  28488 S pts/3    00:00:00 /usr/bin/python bin/tahoe put --mutable --mutable-type=sdmf zeros
 3365 29840 100572 S pts/3    00:00:00 [...]/tahoe put --mutable --mutable-type=sdmf zeros

After the upload it went back down to:

  PID   RSS    VSZ S TTY          TIME COMMAND
 3284 41252 165572 S ?        00:00:00 [...]/tahoe restart ../grid/introducer
 3297 101092 184612 S ?       00:00:04 [...]/tahoe restart ../grid/server1
 3310 36036 119484 S ?        00:00:07 [...]/tahoe restart ../grid/server2
 3320 230492 314044 S ?       00:00:07 [...]/tahoe restart ../grid/server3
 3332 35936 119504 S ?        00:00:04 [...]/tahoe restart ../grid/server4
 3344 43628 171488 S ?        00:01:02 [...]/tahoe restart

So, there is no memory leak in the gateway for SDMF, although it seems that the storage servers are taking a while to release their memory.

Edit: the memory usage of server1 and server3 was still this high after several hours.

Last edited at 2011-09-03T03:48:32Z by davidsarah

comment:12 Changed at 2011-09-02T19:49:40Z by davidsarah

  • Keywords mdmf leak added

comment:13 Changed at 2011-09-02T19:51:10Z by davidsarah

  • Keywords memory-leak added; memory leak removed

comment:14 Changed at 2011-09-03T03:45:33Z by davidsarah

For an upload of a 100000000-byte (95.4 MiB) file using tahoe put --mutable --mutable-type=mdmf, the peak gateway memory usage is 713.8 MiB RSS, and the final usage is:

  PID   RSS    VSZ S TTY          TIME COMMAND
15837 40124 164148 S ?        00:00:00 [...]/tahoe restart ../grid/introducer
15853 162204 245664 S ?       00:00:04 [...]/tahoe restart ../grid/server1
15863 155156 238736 S ?       00:00:04 [...]/tahoe restart ../grid/server2
15873 119860 203288 S ?       00:00:02 [...]/tahoe restart ../grid/server3
15883 119756 203440 S ?       00:00:02 [...]/tahoe restart ../grid/server4
15893 404656 532308 S ?       00:00:33 [...]/tahoe restart

(i.e. 395.2 MiB RSS for the gateway).

So, the major part of the leak is, unsurprisingly, roughly proportional to the filesize.
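
Spelling that out against the two data points (using the ~44 MiB idle-gateway RSS from comment:4 as the baseline):

# Leak size vs. file size for the two MDMF uploads measured in this ticket.
# The ~44 MiB baseline is the idle gateway RSS from comment:4.
MiB = 1024.0 * 1024
baseline = 44.0

runs = [
    # (file size in bytes, final gateway RSS in MiB, where measured)
    (200000000, 745.7, "comment:4"),
    (100000000, 395.2, "this comment"),
]
for size, final_rss, where in runs:
    size_mib = size / MiB
    leak = final_rss - baseline
    print("%s: %.1f MiB file -> leak ~%.0f MiB (%.1fx the file size)"
          % (where, size_mib, leak, leak / size_mib))
# Both come out at roughly 3.7x the file size.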

comment:15 Changed at 2011-09-03T03:59:41Z by davidsarah

I repeated the SDMF test with the v1.8.2 release. There doesn't appear to be any memory regression for SDMF uploads in 1.9alpha (I haven't tried download). The leak in MDMF is serious enough that we might still want to treat it as a blocker for 1.9beta, though.

Last edited at 2011-09-03T04:04:45Z by davidsarah

comment:16 Changed at 2011-09-03T06:42:41Z by zooko

Brian said on IRC that he didn't want to treat this as a blocker for v1.9. Presumably his reasoning is that it isn't a regression; nothing in MDMF can be a regression. :-) Also, nothing in MDMF is turned on by default in v1.9. Still, this is pretty disappointing behavior. Does it mean people cannot upload (put) an MDMF larger than their RAM (or actually, larger than about 1/6 of their available RAM)? If we're going to ship v1.9 with this behavior, we certainly need to make sure docs/performance.rst documents it!
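
For what it's worth, working that fraction out from the peak measured in comment:4, and assuming the peak-to-file-size ratio stays roughly constant at other sizes (an extrapolation, not something verified here):

# Rough bound on the largest MDMF that fits through the gateway, assuming
# the ~7x peak-RSS-to-file-size ratio seen in comment:4 holds at other sizes
# (an extrapolation, not something measured here).
MiB = 1024.0 * 1024
peak_rss  = 1384.5                  # MiB, comment:4
file_size = 200000000 / MiB         # ~190.7 MiB
ratio = peak_rss / file_size        # ~7.3x

for ram_gib in (1, 2, 4):
    largest = ram_gib * 1024 / ratio
    print("%d GiB of RAM -> largest practical MDMF put is roughly %.0f MiB"
          % (ram_gib, largest))
# i.e. closer to 1/7 of RAM than 1/6, but the same order of magnitude.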

Brian also said that he thought fixing it would require adding a new method to the RIStorageServer API. I can't imagine why that would be, but I'm way too sleepy at this point to think about such things.

comment:17 Changed at 2011-09-03T15:59:38Z by davidsarah

  • Resolution set to duplicate
  • Status changed from assigned to closed

Duplicate of #1513. I'd forgotten that ticket when I filed this one.
