[tahoe-lafs-trac-stream] [tahoe-lafs] #2054: Track unit test duration for possible regressions

tahoe-lafs trac at tahoe-lafs.org
Thu Aug 8 13:03:44 UTC 2013


#2054: Track unit test duration for possible regressions
--------------------------------+---------------------------
 Reporter:  markberger          |          Owner:
     Type:  enhancement         |         Status:  new
 Priority:  normal              |      Milestone:  undecided
Component:  dev-infrastructure  |        Version:  1.10.0
 Keywords:                      |  Launchpad Bug:
--------------------------------+---------------------------
 Currently there is no way to track performance regressions in Tahoe-LAFS
 over time. Ideally, real-world performance testing would occur (see
 tickets #1406 and #1530), but we might get 80% of what we want by
 tracking the duration of each unit test on a dedicated machine. This
 would also draw attention to regressions in the test suite that were
 missed in code review (see ticket #2048).

 The overall process would be something like this:

 * Set buildbot to track master and branches with significant changes
 * A developer pushes a commit to one of those branches
 * Buildbot runs the test suite on a slave (possibly multiple times to
 get good statistics) and pushes those stats to a database.
 * If a significant performance regression occurs, an email is sent to
 tahoe-dev or tahoe-bot complains on IRC. We could also have a bot that
 posts a comment on GitHub.
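
 The "pushes those stats to a database" step could look something like the
 sketch below. This is only an illustration: the schema, function names,
 and the use of sqlite3 (standing in for a hosted CouchDB/MongoDB
 instance) are all assumptions, not existing code.

```python
# Hypothetical sketch: store per-test durations for one build.
# sqlite3 stands in for the hosted database; schema is an assumption.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS test_durations (
    commit_id TEXT NOT NULL,
    test_name TEXT NOT NULL,
    seconds   REAL NOT NULL
)
"""

def record_durations(conn, commit_id, durations):
    """Store {test_name: seconds} measured for one commit."""
    conn.execute(SCHEMA)
    conn.executemany(
        "INSERT INTO test_durations VALUES (?, ?, ?)",
        [(commit_id, name, secs) for name, secs in durations.items()],
    )
    conn.commit()

def history_for(conn, test_name):
    """All recorded durations for one test, oldest first."""
    rows = conn.execute(
        "SELECT seconds FROM test_durations WHERE test_name = ? "
        "ORDER BY rowid", (test_name,)
    )
    return [r[0] for r in rows]
```

 A buildbot step would call record_durations() once per run; the
 notification job would then read back history_for() each test.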

 There should also be sanity checks over longer periods of time, such as
 a month, to catch gradual slowdowns that per-commit comparisons would
 miss.
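
 One way the longer-horizon check might work is to compare month-over-month
 mean durations and flag large drifts; the sample format and the 10%
 threshold below are assumptions for illustration.

```python
# Hypothetical longer-horizon sanity check: flag months whose mean
# test duration drifted noticeably above the previous month's mean.
from collections import defaultdict

def monthly_drift(samples, threshold=1.10):
    """samples: list of ("YYYY-MM", seconds) pairs.
    Returns months whose mean duration exceeds the previous month's
    mean by more than `threshold` (1.10 = 10% slower)."""
    by_month = defaultdict(list)
    for month, secs in samples:
        by_month[month].append(secs)
    months = sorted(by_month)
    flagged = []
    for prev, cur in zip(months, months[1:]):
        prev_mean = sum(by_month[prev]) / len(by_month[prev])
        cur_mean = sum(by_month[cur]) / len(by_month[cur])
        if cur_mean > prev_mean * threshold:
            flagged.append(cur)
    return flagged
```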

 To close this ticket:
 * Set up a free database instance on a site like
 [http://www.iriscouch.com/ Iris Couch] or [http://www.mongohq.com/pricing
 MongoHQ].
 * Write a script that has buildbot push stats to the database.
 * Create some sort of recurring job (e.g. a cron task) that will notify
 the dev team when a regression occurs.
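
 The detection logic for that recurring job could be as simple as
 comparing each test's latest duration against the median of its recent
 history. A minimal sketch (the 20% threshold and the data shapes are
 assumptions):

```python
# Hypothetical regression check: flag tests whose latest run is
# significantly slower than the median of their recent history.
import statistics

def find_regressions(history, latest, threshold=1.20):
    """history: {test_name: [past durations]}, latest: {test_name: seconds}.
    Returns {test_name: (median, latest)} for tests slower than
    `threshold` times their historical median."""
    regressions = {}
    for name, secs in latest.items():
        past = history.get(name)
        if not past:
            continue  # no baseline yet for brand-new tests
        baseline = statistics.median(past)
        if secs > baseline * threshold:
            regressions[name] = (baseline, secs)
    return regressions
```

 A nightly cron job could run this against the database and mail
 tahoe-dev (or post to IRC) whenever the returned dict is non-empty.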

 Some other things we might want to consider:
 * Buildbot already tracks the duration of each unit test, so maybe we
 should dump all of that historical data into the database. I'm not sure
 how useful it would be, but we might want to back it up anyway.
 * Have a web page that displays all of this information in graphs.
 * Since performance varies from machine to machine, maybe we want a
 script that iterates through the commit history and runs the unit tests
 on a single dedicated machine.

-- 
Ticket URL: <https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2054>
tahoe-lafs <https://tahoe-lafs.org>
secure decentralized storage


More information about the tahoe-lafs-trac-stream mailing list