
Re: Performance Farm Release

From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "Scott I(dot) Luxenberg" <Scott(dot)Luxenberg(at)noblis(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance Farm Release
Date: 2010-08-31 07:28:24
Message-ID: 4C7CAF18.7080705@2ndquadrant.com
Lists: pgsql-hackers
Stephen Frost wrote:
> You can certainly run it yourself locally w/o setting it up to report
> back to the build or performance farm..  So, yes, you can, you'll just
have to look through the outputs yourself and it won't necessarily make
> much sense unless you've been doing those runs for a period of time to
> get a feel for how volatile the speed is on your system..
>   

Thanks again to Scott for working on this all summer, and to Stephen and 
Noblis for helping fund it.  There were a lot of small pieces that 
needed to come together just right to make this all fit, and I hope that 
will let it integrate into the core infrastructure soon.

It's probably not obvious to everyone else where this stands as far as 
reporting results goes, though, so let me expand on what Stephen said 
here.  When you run performance tests with this, the results are stored 
locally.  Scott demonstrated that you can get basic reports out of that, 
and I'm content that the right initial tests are being run and that the 
most useful data to record is being saved.  But none of the performance 
numbers are sent anywhere yet--they're just stored as CSV files.  From 
the README:  "WHENEVER YOU RUN A PERFORMANCE TEST, RUN WITH --test, so 
not to spam the buildfarm server".  That's the bad news that warning is 
conveying:  you can't upload results yet.
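
To make that concrete, here is a rough Python sketch of the kind of 
summary you can already pull out of those local CSV files yourself.  The 
file name and column names ("scale", "clients", "tps") are illustrative 
assumptions on my part, not the client's actual output format:

    # Rough sketch only: summarize locally stored perffarm results.
    # The file name and column names are assumptions for illustration;
    # the real layout is whatever the client writes out.
    import csv
    from statistics import mean

    def summarize(path="pgbench_results.csv"):
        groups = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                key = (row["scale"], row["clients"])   # group repeated runs
                groups.setdefault(key, []).append(float(row["tps"]))
        for (scale, clients), tps in sorted(groups.items()):
            print(f"scale={scale} clients={clients} "
                  f"avg_tps={mean(tps):.1f} runs={len(tps)}")

    if __name__ == "__main__":
        summarize()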

I expect the next steps here to look like this:

1) Nail down what subset of the information gathered locally should be 
uploaded to the buildfarm master server.  Probably just the same columns 
of data already being saved for each test, perhaps with some extra 
metadata (see the sketch after this list).  The local animal will also 
have graphs and such, but I think it's unrealistic to upload all of 
those, for a number of reasons.  They're only really useful for drilling 
down when there is some sort of regression, and hopefully the animal is 
still alive when that happens.

2) Update the buildfarm server code to accept and store that data.

3) Update this perffarm client to talk to that.

4) Merge the perffarm fork changes into the mainline buildfarm code.  I 
expect continued bitrot of this code as changes are made to the regular 
buildfarm client, so it might be worth considering that sooner rather 
than later.
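
To illustrate what step 1 might settle on, here is a hypothetical Python 
sketch of one uploaded result record: the per-test columns already being 
saved locally plus a little identifying metadata.  None of these field 
names come from the actual buildfarm protocol; they are placeholders to 
show the rough shape of the data:

    # Hypothetical shape of one uploaded result record; field names are
    # illustrative only, not the actual buildfarm upload format.
    import json

    result_record = {
        "animal": "examplecat",          # reporting animal (illustrative)
        "branch": "HEAD",                # branch the run was built from
        "snapshot": "2010-08-31 07:00",  # snapshot timestamp of that build
        "test": "pgbench",
        "scale": 100,
        "clients": 8,
        "tps": 1234.5,
        "duration_secs": 600,
    }

    # Once steps 2 and 3 exist, the client would serialize something like
    # this and send it to the buildfarm master instead of only keeping it
    # locally.
    print(json.dumps(result_record, indent=2))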

My understanding is that the code for the server side of the buildfarm 
isn't public to everyone right now, just because of the time it would 
take to clean it up for that.  So a couple of the parts here are 
bottlenecked on how much spare time Andrew has, and there are more 
important git- and buildfarm-related things for him to worry about right 
now.

Presuming nothing exciting happens on this before then, I'm hoping to 
catch up with Andrew at PG West and map out how to get the rest of this 
done, so that it goes live somewhere during 9.1 development.  Now that 
the code has been released from the Noblis fortress, I can start 
cleaning up some of the little details on it before then too (e.g. it 
doesn't work for anything but 9.0 yet).

-- 
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
greg(at)2ndQuadrant(dot)com   www.2ndQuadrant.us

