Re: [GSOC 18] Performance Farm Project——Initialization Project

From: Dave Page <dpage(at)pgadmin(dot)org>
To: Mark Wong <mark(at)2ndQuadrant(dot)com>
Cc: Hongyuan Ma <cs_maleicacid(at)163(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [GSOC 18] Performance Farm Project——Initialization Project
Date: 2018-03-15 02:47:34
Message-ID: 5D86FB25-BC12-4B91-8BA9-3E7C5794D8E7@pgadmin.org
Lists: pgsql-hackers

> On 14 Mar 2018, at 19:14, Mark Wong <mark(at)2ndQuadrant(dot)com> wrote:
>
> Hi,
>
> I have some additional comments on a couple of areas...
>
>> On Wed, Mar 14, 2018 at 05:33:00PM -0400, Dave Page wrote:
>>> On Tue, Mar 13, 2018 at 11:31 PM, Hongyuan Ma <cs_maleicacid(at)163(dot)com> wrote:
>>> At the same time I hope you can help me understand the functional
>>> requirements of this project more clearly. Here are some of my thoughts
>>> on PerfFarm:
>>>
>>> - I see this comment in the client folder (In line 15 of the "pgperffarm
>>> \ client \ benchmarks \ pgbench.py" file):
>>> '''
>>> # TODO allow running custom scripts, not just the default
>>> '''
>>> Will PerfFarm have enough test items that a search function, or even a
>>> sort function, should be provided?
>>>
>>
>> I don't know - Tomas or Mark are probably the best folks to ask about that.
>> I spent some time on the initial web app work, but then ran out of spare
>> cycles.
>
> Yes, there is potential for the number of tests to change over time.
> (Maybe a test is less relevant, maybe there is a more useful test down
> the road.)
>
> I haven't thought too much about sorting and searching functions. But
> right now there is just pgbench with 6 basic configurations: read/write,
> read-only, and a low, medium, and high data set size for each of those.
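
For reference, a rough sketch of how a client might enumerate those six
configurations, and also cover the TODO about custom scripts, could look
like the following. All names and values here are hypothetical, not the
current pgperffarm code:

    import itertools
    import subprocess

    # Illustrative scale factors for the low/medium/high data set sizes.
    SCALES = {'low': 10, 'medium': 100, 'high': 1000}
    # pgbench's --select-only switches the built-in script to read-only.
    MODES = {'read-write': [], 'read-only': ['--select-only']}

    def init_db(dbname, scale):
        # Load the pgbench tables at the requested scale factor.
        subprocess.run(['pgbench', '-i', '-s', str(scale), dbname], check=True)

    def run_pgbench(dbname, mode, duration=300, custom_script=None):
        # Run one configuration and return pgbench's raw output for parsing.
        cmd = ['pgbench', '-T', str(duration)] + MODES[mode]
        if custom_script:
            # The TODO in pgbench.py: -f runs a user-supplied script file
            # instead of the built-in one.
            cmd = ['pgbench', '-T', str(duration), '-f', custom_script]
        cmd.append(dbname)
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    def run_all(dbname):
        results = {}
        for mode, scale_name in itertools.product(MODES, SCALES):
            init_db(dbname, SCALES[scale_name])
            results[(mode, scale_name)] = run_pgbench(dbname, mode)
        return results
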
>
>>> - What value will be used to determine if a machine's performance is
>>> improving or declining?
>>>
>>
>> Good question - I think that needs discussion, as it's not necessarily
>> clear-cut, given that performance may vary at different numbers of
>> clients.
>
> My thought here is that each test should provide a single metric,
> defined by the test itself, so I expect this to be defined in the test
> results sent to the web site.
>
> I do expect that all of the test results should be archived somehow and
> retrievable. But the idea I have is to have a generic high-level view
> that all of the different tests can share.
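
To make that concrete, the payload a client uploads for one run could look
something like this; the field names and numbers are purely illustrative,
not a settled schema:

    import json

    # Hypothetical result payload: one headline metric per test, plus the
    # full per-client numbers kept for archival/retrieval. Values made up.
    result = {
        'machine': 'example-machine-name',
        'test': 'pgbench',
        'config': {'mode': 'read-only', 'data_set': 'medium'},
        'metric': {'name': 'tps', 'value': 12345.6},  # single comparable number
        'details': [
            {'clients': 1, 'tps': 2100.4},
            {'clients': 8, 'tps': 9800.2},
            {'clients': 32, 'tps': 12345.6},
        ],
    }

    print(json.dumps(result, indent=2))
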
>
>>> - I see BuildFarm assigning an animal name to each registered machine. Will
>>> PerfFarm also have this interesting feature?
>>>
>>
>> It was going to, but I forget what we decided on for a naming scheme!
>> Another discussion item I think - in the code, we can just use "Name" or
>> similar.
>
> Since the buildfarm is composed of animals, I thought plants would be a
> complementary scheme? I'm also not sure if this was discussed
> previously...

For some reason I was thinking fish!
