From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>
Cc: David Rowley <dgrowleyml(at)gmail(dot)com>, PostgreSQL Developers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Recording test runtimes with the buildfarm
Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com> writes:
> Alternatively, people with access to the database could extract the logs
> and post-process them using perl or python. That would involve no work
> on my part :-) But it would not be automated.
Yeah, we could easily extract per-test-script runtimes, since pg_regress
started to print those. But ...
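(As an illustration only, post-processing those logs might look roughly like the sketch below. It assumes per-test result lines of the form "test foo ... ok 123 ms"; the exact pg_regress output format varies across versions, and feeding saved logs on stdin is just one hypothetical way to drive it.)

```python
import re
import sys
from collections import defaultdict

# Matches pg_regress per-test result lines such as
#   "test boolean                  ... ok          183 ms".
# NOTE: this pattern is an assumption; the exact format
# differs across pg_regress versions.
TEST_LINE = re.compile(r'^(?:test\s+)?(\S+)\s+\.\.\.\s+(ok|FAILED)\s+(\d+)\s+ms')

def collect_runtimes(lines):
    """Return {test_name: [runtime_ms, ...]} from pg_regress log lines."""
    runtimes = defaultdict(list)
    for line in lines:
        m = TEST_LINE.match(line.strip())
        if m:
            name, _status, ms = m.groups()
            runtimes[name].append(int(ms))
    return runtimes

if __name__ == '__main__':
    # Concatenate one or more saved buildfarm logs on stdin.
    runtimes = collect_runtimes(sys.stdin)
    for name, times in sorted(runtimes.items()):
        avg = sum(times) / len(times)
        print(f'{name:30s} n={len(times):3d} avg={avg:8.1f} ms')
```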
> What we do record (in build_status_log) is the time each step took. So
> any regression test that suddenly blew out should likewise cause a
> blowout in the time the whole "make check" took.
I have in the past scraped the latter results and tried to make sense of
them. They are *mighty* noisy, even when considering just one animal
that I know to be running on a machine with little else to do. Maybe
averaging across the whole buildfarm could reduce the noise level, but
I'm not very hopeful. Per-test-script times would likely be even
noisier (ISTM anyway, maybe I'm wrong).
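(For what averaging might buy, here's a toy sketch comparing the coefficient of variation of timings from a single animal against per-run averages across many animals. The numbers are made up purely for illustration, not measured from any real animal.)

```python
import statistics

def noise_level(samples):
    """Coefficient of variation (stddev / mean) of runtime samples --
    a rough, unitless measure of how noisy the timings are."""
    mean = statistics.fmean(samples)
    return statistics.stdev(samples) / mean if mean else float('inf')

# Hypothetical "make check" durations (seconds) from one animal:
one_animal = [212, 248, 205, 301, 219, 265]
# Hypothetical per-run averages across many animals:
farm_average = [231, 242, 238, 249, 236, 240]

print(f'single animal CV: {noise_level(one_animal):.3f}')
print(f'farm average CV:  {noise_level(farm_average):.3f}')
```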
The entire reason we've been discussing a separate performance farm
is the expectation that buildfarm timings will be too noisy to be
useful to detect any but the most obvious performance effects.
regards, tom lane