Re: random_page_cost = 2.0 on Heroku Postgres

From: Joshua Berkus <josh(at)agliodbs(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Peter van Hardenberg <pvh(at)pvh(dot)ca>, pgsql-performance(at)postgresql(dot)org, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Subject: Re: random_page_cost = 2.0 on Heroku Postgres
Date: 2012-02-12 20:01:59
Message-ID: 873667728.6754.1329076919645.JavaMail.root@mail-1.01.com
Lists: pgsql-performance


> Is there an easy and unintrusive way to get such a metric as the
> aggregated query times? And to normalize it for how much work
> happened to be going on at the time?

You'd pretty much need to do large-scale log harvesting combined with samples of query concurrency taken several times per minute. Even that won't "normalize" things the way you want, though, since all queries are not equal in terms of the amount of data they hit.
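For the concurrency samples, something along these lines would do (an untested sketch, assuming psycopg2 and a connection string in a PGDSN environment variable; on pre-9.2 servers you'd filter on current_query <> '<IDLE>' instead of state = 'active'). Pair it with log_min_duration_statement = 0 so the server logs carry per-query durations for the harvesting side:

# Untested sketch: sample active-backend concurrency every few seconds
# and emit one CSV line per sample (epoch seconds, active backends).
import os
import time

import psycopg2

def sample_concurrency(interval_secs=10):
    conn = psycopg2.connect(os.environ["PGDSN"])
    conn.autocommit = True
    cur = conn.cursor()
    while True:
        cur.execute(
            "SELECT count(*) FROM pg_stat_activity WHERE state = 'active'"
        )
        active = cur.fetchone()[0]
        print("%d,%d" % (time.time(), active))
        time.sleep(interval_secs)

if __name__ == "__main__":
    sample_concurrency()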

Given that, I'd personally take a statistical approach. Sample query execution times across a large population of servers and over a moderate amount of time. Then apply common tests of statistical significance. This is why Heroku has the opportunity to do this in a way that smaller sites could not; they have enough servers to (probably) cancel out any random activity effects.
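To illustrate the kind of significance test I mean (a sketch only; the file names are made up, and the choice of the Mann-Whitney U test is mine, picked because query-time distributions are too skewed for a plain t-test):

# Illustrative sketch: compare per-query durations (ms) harvested from
# two groups of servers, one at random_page_cost = 4.0 and one at 2.0.
from scipy.stats import mannwhitneyu

def load_durations(path):
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

baseline = load_durations("durations_rpc_4.0.txt")
candidate = load_durations("durations_rpc_2.0.txt")

# Non-parametric test; a low p-value suggests the two groups' query
# times really do differ rather than just reflecting random activity.
stat, p = mannwhitneyu(baseline, candidate, alternative="two-sided")
print("U = %.1f, p = %.4f" % (stat, p))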

--Josh Berkus
