Re: random_page_cost = 2.0 on Heroku Postgres

From: Peter van Hardenberg <pvh(at)pvh(dot)ca>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: random_page_cost = 2.0 on Heroku Postgres
Date: 2012-02-09 02:54:10
Message-ID: CAAcg=kX1oZMCWY4ooQgBvt95kiKumsBXCtHqxaqC8P6TpZUmhQ@mail.gmail.com
Lists: pgsql-performance

On Wed, Feb 8, 2012 at 6:47 PM, Peter van Hardenberg <pvh(at)pvh(dot)ca> wrote:
> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <pvh(at)pvh(dot)ca> wrote:
>>> That said, I have access to a very large fleet in which I can
>>> collect data, so I'm all ears for suggestions about how to measure
>>> and would gladly share the results with the list.
>>
>> I wonder if some kind of script that grabbed random queries, ran
>> them with EXPLAIN ANALYZE at various random_page_cost settings, and
>> noted when the plans switched and which were faster would work?
>
> We aren't exactly in a position where we can adjust random_page_cost
> on our users' databases arbitrarily to see what breaks. That would
> be... irresponsible of us.
>

Oh, of course we could do this at the session level, but executing
potentially expensive queries would still be unneighborly.

Perhaps another way to think of this problem would be that we want to
find queries where the cost estimate is inaccurate.
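For what it's worth, a minimal sketch of what a single session-level
probe might look like (the table and query here are made up, purely
illustrative; SET LOCAL keeps the change scoped to one transaction, so
nothing persists on the user's database):

    BEGIN;
    -- try the planner with a lower random_page_cost, for this transaction only
    SET LOCAL random_page_cost = 2.0;
    EXPLAIN (ANALYZE, BUFFERS)
      SELECT * FROM users WHERE created_at > now() - interval '1 day';
    ROLLBACK;

    -- repeat with, e.g., SET LOCAL random_page_cost = 4.0 and compare
    -- the chosen plans, estimated costs, and actual runtimes

Of course EXPLAIN ANALYZE still executes the query, so this doesn't
escape the "unneighborly" problem for expensive statements.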

--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut
