| From: | Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> |
|---|---|
| To: | Peter van Hardenberg <pvh(at)pvh(dot)ca> |
| Cc: | Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: random_page_cost = 2.0 on Heroku Postgres |
| Date: | 2012-02-09 02:28:12 |
| Message-ID: | CAOR=d=3CqkNHvznRggv5HvY8C=Pex1EO9XQ2giNKX6VnL8LTyw@mail.gmail.com |
| Lists: | pgsql-performance |
On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <pvh(at)pvh(dot)ca> wrote:
> Having read the thread, I don't really see how I could study what a
> more principled value would be.
Agreed. Just pointing out more research needs to be done.
> That said, I have access to a very large fleet in which I can collect
> data, so I'm all ears for suggestions about how to measure and would
> gladly share the results with the list.
I wonder if some kind of script would work that grabbed random queries
and ran them with EXPLAIN ANALYZE under various random_page_cost
settings, to see where the plans switch and which plans are faster?
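A minimal sketch of that experiment might look like the following. This is an assumption about how one could wire it up, not something from the thread: the connection string, the sample query, and the set of cost values are all hypothetical, and the plan-output parsing accepts both the older "Total runtime" and newer "Execution Time" labels since the exact server version on the fleet isn't stated.

```python
# Hypothetical sketch: for one captured query, re-run EXPLAIN ANALYZE
# at several random_page_cost settings in a single session and record
# the measured runtime at each setting, to spot where the plan switches.
import re

# Candidate settings to sweep; the thread discusses values around 2.0.
COSTS = [1.0, 1.5, 2.0, 3.0, 4.0]

def benchmark_statements(query, costs=COSTS):
    """Build (cost, SET statement, EXPLAIN statement) triples to run."""
    return [
        (c, f"SET random_page_cost = {c}", f"EXPLAIN ANALYZE {query}")
        for c in costs
    ]

def runtime_ms(explain_lines):
    """Pull the reported runtime (ms) out of EXPLAIN ANALYZE output.

    Matches both 'Total runtime:' (older servers) and
    'Execution Time:' (newer servers)."""
    for line in explain_lines:
        m = re.search(r"(?:Total runtime|Execution Time): ([\d.]+) ms", line)
        if m:
            return float(m.group(1))
    return None

if __name__ == "__main__":
    import psycopg2  # assumed driver; any DB-API driver would do
    conn = psycopg2.connect("dbname=test")  # hypothetical connection
    cur = conn.cursor()
    query = "SELECT * FROM pgbench_accounts WHERE abalance > 100"  # hypothetical
    for cost, set_stmt, explain_stmt in benchmark_statements(query):
        cur.execute(set_stmt)          # session-local planner setting
        cur.execute(explain_stmt)      # actually runs the query
        lines = [row[0] for row in cur.fetchall()]
        # lines[0] is the top plan node, so a change there signals a
        # plan switch between adjacent cost settings.
        print(cost, runtime_ms(lines), lines[0])
```

Running the same loop over a sample of queries captured from the logs, and recording the cost value at which the top plan node changes, would give the kind of fleet-wide data the thread is asking about.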