Re: : Cost calculation for EXPLAIN output

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "PGSQL Performance" <pgsql-performance(at)postgresql(dot)org>, "Venkat Balaji" <venkat(dot)balaji(at)verse(dot)in>
Subject: Re: : Cost calculation for EXPLAIN output
Date: 2012-02-23 16:51:25
Message-ID: 4F461A2D0200002500045ABD@gw.wicourts.gov
Lists: pgsql-performance

Venkat Balaji <venkat(dot)balaji(at)verse(dot)in> wrote:

> The cost is "13.88" to fetch 1 row by scanning a Primary Key
> indexed column.
>
> Isn't the cost of fetching 1 row too high?

I don't know. How many index pages will need to be randomly accessed
in addition to the random heap access? How many dead versions of
the row will need to be visited besides the row which is actually
visible? How many of these pages are in shared_buffers? How many
are in the OS cache?
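
One way to answer those questions on a real system is EXPLAIN
(ANALYZE, BUFFERS); the table and column names below are hypothetical:

```sql
-- Hypothetical table/column names; the BUFFERS option reports how many
-- pages came from shared_buffers ("hit") versus the OS cache or disk
-- ("read") for each plan node.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE order_id = 42;
-- Look for "Buffers: shared hit=N read=M" under the Index Scan node,
-- and compare the "actual time" figures with the estimated cost.
```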

> I am looking for a way to reduce cost as much as possible because
> the query executes 100000+ times a day.

Well, you can reduce the cost all you want by dividing all of the
costing factors in postgresql.conf by the same value, but that won't
affect query run time. That depends on the query plan which is
chosen. The cost is just an abstract number used for comparing the
apparent resources needed to run a query through each of the
available plans. What matters is that the cost factors accurately
reflect the resources used; if not, you should adjust them.
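
To illustrate why uniform scaling is pointless: the planner picks the
plan with the lowest estimated cost, so dividing every cost factor by
the same constant rescales all the numbers without changing which plan
wins. This toy model is a sketch in the spirit of the postgresql.conf
costing GUCs, not PostgreSQL's actual costing code:

```python
# Toy cost model: charge for random page fetches, sequential page
# fetches, and per-tuple CPU work, as the real planner does (roughly).
def plan_cost(pages_random, pages_seq, tuples, factors):
    return (pages_random * factors["random_page_cost"]
            + pages_seq * factors["seq_page_cost"]
            + tuples * factors["cpu_tuple_cost"])

# Default-ish factors, and the same factors divided by 10.
factors = {"random_page_cost": 4.0, "seq_page_cost": 1.0,
           "cpu_tuple_cost": 0.01}
scaled = {k: v / 10.0 for k, v in factors.items()}

# Two hypothetical plans for the same query.
index_scan = plan_cost(3, 0, 1, factors)       # 3 random pages, 1 tuple
seq_scan = plan_cost(0, 100, 5000, factors)    # 100 seq pages, 5000 tuples

index_scan_s = plan_cost(3, 0, 1, scaled)
seq_scan_s = plan_cost(0, 100, 5000, scaled)

# Every cost drops tenfold, but the cheaper plan is still the cheaper
# plan, so the planner's choice (and the run time) is unchanged.
print(index_scan, seq_scan)      # 12.01 150.0
print(index_scan_s, seq_scan_s)  # 1.201 15.0
```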

If you calculate a ratio between run time and estimated cost, you
should find that it remains relatively constant (within an order of
magnitude or so) across queries. Since you didn't show actual run
times, we can't tell whether anything needs adjustment.
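
That check is simple arithmetic; the numbers below are made up purely
for illustration, with one query whose ratio is far out of line:

```python
# Sanity-check the cost model: actual run time (ms) per estimated cost
# unit should stay within roughly an order of magnitude across queries.
queries = [
    # (label, estimated cost, actual run time in ms) -- invented data
    ("q1", 13.88, 0.45),
    ("q2", 1500.0, 42.0),
    ("q3", 25.0, 90.0),  # ratio ~100x the others: worth investigating
]

ratios = {label: ms / cost for label, cost, ms in queries}
for label, ratio in ratios.items():
    print(f"{label}: {ratio:.4f} ms per cost unit")
```

A query whose ratio stands out by a couple of orders of magnitude is a
hint that the cost factors don't reflect its real resource usage.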

-Kevin
