From: Tom Lane <tgl@sss.pgh.pa.us>
To: "Kevin Grittner" <Kevin.Grittner@wicourts.gov>
Cc: "Steve McLellan" <smclellan@mintel.com>, pgsql-performance@postgresql.org
Subject: Re: Query performance over a large proportion of data
Date: 2009-03-11 00:09:04
Message-ID: 17384.1236730144@sss.pgh.pa.us
Lists: pgsql-performance

"Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
> You probably need to reduce random_page_cost. If your caching is
> complete enough, you might want to set it equal to seq_page_cost
> (never set it lower than seq_page_cost!) and possibly reduce both of
> these to 0.1.

> Some people have had good luck with boosting cpu_tuple_cost and
> cpu_index_tuple_cost. (I've seen 0.5 for both recommended.) I've
> never had to do that, but if the earlier suggestions don't get good
> plans, you might try that.
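
A minimal session-local sketch of those two alternatives (the 0.1 and
0.5 figures are the ones quoted above, not universal recommendations;
try them with EXPLAIN ANALYZE before changing postgresql.conf):

    -- Option A: tell the planner page fetches are cheap
    -- (plausible when the working set is fully cached).
    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.1;  -- equal to seq_page_cost, never lower

    -- Option B, *instead of* A (see the reply below): make per-tuple
    -- CPU work look expensive relative to I/O.
    -- SET cpu_tuple_cost = 0.5;
    -- SET cpu_index_tuple_cost = 0.5;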

It might be worth pointing out here that all that matters is the
relative values of the various xxx_cost parameters. If your DB is
mostly or entirely cached, you probably want to lower the estimated cost
of I/O relative to CPU work. You can do that *either* by dropping the
seq_/random_page_costs, *or* by raising the cpu_xxx_costs (there are
more than two of those BTW). Doing both, as Kevin's comments might be
read to suggest, is not useful ... and in particular I bet that having
seq_page_cost actually less than cpu_tuple_cost would lead to some
pretty wacko decision-making by the planner.
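
To make that concrete, here is a sketch with illustrative numbers (the
8.3 defaults are shown in the comments): the two blocks below encode
the same I/O-to-CPU cost ratio, so the planner should make the same
choices under either one.

    -- (a) drop the page costs by 10x ...
    SET seq_page_cost = 0.1;          -- default 1.0
    SET random_page_cost = 0.4;       -- default 4.0

    -- (b) ... or, equivalently, raise *all* the cpu_xxx_costs by 10x:
    SET cpu_tuple_cost = 0.1;         -- default 0.01
    SET cpu_index_tuple_cost = 0.05;  -- default 0.005
    SET cpu_operator_cost = 0.025;    -- default 0.0025

Only the ratios matter because the planner compares estimated plan
costs against each other, not against wall-clock time.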

See
http://www.postgresql.org/docs/8.3/static/runtime-config-query.html
for some more info about what you're twiddling here.

regards, tom lane
