Re: query using incorrect index

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Russell Keane" <Russell(dot)Keane(at)inps(dot)co(dot)uk>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: query using incorrect index
Date: 2012-08-03 14:33:31
Message-ID: 501B9AEB02000025000493CD@gw.wicourts.gov
Lists: pgsql-performance

Russell Keane <Russell(dot)Keane(at)inps(dot)co(dot)uk> wrote:

> "log_min_duration_statement";"1ms"

> "shared_buffers";"32MB"
> "work_mem";"1MB"

Those are pretty low values even for a 4GB machine. I suggest the
following changes and additions, based on the fact that you seem to
have the active portion of the database fully cached.

shared_buffers = '160MB'
work_mem = '8MB'
seq_page_cost = 0.1
random_page_cost = 0.1
cpu_tuple_cost = 0.03
effective_cache_size = '2GB'
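
If you want to try those planner settings before editing postgresql.conf,
most of them can be set per-session and tested against the problem query
(a rough sketch; shared_buffers is the exception and needs a config change
plus restart):

  -- Session-level trial of the suggested planner settings.
  SET seq_page_cost = 0.1;
  SET random_page_cost = 0.1;
  SET cpu_tuple_cost = 0.03;
  SET work_mem = '8MB';
  SET effective_cache_size = '2GB';
  -- shared_buffers can only be changed in postgresql.conf (restart required).
  EXPLAIN ANALYZE SELECT ...;  -- re-run your original query here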

> Explain analyse with both indexes present but without the limit
> (uses the correct index):

> "Total runtime: 0.092 ms"

Part of the problem is that it thinks it will find a matching row fairly
quickly, and having done so using the index it chose will mean it is
the *right* row. The problem is that there are no matching rows, so
it has to scan the entire index. More fine-grained statistics
*might* help. If other techniques don't help, you can rewrite the
query slightly to create an optimization fence, but that should be a
last resort. I agree with Robert that if you have a lot of queries
that select on "incoming" and/or "inactive", a conditional index
(with a WHERE clause in its definition) is likely to be very
helpful.
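
For illustration, something along these lines (untested sketch; the table
name, indexed column, and exact predicate are invented -- match the WHERE
clause to the condition your queries actually use, and the statistics
target to the column whose estimates are off):

  -- Partial (conditional) index covering only the rows those queries want.
  CREATE INDEX idx_your_table_incoming
      ON your_table (some_column)
      WHERE incoming AND NOT inactive;

  -- Finer-grained statistics on the skewed column, then re-analyze.
  ALTER TABLE your_table ALTER COLUMN some_column SET STATISTICS 1000;
  ANALYZE your_table;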

-Kevin
