Re: query gone haywire :)

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robin Ericsson <robin(dot)ericsson(at)profecta(dot)se>
Cc: Postgres general mailing list <pgsql-general(at)postgresql(dot)org>
Subject: Re: query gone haywire :)
Date: 2004-10-08 14:11:36
Message-ID: 12037.1097244696@sss.pgh.pa.us
Lists: pgsql-general

Robin Ericsson <robin(dot)ericsson(at)profecta(dot)se> writes:
> The query has been running ok for some time now, but this morning I
> decided to run vacuum analyze (vacuumdb -a -z) on the cluster, and
> suddenly the query isn't running very well at all.

> -> Index Scan using idx_d_entered on data (cost=0.00..18024.04 rows=50360 width=16) (actual time=0.210..0.247 rows=1 loops=1)
> Index Cond: (((('now'::text)::timestamp(6) with time zone)::timestamp without time zone - '00:01:00'::interval) < entered)

You're running into the well-known problem that the planner can't make
good estimates for index conditions that involve non-constant terms
(such as CURRENT_TIMESTAMP). Lacking a decent estimate, it guesses that
this scan will produce many more rows than it really will, and so it
tends to favor plans that would be good in that scenario, but are not
optimal for retrieving just a couple of rows.

One workaround is to do the date arithmetic on the client side; another
is to cheat by hiding the arithmetic in a function like "ago(interval)"
that you lyingly claim is IMMUTABLE. See the pgsql-performance
archives.
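
A minimal sketch of that second workaround, using the table and column
names from the plan above (the helper name "ago" is illustrative, not a
built-in):

	-- Lyingly declared IMMUTABLE so the planner folds the call to a
	-- constant at plan time, giving it a real value to estimate with.
	-- The lie's cost: a plan prepared earlier may keep a stale timestamp.
	CREATE FUNCTION ago(interval) RETURNS timestamp AS
	  'SELECT now()::timestamp - $1'
	  LANGUAGE sql IMMUTABLE;

	-- The index condition now compares against a plan-time constant:
	SELECT * FROM data WHERE entered > ago('1 minute'::interval);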

regards, tom lane
