Re: slow query

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
Cc: "pgsql-performance\(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: slow query
Date: 2009-01-12 23:51:17
Message-ID: 87prisjd6y.fsf@oxford.xeocode.com
Lists: pgsql-performance


"Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com> writes:

> -> Index Scan using users_orgid_idx on users u (cost=0.00..129.52 rows=5 width=271) (actual time=843.825..860.638 rows=0 loops=35)
> Index Cond: (u.orgid = j2.orgid)
> Filter: ((u.type_id < 10) AND (u.deleted = 0) AND ((lower((u.lname)::text) ~~ 'boat%'::text) OR (lower((u.fname)::text) ~~ 'boat%'::text) OR (lower((u.username)::text) ~~ 'boat%'::text) OR (lower(u.option1) ~~ 'boat%'::text) OR (lower((u.email)::text) ~~ '%boat%'::text) OR (lower(u.external_id) = 'boat'::text)))

Not sure if this is what's going on, but I find the high startup time for this
index scan suspicious. Either there are a lot of dead tuples (which would
explain the second run being fast, since the first run would mark them all
LP_DEAD) or there are a lot of matching index pointers that fail those other
filter conditions. Assuming it's the latter, perhaps some other index
definition would let it zero in on the right tuples more quickly instead of
having to grovel through a lot of irrelevant rows?
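
As a rough sketch of the kind of alternative index definition this points at,
assuming the 'boat%' prefix filters on lname/fname/username are the common
case: expression indexes on lower() with text_pattern_ops would let the
planner match the prefixes directly instead of filtering every row in the
org. The column names come from the plan above; the index names and the
partial-index predicate are illustrative assumptions, not something from the
thread.

    -- Hypothetical indexes; names and the partial predicate are assumptions.
    -- text_pattern_ops makes the btree usable for LIKE 'boat%' prefix
    -- matches even in a non-C locale.
    CREATE INDEX users_lower_lname_pattern_idx
        ON users (lower(lname) text_pattern_ops)
        WHERE deleted = 0 AND type_id < 10;

    CREATE INDEX users_lower_fname_pattern_idx
        ON users (lower(fname) text_pattern_ops)
        WHERE deleted = 0 AND type_id < 10;

    -- The '%boat%' filter on email can't use a btree prefix index at all;
    -- a trigram index from contrib/pg_trgm is one option there:
    -- CREATE INDEX users_lower_email_trgm_idx
    --     ON users USING gist (lower(email) gist_trgm_ops);

Whether the partial predicate pays off depends on how selective
deleted = 0 AND type_id < 10 actually are for this table.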

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's Slony Replication support!
