Re: ORDER BY ... LIMIT.. performance

From: "Josh Berkus" <josh(at)agliodbs(dot)com>
To: "john cartmell" <john(dot)cartmell(at)mediaburst(dot)co(dot)uk>, "Josh Berkus" <josh(at)agliodbs(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: ORDER BY ... LIMIT.. performance
Date: 2002-12-06 18:13:40
Message-ID: web-2024310@davinci.ethosmedia.com
Lists: pgsql-performance

John,

> But when it is 20:
> EXPLAIN ANALYZE SELECT * FROM tblcompany WHERE
> lower(companyname) like 'a g m%' ORDER BY companyname LIMIT 20,0;
> NOTICE: QUERY PLAN:
>
> Limit (cost=0.00..4864.92 rows=20 width=223) (actual
> time=559.58..21895.02 rows=20 loops=1)
> -> Index Scan using idx_tblcompany_companyname on tblcompany
> (cost=0.00..1542006.83 rows=6339 width=223) (actual
> time=559.57..21894.97 rows=20 loops=1)
> Total runtime: 21895.13 msec

That's extremely odd. From the look of it, Postgres is taking an
extra 18 seconds just to find that 20th row.

Does this table experience very frequent deletions and updates, or
perhaps mass record replacement from a file? Try running VACUUM FULL
ANALYZE, and possibly even REINDEX on idx_tblcompany_companyname.
Massive numbers of dead tuples could account for this performance
irregularity.
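For reference, the suggested maintenance would look like this (assuming
tblcompany and idx_tblcompany_companyname as named above):

```sql
-- Reclaim space held by dead tuples and refresh planner statistics
VACUUM FULL ANALYZE tblcompany;

-- Rebuild the index in case it has become bloated
REINDEX INDEX idx_tblcompany_companyname;
```

Note that VACUUM FULL takes an exclusive lock on the table while it
runs, so it is best done during a quiet period.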

-Josh
