Re: Query slows after offset of 100K

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Smith <gsmith(at)gregsmith(dot)com>
Cc: Michael Lorenz <mlorenz1(at)hotmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Query slows after offset of 100K
Date: 2008-02-15 00:02:32
Message-ID:
Lists: pgsql-performance
Greg Smith <gsmith(at)gregsmith(dot)com> writes:
> On Thu, 14 Feb 2008, Michael Lorenz wrote:
>> When offsetting up to about 90K records, the EXPLAIN ANALYZE is similar to the following:
>> Limit  (cost=15357.06..15387.77 rows=20 width=35) (actual time=19.235..19.276 rows=20 loops=1)
>> ->  Index Scan using account_objectname on "object" o  (cost=0.00..1151102.10 rows=749559 width=35) (actual time=0.086..14.981 rows=10020 loops=1)

> It looks like the planner thinks that index scan will have to go through 
> 749559 rows, but there are actually only 10020 there.

No, you have to be careful about that.  The estimated rowcount is for
the case where the plan node is run to completion, but when there's a
LIMIT over it, it may not get run to completion.  In this case the limit
was satisfied after pulling 10020 rows from the indexscan, but we can't
tell what fraction of the underlying scan was actually completed.
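[Archive note: an illustrative sketch, not part of the original message. The query shape below is an assumption reconstructed from the quoted plan (the table "object", the account_objectname index, and the accountid filter value are guesses); it shows how an estimated rowcount of 749559 can coexist with actual rows=10020 when a Limit node sits on top:]

```sql
-- The index scan's estimate (rows=749559) is for running the node to
-- completion.  The Limit node above it stops demanding rows once
-- OFFSET + LIMIT rows have been produced: 10000 + 20 = 10020, which is
-- exactly the "actual ... rows=10020" reported for the index scan.
EXPLAIN ANALYZE
SELECT objectname
FROM "object" o
WHERE accountid = 12345        -- assumed filter matching the index
ORDER BY objectname
OFFSET 10000 LIMIT 20;
```

So the actual rowcount reflects only how many rows the Limit demanded, not what fraction of the estimated 749559-row scan would have been visited without it.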

			regards, tom lane

