Re: Performance issues when the number of records are around 10 Million

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "venu madhav" <venutaurus539(at)gmail(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance issues when the number of records are around 10 Million
Date: 2010-05-12 13:56:08
Message-ID: 4BEA6D2902000025000315F9@gw.wicourts.gov
Lists: pgsql-performance

venu madhav <venutaurus539(at)gmail(dot)com> wrote:

>> > If the records are more in the interval,
>>
>> How do you know that before you run your query?
>>
> I calculate the count first.

This and other comments suggest that the data is totally static
while this application is running. Is that correct?

> If I generate all the pages at once, to retrieve all 10 M
> records at once, it would take a much longer time

Are you sure of that? It seems to me that your current approach
reads all ten million rows once for the count and again for the
offsets. It might actually be faster to scan them just once and
build all the pages as you go.
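A minimal sketch of that comparison, using sqlite3 so it runs anywhere (table and column names are illustrative, not from the original thread): the OFFSET approach issues one full scan for the count and then re-scans and discards rows for every page, while the single-pass approach reads the ordered result once and chunks it.

```python
# Single-pass page building vs. COUNT(*) + OFFSET pagination.
# Hypothetical "events" table; sqlite3 stands in for PostgreSQL here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO events (id) VALUES (?)",
                 [(i,) for i in range(1, 11)])

PAGE = 4

# OFFSET approach: one full scan for the count, then each page
# re-scans and throws away every row before its offset.
total = conn.execute("SELECT count(*) FROM events").fetchone()[0]
offset_pages = [
    conn.execute("SELECT id FROM events ORDER BY id LIMIT ? OFFSET ?",
                 (PAGE, off)).fetchall()
    for off in range(0, total, PAGE)
]

# Single-pass approach: read the ordered result once and chunk it.
cur = conn.execute("SELECT id FROM events ORDER BY id")
single_pass_pages = []
while True:
    rows = cur.fetchmany(PAGE)
    if not rows:
        break
    single_pass_pages.append(rows)

# Both produce the same pages; the single pass touches each row once.
assert offset_pages == single_pass_pages
```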

Also, you didn't address the issue of storing enough key
information on each page to read off either edge of it, in the
desired sequence, with just a LIMIT and no OFFSET. "Last page" or
"page up" would need to reverse the direction of the ORDER BY.
This would be very fast if you have appropriate indexes; your
current technique can never be made very fast.
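A keyset ("seek") pagination sketch of that idea, again with illustrative names and sqlite3 standing in for PostgreSQL: each page remembers the sort key of its edge rows, "next page" seeks past the remembered key with a WHERE clause instead of an OFFSET, and "last page" reverses the ORDER BY and flips the rows for display. With an index on (ts, id) each of these is a short index scan regardless of table size.

```python
# Keyset pagination on a hypothetical events(id, ts) table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, ts INTEGER)")
conn.executemany("INSERT INTO events (id, ts) VALUES (?, ?)",
                 [(i, i // 2) for i in range(1, 11)])

PAGE = 3

def first_page(conn):
    # Newest first; an index on (ts, id) makes this a fast index scan.
    return conn.execute(
        "SELECT id, ts FROM events ORDER BY ts DESC, id DESC LIMIT ?",
        (PAGE,)).fetchall()

def next_page(conn, last_ts, last_id):
    # Seek past the last row shown instead of skipping with OFFSET.
    return conn.execute(
        "SELECT id, ts FROM events "
        "WHERE ts < ? OR (ts = ? AND id < ?) "
        "ORDER BY ts DESC, id DESC LIMIT ?",
        (last_ts, last_ts, last_id, PAGE)).fetchall()

def last_page(conn):
    # "Last page": reverse the ORDER BY, then flip rows for display.
    rows = conn.execute(
        "SELECT id, ts FROM events ORDER BY ts ASC, id ASC LIMIT ?",
        (PAGE,)).fetchall()
    return rows[::-1]

p1 = first_page(conn)                       # ids 10, 9, 8
last_id, last_ts = p1[-1][0], p1[-1][1]
p2 = next_page(conn, last_ts, last_id)      # ids 7, 6, 5
```

No page needs a count or an offset, so cost stays proportional to the page size, not to how deep into the ten million rows the user has scrolled.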

-Kevin
