From: Bill Gribble <grib(at)linuxdevel(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Join query on 1M row table slow
Date: 2004-02-11 13:55:46
Message-ID: 1076507746.1371.7.camel@serrano
Lists: pgsql-general
On Tue, 10 Feb 2004, CSN wrote:
>
> I have a pretty simple select query that joins a table
> (p) with 125K rows with another table (pc) with almost
> one million rows:
>
> select p.*
> from product_categories pc
> inner join products p
> on pc.product_id = p.id
> where pc.category_id = $category_id
> order by p.title
> limit 25
> offset $offset
This idiom looks to me a lot like "results paging". You have a query
that returns a lot of rows, and you are formatting them one page at a
time in your CGI or whatever.
In PostgreSQL, cursors do this very well:
BEGIN;
DECLARE resultset CURSOR FOR
    select p.* from product_categories pc
    inner join products p on pc.product_id = p.id
    where pc.category_id = $category_id
    order by p.title;
MOVE $offset IN resultset;
FETCH 25 FROM resultset;
[ repeat as necessary ]
This does hold some resources on the server side (the open transaction
and the cursor's state), but it is very much faster than LIMIT/OFFSET:
with OFFSET the backend has to compute and throw away all of the skipped
rows on every request, while a cursor does the sort once and each FETCH
picks up where the last one left off.
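Concretely, each later page is just another FETCH against the same
cursor, so nothing gets re-sorted or discarded. Something like this
(the CLOSE and COMMIT are cleanup the example above leaves implicit):

    FETCH 25 FROM resultset;   -- next page
    FETCH 25 FROM resultset;   -- and the one after that
    CLOSE resultset;           -- optional; COMMIT releases it anyway
    COMMIT;                    -- ends the transaction, drops the cursor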
The biggest "gotcha" about cursors is that their lifetime is limited to
the enclosing transaction, so they may not be appropriate for CGI-type
applications.
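If you really do need a cursor to outlive its transaction (say, across
CGI requests that share a connection), PostgreSQL 7.4 and later let you
declare it WITH HOLD, at the cost of materializing the whole result set
when you commit. Roughly:

    BEGIN;
    DECLARE resultset CURSOR WITH HOLD FOR
        select ... order by p.title;
    COMMIT;                    -- a WITH HOLD cursor survives the commit
    FETCH 25 FROM resultset;   -- still works in later transactions
    CLOSE resultset;           -- but now you must close it yourself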
Bill Gribble