| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | "Mitch Vincent" <mitch(at)huntsvilleal(dot)com> |
| Cc: | pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: Full text indexing preformance! (long) |
| Date: | 2000-05-30 05:49:55 |
| Message-ID: | 718.959665795@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
"Mitch Vincent" <mitch(at)huntsvilleal(dot)com> writes:
> The query is very fast now (0.039792 seconds to be exact).
Cool ...
> In my paging system I only have a need for 10 records at a time so I LIMIT
> the query. The problem comes when I need to get a total of all the records
> that matched the query (as a good search engine, I must tell people how many
> records were found).. I can't count() and LIMIT in the same query, so I'm
> forced to do 2 queries, one with count() and one without.
Well, of course the whole *point* of LIMIT is that it stops short of
scanning the whole query result. So I'm afraid you're kind of stuck
as far as the performance goes: you can't get a count() answer without
scanning the whole query result.
I'm a little curious though: what is the typical count() result from
your queries? The EXPLAIN outputs you show indicate that the planner
is only expecting about one row out now, but I have no idea how close
that is to the mark. If it were really right, then there'd be no
difference in the performance of LIMIT and full queries, so I guess
it's not right; but how far off is it?
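One way to check how far off the estimate is (again a sketch against the hypothetical table above) is to compare the "rows=" figure in the EXPLAIN output with the real match count:

```sql
-- Planner's estimate shows up as "rows=..." in the plan output.
EXPLAIN
SELECT id, name
FROM applicants
WHERE resume LIKE '%programmer%';

-- Actual number of matching rows, for comparison with the estimate.
SELECT count(*)
FROM applicants
WHERE resume LIKE '%programmer%';
```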
regards, tom lane