> From: pgsql-performance-owner(at)postgresql(dot)org
> [mailto:pgsql-performance-owner(at)postgresql(dot)org] On Behalf Of
> Dan Harris
> After some digging, I've found that the planner is choosing
> to apply a necessary seq scan to the table. Unfortunately,
> it's scanning the whole table, when it seems that it could
> have joined it to a smaller table first, reducing the
> number of rows it would have to scan dramatically (from 70
> million to about 5,000).
Joining will reduce the number of rows the filter has to scan, but
performing the join itself is not free. If PostgreSQL joins the two tables
without applying any filter first, it has to seqscan one of them; if it
picks the 5,000-row table, it then has to perform 5,000 index scans against
the table with 70 million records. It's not obvious which way would be
faster.
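One way to see which strategy the planner actually prefers, and how the two compare, is to look at the plans directly. This is a rough sketch, not your actual query; the table and column names (`big_table`, `small_table`, `id`) are placeholders:

```sql
-- Show the plan and real timings for the current strategy:
EXPLAIN ANALYZE
SELECT *
FROM big_table b
JOIN small_table s ON b.id = s.id
WHERE b.some_text LIKE '%pattern%';

-- Temporarily forbid seqscans in this session to force the
-- index-scan-driven plan, then compare the timings:
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT *
FROM big_table b
JOIN small_table s ON b.id = s.id
WHERE b.some_text LIKE '%pattern%';

-- Restore the default afterwards:
RESET enable_seqscan;
```

`enable_seqscan` only discourages seqscans for the session, so it's a diagnostic tool rather than something to set in production.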
I wonder if you could find a way to use an index for the text filter
instead. Maybe tsearch2? I haven't used anything like that myself; maybe
someone else has more input.
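For what it's worth, the usual tsearch2 approach is a precomputed tsvector column with a GiST index on it, something along these lines. This is only a sketch with made-up names (`big_table`, `some_text`, `ts_col`), assuming the tsearch2 contrib module is installed:

```sql
-- Add a tsvector column and populate it from the text column:
ALTER TABLE big_table ADD COLUMN ts_col tsvector;
UPDATE big_table SET ts_col = to_tsvector(some_text);

-- Index it so text searches can use the index instead of a seqscan:
CREATE INDEX big_table_ts_idx ON big_table USING gist (ts_col);

-- Query with the @@ match operator:
SELECT *
FROM big_table
WHERE ts_col @@ to_tsquery('searchterm');
```

You'd also want a trigger to keep `ts_col` current on INSERT/UPDATE, which tsearch2 provides; the docs cover the details. Note this buys you word-based matching, not arbitrary substring matching, so whether it fits depends on what the text filter actually does.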