On 07/14/10 15:49, Stephen Frost wrote:
> * Ivan Voras (ivoras(at)freebsd(dot)org) wrote:
>> Total runtime: 0.507 ms
>> Total runtime: 118.689 ms
>> See in the first query where I have a simple LIMIT, it fetches 10 random
>> rows quickly, but in the second one, as soon as I make it calculate the
>> entire result set before limiting it, the performance drops sharply.
> What you've shown is that it takes 0.5ms for 10 rows, and 118ms for 8500 rows.
> Now, maybe I've missed something, but that's 0.05ms per row for
> the first query and 0.01ms per row for the second, and you've added a
> sort into the mix. The performance of going through the data actually
> improves on a per-record basis, since you're doing more in bulk.
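Those per-row figures fall straight out of the two quoted runtimes; for reference, as a trivial query:

```sql
-- Reproducing the per-row figures from the two quoted runtimes:
SELECT 0.507 / 10     AS ms_per_row_limit10,  -- ~0.05 ms/row
       118.689 / 8500 AS ms_per_row_all;      -- ~0.014 ms/row
```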
> Since you're ordering by 'id', PG has to look at every row returned by
> the index scan. That's not free.
This part of the processing happens on the backend, and the backend only
needs to sort through 8,500 integers. I don't think the sort is the
problem here.
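One quick way to see that a sort of this size is cheap is to time one in isolation, independent of tsearch2 (a sketch; the exact runtime depends on hardware, but it should be nowhere near 118 ms):

```sql
-- Time just a sort of ~8,500 integers, with no index or table involved:
EXPLAIN ANALYZE
SELECT i FROM generate_series(1, 8500) AS g(i)
ORDER BY i DESC;
```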
> Regarding the statistics, it's entirely possible that the index is *not*
> the fastest way to pull this data (it's nearly 10% of the table..), if
> the stats were better it might use a seq scan instead, not sure how bad
> the cost of the filter itself would be.
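Stephen's seq-scan hypothesis is easy to test directly by steering the planner away from the index for one session (these are standard planner GUCs, not anything specific to this thread):

```sql
-- Disable index-based plans, re-run the query under EXPLAIN ANALYZE,
-- then compare against the indexed plan's runtime:
SET enable_indexscan = off;
SET enable_bitmapscan = off;
-- ... EXPLAIN ANALYZE the query here ...
RESET enable_indexscan;
RESET enable_bitmapscan;
```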
I think what I'm asking here is: is it reasonable for tsearch2 to
extract 8,500 rows from an index of 90,000 rows in 118 ms, given that
approximately the same task can be done with an unindexed "LIKE"
operator in nearly the same time?
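For concreteness, the two statements being compared would look roughly like this (table and column names are my assumptions, since the original queries aren't quoted in this mail):

```sql
-- tsearch2 full-text match, via the GiST/GIN index:
EXPLAIN ANALYZE
SELECT id FROM documents
WHERE fts @@ to_tsquery('common_word')
ORDER BY id;

-- unindexed LIKE, forcing a sequential scan over the same table:
EXPLAIN ANALYZE
SELECT id FROM documents
WHERE body LIKE '%common_word%'
ORDER BY id;
```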