Bill Martin <bill(dot)martin(at)communote(dot)com> writes:
> I've created the following table, which contains one million records.
> "Limit (cost=10091.09..19305.68 rows=3927 width=621) (actual time=0.255..0.255 rows=0 loops=1)"
> " -> Bitmap Heap Scan on core_content content (cost=10091.09..57046.32 rows=20011 width=621) (actual time=0.254..0.254 rows=0 loops=1)"
> " Recheck Cond: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)"
> " -> Bitmap Index Scan on ft_simple_core_content_content_idx (cost=0.00..10086.09 rows=20011 width=0) (actual time=0.251..0.251 rows=0 loops=1)"
> " Index Cond: (to_tsvector('simple'::regconfig, content) @@ '''asdasdadas'':*'::tsquery)"
> "Total runtime: 0.277 ms"
> Is there any possibility to tune up the performance even if the limit is only 10?
The problem is the way-off rowcount estimate (20011 rows when it's
really none); with a smaller estimate there, the planner wouldn't decide
to switch to a seqscan.
Did you take the advice to increase the column's statistics target?
Because 20011 looks suspiciously close to the default estimate that
tsquery_opr_selec will fall back on if it hasn't got enough stats
to come up with a trustworthy estimate for a *-pattern query.
(I think there are probably some bugs in tsquery_opr_selec's estimate
for this, as I just posted about on pgsql-hackers. But this number
looks like you're not even getting to the estimation code, for lack
of enough statistics entries.)
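(For the archives: one way to raise the statistics target is along these lines — a sketch only; the table, column, and target value here are taken from or assumed for this thread, and the right target depends on the data.)

```sql
-- Raise the statistics target for the next ANALYZE in this session
-- (the default is typically 100; more entries give the tsquery
-- selectivity code something to work with).
SET default_statistics_target = 1000;

-- Re-gather stats, including stats on the expression index.
ANALYZE core_content;
```

Alternatively, the target can be set persistently per column with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS, so later autovacuum ANALYZE runs keep the larger sample.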
The other thing that seems kind of weird here is that the cost estimate
for the bitmap index scan seems out of line even given the
20000-entries-to-fetch estimate. I'd have expected a cost estimate of a
few hundred for that, not 10000. Perhaps this index is really bloated,
and it's time to REINDEX it?
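(A quick way to check, sketched here with the index name from your plan: compare the index size before and after a rebuild. Note that plain REINDEX takes an exclusive lock on the index while it runs.)

```sql
-- How big is the index now?
SELECT pg_size_pretty(pg_relation_size('ft_simple_core_content_content_idx'));

-- Rebuild it; a large drop in size afterwards confirms bloat.
REINDEX INDEX ft_simple_core_content_content_idx;

SELECT pg_size_pretty(pg_relation_size('ft_simple_core_content_content_idx'));
```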
regards, tom lane