On Mon, 2009-10-26 at 21:02 +0100, Jesper Krogh wrote:
> Test system: average desktop, 1 SATA drive and 1.5GB memory with pg 8.4.1.
> The dataset consists of randomized words, but all records contain
> "commonterm", around 80% contain commonterm80, and so on.
> my $rand = rand();
> push @doc,"commonterm" if $commonpos == $j;
> push @doc,"commonterm80" if $commonpos == $j && $rand < 0.8;
You should probably re-generate your random value for each call rather
than store it. Currently, every document with commonterm20 is guaranteed
to also have commonterm40, commonterm60, etc, which probably isn't very
realistic, and also makes doc size correlate with word rarity.
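To make the terms independent, call rand() afresh in each condition instead of reusing one stored value. A minimal sketch of the fix, assuming the surrounding loop and variables ($commonpos, $j, @doc) from your generator script:

```perl
# Draw a fresh random value per term, so commonterm80/60/40/20 occur
# independently rather than as nested subsets of one another.
push @doc, "commonterm"   if $commonpos == $j;
push @doc, "commonterm80" if $commonpos == $j && rand() < 0.8;
push @doc, "commonterm60" if $commonpos == $j && rand() < 0.6;
push @doc, "commonterm40" if $commonpos == $j && rand() < 0.4;
push @doc, "commonterm20" if $commonpos == $j && rand() < 0.2;
```

With this change a document carrying commonterm20 has only a 40% chance of also carrying commonterm40, and document length no longer grows with term rarity.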
> Given that the seq-scan has to visit 50K rows to create the result and
> the bitmap heap scan only has to visit 40K (but search the index), we
> would expect the seq-scan to be at most 25% more expensive than the
> bitmap heap scan, i.e. less than 300ms.
I suspect table bloat. Try VACUUMing your table and re-running the query.
In this sort of test it's often a good idea to TRUNCATE the table before
populating it with a newly generated data set. That helps avoid any
residual effects from table bloat etc from lingering between test runs.
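For example, a sketch of such a test run using DBI (the connection string and the table name ftstest are hypothetical placeholders for your setup):

```perl
use strict;
use warnings;
use DBI;

# Hypothetical connection details; adjust dbname/user for your setup.
my $dbh = DBI->connect("dbi:Pg:dbname=testdb", "", "", { RaiseError => 1 });

# Start each run from a truly empty table: TRUNCATE discards the old
# heap pages outright, so bloat from a previous run can't carry over.
$dbh->do("TRUNCATE TABLE ftstest");

# ... repopulate ftstest with the freshly generated documents here ...

# VACUUM ANALYZE so the planner has fresh statistics for the new data.
# (VACUUM must run outside a transaction; DBI's default AutoCommit
# mode satisfies that.)
$dbh->do("VACUUM ANALYZE ftstest");
$dbh->disconnect;
```

That way each timing run measures the freshly loaded data only, not leftover dead tuples.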