Re: pg_trgm indexes giving bad estimations?

From: Ben <bench(at)silentmedia(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: pg_trgm indexes giving bad estimations?
Date: 2006-11-01 05:41:38
Message-ID: Pine.LNX.4.64.0610312106260.5452@GRD.cube42.tai.silentmedia.com
Lists: pgsql-performance

Now that I have a little time to work on this again, I've thought about it,
and it seems that an easy and somewhat accurate cop-out is to use whatever
selectivity function the LIKE operator uses, multiplied by a scalar that
pg_trgm should already have access to.

Unfortunately, it's not at all clear to me from reading
http://www.postgresql.org/docs/8.1/interactive/xoper-optimization.html#AEN33077
how LIKE implements selectivity. Any pointers on where to look?

On Wed, 4 Oct 2006, Tom Lane wrote:

> Ben <bench(at)silentmedia(dot)com> writes:
>> How can I get the planner to not expect so many rows to be returned?
>
> Write an estimation function for the pg_trgm operator(s). (Send in a
> patch if you do!) I see that % is using "contsel" which is only a stub,
> and would likely be wrong for % even if it weren't.
>
>> A possibly related question is: because pg_tgrm lets me set the
>> matching threshold of the % operator, how does that affect the planner?
>
> It hasn't a clue about that.
>
> regards, tom lane
>
