From: Jan Urbański <j(dot)urbanski(at)students(dot)mimuw(dot)edu(dot)pl>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Heikki Linnakangas <heikki(at)enterprisedb(dot)com>, Postgres - Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: gsoc, text search selectivity and dllist enhancments
Tom Lane wrote:
> The way I think it ought to work is that the number of lexemes stored in
> the final pg_statistic entry is statistics_target times a constant
> (perhaps 100). I don't like having it vary depending on tsvector width
I think the existing code puts at most statistics_target elements in a
pg_statistic tuple. In compute_minimal_stats(), num_mcv starts at
stats->attr->attstattarget and is only ever adjusted downwards.
My original thought was to keep that property for tsvectors (i.e. store
at most statistics_target lexemes) and advise people to set it high for
their tsvector columns (e.g. 100x their default).
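To make that advice concrete: with the current default statistics target of 10, "100x the default" would mean setting the per-column target to 1000 via the stock syntax (the table and column names here are made up for illustration):

```sql
-- hypothetical names; 1000 is 100x the default target of 10
ALTER TABLE documents ALTER COLUMN body_tsv SET STATISTICS 1000;
```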
Also, the existing code decides which elements are worth storing as the
most common ones by discarding those that are not frequent enough (that's
where num_mcv gets adjusted downwards). I mimicked that for lexemes,
but maybe it just doesn't make sense?
> But in any case, given a target number of lexemes to accumulate,
> I'd suggest pruning with that number as the bucket width (pruning
> distance). Or perhaps use some multiple of the target number, but
> the number itself seems about right.
Fine with me. I'm too tired to do the math now, so I'll take your word
for it :)
GPG key ID: E583D7D2