Re: integrated tsearch doesn't work with non utf8 database

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Teodor Sigaev <teodor(at)sigaev(dot)ru>
Cc: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: integrated tsearch doesn't work with non utf8 database
Date: 2007-09-10 14:37:26
Message-ID: 4202.1189435046@sss.pgh.pa.us
Lists: pgsql-hackers

Teodor Sigaev <teodor(at)sigaev(dot)ru> writes:
>> Note the Seq Scan on pg_ts_config_map, with filter on ts_lexize(mapdict,
>> $1). That means that it will call ts_lexize on every dictionary, which
>> will try to load every dictionary. And loading danish_stem dictionary
>> fails in latin2 encoding, because of the problem with the stopword file.

> Attached patch should fix it, I hope.

Uh, how will that help? AFAICS it still has to call ts_lexize with
every dictionary.

regards, tom lane
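
For illustration only, here is a minimal sketch (an assumed query shape, not the actual statement from the thread) of the kind of catalog lookup that puts ts_lexize() into the scan filter on pg_ts_config_map, so that it is evaluated for each candidate row and every dictionary gets loaded:

    -- Hypothetical example: because ts_lexize(mapdict, ...) appears in the
    -- WHERE clause, it is evaluated for every row the scan visits, which loads
    -- every dictionary, including danish_stem, whose stopword file cannot be
    -- converted in a latin2 database.
    SELECT m.mapdict::regdictionary,
           ts_lexize(m.mapdict::regdictionary, 'stars')
    FROM pg_ts_config_map AS m
    WHERE m.mapcfg = 'english'::regconfig
      AND m.maptokentype = 1            -- token type of interest
      AND ts_lexize(m.mapdict::regdictionary, 'stars') IS NOT NULL
    ORDER BY m.mapseqno;

Restricting the scan by mapcfg and maptokentype alone does not change this: the ts_lexize() condition still has to be applied to every row that survives those restrictions.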
