Re: integrated tsearch doesn't work with non utf8 database

From: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>
To: "Pavel Stehule" <pavel(dot)stehule(at)gmail(dot)com>
Cc: "Oleg Bartunov" <oleg(at)sai(dot)msu(dot)su>, "Teodor Sigaev" <teodor(at)sigaev(dot)ru>, "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: integrated tsearch doesn't work with non utf8 database
Date: 2007-09-07 21:23:42
Message-ID: 46E1C15E.5060805@enterprisedb.com
Lists: pgsql-hackers

Pavel Stehule wrote:
> postgres=# select ts_debug('cs','Příliš žluťoučký kůň se napil žluté vody');
> ERROR: character 0xc3a5 of encoding "UTF8" has no equivalent in "LATIN2"
> CONTEXT: SQL function "ts_debug" statement 1

I can reproduce that. In fact, you don't need the custom config or
dictionary at all:

postgres=# CREATE DATABASE latin2 encoding='latin2';
CREATE DATABASE
postgres=# \c latin2
You are now connected to database "latin2".
latin2=# select ts_debug('simple','foo');
ERROR: character 0xc3a5 of encoding "UTF8" has no equivalent in "LATIN2"
CONTEXT: SQL function "ts_debug" statement 1

It fails while trying to lexize the string with the Danish snowball stemmer,
because the Danish stopword file contains the character 'å', which has no
equivalent in LATIN2.
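
For what it's worth, I'd expect lexizing with that dictionary directly to hit
the same error, since it forces the stopword file to be loaded (I haven't
actually tried this):

latin2=# SELECT ts_lexize('danish_stem', 'foo');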

Now what the heck is it doing with the Danish stemmer, you might ask.
ts_debug is implemented as a SQL function; EXPLAINing the complex SELECT
behind it, I get this plan:

latin2=# \i foo.sql
                                                          QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=2.80..1134.45 rows=80 width=100)
   Hash Cond: (parse.tokid = tt.tokid)
   InitPlan
     ->  Seq Scan on pg_ts_config  (cost=0.00..1.20 rows=1 width=4)
           Filter: (oid = 3748::oid)
     ->  Seq Scan on pg_ts_config  (cost=0.00..1.20 rows=1 width=4)
           Filter: (oid = 3748::oid)
   ->  Function Scan on ts_parse parse  (cost=0.00..12.50 rows=1000 width=36)
   ->  Hash  (cost=0.20..0.20 rows=16 width=68)
         ->  Function Scan on ts_token_type tt  (cost=0.00..0.20 rows=16 width=68)
   SubPlan
     ->  Limit  (cost=7.33..7.36 rows=1 width=36)
           ->  Subquery Scan dl  (cost=7.33..7.36 rows=1 width=36)
                 ->  Sort  (cost=7.33..7.34 rows=1 width=8)
                       Sort Key: m.mapseqno
                       ->  Seq Scan on pg_ts_config_map m  (cost=0.00..7.32 rows=1 width=8)
                             Filter: ((ts_lexize(mapdict, $1) IS NOT NULL) AND (mapcfg = 3765::oid) AND (maptokentype = $0))
     ->  Sort  (cost=6.57..6.57 rows=1 width=8)
           Sort Key: m.mapseqno
           ->  Seq Scan on pg_ts_config_map m  (cost=0.00..6.56 rows=1 width=8)
                 Filter: ((mapcfg = 3765::oid) AND (maptokentype = $0))
(21 rows)

Note the Seq Scan on pg_ts_config_map, with a filter on ts_lexize(mapdict,
$1). That means ts_lexize is called for every dictionary in the mapping
table, which in turn loads every dictionary. And loading the danish_stem
dictionary fails in a LATIN2 database, because of the problem with the
stopword file.
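
(Untested, but I think you can see the same effect without ts_debug at all,
just by making ts_lexize walk the mapping table yourself:

latin2=# SELECT m.mapdict, ts_lexize(m.mapdict, 'foo') FROM pg_ts_config_map m;

Every mapped dictionary gets loaded along the way, so it should fall over as
soon as it reaches danish_stem.)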

We could rewrite ts_debug as a C function, so that it doesn't try to access
any unnecessary dictionaries. It seems wrong to install dictionaries in
databases where they won't work in the first place, but I don't see an easy
fix for that. Any comments or better ideas?
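
One possible band-aid, short of a C rewrite, might be to keep ts_debug as SQL
but fence the cheap conditions off from the ts_lexize() call with OFFSET 0,
so that only dictionaries actually mapped for the configuration ever get
loaded. Roughly (untested, with placeholders for the config oid, token type
and token):

SELECT mapdict
FROM (SELECT mapdict, mapseqno
      FROM pg_ts_config_map
      WHERE mapcfg = <config oid> AND maptokentype = <token type>
      OFFSET 0) m
WHERE ts_lexize(mapdict, <token>) IS NOT NULL
ORDER BY mapseqno LIMIT 1;

But that's a hack; a C function would be cleaner.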

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
