full text search and hyphens in uuid

From: Martin Norbäck Olivers <martin(at)norpan(dot)org>
To: pgsql-sql(at)lists(dot)postgresql(dot)org
Subject: full text search and hyphens in uuid
Date: 2023-10-27 11:48:32
Message-ID: CALoTC6s=QAvj=yw2cY=8t_dyQsByXF_AT8k=z-YXOcgcj3sO=g@mail.gmail.com
Lists: pgsql-sql

Hi!
I have a problem with full text search and UUIDs in text that I index using
to_tsvector. My text contains UUIDs, and most of the time this works well
because they are lexed as words, so I can simply search for the parts of
the UUID.

The problem is a UUID like this:

select to_tsvector('simple', '0232710f-8545-59eb-abcd-47aa57184361');

which gives this result:

'-59':3 '-8545':2 '0232710f':1 '47aa57184361':7 'abcd':6 'eb':5
'eb-abcd-47aa57184361':4
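The token classification behind this result can be inspected with ts_debug, which shows how the default parser labels each piece (the hyphen before a digit is treated as a sign, so '-8545' and '-59' come out as signed integers):

```sql
-- Show how the 'simple' configuration tokenizes the UUID.
SELECT alias, token
FROM ts_debug('simple', '0232710f-8545-59eb-abcd-47aa57184361');
```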

So, I found dict_int and asked it to remove the minus signs:

create extension dict_int;
alter text search dictionary intdict (maxlen = 12, absval = true);
alter text search configuration simple
  alter mapping for int, uint with intdict;

and now I get this result instead:
'0232710f':1 '47aa57184361':7 '59':3 '8545':2 'abcd':6 'eb':5
'eb-abcd-47aa57184361':4

which is slightly better, but still not good enough, because there is no
token 59eb; it is being split into 59 and eb.
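For example, with the configuration above in place, a query for the missing token finds nothing, while the pieces it was split into do match:

```sql
-- false: the tsvector has no '59eb' lexeme
SELECT to_tsvector('simple', '0232710f-8545-59eb-abcd-47aa57184361')
       @@ to_tsquery('simple', '59eb');

-- true: the split-off pieces '59' and 'eb' are both present
SELECT to_tsvector('simple', '0232710f-8545-59eb-abcd-47aa57184361')
       @@ to_tsquery('simple', '59 & eb');
```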

Is there any way to change this behaviour of the tsvector lexer? Do I have
to build the tsvector myself, or is there a way to "turn off" integer
handling in the lexer?

Regards,
Martin

Responses

Next Message Tom Lane 2023-10-28 02:05:14 Re: full text search and hyphens in uuid