We have a very large table (about 1 million entries), and an "add"
operation that checks each new entry for equality or similarity against all
of the existing entries. The generated SQL queries look like this:
SELECT pid FROM rec
 WHERE (((f_lname_PC = '2C38D2E44501ED31778E0EFDFD5200CD'
          OR f_lname_PH = 'CB85F68FFDDECD7CC39AF5BC2FBC0BBC')
         OR (f_lname_PC IS NULL OR f_lname_PH IS NULL))
        AND (f_fname_PC = '3A160A9BFF2EA5A0918F5F6667A411A7'
             OR f_fname_PH = '5152F1177F0BD28FB51501597669962E')
        AND f_bd = '9E6E0D70A9B76BB6990477FCF100557E'
        AND f_bm = '4BE74390684A423853B68B9F05A4BAA0'
        AND f_by =
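(A sketch, not from the original post: the first diagnostic step would be to
check whether the planner uses the indexes at all. The table and column names
are taken from the query above; the shortened WHERE clause is hypothetical.)

```sql
-- Ask the planner for its plan; a "Seq Scan on rec" here would mean
-- the per-field indexes are not being used for this query shape.
EXPLAIN
SELECT pid FROM rec
 WHERE f_fname_PC = '3A160A9BFF2EA5A0918F5F6667A411A7'
   AND f_bd = '9E6E0D70A9B76BB6990477FCF100557E';
```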
We have created an index on each of the fields (f_*), but the matching
process does not seem to get any faster.
Is there anything we could improve, e.g. special index types or the like?
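(Again a sketch, not part of the original question: OR'ed conditions like the
ones above often defeat single-column indexes, and one common workaround is to
split the OR branches into a UNION so each branch can use its own index. The
table, columns, and hash literals are copied from the query above; the exact
rewrite is an untested assumption about this schema.)

```sql
-- Each arm has only AND'ed equality tests, so each can hit
-- a single-column index; UNION removes duplicate pids.
SELECT pid FROM rec
 WHERE f_lname_PC = '2C38D2E44501ED31778E0EFDFD5200CD'
   AND f_bd = '9E6E0D70A9B76BB6990477FCF100557E'
UNION
SELECT pid FROM rec
 WHERE f_lname_PH = 'CB85F68FFDDECD7CC39AF5BC2FBC0BBC'
   AND f_bd = '9E6E0D70A9B76BB6990477FCF100557E';
```

A multi-column index on the columns that are always AND'ed (for example
`CREATE INDEX rec_bd_idx ON rec (f_bd, f_bm, f_by);`) may also help, since
those three conditions together are far more selective than any one alone.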
pgsql-general by date