| From: | Mark Harrison <mh(at)pixar(dot)com> |
|---|---|
| To: | pgsql-performance(at)postgresql(dot)org |
| Subject: | coercing int to bigint for indexing purposes |
| Date: | 2004-02-11 22:38:41 |
| Message-ID: | 402AAEF1.2050703@pixar.com |
| Lists: | pgsql-performance |
Is there a way to automatically coerce an int into a bigint for
indexing purposes?
We have a table with a bigint column that is an index.
For me, there's no problem, since I now know to say
`select * from foo where id = 123::bigint`
but our casual users still say
`select * from foo where id = 123`
causing a sequential scan, because the literal 123 is typed as
int rather than bigint, so the planner won't use the index.
As you can see, there's nearly 4 orders of magnitude difference
in time, and we anticipate this will only get worse as our
tables increase in size:
```
LOG: duration: 0.861 ms statement: select * from big where id = 123123123123123;
LOG: duration: 6376.917 ms statement: select * from big where id = 123;
```
One thing I have considered is starting our id sequence at 5000000000
so that "real" queries will always be bigint-sized, but this seems
to me a bit of a hack.
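For reference, a sketch of the forms involved (table and column names as above; the quoted-literal variant rests on the assumption that an untyped `'123'` literal is resolved to the column's type, bigint, at parse time, so the index can be used without retraining users to write casts):

```sql
-- Casual form: literal 123 is typed int, index not matched -> seq scan
select * from foo where id = 123;

-- Explicit cast, as noted above: literal is bigint, index is used
select * from foo where id = 123::bigint;

-- Quoted literal: type is "unknown" until resolved against the
-- bigint column, so the index should also be used (assumption)
select * from foo where id = '123';
```

Running each under `explain` should show an Index Scan for the cast and quoted forms and a Seq Scan for the bare int literal.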
Many TIA,
Mark
--
Mark Harrison
Pixar Animation Studios
| From | Date | Subject | |
|---|---|---|---|
| Next Message | Tom Lane | 2004-02-11 23:24:41 | Re: coercing int to bigint for indexing purposes |
| Previous Message | scott.marlowe | 2004-02-11 18:41:18 | Re: slow database |