From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | "Oleg Lebedev" <oleg(dot)lebedev(at)waterford(dot)org> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: Query question |
Date: | 2002-10-04 20:52:34 |
Message-ID: | 26674.1033764754@sss.pgh.pa.us |
Lists: | pgsql-general |
"Oleg Lebedev" <oleg(dot)lebedev(at)waterford(dot)org> writes:
> We have several tables each of which has an objectid field. These fields
> contain id values, 99% of which are in a 0 to 1,000,000 range and 1% of
> ids is randomly dispersed between 1,000,000 and 10,000,000,000 (these
> are the identity values we imported from SQL Server when moving to
> Postgres).
> We started a new project and want to use id values > 10,000,000,000 for
> it, so that they do not overlap with existing values.
> 1. Will this hit performance of queries involving this table?
Doesn't seem to be a big problem.
> 2. Can I help the planner by providing stats about ranges of values in
> this table? If yes then how?
Run ANALYZE ;-).
It might be worth pushing up the statistics target for the object ID
columns (see ALTER TABLE) to ensure that the planner has a pretty good
idea about the distribution of the IDs. Offhand I think the default
target (10) would be okay, but it'd be worth your while to experiment.
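The two steps above can be sketched as follows; the table and column names here are hypothetical placeholders, and 100 is just an example of a raised target:

```sql
-- Raise the per-column statistics target so ANALYZE samples more values
-- and builds a finer-grained histogram of the ID distribution.
-- (table/column names are placeholders, not from the original message)
ALTER TABLE mytable ALTER COLUMN objectid SET STATISTICS 100;

-- Re-collect statistics so the planner sees the new distribution.
ANALYZE mytable;
```

The planner then has a more detailed picture of where the sparse high-end IDs sit, which matters for range conditions like `objectid > 10000000000`.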
regards, tom lane