| From: | Greg Stark <gsstark(at)mit(dot)edu> |
|---|---|
| To: | "Jim C(dot) Nasby" <jim(at)nasby(dot)net> |
| Cc: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: Poor performance of group by query |
| Date: | 2004-04-16 22:57:51 |
| Message-ID: | 87pta78r9s.fsf@stark.xeocode.com |
| Lists: | pgsql-performance |
> stats=# explain analyze SELECT work_units, min(raw_rank) AS rank FROM Trank_work_overall GROUP BY work_units;
>
> ...
>
> raw_rank | bigint |
> work_units | bigint |
If you create a copy of the same table using regular integers, does that run
fast? And is a copy of the table using bigints still slow, like the original?
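A quick way to run that A/B test (a sketch only; it assumes your bigint values actually fit in 32 bits, and takes the table/column names from the EXPLAIN output you quoted):

```sql
-- int4 copy: the casts will error out if any value exceeds the integer range
CREATE TABLE trank_int4 AS
  SELECT work_units::integer AS work_units,
         raw_rank::integer   AS raw_rank
  FROM Trank_work_overall;

EXPLAIN ANALYZE
  SELECT work_units, min(raw_rank) AS rank
  FROM trank_int4
  GROUP BY work_units;
```

If the int4 copy is fast and a fresh bigint copy is still slow, that points squarely at the datatype rather than at table bloat or bad statistics.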
I know bigints are less efficient than integers because they're pass-by-reference:
each value has to be dynamically allocated. This especially bites aggregate
functions, which churn through one allocation per input row. But I don't see why
it would be any slower for a hash aggregate than a regular aggregate. It's a
pretty gross amount of time for 18k records.
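One way to rule the hash aggregate in or out is to force the planner onto the other plan shape and compare timings; `enable_hashagg` is a standard planner setting:

```sql
SET enable_hashagg = off;   -- force a sort + GroupAggregate plan instead
EXPLAIN ANALYZE
  SELECT work_units, min(raw_rank) AS rank
  FROM Trank_work_overall
  GROUP BY work_units;
RESET enable_hashagg;
```

If both plans take roughly the same time, the cost is in the per-row bigint handling rather than in the aggregation strategy.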
There was a thought a while back about making 64-bit machines handle 64-bit
datatypes like bigints without pointers. That would help on your Opteron.
--
greg