From: Greg Stark <gsstark(at)mit(dot)edu>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Tuning Question sort_mem vs pgsql_tmp
Date: 2003-02-05 05:42:55
Message-ID: 87ptq75lls.fsf@stark.dyndns.tv
Lists: pgsql-general
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
> Greg Stark <gsstark(at)mit(dot)edu> writes:
> > Does sort_mem have to be larger than the corresponding pgsql_tmp area that
> > would be used if postgres runs out of sort_mem?
>
> Probably. At least in recent versions, the "do we still fit in
> sort_mem" logic tries to account for palloc overhead and alignment
> padding, neither of which are present in the on-disk representation
> of the same tuples. So data unloaded to disk should be more compact
> than it was in memory. You didn't say what you were sorting, but
> if it's narrow rows (like maybe just an int or two) the overhead
> could easily be more than the actual data size.
Thank you. 64M seems to be enough after all; 48M just wasn't big enough. At
64M I don't see any more usage of pgsql_tmp. The largest on-disk sort was
35,020,800 bytes, so that translates to a 44%-92% space overhead.
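For reference, the 44%-92% range falls out of dividing the two sort_mem settings by the on-disk size: 48M overflowed, so the overhead is at least 48M over 35,020,800 bytes; 64M sufficed, so it is at most 64M over that. A rough back-of-the-envelope sketch:

```python
# Bounds on the in-memory space overhead of the sort: the tuples took
# 35,020,800 bytes on disk, and somewhere between 48MB (overflowed)
# and 64MB (sufficed) of sort_mem.
on_disk = 35_020_800
low, high = 48 * 1024**2, 64 * 1024**2

low_overhead = low / on_disk - 1    # lower bound, since 48MB wasn't enough
high_overhead = high / on_disk - 1  # upper bound, since 64MB was

print(f"{low_overhead:.0%} - {high_overhead:.0%}")  # → 44% - 92%
```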
It turns out it was the same data structure as in my earlier message, which
puts it at 53-byte records in practice: two integers, a float, and a varchar
with up to 12 characters.
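A quick sanity check on the 53-byte figure, assuming 4-byte integers, an 8-byte float, and a 4-byte varchar length header (all assumptions; the exact field widths and per-tuple header size depend on the schema and PostgreSQL version):

```python
# Hypothetical breakdown of the ~53-byte record: raw field data plus
# whatever per-tuple header and alignment padding remains. All field
# sizes here are assumed, not taken from the actual schema.
two_ints = 2 * 4         # two 4-byte integers
a_float = 8              # one 8-byte float
varchar = 4 + 12         # 4-byte length header + up to 12 characters
data = two_ints + a_float + varchar   # 32 bytes of field data

remainder = 53 - data    # ~21 bytes left for tuple header and padding
print(data, remainder)   # → 32 21
```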
--
greg