| From: | James Cranch <jdc41(at)cam(dot)ac(dot)uk> |
|---|---|
| To: | pgsql-performance(at)postgresql(dot)org |
| Cc: | bricklen <bricklen(at)gmail(dot)com> |
| Subject: | Re: Rapidly finding maximal rows |
| Date: | 2011-10-12 11:41:51 |
| Message-ID: | Prayer.1.3.4.1110121241510.22335@hermes-2.csi.cam.ac.uk |
| Lists: | pgsql-performance |
Dear Bricklen,
>Try setting work_mem to something larger, like 40MB to do that sort
>step in memory, rather than spilling to disk. The usual caveats apply
>though, like if you have many users/queries performing sorts or
>aggregations, up to that amount of work_mem may be used at each step
>potentially resulting in your system running out of memory/OOM etc.
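The advice above can be applied without touching the global configuration: `work_mem` can be raised for one session or one transaction, which limits the OOM risk it warns about. A minimal sketch (the 40MB figure is the value suggested above, not a general recommendation):

```sql
-- Raise work_mem for the current session only; resets on disconnect.
SET work_mem = '40MB';

-- Or scope it to a single transaction so concurrent queries
-- keep the server default:
BEGIN;
SET LOCAL work_mem = '40MB';
-- ... run the sort-heavy query here ...
COMMIT;
```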
Thanks, I'll bear that in mind as a strategy. It's good to know. But since
Dave has saved me the sort altogether, I'll go with his plan.
Best wishes,
James
--
------------------------------------------------------------
James Cranch http://www.srcf.ucam.org/~jdc41