From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Steve Atkins <steve(at)blighty(dot)com>, SF PostgreSQL <sfpug(at)postgresql(dot)org>
Subject: Re: IN question
Date: 2008-12-10 23:39:33
Message-ID: 1228952373.2754.62.camel@dell.linuxdev.us.dell.com
Lists: sfpug
On Wed, 2008-12-10 at 13:41 -0800, Josh Berkus wrote:
> Steve,
>
> > I'm not so sure there's such a thing as a limit that's too big.
>
> Sure there is. out-of-memory error.
>
> Actually, I'd like to see the limit set at work_mem.
Do you mean that it should share work_mem, or that it should be an
additional work_mem bytes?
I think sharing is probably bad, because a query just under the limit
would then leave essentially no working memory at all (the query must be
parsed/analyzed before other uses of work_mem, of course).
Maybe that's tolerable, I suppose.
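The sharing concern above can be made concrete with a toy calculation
(the numbers are illustrative only, not real PostgreSQL memory
accounting):

```python
# Illustrative sketch: if the IN-list parse tree counted against the
# same work_mem budget, a query just under the limit would leave the
# executor almost no memory for sorts and hash tables.
WORK_MEM = 4 * 1024 * 1024      # hypothetical 4MB work_mem budget
parse_usage = WORK_MEM - 512    # a huge IN list nearly fills the budget

remaining_for_executor = WORK_MEM - parse_usage
print(remaining_for_executor)   # only 512 bytes left for execution
```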
And if it's additional memory, it should probably be a different GUC.
If there is an explicit limit, which sounds reasonable, I think it's
good to separate parsing limits from executor limits.
Regards,
Jeff Davis
Next message: Jeff Davis, 2008-12-10 23:44:45, "Re: IN question"
Previous message: Meredith L. Patterson, 2008-12-10 22:34:02, "Re: IN question"