
Re: IN question

From: "Meredith L(dot) Patterson" <mlp(at)thesmartpolitenerd(dot)com>
To: Steve Atkins <steve(at)blighty(dot)com>
Cc: SF PostgreSQL <sfpug(at)postgresql(dot)org>
Subject: Re: IN question
Date: 2008-12-10 22:34:02
Message-ID:
Lists: sfpug
Steve Atkins wrote:
> On Dec 10, 2008, at 2:08 PM, A. Elein Mustain wrote:
>> On Wed, Dec 10, 2008 at 01:41:01PM -0800, Josh Berkus wrote:
>>> Steve,
>>>> I'm not so sure there's such a thing as a limit that's too big.
>>> Sure there is.  out-of-memory error.
>>> Actually, I'd like to see the limit set at work_mem.
>>> --Josh
>> I write big, long queries every day.  I would prefer the
>> default be no limit but out of memory.   If you must add
>> a limit (why????)  then it should NOT be the default.
> Well, one reason for a limit is to provide the DBA with a
> last line of defense against idiot clients. Given some of
> the dumb things automated query builders and ORMs
> are prone to do that's not such a bad idea.

Back in the pre-7.0 days, there was a query-length limit of something
like 16384 bytes, which I can see being a problem. But work_mem defaults
to 1MB and is often set much larger. How large are the queries these
automated query builders actually produce? And what about I/O and
network bottlenecks? I don't care if you're doing it over dedicated
fiber: if you're passing a query larger than 1MB, you're doing it wrong.
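For illustration, here is a sketch of the kind of query under discussion (the table and column names are hypothetical). ORMs and query builders sometimes emit an IN list whose text grows with every ID; one way to keep the query text a constant size is to pass the IDs as a single array parameter:

```sql
-- Anti-pattern: the query text grows linearly with the ID list,
-- which is how you end up with multi-megabyte statements
SELECT * FROM orders WHERE id IN (1, 2, 3 /* ... thousands more ... */);

-- Alternative: bind the IDs as one array value; the statement
-- text stays small no matter how many IDs you pass
SELECT * FROM orders WHERE id = ANY ('{1,2,3}'::int[]);
```

With a parameterized `= ANY ($1)` form, the driver ships the array as a single bind value, so the statement itself never approaches any query-size limit.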


