Re: IN question

From: "A(dot) Elein Mustain" <elein(at)varlena(dot)com>
To: "Meredith L(dot) Patterson" <mlp(at)thesmartpolitenerd(dot)com>
Cc: Steve Atkins <steve(at)blighty(dot)com>, SF PostgreSQL <sfpug(at)postgresql(dot)org>
Subject: Re: IN question
Date: 2008-12-14 00:45:34
Lists: sfpug
I know about idiot clients, and I am not one of them.
People writing extra-large queries by "accident" are easily

I do complex warehouse-type queries, and a query limit
would mean pushing things into chunked stored procedures
rather than having a single query clearly join the
myriad lookup and base tables it needs.

This is not TP or apps with fast lookups.  We need the 
database to be central to all functions within the company,
including warehousing and statistics.

Now that I mention it, limiting statement lengths would
also limit the length of stored procedures.  And there
goes my complex aggregation.  It too would have to be
chunked up instead of one clear stream of logic.
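[Editor's note: a minimal sketch, not from the thread, of the query-size point being debated. It shows how an ORM-style inlined IN list can push the statement text itself past the 1MB mark mentioned below, while passing the same ids as a single array parameter (PostgreSQL's = ANY(...) form) keeps the statement tiny. The table name "t" and the %s placeholder style are assumptions for illustration.]

```python
# Build the same logical query two ways and compare statement sizes.
ids = list(range(200_000))

# Naive ORM-style expansion: every id is inlined into the SQL text itself.
inlined = "SELECT * FROM t WHERE id IN (%s)" % ", ".join(map(str, ids))

# Parameterized form: the statement stays small; the ids travel
# separately as one array bind value, not as query text.
parameterized = "SELECT * FROM t WHERE id = ANY(%s)"

print(len(inlined))        # well over a megabyte of query text
print(len(parameterized))  # a few dozen bytes
```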


On Wed, Dec 10, 2008 at 02:34:02PM -0800, Meredith L. Patterson wrote:
> Steve Atkins wrote:
> > 
> > On Dec 10, 2008, at 2:08 PM, A. Elein Mustain wrote:
> > 
> >> On Wed, Dec 10, 2008 at 01:41:01PM -0800, Josh Berkus wrote:
> >>> Steve,
> >>>
> >>>> I'm not so sure there's such a thing as a limit that's too big.
> >>>
> >>> Sure there is.  out-of-memory error.
> >>>
> >>> Actually, I'd like to see the limit set at work_mem.
> >>>
> >>> --Josh
> >>>
> >>
> >> I write big, long queries every day.  I would prefer the
> >> default be no limit but out of memory.   If you must add
> >> a limit (why????)  then it should NOT be the default.
> > 
> > Well, one reason for a limit is to provide the DBA with a
> > last line of defense against idiot clients. Given some of
> > the dumb things automated query builders and ORMs
> > are prone to do that's not such a bad idea.
> Back in the pre-7.0 days, there was a limit of something like 16384
> bytes, which I can see being a problem. But work_mem defaults to 1MB and
> is often much larger. How large are the queries these automated query
> builders produce? IO/network bottlenecks anyone? I don't care if you're
> doing it over dedicated fiber, if you're passing a query larger than 1MB
> you're doing it wrong.
> --mlp

