From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Jeff Davis <pgsql(at)j-davis(dot)com>
Cc: Neil Conway <nrc(at)cs(dot)berkeley(dot)edu>, Steve Atkins <steve(at)blighty(dot)com>, SF PostgreSQL <sfpug(at)postgresql(dot)org>
Subject: Re: IN question
Date: 2008-12-11 18:05:35
Message-ID: 4941566F.9000606@agliodbs.com
Lists: sfpug

Jeff Davis wrote:
> On Wed, 2008-12-10 at 16:11 -0800, Neil Conway wrote:
>> On Wed, Dec 10, 2008 at 3:39 PM, Jeff Davis <pgsql(at)j-davis(dot)com> wrote:
>>> And if it's additional memory, it should probably be a different GUC.
>> Measuring the limit in bytes makes no sense, anyway.
>>
>
> Sure it does. If you're concerned about the application generating
> infinite SQL strings and sending them to the server, a byte limit on the
> SQL string would solve it.
>
> After all, as Josh pointed out, there _is_ a limit measured in bytes:
> available memory (and some operating systems don't handle that very
> well).
Yes. For example, if the length of your query exceeds any of various
memory limits on Linux, the connection crashes with a very unfriendly
error message. If it were our limit, the error message could at least be
friendly: "Query string too long. Please edit the query or increase
work_mem."
And infinite SQL isn't hypothetical; just four weeks ago I fixed a problem
for a client that turned out to be caused by their home-baked ORM building
IN() clauses with up to 150,000 values.
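A minimal sketch of why that pattern hurts (the table name and the psycopg2-style
placeholder are hypothetical, just for illustration): inlining 150,000 values
makes the SQL text itself balloon past a megabyte, while the usual workaround,
passing the values as one bound array with = ANY(), keeps the statement
constant-size and lets the driver ship the values separately.

```python
ids = range(150_000)

# Naive ORM-style construction: every value is inlined into the SQL text,
# so the string the server must parse grows with the number of values.
in_clause = "SELECT * FROM orders WHERE id IN (%s)" % ", ".join(
    str(i) for i in ids
)
print(len(in_clause))  # well over a million characters of SQL

# Workaround: the statement text stays tiny; a psycopg2-style driver
# adapts the Python list into a single PostgreSQL array parameter.
any_clause = "SELECT * FROM orders WHERE id = ANY(%s)"
# cur.execute(any_clause, (list(ids),))  # values travel as one parameter
print(len(any_clause))
```

The = ANY(array) form also means the planner sees one stable statement shape
instead of a new, enormous query every time the ID list changes.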
--Josh