From: Steve Atkins <steve(at)blighty(dot)com>
To: SF PostgreSQL <sfpug(at)postgresql(dot)org>
Subject: Re: IN question
Date: 2008-12-10 00:06:22
Message-ID: 8CEAB0B6-EEFC-4715-ACBD-0911A232B144@blighty.com
Lists: sfpug
On Dec 9, 2008, at 3:48 PM, Mat Caughron wrote:
>
> So anyone know what circumstances caused the implementation of a 64
> kilobyte query size limit that was in Oracle 9i?
I'd guess a static buffer.
There have been arbitrary limits and outright bugs along those
lines in Oracle code forever (one I recall was replication failing if
the peer hostname was more than 64 bytes long, or some such). The limits
are big enough that they don't cause too many things to break, but
they're annoying when you're the one who falls foul of them.
> I suspect there's an opportunity here to benefit from prior lessons
> learned the hard way (e.g. size limit too small or too big).
I'm not so sure there's such a thing as a limit that's too big.
Performance may vary (it used to be very expensive to use a long IN
clause; now it isn't), but I don't think that's a reason to apply
arbitrary constraints to what the user can ask for. I wouldn't use
million-row insert queries à la MySQL myself, but as long as supporting
them doesn't significantly increase development pain I wouldn't stop
other people from doing so - even if the performance isn't perfect, it
beats rewriting existing client code.
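For what it's worth, the "no limit that's too big" point is easy to demonstrate. Here's a minimal sketch - using SQLite as a stand-in for Postgres or Oracle, with a made-up table - where the IN list alone pushes the query text well past 64 KB and it still runs fine:

```python
import sqlite3

# Hypothetical table and values, just for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)",
                 ((i,) for i in range(100000)))

# 20,000 literal values in the IN list -- roughly 115 KB of query text,
# comfortably past a 64 KB query-size limit.
ids = ",".join(str(i) for i in range(0, 100000, 5))
sql = f"SELECT count(*) FROM t WHERE id IN ({ids})"
assert len(sql) > 64 * 1024

count = conn.execute(sql).fetchone()[0]  # matches all 20,000 listed ids
```

Modern planners typically turn a long literal IN list into a hashed lookup rather than a chain of comparisons, which is why the old "long IN clauses are slow" advice no longer holds the way it once did.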
Cheers,
Steve