From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> |
Cc: | Andres Freund <andres(at)anarazel(dot)de>, Alexey Bashtanov <bashtanov(at)imap(dot)cc>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: log bind parameter values on error |
Date: | 2019-12-09 20:47:04 |
Message-ID: | 8464.1575924424@sss.pgh.pa.us |
Lists: | pgsql-hackers |
Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> writes:
> Also:
> * v18 and v19 now always do a "strlen(s)", i.e. they scan the whole input
> string -- pointless when maxlen is given. We could avoid that for
> very large input strings by doing strnlen(maxlen + MAX_MULTIBYTE_CHAR_LEN)
> so that we capture our input string plus one additional multibyte
> char.
BTW, as far as that goes, it seems to me this patch is already suffering
from a lot of premature micro-optimization. Is there even any evidence
to justify that complicated chunk-at-a-time copying strategy, rather than
doing quote-doubling the same way we do it everywhere else? The fact that
that effectively scans the input string twice means that it's not an
ironclad win compared to the naive way, and it seems like it could lose
significantly for a case with lots of quote marks. Moreover, for the
lengths of strings I expect we're mostly dealing with here, it would be
impossible to measure any savings even assuming there is some. If I were
the committer I think I'd just flush that and do it the same way as we
do it in existing code.
regards, tom lane