Re: leakproof

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Don Baccus <dhogaza(at)pacifier(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: leakproof
Date: 2012-02-22 21:58:11
Message-ID: CA+TgmoZXA6EZtJODdsdgZTU2TTu59E-LqhbruMAQqF2D+A3Aog@mail.gmail.com
Lists: pgsql-hackers

On Wed, Feb 22, 2012 at 10:21 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> Anyway, to your point, I suppose I might hesitate to mark factorial
>> leak-proof even if it didn't throw an error on overflow, because the
>> time it takes to return an answer for larger inputs does grow rather
>> rapidly.  But it's kind of a moot point because the error makes it not
>> leak-proof anyway.  So maybe we're just splitting hairs here, however
>> we decide to label this.
>
> Speaking of hair-splitting ...
>
> A strict interpretation of "no errors can be thrown" would, for example,
> rule out any function that either takes or returns a varlena datatype,
> since those are potentially going to throw out-of-memory errors if they
> can't palloc a result value or a temporary detoasted input value.
> I don't suppose that we want that, which means that this rule of thumb
> is wrong in detail, and there had better be some more subtle definition
> of what is okay or not, perhaps along the line of "must not throw any
> errors that reveal anything useful about the input value".  Have we got
> such a definition?  (I confess to not having followed this patch very
> closely.)

Not exactly; I've kind of been playing it by ear, but I agree that
out-of-memory errors based on the input value being huge are probably
not something we want to stress out about too much. In theory you
could probe for the approximate size of the value by using up nearly
all the memory on the system, leaving a varying amount behind, and
then see whether you get an out-of-memory error. But again, if people
are going to that kind of trouble to ferret out just the approximate
size of the data, it was probably a bad idea to let them log into the
database in the first place.
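[For readers following along: the "subtle definition" under discussion maps onto the LEAKPROOF marker as it eventually shipped. The sketch below is illustrative, not from the thread; it assumes a 9.2-era catalog where pg_proc has a proleakproof column, and uses int4eq (no input-dependent errors) versus numeric_fac (factorial, which errors on overflow) as the contrasting pair discussed above.]

```sql
-- Inspect which of two functions carries the leakproof marking.
-- int4eq raises no error that depends on its arguments; numeric_fac
-- (factorial) does, since an overflow error reveals that the input
-- exceeded some threshold, so it must not be marked leakproof.
SELECT proname, proleakproof
FROM pg_proc
WHERE proname IN ('int4eq', 'numeric_fac');

-- The marking itself is an assertion by the (superuser) definer that
-- the function throws no input-dependent errors:
ALTER FUNCTION int4eq(int4, int4) LEAKPROOF;
```

Per the compromise sketched in this exchange, errors that depend only on resource exhaustion (e.g. palloc failing on a huge detoasted varlena) are tolerated; errors whose occurrence is a function of the input value are what disqualify a function.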

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
