Re: Safe memory allocation functions

From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Safe memory allocation functions
Date: 2015-01-16 16:12:17
Message-ID: 20150116161217.GH16991@alap3.anarazel.de
Lists: pgsql-hackers

On 2015-01-16 12:56:18 -0300, Alvaro Herrera wrote:
> Andres Freund wrote:
> > We rely on palloc erroring out on large allocations in a couple places
> > as a crosscheck. I don't think this argument holds much water.
>
> I don't understand what that has to do with it. Surely we're not going
> to have palloc_noerror() not error out when presented with a huge
> allocation.

Precisely. That means it *does* error out in a somewhat expected path.

> My point is just that the "noerror" bit in palloc_noerror() means that
> it doesn't fail in OOM, and that there are other causes for it to
> error.

That pretty much describes why it's a misnomer, no?

> One thought I just had is that we also have MemoryContextAllocHuge; are
> we going to consider a mixture of both things in the future, i.e. allow
> huge allocations to return NULL when OOM?

I definitely think we should. I'd even say that the use case is larger
for huge allocations. It'd e.g. be rather nice to first try sorting with
the huge 16GB work mem and then try 8GB/4GB/1GB if that fails.

> It sounds a bit useless
> currently, but it doesn't seem extremely far-fetched that we will need
> further flags in the future. (Or, perhaps, we will want to have code
> that retries a Huge allocation that returns NULL with a smaller size,
> just in case it does work.) Maybe what we need is to turn these things
> into flags to a new generic function. Furthermore, I question whether
> we really need a "palloc" variant -- I mean, can we live with just the
> MemoryContext API instead? As with the "Huge" variant (which does not
> have a corresponding palloc equivalent), possible use cases seem very
> limited so there's probably not much point in providing a shortcut.

I'm fine with not providing a palloc() equivalent, but I also am fine
with having it.

> So how about something like
>
> #define ALLOCFLAG_HUGE 0x01
> #define ALLOCFLAG_NO_ERROR_ON_OOM 0x02
> void *
> MemoryContextAllocFlags(MemoryContext context, Size size, int flags);
>
> and perhaps even
>
> #define MemoryContextAllocHuge(cxt, sz) \
> MemoryContextAllocFlags(cxt, sz, ALLOCFLAG_HUGE)
> for source-level compatibility.

I don't know, this seems a bit awkward to use. Your earlier example with
the *Huge variant that returns a smaller allocation doesn't really
convince me - that'd need a separate API anyway.

I definitely do not want to push the nofail stuff via the
MemoryContextData-> API into aset.c. Imo aset.c should always return
NULL, and then mcxt.c should throw the error in the normal palloc()
function.

> (Now we all agree that palloc() itself is a very hot spot and shouldn't
> be touched at all. I don't think these new functions are used as commonly
> as that, so the fact that they are slightly slower shouldn't be too
> troublesome.)

Yea, the speed of the new functions really shouldn't matter.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
