Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize

From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Noah Misch <noah(at)leadboat(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date: 2013-06-22 13:12:31
Message-ID: 20130622131231.GF7093@tamriel.snowman.net
Lists: pgsql-hackers

* Simon Riggs (simon(at)2ndQuadrant(dot)com) wrote:
> On 22 June 2013 08:46, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> >> The next limit faced by sorts is
> >> INT_MAX concurrent tuples in memory, which limits helpful work_mem to about
> >> 150 GiB when sorting int4.
> >
> > That's frustratingly small. :(
>
> But that has nothing to do with this patch, right? And is easily fixed, yes?

I don't know about 'easily fixed' (consider supporting a HashJoin of more
than 2B records), but I do agree that fixing the places in the code where
we use an int4 to track the number of objects in memory is outside the
scope of this patch.
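
As a rough back-of-envelope reading of where that 150 GiB figure comes from
(the per-tuple cost here is my own assumption, not anything measured):

    INT_MAX tuples       ~= 2^31, about 2.1 billion
    150 GiB / 2^31       ~= 75 bytes per tuple
    ~75 bytes            ~= roughly one SortTuple slot plus the palloc'd tuple

which is where the ceiling on usefully large work_mem for such a sort comes
from once the allocation-size cap itself is out of the way.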

Hopefully we are properly range-checking and limiting ourselves to what a
given node can actually support, rather than relying solely on MaxAllocSize
to keep us from overflowing some int4 that we're using as an array index or
as a count of how many objects we currently have in memory.  We'll want to
consider carefully what happens with such large sets as we add support for
these Huge allocations into the nodes (along with the recent change to
allow 1TB work_mem, which may encourage users with systems large enough to
actually try setting it that high... :)
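
Just to sketch the kind of explicit check I mean (purely illustrative and
self-contained, not taken from tuplesort or any other node; the names here
are made up):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Grow an in-memory tuple count, refusing to go past what an int index
 * can address instead of counting on MaxAllocSize to stop us first.
 */
static int
add_tuple(int ntuples)
{
    if (ntuples >= INT_MAX)
    {
        fprintf(stderr, "cannot hold more than %d tuples in memory\n",
                INT_MAX);
        exit(EXIT_FAILURE);
    }
    return ntuples + 1;
}

int
main(void)
{
    int     ntuples = 0;

    ntuples = add_tuple(ntuples);
    printf("ntuples = %d\n", ntuples);
    return 0;
}

In the real thing that would of course be an ereport(ERROR, ...) and the
check would live wherever the in-memory array (or hash table) actually
grows, but the point is simply that the count itself gets range-checked.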

Thanks,

Stephen
