From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Noah Misch <noah(at)leadboat(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date: 2013-06-27 23:37:42
Message-ID: CAMkU=1y8ZBMMapk5i1BgsMHQZsaxDCO=UEKWnu6J=XEjQ-gpAw@mail.gmail.com
Lists: pgsql-hackers

On Sat, Jun 22, 2013 at 12:46 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:

> Noah,
>
> * Noah Misch (noah(at)leadboat(dot)com) wrote:
> > This patch introduces MemoryContextAllocHuge() and repalloc_huge()
> > that check a higher MaxAllocHugeSize limit of SIZE_MAX/2.
>
> Nice! I've complained about this limit a few different times and just
> never got around to addressing it.
>
> > This was made easier by tuplesort growth algorithm improvements in
> > commit 8ae35e91807508872cabd3b0e8db35fc78e194ac. The problem has come
> > up before (TODO item "Allow sorts to use more available memory"), and
> > Tom floated the idea[1] behind the approach I've used. The next limit
> > faced by sorts is INT_MAX concurrent tuples in memory, which limits
> > helpful work_mem to about 150 GiB when sorting int4.
>
> That's frustratingly small. :(
>
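
For anyone skimming the thread: as I read the patch, the new entry
points can be called like this. A minimal sketch (the context and the
sizes below are made up for illustration; 64-bit build assumed):

/*
 * Sketch of the new huge-allocation entry points from the patch.
 */
#include "postgres.h"
#include "utils/memutils.h"

static void
grow_past_one_gigabyte(void)
{
    Size    bigsize = (Size) 2 * 1024 * 1024 * 1024;    /* 2 GB, past MaxAllocSize */
    char   *buf;

    /* plain palloc() would fail here with "invalid memory alloc request size" */
    buf = MemoryContextAllocHuge(CurrentMemoryContext, bigsize);

    /* the huge repalloc variant likewise skips the 1 GB check */
    buf = repalloc_huge(buf, bigsize * 2);

    pfree(buf);
}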

I've added a TODO item to remove that INT_MAX tuple-count limit from
sorts as well.
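
For the archives, my understanding of where that ceiling comes from:
the memtuples bookkeeping in tuplesort.c uses plain int fields, so even
with the huge allocator the array can't usefully grow past INT_MAX
elements. A rough sketch (field names from tuplesort.c; the struct is
trimmed down and the growth logic simplified):

#include "postgres.h"
#include <limits.h>

typedef struct SortTuple
{
    void   *tuple;              /* the tuple proper */
    Datum   datum1;             /* value of first key column */
    bool    isnull1;            /* is first key column NULL? */
} SortTuple;

typedef struct
{
    SortTuple  *memtuples;      /* may now be a huge allocation ... */
    int         memtupcount;    /* ... but the count is still an int */
    int         memtupsize;     /* allocated length, likewise an int */
} SortStateSketch;

static bool
grow_memtuples_sketch(SortStateSketch *state)
{
    int     newmemtupsize;

    if (state->memtupsize > INT_MAX / 2)
        return false;           /* the int fields cap us near INT_MAX */
    newmemtupsize = state->memtupsize * 2;

    /* repalloc_huge() lets the byte size pass MaxAllocSize */
    state->memtuples = (SortTuple *)
        repalloc_huge(state->memtuples, newmemtupsize * sizeof(SortTuple));
    state->memtupsize = newmemtupsize;
    return true;
}

At very roughly 70 bytes of per-tuple memory for an int4 sort (my
back-of-envelope number), INT_MAX tuples lands in the neighborhood of
the 150 GiB figure Noah quotes above.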

I was going to add another item to make nodeHash.c use the new huge
allocator, but after looking at it just now, it isn't clear to me that
it even has such a limitation: nbatch is limited by MaxAllocSize, but
nbuckets doesn't seem to be.
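
For reference, the allocations I was looking at are the per-batch file
arrays, which still go through plain palloc0() and therefore inherit
the MaxAllocSize check. A paraphrase, not the exact
ExecHashTableCreate() code:

#include "postgres.h"
#include "storage/buffile.h"

typedef struct
{
    int        nbatch;          /* number of batches */
    BufFile  **innerBatchFile;  /* one temp file per batch */
    BufFile  **outerBatchFile;
} HashTableSketch;

static void
alloc_batch_arrays(HashTableSketch *hashtable, int nbatch)
{
    hashtable->nbatch = nbatch;
    if (nbatch > 1)
    {
        /*
         * palloc0() errors out once the request passes MaxAllocSize, so
         * nbatch is effectively capped at MaxAllocSize / sizeof(BufFile *).
         */
        hashtable->innerBatchFile = (BufFile **)
            palloc0(nbatch * sizeof(BufFile *));
        hashtable->outerBatchFile = (BufFile **)
            palloc0(nbatch * sizeof(BufFile *));
    }
    else
    {
        hashtable->innerBatchFile = NULL;
        hashtable->outerBatchFile = NULL;
    }
}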

Cheers,

Jeff
