Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Noah Misch <noah(at)leadboat(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date: 2013-06-22 19:03:39
Message-ID: CA+Tgmobs4hWd51877WY4kfs+R4+GPSh8icTdW5j6YO+Ez0p6Hw@mail.gmail.com
Lists: pgsql-hackers

On Sat, Jun 22, 2013 at 3:46 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> I'm not a huge fan of moving directly to INT_MAX. Are we confident that
> everything can handle that cleanly..? I feel like it might be a bit
> safer to shy a bit short of INT_MAX (say, by 1K).

Maybe it would be better to stick with INT_MAX and fix any bugs we
find. If there are magic numbers short of INT_MAX that cause
problems, it would likely be better to find out about those problems
and adjust the relevant code, rather than trying to dodge them. We'll
have to confront all of those problems eventually as we come to
support larger and larger sorts; I don't see much value in putting it
off.

Especially since we're early in the release cycle.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
