Re: munmap() failure due to sloppy handling of hugepage size

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)postgresql(dot)org, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Abhijit Menon-Sen <ams(at)2ndquadrant(dot)com>
Subject: Re: munmap() failure due to sloppy handling of hugepage size
Date: 2016-10-12 22:10:05
Message-ID: 9777.1476310205@sss.pgh.pa.us
Lists: pgsql-hackers

Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> writes:
> Tom Lane wrote:
>> According to
>> https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
>> looking into /proc/meminfo is the longer-standing API and thus is
>> likely to work on more kernel versions. Also, if you look into
>> /sys then you are going to see multiple possible values and it's
>> not clear how to choose the right one.

> I'm not sure that this is the best rationale. On my system there are
> 2MB and 1GB huge page sizes; on systems with lots of memory (say 8 GB
> of shared memory is requested) it seems a clear winner to allocate 8
> 1GB hugepages rather than 4096 2MB hugepages, because the page table
> is so much smaller. The /proc interface only shows the 2MB page size,
> so if we go that route we'd not be getting the full benefit of the
> feature.

And how exactly will you tell mmap() which one to use? I haven't found
anything explaining how applications get to choose which page size
applies to their request. The kernel document says that /proc/meminfo
reflects the "default" size, and I'd assume that's what we'll get from
mmap().

regards, tom lane
