Re: Supporting huge pages on Windows

From: Ashutosh Sharma <ashu(dot)coek88(at)gmail(dot)com>
To: "Tsunakawa, Takayuki" <tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Supporting huge pages on Windows
Date: 2017-03-08 10:08:04
Message-ID: CAE9k0Pkz+tOiPmx2LrVePM7cZydTLNbQ6R3GqgeivurfsXyZ5w@mail.gmail.com
Lists: pgsql-hackers

Hi,

I tried to test the v8 version of the patch. First of all, I was able to
start the postgresql server process with 'huge_pages' set to on. I had
to follow the instructions given in MSDN[1] to enable the 'Lock pages
in memory' option, and also had to start the postgresql server process
as an admin user.

test=# show huge_pages;
huge_pages
------------
on
(1 row)

To start with, I ran the regression test suite and didn't find any
failures. But then, I am not sure whether huge pages are actually
getting used or not. Upon checking the huge_pages setting, I found it
set to 'on'. I am assuming that if huge pages could not be used due to
a shortage of large pages, the server would have fallen back to
non-huge pages.
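
For reference, here is a rough sketch (not the actual patch code, just
my understanding of the fallback approach described later in this
thread) of how such a fallback could look on Windows: try the shared
memory mapping with SEC_LARGE_PAGES first, and retry with a normal
mapping when the request cannot be satisfied, e.g. because physical
memory is too fragmented (error code 1450, ERROR_NO_SYSTEM_RESOURCES).
The function name create_mapping_with_fallback is made up for
illustration.

    #include <windows.h>
    #include <stdio.h>

    static HANDLE
    create_mapping_with_fallback(ULONGLONG size)
    {
        HANDLE  hmap;
        SIZE_T  largePageSize = GetLargePageMinimum();

        if (largePageSize != 0)
        {
            /* round the request up to a multiple of the large page size */
            ULONGLONG   rounded;

            rounded = (size + largePageSize - 1) &
                      ~((ULONGLONG) largePageSize - 1);

            /* SEC_LARGE_PAGES needs the Lock pages in memory privilege */
            hmap = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                     PAGE_READWRITE | SEC_COMMIT | SEC_LARGE_PAGES,
                                     (DWORD) (rounded >> 32), (DWORD) rounded,
                                     NULL);
            if (hmap != NULL)
                return hmap;    /* got large pages */

            fprintf(stderr, "large pages unavailable (error code = %lu), "
                    "falling back to normal pages\n", GetLastError());
        }

        /* ordinary (non-huge) anonymous shared memory mapping */
        return CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                 PAGE_READWRITE | SEC_COMMIT,
                                 (DWORD) (size >> 32), (DWORD) size,
                                 NULL);
    }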

I also ran the pgbench tests on a read-only workload, and here are the
results I got:

pgbench -c 4 -j 4 -T 600 bench

huge_pages=on, TPS = 21120.768085
huge_pages=off, TPS = 20606.288995

[1] - https://msdn.microsoft.com/en-IN/library/ms190730.aspx

--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com

On Thu, Feb 23, 2017 at 12:59 PM, Tsunakawa, Takayuki
<tsunakawa(dot)takay(at)jp(dot)fujitsu(dot)com> wrote:
> From: Amit Kapila [mailto:amit(dot)kapila16(at)gmail(dot)com]
>> > Hmm, large pages require contiguous memory for each page, so this
>> > error could occur on a long-running system where the memory is heavily
>> > fragmented. For example, please see the following page and check the
>> > memory with the RAMMap program referred to there.
>> >
>>
>> I don't have RAMMap and it might take some time to investigate what is going
>> on, but I think in such a case even if it works we should keep the default
>> value of huge_pages as off on Windows. I request somebody else having
>> access to Windows m/c to test this patch and if it works then we can move
>> forward.
>
> You are right. I modified the patch so that the code falls back to non-huge pages when CreateFileMapping() fails due to a shortage of large pages. That's what the Linux version does.
>
> The other change is to parameterize the Win32 function names in the messages in EnableLockPagePrivileges(). This is to avoid adding almost identical messages unnecessarily. I followed Alvaro's comment. I didn't touch the two existing sites that embed Win32 function names. I'd like to leave it up to the committer to decide whether to change those as well, because changing them might make it a bit harder to apply some bug fixes to earlier releases.
>
> FYI, I could reproduce the same error as Amit on 32-bit Win7, where the total RAM is 3.5 GB and available RAM is 2 GB. I used the attached largepage.c. Immediately after the system boot, I could only allocate 8 large pages. When I first tried to allocate 32 large pages, the test program produced:
>
> large page size = 2097152
> allocating 32 large pages...
> CreateFileMapping failed: error code = 1450
>
> You can build the test program as follows:
>
> cl largepage.c advapi32.lib
>
> Regards
> Takayuki Tsunakawa
>
>
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
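
The largepage.c attachment referenced above isn't included in this
archive view. Purely as an illustration of the kind of test described
(enable the Lock pages in memory privilege, then try to map N large
pages), a minimal standalone program built with "cl largepage.c
advapi32.lib" might look roughly like the following (a hypothetical
reconstruction, not the actual attachment):

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Enable the "Lock pages in memory" privilege for the current process. */
    static BOOL
    enable_lock_memory_privilege(void)
    {
        HANDLE           token;
        TOKEN_PRIVILEGES tp;

        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
            return FALSE;

        if (!LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME,
                                  &tp.Privileges[0].Luid))
        {
            CloseHandle(token);
            return FALSE;
        }

        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;

        /* AdjustTokenPrivileges can "succeed" without granting the privilege */
        if (!AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL) ||
            GetLastError() == ERROR_NOT_ALL_ASSIGNED)
        {
            CloseHandle(token);
            return FALSE;
        }

        CloseHandle(token);
        return TRUE;
    }

    int
    main(int argc, char **argv)
    {
        SIZE_T      pagesize = GetLargePageMinimum();
        int         npages = (argc > 1) ? atoi(argv[1]) : 32;
        ULONGLONG   size;
        HANDLE      hmap;

        if (!enable_lock_memory_privilege())
        {
            fprintf(stderr, "could not enable the Lock pages in memory privilege\n");
            return 1;
        }

        printf("large page size = %lu\n", (unsigned long) pagesize);
        printf("allocating %d large pages...\n", npages);

        size = (ULONGLONG) pagesize * npages;
        hmap = CreateFileMapping(INVALID_HANDLE_VALUE, NULL,
                                 PAGE_READWRITE | SEC_COMMIT | SEC_LARGE_PAGES,
                                 (DWORD) (size >> 32), (DWORD) size,
                                 NULL);
        if (hmap == NULL)
        {
            fprintf(stderr, "CreateFileMapping failed: error code = %lu\n",
                    GetLastError());
            return 1;
        }

        printf("succeeded\n");
        CloseHandle(hmap);
        return 0;
    }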
