Re: Vacuum: allow usage of more than 1GB of work mem

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Claudio Freire <klaussfreire(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Anastasia Lubennikova <a(dot)lubennikova(at)postgrespro(dot)ru>, Anastasia Lubennikova <lubennikovaav(at)gmail(dot)com>, PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Vacuum: allow usage of more than 1GB of work mem
Date: 2017-04-11 19:17:09
Message-ID: CA+Tgmob+gtnKS11-Ldw96-XEXF3WobFRShho2-7=kSTcDegsgw@mail.gmail.com
Lists: pgsql-hackers

On Tue, Apr 11, 2017 at 2:59 PM, Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
> On Tue, Apr 11, 2017 at 3:53 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> 1TB / 8kB per page * 60 tuples/page * 20% * 6 bytes/tuple = 9216MB of
>> maintenance_work_mem
>>
>> So we'll allocate 128MB+256MB+512MB+1GB+2GB+4GB, which won't be quite
>> enough, so we'll allocate another 8GB, for a total of 16256MB, but more
>> than three-quarters of that last allocation ends up being wasted.
>> I've been told on this list before that doubling is the one true way
>> of increasing the size of an allocated chunk of memory, but I'm still
>> a bit unconvinced.
>
> There you're wrong. The allocation is capped to 1GB, so wastage has an
> upper bound of 1GB.

Ah, OK. Sorry, didn't really look at the code. I stand corrected,
but then it seems a bit strange to me that the largest and smallest
allocations are only 8x different. I still don't really understand
what that buys us. What would we lose if we just made 'em all 128MB?
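
To make the doubling-with-a-cap behavior concrete, here is a minimal
standalone sketch -- not the patch's actual code; the 128MB starting
segment, the 1GB per-segment cap, and the 9216MB target are just the
figures from the example above -- that counts how many segments such a
schedule would allocate and how much of the last one goes unused:

    /* Minimal sketch of a "double each segment, cap any single segment
     * at 1GB" allocation schedule.  Sizes are in bytes. */
    #include <stdio.h>

    #define MIN_SEGMENT  (128ULL * 1024 * 1024)   /* first segment: 128MB */
    #define MAX_SEGMENT  (1024ULL * 1024 * 1024)  /* per-segment cap: 1GB */

    int
    main(void)
    {
        unsigned long long need = 9216ULL * 1024 * 1024; /* space needed */
        unsigned long long seg = MIN_SEGMENT;
        unsigned long long total = 0;
        int         nsegs = 0;

        while (total < need)
        {
            total += seg;
            nsegs++;
            if (seg < MAX_SEGMENT)
                seg *= 2;       /* grow geometrically, up to the cap */
        }

        /* Worst-case waste is bounded by the size of the last segment,
         * which is at most 1GB. */
        printf("%d segments, %llu MB allocated, %llu MB unused\n",
               nsegs, total / (1024 * 1024), (total - need) / (1024 * 1024));
        return 0;
    }

Under those assumptions this prints "12 segments, 10112 MB allocated,
896 MB unused", consistent with the claim that wastage is bounded by
1GB once the cap kicks in.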

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
