Re: Auto-tuning work_mem and maintenance_work_mem

From: Jim Nasby <jim(at)nasby(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Auto-tuning work_mem and maintenance_work_mem
Date: 2013-10-14 20:44:13
Message-ID: 525C579D.7050105@nasby.net
Lists: pgsql-hackers

On 10/14/13 8:18 AM, Robert Haas wrote:
> On Sat, Oct 12, 2013 at 3:07 AM, Magnus Hagander <magnus(at)hagander(dot)net> wrote:
>> On Oct 11, 2013 10:23 PM, "Josh Berkus" <josh(at)agliodbs(dot)com> wrote:
>>> On 10/11/2013 01:11 PM, Bruce Momjian wrote:
>>>> In summary, I think we need to:
>>>>
>>>> * decide on new defaults for work_mem and maintenance_work_mem
>>>> * add an initdb flag to allow users/packagers to set shared_buffers?
>>>> * add an autovacuum_work_mem setting?
>>>> * change the default for temp_buffers?
>>>
>>> If we're changing defaults, bgwriter_lru_maxpages and vacuum_cost_limit
>>> could also use a bump; those thresholds were set for servers with < 1GB
>>> of RAM.
>>
>> Uh, those are there to limit I/O and not memory, right? More memory isn't
>> the reason to increase them; more I/O is. For people deploying on modern
>> server hardware, yes, they're often low, but for all those deploying in
>> virtualized environments with I/O performance reminiscent of the 1990s, I'm
>> not so sure they are...
>
> bgwriter_lru_maxpages is clearly related to the size of
> shared_buffers, although confusingly it is expressed as a number of
> buffers, while shared_buffers is expressed as a quantity of memory. I
> think we might have done better to call the GUC
> bgwriter_lru_maxpercent and make it a percentage of shared buffers.
>

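For what it's worth, that percentage is easy to eyeball on a running system. A rough sketch only (pg_settings reports shared_buffers in 8kB blocks, so the two settings are directly comparable):

-- Sketch: bgwriter_lru_maxpages expressed as a percentage of shared_buffers.
SELECT maxpages.setting::numeric / shared.setting::numeric * 100
           AS pct_of_shared_buffers
FROM pg_settings maxpages, pg_settings shared
WHERE maxpages.name = 'bgwriter_lru_maxpages'
  AND shared.name = 'shared_buffers';
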
Also, more memory generally means more filesystem cache, which means you can do more vacuum work per round.

FWIW, on our 512G servers...

cnuapp_prod(at)postgres11(dot)obr=# select name, setting from pg_settings where name ~ 'vacuum_cost';
             name              | setting
-------------------------------+---------
 autovacuum_vacuum_cost_delay  | 10
 autovacuum_vacuum_cost_limit  | -1
 vacuum_cost_delay             | 10
 vacuum_cost_limit             | 2000
 vacuum_cost_page_dirty        | 10
 vacuum_cost_page_hit          | 1
 vacuum_cost_page_miss         | 10
(7 rows)

The page_dirty cost is intentionally the same as the page_miss cost because writes to the SAN are generally far cheaper than reads that actually hit spindles. Of course, with the amount of FS cache we have (512G of RAM less at most 8G of shared buffers), reads are very likely to hit the FS cache anyway, but tuning these settings while watching I/O stats has shown them to be minimally disruptive.
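To put back-of-the-envelope numbers on that: vacuum sleeps vacuum_cost_delay milliseconds each time its accumulated cost reaches vacuum_cost_limit, so limit=2000 with delay=10 allows roughly 200,000 cost units per second. A sketch of the implied worst-case throughput (assuming 8kB pages):

-- 200,000 units/sec, spent entirely on one page type at a time.
SELECT pg_size_pretty((2000 / 0.010 / 10 * 8192)::bigint) AS all_misses_per_sec, -- page_miss = 10
       pg_size_pretty((2000 / 0.010 / 1 * 8192)::bigint)  AS all_hits_per_sec;   -- page_hit = 1

That's about 156MB/s even if every page costs a full miss, and the FS cache makes the real impact considerably gentler.
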
--
Jim C. Nasby, Data Architect jim(at)nasby(dot)net
512.569.9461 (cell) http://jim.nasby.net
