Re: Should we increase the default vacuum_cost_limit?

From: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Julien Rouhaud <rjuju123(at)gmail(dot)com>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Jeremy Schneider <schnjere(at)amazon(dot)com>, Joe Conway <mail(at)joeconway(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Should we increase the default vacuum_cost_limit?
Date: 2019-03-11 09:03:11
Message-ID: CAKJS1f9Rg_dms4JsyNWiisS3BseHjhNB7LWfFJtviZMkoTyj7A@mail.gmail.com
Lists: pgsql-hackers

On Mon, 11 Mar 2019 at 09:58, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> The second patch is a delta that rounds off to the next smaller unit
> if there is one, producing a less noisy result:
>
> regression=# set work_mem = '30.1GB';
> SET
> regression=# show work_mem;
> work_mem
> ----------
> 30822MB
> (1 row)
>
> I'm not sure if that's a good idea or just overthinking the problem.
> Thoughts?

I don't think you're overthinking it. I often have to look at such
settings, and I'm probably not unique: when I glance at 30822MB I can
see that it's roughly 30GB, whereas when I look at 31562138kB, I'm
either counting digits or reaching for a calculator. This will reduce
the time it takes for a human to process the pg_settings output, so I
think it's a good idea.
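
For anyone following along, here's a rough sketch of the arithmetic as
I read the delta patch's behaviour (illustration only, not the actual
guc.c code): a fractional input like '30.1GB' gets rounded to the
nearest multiple of the next smaller unit (MB), so the kB value that
ends up stored displays cleanly.

    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double  gb_input = 30.1;
        long    mb;
        long    stored_kb;

        /* the exact kB value would be 31562137.6 -- noisy to display */
        printf("exact: %.1f kB\n", gb_input * 1024 * 1024);

        /* round to the nearest multiple of the next smaller unit (MB) */
        mb = lround(gb_input * 1024);   /* 30822 */
        stored_kb = mb * 1024;          /* 31561728 */

        printf("stored: %ld kB, shown as %ldMB\n", stored_kb, mb);
        return 0;
    }

i.e. the noisy 31562137.6 kB figure becomes an exact multiple of MB
before it's ever shown.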

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
