Re: Should we increase the default vacuum_cost_limit?

From: Julien Rouhaud <rjuju123(at)gmail(dot)com>
To: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Jeremy Schneider <schnjere(at)amazon(dot)com>, Joe Conway <mail(at)joeconway(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Should we increase the default vacuum_cost_limit?
Date: 2019-03-11 12:57:21
Message-ID: CAOBaU_a2tLyonOMJ62=SiDmo84Xo1fy81YA8K=B+=OtTc3sYSQ@mail.gmail.com
Lists: pgsql-hackers

On Mon, Mar 11, 2019 at 10:03 AM David Rowley
<david(dot)rowley(at)2ndquadrant(dot)com> wrote:
>
> On Mon, 11 Mar 2019 at 09:58, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> > The second patch is a delta that rounds off to the next smaller unit
> > if there is one, producing a less noisy result:
> >
> > regression=# set work_mem = '30.1GB';
> > SET
> > regression=# show work_mem;
> > work_mem
> > ----------
> > 30822MB
> > (1 row)
> >
> > I'm not sure if that's a good idea or just overthinking the problem.
> > Thoughts?
>
> I don't think you're overthinking it. I often have to look at such
> settings, and I'm probably not unique: when I glance at 30822MB I can
> see that's roughly 30GB, whereas when I look at 31562138kB I'm either
> counting digits or reaching for a calculator. This is going to reduce
> the time it takes a human to process the pg_settings output, so I
> think it's a good idea.

Definitely; rounding off like this will spare people from wasting time
checking what the actual value is.
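
To illustrate what the delta does, for anyone skimming: 30.1GB is
30.1 * 1024 = 30822.4MB, which gets printed as 30822MB instead of being
spelled out in kB. A rough standalone sketch of that
round-off-to-the-next-smaller-unit logic follows; the helper name and
the kB..TB ladder are mine, not the patch's, and the real change is
presumably buried in the GUC unit-conversion code and works somewhat
differently:

/*
 * Sketch only: if the value has a fractional part and a smaller unit
 * exists, re-express it one unit down and round, so 30.1GB comes out
 * as 30822MB rather than a kB-sized number.
 */
#include <stdio.h>
#include <string.h>

static void
round_to_smaller_unit(double value, const char *unit,
                      long *out_value, const char **out_unit)
{
    static const char *units[] = {"kB", "MB", "GB", "TB"};
    int         nunits = (int) (sizeof(units) / sizeof(units[0]));
    int         i;

    for (i = 0; i < nunits; i++)
        if (strcmp(unit, units[i]) == 0)
            break;

    if (i == nunits)
    {
        /* unknown unit: just truncate and hand it back */
        *out_value = (long) value;
        *out_unit = unit;
        return;
    }

    if (value != (double) (long) value && i > 0)
    {
        /* fractional value: drop to the next smaller unit */
        value *= 1024;
        i--;
    }

    *out_value = (long) (value + 0.5);  /* round to nearest */
    *out_unit = units[i];
}

int
main(void)
{
    long        v;
    const char *u;

    round_to_smaller_unit(30.1, "GB", &v, &u);
    printf("%ld%s\n", v, u);    /* prints 30822MB */
    return 0;
}

Rounding in the smaller unit keeps the displayed value within half a
unit of what the user typed, which seems like the right trade-off
versus dumping everything in kB.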
