Re: 9.3 feature proposal: vacuumdb -j #

From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)postgresql(dot)org
Cc: Josh Berkus <josh(at)agliodbs(dot)com>
Subject: Re: 9.3 feature proposal: vacuumdb -j #
Date: 2012-01-13 22:09:53
Message-ID: 201201132309.53263.andres@anarazel.de
Lists: pgsql-hackers

On Friday, January 13, 2012 10:50:32 PM Josh Berkus wrote:
> Hackers,
>
> It occurs to me that I would find it quite personally useful if the
> vacuumdb utility was multiprocess capable.
>
> For example, just today I needed to manually analyze a database with
> over 500 tables, on a server with 24 cores. And I needed to know when
> the analyze was done, because it was part of a downtime. I had to
> resort to a python script.
>
> I'm picturing doing this in the simplest way possible: get the list of
> tables and indexes, divide them by the number of processes, and give
> each child process its own list.
That doesn't sound like a good idea. It's way too likely that you will end up
with one backend doing all the work because it happened to get the big tables.

I don't think this task deserves threads or subprocesses. Driving multiple
connections from one process seems way more sensible and mostly avoids the
above problem.

Andres
