On 2012-01-13 22:50, Josh Berkus wrote:
> It occurs to me that I would find it quite personally useful if the
> vacuumdb utility was multiprocess capable.
> For example, just today I needed to manually analyze a database with
> over 500 tables, on a server with 24 cores. And I needed to know when
> the analyze was done, because it was part of a downtime. I had to
> resort to a python script.
> I'm picturing doing this in the simplest way possible: get the list of
> tables and indexes, divide them by the number of processes, and give
> each child process its own list.
> Any reason not to hack on this for 9.3?
I like the idea - but ...
I would prefer an option that lets the user specify how many cores
vacuumdb may use, something like --share-cores=N.
The default would be the total number of cores on the machine, but users
should be able to say:
my machine has 24 cores, but I want vacuumdb to use only 12 of them.
Especially at startups, you find machines that are not dedicated
database machines. Often the database and the web server share a single host.
You might also run several clusters on the same machine, for example to
serve different languages (one cluster per language). I have already seen such a setup.
Other workloads may be running on those cores, so the user should be able
to decide how many cores to share with vacuumdb.
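The simple scheme Josh describes (plus the core cap suggested above) could be sketched roughly like this in Python. The names `partition` and `effective_workers` are hypothetical, and a real implementation would hand each sublist to a child process that issues ANALYZE over its own connection:

```python
import os

def partition(tables, nproc):
    """Round-robin the table list across nproc children,
    so each child process gets its own sublist."""
    chunks = [[] for _ in range(nproc)]
    for i, table in enumerate(tables):
        chunks[i % nproc].append(table)
    return chunks

def effective_workers(share_cores=None):
    """Default to the machine's core count, but honor a
    user-supplied cap (the proposed --share-cores=N)."""
    total = os.cpu_count() or 1
    if share_cores is None:
        return total
    return max(1, min(share_cores, total))
```

For 500 tables with --share-cores=12, `partition` would yield 12 lists of roughly 42 tables each, and the parent would simply wait for all children to finish, so the end of the downtime window is well defined.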
Dipl. Inf. Susanne Ebrecht - 2ndQuadrant
PostgreSQL Development, 24x7 Support, Training and Services