From: Jan Lentfer <Jan(dot)Lentfer(at)web(dot)de>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: 9.3 feature proposal: vacuumdb -j #
Date: 2012-01-13 22:03:16
Message-ID: 4F10AA24.6000608@web.de
Lists: pgsql-hackers
On 2012-01-13 22:50, Josh Berkus wrote:
> It occurs to me that I would find it quite personally useful if the
> vacuumdb utility was multiprocess capable.
>
> For example, just today I needed to manually analyze a database with
> over 500 tables, on a server with 24 cores. And I needed to know when
> the analyze was done, because it was part of a downtime. I had to
> resort to a python script.
>
> I'm picturing doing this in the simplest way possible: get the list of
> tables and indexes, divide them by the number of processes, and give
> each child process its own list.
>
> Any reason not to hack on this for 9.3?
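[The "simplest way possible" quoted above — fetch the table list, divide it by the number of processes, and give each child its own list — could be sketched roughly as follows. This is an illustrative sketch, not the proposed vacuumdb implementation; the function names (`partition`, `analyze_chunk`, `parallel_analyze`) are made up for this example, and it assumes the `vacuumdb` binary is on PATH and can reach the server.]

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def partition(items, n):
    """Round-robin split of items into n roughly equal chunks."""
    return [items[i::n] for i in range(n)]

def analyze_chunk(dbname, tables):
    """One worker: ANALYZE each table in its assigned chunk."""
    for t in tables:
        subprocess.run(["vacuumdb", "--analyze", "--table", t, dbname],
                       check=True)

def parallel_analyze(dbname, tables, jobs):
    """Divide the table list among `jobs` workers, per the proposal."""
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        futures = [pool.submit(analyze_chunk, dbname, chunk)
                   for chunk in partition(tables, jobs)]
        for f in futures:
            f.result()  # re-raise if any vacuumdb invocation failed
```

[With 500 tables and 24 cores as in Josh's example, `parallel_analyze(db, tables, 24)` would hand each worker a chunk of about 21 tables and return only when every chunk is done, which gives the "I need to know when the analyze is done" property he mentions.]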
I don't see any reason not to do it, and plenty of reasons to do it.
Right now I have systems hosting many databases that I need to vacuum
full from time to time. I have wrapped vacuumdb in a shell script so it
actually uses all the capacity that is available. A plain vacuumdb -faz
just isn't that useful on large machines anymore.
Jan
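[Jan's wrapper is a shell script he doesn't show; a hedged sketch of the same idea — replace a serial `vacuumdb -faz` with one `vacuumdb -f -z` per database, run concurrently — might look like this in Python. The names `vacuumdb_cmd`, `list_databases`, and `vacuum_all` are invented for this example, and it assumes `psql` and `vacuumdb` are on PATH with working default connection settings.]

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def vacuumdb_cmd(dbname):
    """Build the per-database command: full vacuum plus analyze."""
    return ["vacuumdb", "--full", "--analyze", dbname]

def list_databases():
    """Ask the server for connectable databases via psql."""
    out = subprocess.run(
        ["psql", "-At", "-c",
         "SELECT datname FROM pg_database WHERE datallowconn"],
        capture_output=True, text=True, check=True).stdout
    return out.split()

def vacuum_all(jobs=4):
    """Run one vacuumdb per database, `jobs` at a time."""
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        list(pool.map(
            lambda db: subprocess.run(vacuumdb_cmd(db), check=True),
            list_databases()))
```

[Parallelizing per database is coarser than the per-table split Josh proposes, but it is what a wrapper around the existing vacuumdb can do today without patching the utility itself.]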
Next Message: Robert Haas, 2012-01-13 22:05:34, Re: Concurrent CREATE TABLE/DROP SCHEMA leaves inconsistent leftovers
Previous Message: Josh Berkus, 2012-01-13 21:50:32, 9.3 feature proposal: vacuumdb -j #