From: | Jeff Janes <jeff(dot)janes(at)gmail(dot)com> |
---|---|
To: | Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> |
Cc: | Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Dilip kumar <dilip(dot)kumar(at)huawei(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, Jan Lentfer <Jan(dot)Lentfer(at)web(dot)de>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Sawada Masahiko <sawada(dot)mshk(at)gmail(dot)com>, Euler Taveira <euler(at)timbira(dot)com(dot)br> |
Subject: | Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ] |
Date: | 2014-09-27 02:55:33 |
Message-ID: | CAMkU=1xdbaw7RSPS1pWhwj7WUiRoh+HNAhV3d2a5zuJjQo3ovQ@mail.gmail.com |
Lists: | pgsql-hackers |
On Fri, Sep 26, 2014 at 11:47 AM, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
wrote:
> Gavin Flower wrote:
>
> > Curious: would it be both feasible and useful to have multiple
> > workers process a 'large' table, without complicating things too
> > much? They could each start at a different position in the file.
>
> Feasible: no. Useful: maybe, we don't really know. (You could just as
> well have a worker at double the speed, i.e. double vacuum_cost_limit).
>
vacuum_cost_delay is already 0 by default, so unless you have changed that,
vacuum_cost_limit does not take effect under vacuumdb.
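To illustrate the point above (a sketch, not part of the original mail; the
GUC names are the real PostgreSQL settings, but the table name is made up):
cost-based throttling only engages when vacuum_cost_delay is nonzero, so at
the default setting vacuum_cost_limit is never consulted for a manual VACUUM.

```shell
# Cost-based vacuum throttling is controlled by vacuum_cost_delay, which
# defaults to 0 for manual VACUUM (what vacuumdb issues). At 0, the delay
# machinery is disabled and vacuum_cost_limit is inert.
psql -c "SHOW vacuum_cost_delay"   # 0 by default for manual VACUUM
psql -c "SHOW vacuum_cost_limit"   # has no effect while the delay is 0

# Only after enabling a nonzero delay does the cost limit start to matter
# ("mytable" is a hypothetical table used for illustration):
psql -c "SET vacuum_cost_delay = '10ms'; VACUUM mytable"
```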
It is pretty easy for vacuum to be CPU limited, and even easier for analyze
to be CPU limited (it does a lot of sorting). I think analyzing is the
main use case for this patch, to shorten the pg_upgrade window. At least,
that is how I anticipate using it.
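A sketch of that use case (my own illustration; the --jobs spelling is taken
from the WIP patch under discussion and could change before commit): analyze
every database with several parallel connections right after pg_upgrade, so
the cluster has statistics as soon as possible.

```shell
# Hypothetical invocation of the parallel vacuumdb this thread proposes:
# --all           operate on every database in the cluster
# --analyze-only  gather statistics without vacuuming
# --jobs 4        use 4 concurrent connections (the patch's new option)
vacuumdb --all --analyze-only --jobs 4
```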
Cheers,
Jeff