Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]

From: Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Dilip kumar <dilip(dot)kumar(at)huawei(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, Jan Lentfer <Jan(dot)Lentfer(at)web(dot)de>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Sawada Masahiko <sawada(dot)mshk(at)gmail(dot)com>, Euler Taveira <euler(at)timbira(dot)com(dot)br>
Subject: Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]
Date: 2014-09-26 18:38:25
Message-ID: 5425B2A1.7040601@archidevsys.co.nz
Lists: pgsql-hackers

On 27/09/14 01:36, Alvaro Herrera wrote:
> Amit Kapila wrote:
>
>> Today, while again thinking about the strategy used in the patch to
>> parallelize the operation (vacuuming a database), I think we can
>> improve it for cases when the number of connections is smaller than
>> the number of tables in the database (which I presume will normally
>> be the case). Currently we send a command to vacuum one table per
>> connection; how about sending multiple commands (for example,
>> "VACUUM t1; VACUUM t2") on one connection? It seems to me there is
>> an extra round trip for cases when there are many small tables in
>> the database and few large tables. Do you think we should optimize
>> for such cases?
> I don't think this is a good idea; at least not in a first cut of this
> patch. It's easy to imagine that a table you initially think is small
> enough turns out to have grown much larger since the last analyze. In
> that case, assigning one worker to process it together with some other
> table could end up being bad for parallelism, if it later turns out
> that some other worker has no table left to process. (Table t2 in your
> example could have grown between the time the command is sent and the
> time t1 is vacuumed.)
>
> It's simpler to have workers do one thing at a time only.
>
> I don't think it's a very good idea to call pg_relation_size() on every
> table in the database from vacuumdb.
>
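
For concreteness, here is roughly what the per-table round trip looks
like in libpq. This is only a sketch, not the patch's code;
vacuum_tables(), the connection, and the table list are assumed
placeholders:

/*
 * Minimal sketch of the one-command-per-round-trip pattern under
 * discussion: every table costs one query/response exchange on its
 * connection.  The batching proposed above would pack several VACUUM
 * statements into one exchange instead.
 */
#include <stdio.h>
#include <libpq-fe.h>

static void
vacuum_tables(PGconn *conn, const char *const *tables, int ntables)
{
	int		i;

	for (i = 0; i < ntables; i++)
	{
		char		query[1024];
		PGresult   *res;

		/* Real code would quote the name, e.g. with PQescapeIdentifier(). */
		snprintf(query, sizeof(query), "VACUUM %s", tables[i]);

		/* Blocks until the server answers: one round trip per table. */
		res = PQexec(conn, query);
		if (PQresultStatus(res) != PGRES_COMMAND_OK)
			fprintf(stderr, "vacuum of %s failed: %s",
					tables[i], PQerrorMessage(conn));
		PQclear(res);
	}
}
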
Curious: would it be both feasible and useful to have multiple workers
process a 'large' table, without complicating things too much? They
could each start at a different position in the file.
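
There is currently no way to ask VACUUM to process only a block range,
so the following is a purely hypothetical sketch of the split such a
scheme would need; worker_range() is an invented helper, and nblocks
would come from something like pg_relation_size() divided by the block
size:

/*
 * Hypothetical only: VACUUM has no block-range interface.  This just
 * shows how a table's blocks might be divided evenly among workers,
 * each starting at a different position as suggested above.
 */
static void
worker_range(unsigned int nblocks, unsigned int nworkers,
			 unsigned int worker, unsigned int *start, unsigned int *end)
{
	unsigned int base = nblocks / nworkers;
	unsigned int rem = nblocks % nworkers;

	/* The first "rem" workers each take one extra block. */
	*start = worker * base + (worker < rem ? worker : rem);
	*end = *start + base + (worker < rem ? 1 : 0);
}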

Cheers,
Gavin
