From: "Matthew T(dot) O'Connor" <matthew(at)zeut(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, "Jim C(dot) Nasby" <jim(at)nasby(dot)net>, Hackers <pgsql-hackers(at)postgresql(dot)org>, Gregory Stark <stark(at)enterprisedb(dot)com>
Subject: Re: autovacuum next steps, take 2
Date: 2007-02-27 03:32:06
Message-ID: 45E3A636.4050602@zeut.net
Lists: pgsql-hackers
Tom Lane wrote:
> BTW, to what extent might this whole problem be simplified if we adopt
> chunk-at-a-time vacuuming (compare current discussion with Galy Lee)?
> If the unit of work has a reasonable upper bound regardless of table
> size, maybe the problem of big tables starving small ones goes away.
So if we adopted chunk-at-a-time, then perhaps each worker would process
the list of tables in OID order (or some other unique and stable order)
and do one chunk per table that needs vacuuming. That way an equal
amount of bandwidth is given to all tables.
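The scheduling idea above can be sketched roughly as follows. This is a hypothetical illustration only, not PostgreSQL code: `Table`, `needs_vacuum`, `vacuum_one_chunk`, and `CHUNK_PAGES` are made-up stand-ins, and the fixed chunk size stands in for whatever bounded unit of work chunk-at-a-time vacuuming would use.

```python
# Sketch of round-robin chunk-at-a-time vacuuming: each pass visits every
# table needing vacuum in OID order and does one bounded chunk of work,
# so a big table cannot starve the small ones. All names here are
# illustrative, not real PostgreSQL APIs.
from dataclasses import dataclass

CHUNK_PAGES = 4  # assumed bounded unit of work per table per pass

@dataclass
class Table:
    oid: int
    dead_pages: int      # pages still needing vacuum (toy model)
    chunks_done: int = 0

def needs_vacuum(t: Table) -> bool:
    return t.dead_pages > 0

def vacuum_one_chunk(t: Table) -> None:
    # Do at most CHUNK_PAGES worth of work, regardless of table size.
    t.dead_pages -= min(CHUNK_PAGES, t.dead_pages)
    t.chunks_done += 1

def worker_pass(tables: list[Table]) -> None:
    # Process tables in OID order (a unique, stable order), one chunk
    # per table that needs vacuuming.
    for t in sorted(tables, key=lambda t: t.oid):
        if needs_vacuum(t):
            vacuum_one_chunk(t)

big = Table(oid=1, dead_pages=100)
small = Table(oid=2, dead_pages=4)
worker_pass([big, small])
# After one pass, each table has received exactly one chunk of work,
# so the small table finishes long before the big one monopolizes anything.
```

The point of the bounded chunk is that the worker's time per table has a fixed upper limit, which is what makes the equal-bandwidth claim hold.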
That does sound simpler. Is chunk-at-a-time a realistic option for 8.3?
Matt