From: "Bossart, Nathan" <bossartn(at)amazon(dot)com>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: [Proposal] Allow users to specify multiple tables in VACUUM commands
Date: 2017-09-05 17:36:46
Message-ID: 04F3AF54-315E-4A2D-97C6-86E9EBCB6E42@amazon.com
Lists: pgsql-hackers
On 9/4/17, 8:16 PM, "Michael Paquier" <michael(dot)paquier(at)gmail(dot)com> wrote:
> So vacuum_multiple_tables_v14.patch is good for a committer in my
> opinion. With this patch, if the same relation is specified multiple
> times, then it gets vacuumed that many times. Specifying the same
> column multiple times results in an error, as on HEAD, but that's not
> a new problem with this patch.
Thanks!
> So I would tend to think that the same column specified multiple times
> should cause an error, and that we could let VACUUM run N times on a
> relation if it is specified that many times. This feels more natural,
> at least to me, and it keeps the code simple.
I think that is a reasonable approach. Another option I was thinking
about was to de-duplicate only the individual column lists. This
alternative approach might be a bit more user-friendly, but I am
beginning to agree with you that perhaps we should not try to infer
the intent of the user in these "duplicate" scenarios.
I'll work on converting the existing de-duplication patch into
something more like what you suggested.
Nathan