From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: removal of dangling temp tables
Date: 2018-12-15 02:02:20
Message-ID: 20181215020220.xk7j2trqbqtqmtdc@alvherre.pgsql
Lists: pgsql-hackers

On 2018-Dec-14, Robert Haas wrote:
> On Fri, Dec 14, 2018 at 12:27 PM Alvaro Herrera
> <alvherre(at)2ndquadrant(dot)com> wrote:
> > Maybe it'd be better to change temp table removal to always drop
> > max_locks_per_transaction objects at a time (ie. commit/start a new
> > transaction every so many objects).
>
> We're basically just doing DROP SCHEMA ... CASCADE, so I'm not sure
> how we'd implement that, but I agree it would be significantly better.
(Minor nit: even currently, we don't drop the schema itself, only the
objects it contains.)
I was thinking we could scan pg_depend for objects depending on the
schema, add them to an ObjectAddresses array, and do
performMultipleDeletions once every max_locks_per_transaction objects.
But in order for this to have any useful effect we'd have to commit the
transaction and start another one; maybe that's too onerous.
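
As an untested sketch of that idea (the function name and overall structure
here are only illustrative, not a patch): each round collects at most
max_locks_per_transaction pg_depend entries that reference the temp schema,
drops them with performMultipleDeletions, and commits before scanning again,
so objects already removed by an earlier batch's cascade simply don't show
up in the next round:

#include "postgres.h"

#include "access/genam.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "access/xact.h"
#include "catalog/dependency.h"
#include "catalog/indexing.h"
#include "catalog/pg_depend.h"
#include "catalog/pg_namespace.h"
#include "storage/lock.h"
#include "utils/fmgroids.h"

/*
 * Untested sketch: drop everything that pg_depend says hangs off the temp
 * schema, at most max_locks_per_transaction objects per transaction.
 * Re-scanning pg_depend each round means objects already removed by an
 * earlier batch's cascade are not seen again.
 */
static void
RemoveTempSchemaObjectsInBatches(Oid tempNamespaceId)
{
    bool        done = false;

    while (!done)
    {
        Relation    depRel;
        ScanKeyData key[2];
        SysScanDesc scan;
        HeapTuple   tup;
        ObjectAddresses *targets;
        int         ntargets = 0;

        depRel = table_open(DependRelationId, AccessShareLock);

        ScanKeyInit(&key[0],
                    Anum_pg_depend_refclassid,
                    BTEqualStrategyNumber, F_OIDEQ,
                    ObjectIdGetDatum(NamespaceRelationId));
        ScanKeyInit(&key[1],
                    Anum_pg_depend_refobjid,
                    BTEqualStrategyNumber, F_OIDEQ,
                    ObjectIdGetDatum(tempNamespaceId));

        scan = systable_beginscan(depRel, DependReferenceIndexId, true,
                                  NULL, 2, key);

        targets = new_object_addresses();

        /* Collect at most max_locks_per_transaction objects this round. */
        while (ntargets < max_locks_per_transaction &&
               HeapTupleIsValid(tup = systable_getnext(scan)))
        {
            Form_pg_depend dep = (Form_pg_depend) GETSTRUCT(tup);
            ObjectAddress obj;

            obj.classId = dep->classid;
            obj.objectId = dep->objid;
            obj.objectSubId = dep->objsubid;
            add_exact_object_address(&obj, targets);
            ntargets++;
        }

        systable_endscan(scan);
        table_close(depRel, AccessShareLock);

        if (ntargets > 0)
            performMultipleDeletions(targets, DROP_CASCADE,
                                     PERFORM_DELETION_INTERNAL |
                                     PERFORM_DELETION_QUIETLY |
                                     PERFORM_DELETION_SKIP_EXTENSIONS);
        free_object_addresses(targets);

        /* A short round means nothing (or nothing more) is left. */
        done = (ntargets < max_locks_per_transaction);

        if (!done)
        {
            /*
             * Release this batch's locks before starting the next one.
             * This commit-and-restart is the part that may be too onerous.
             */
            CommitTransactionCommand();
            StartTransactionCommand();
        }
    }
}

(The cascade within a batch may still need a few more lock slots than the
batch size, so this only bounds the lock usage roughly.)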
Maybe we could offer this as a special-case behavior, to be used only when
the regular mechanism fails: add a PG_TRY which, on failure, emits a hint to
do the cleanup. Not sure this is worthwhile.
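
For illustration only (this assumes the wrapper sits around the existing
RemoveTempRelations() call, and the error wording is made up), that fallback
could look roughly like:

    /*
     * Illustration only: try the normal cleanup first; if it fails (for
     * instance because the shared lock table fills up), re-throw the error
     * with a hint pointing at the alternative cleanup.
     */
    PG_TRY();
    {
        RemoveTempRelations(tempNamespaceId);
    }
    PG_CATCH();
    {
        ErrorData  *edata;

        edata = CopyErrorData();
        FlushErrorState();

        ereport(ERROR,
                (errmsg("could not clean up orphaned temporary objects: %s",
                        edata->message),
                 errhint("Consider raising max_locks_per_transaction, or "
                         "perform the cleanup in batches.")));
    }
    PG_END_TRY();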
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services