BUG #16614: Stale temporary objects make vacuum ineffective when 1 million transactions remain

From: PG Bug reporting form <noreply(at)postgresql(dot)org>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Cc: gsaviane(at)gmail(dot)com
Subject: BUG #16614: Stale temporary objects make vacuum ineffective when 1 million transactions remain
Date: 2020-09-10 08:45:19
Message-ID: 16614-23e5c4c4567d1576@postgresql.org
Lists: pgsql-bugs

The following bug has been logged on the website:

Bug reference: 16614
Logged by: Giorgio Saviane
Email address: gsaviane(at)gmail(dot)com
PostgreSQL version: 9.5.23
Operating system: Linux
Description:

We have seen the issue described below occur multiple times across several
of our Postgres 9.5 deployments.

The application makes extensive use of temporary tables. Although there are
scheduled jobs that regularly vacuum the database, after some months of
uptime the database runs up against the transaction ID wraparound protection
limit, with fewer than one million transactions remaining. Even though we
perform manual maintenance by vacuuming in single-user mode, the server
keeps returning the same warning and hint:

WARNING: database "mydb" must be vacuumed within (X<1000000) transactions
HINT: To avoid a database shutdown, execute a database-wide VACUUM in
"mydb".

Any further VACUUM command issued completes, but the warning keeps showing
up and the number of remaining transactions keeps decreasing.
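
For cross-checking, a generic catalog query along these lines (illustrative,
not taken from our monitoring scripts) shows the age of each database's
oldest unfrozen transaction ID; the count in the warning shrinks as this age
approaches the wraparound limit, and a database-wide VACUUM is normally
expected to bring the age back down:

SELECT datname,
       age(datfrozenxid) AS datfrozenxid_age
FROM pg_database
ORDER BY datfrozenxid_age DESC;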

Only removing all the stale temporary objects with

DO $$
DECLARE
    rec RECORD;
BEGIN
    FOR rec IN
        SELECT 'DROP SCHEMA ' || quote_ident(nspname) || ' CASCADE' AS stmt
        FROM pg_catalog.pg_namespace
        WHERE nspname LIKE 'pg_temp%' OR nspname LIKE 'pg_toast_temp%'
    LOOP
        -- Drops every temporary schema (and its TOAST counterpart),
        -- including those belonging to currently connected sessions.
        EXECUTE rec.stmt;
    END LOOP;
END;
$$;

makes the vacuum command complete without warnings.

We think that the stale temporary objects are somehow blocking the vacuuming
of the database, presumably because their old relfrozenxid keeps VACUUM from
advancing the database's datfrozenxid.
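
Before dropping anything, the relations holding the oldest relfrozenxid can
be listed with a query of this kind (purely illustrative, not part of our
application); if the top entries live in pg_temp_N / pg_toast_temp_N
schemas, that matches the explanation above:

SELECT n.nspname,
       c.relname,
       age(c.relfrozenxid) AS relfrozenxid_age
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 't')  -- ordinary tables and TOAST tables
ORDER BY relfrozenxid_age DESC
LIMIT 20;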
