Re: PG12 autovac issues

From: Andres Freund <andres(at)anarazel(dot)de>
To: Julien Rouhaud <rjuju123(at)gmail(dot)com>
Cc: Justin King <kingpin867(at)gmail(dot)com>, pgsql-general(at)lists(dot)postgresql(dot)org, michael(at)paquier(dot)xyz, kgrittn(at)gmail(dot)com
Subject: Re: PG12 autovac issues
Date: 2020-03-23 20:00:51
Message-ID: 20200323200051.qzyqvnips7o3ai4g@alap3.anarazel.de
Lists: pgsql-admin pgsql-general

Hi,

On 2020-03-23 20:47:25 +0100, Julien Rouhaud wrote:
> > - relfrozenxid, age(relfrozenxid) for the oldest table in the oldest database
> > SELECT oid::regclass, age(relfrozenxid), relfrozenxid FROM pg_class WHERE relfrozenxid <> 0 ORDER BY age(relfrozenxid) DESC LIMIT 1;
>
> The VM has since been trashed, and unfortunately I don't have that level of
> detail available in the gathered stats (and the information I do have is a
> little bit messy, sorry for that).
>
> > - Oldest backend xmin
> > SELECT pid, backend_xmin, age(backend_xmin) FROM pg_stat_activity WHERE backend_xmin <> 0 ORDER BY age(backend_xmin) DESC LIMIT 3;
> > - oldest replication xmin:
> > SELECT pid, backend_xmin, age(backend_xmin) FROM pg_stat_replication WHERE backend_xmin <> 0 ORDER BY age(backend_xmin) DESC LIMIT 3;
> > - oldest slot xmin:
> > SELECT slot_name, xmin, age(xmin), catalog_xmin, age(catalog_xmin) FROM pg_replication_slots WHERE xmin <> 0 OR catalog_xmin <> 0 ORDER BY greatest(age(xmin), age(catalog_xmin)) DESC LIMIT 3;
> > - oldest prepared transaction (unfortunately xmin not available)
> > SELECT gid, database, transaction FROM pg_prepared_xacts ORDER BY age(transaction) LIMIT 3;
>
> I have output from this query: https://github.com/OPMDG/check_pgactivity/blob/oldest_xmin/check_pgactivity#L5669-L5732
> (which may be dumb). Here are the non-null rows after 8AM GMT+1:

Could you share what the config of the server was?
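
Just the non-default settings would be plenty. Something along these lines
should be enough to capture them (a rough sketch, adjust as needed):

SELECT name, setting, source FROM pg_settings WHERE source NOT IN ('default', 'override');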

> > > The same bench was run against pg11 many times and never triggered this issue.
> > > So far our best guess is a side effect of 2aa6e331ead7.
> >
> > In that case you'd likely see DEBUG1 output, right? Did you try running
> > with that?
>
> That's unfortunately not an option: while the issue is reproducible, it
> happens after roughly 16h at an average of 6.5ktps, and we definitely don't
> have enough disk space for that amount of logs. We'll try next time with a
> recompiled version with the DEBUG1 messages promoted to WARNING, and maybe
> some other elog calls as well.

You wouldn't need to run it with DEBUG1 the whole time; you could just
change the setting and reload the config once you hit the issue.
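
E.g. once the benchmark gets into the bad state, something along these lines
(assuming ALTER SYSTEM is an option; editing postgresql.conf and reloading
works just as well):

ALTER SYSTEM SET log_min_messages = 'debug1';
SELECT pg_reload_conf();
-- and once enough has been captured:
ALTER SYSTEM RESET log_min_messages;
SELECT pg_reload_conf();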

> > > Michael and I have been trying to reproduce this issue locally (drastically
> > > reducing the various freeze_age parameters) for hours, but no luck for now.
> >
> > Have you considered using gdb to investigate?
>
> Not for now, because it's hard to be around for the ~ 90min interval the system
> stays alive after autovacuum gets stuck, especially since it's a weekend bench.
>
> > > This is using a vanilla pg 12.1, with some OLTP workload. The only possibly
> > > relevant configuration changes are quite aggressive autovacuum settings on some
> > > tables (no delay, analyze/vacuum threshold to 1k and analyze/vacuum scale
> > > factor to 0, for both heap and toast).
> >
> > That, uh, is not an insignificant set of changes for an autovac
> > scheduling issue.
> >
> > It sounds like what might be happening is that you have something
> > holding back the "oldest needed transaction" horizon. Before
> > 2aa6e331ead7, if the xmin horizon hadn't increased, there'd be repeated
> > vacuums unable to increase the horizon; afterwards they all immediately
> > exit without any messages.
>
> The xmin horizon is increasing AFAICS. Grepping the autovacuum logs, I can see
> for the 08:00 AM UTC - 08:59 AM UTC interval the oldest xmin going from
> 200069684 to 210363848. E.g.:

Which database is this on?
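
It'd also be useful to capture the per-database horizons alongside next time,
e.g. something like:

SELECT datname, datfrozenxid, age(datfrozenxid) FROM pg_database ORDER BY age(datfrozenxid) DESC;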

> [...]
> 266603:2020-03-14 08:49:59.927 UTC LOG: automatic aggressive vacuum to prevent wraparound of table "pg_toast.pg_toast_27683": index scans: 0
> 266604- pages: 0 removed, 0 remain, 0 skipped due to pins, 0 skipped frozen
> 266605- tuples: 0 removed, 0 remain, 0 are dead but not yet removable, oldest xmin: 209635921
> 266606- buffer usage: 25 hits, 1 misses, 1 dirtied
> 266607- avg read rate: 15.440 MB/s, avg write rate: 15.440 MB/s
> 266608- system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
> 266609:2020-03-14 08:49:59.929 UTC LOG: automatic aggressive vacuum to prevent wraparound of table "pg_toast.pg_toast_27694": index scans: 0
> 266610- pages: 0 removed, 0 remain, 0 skipped due to pins, 0 skipped frozen
> 266611- tuples: 0 removed, 0 remain, 0 are dead but not yet removable, oldest xmin: 209635934
> 266612- buffer usage: 25 hits, 1 misses, 1 dirtied
> 266613- avg read rate: 21.058 MB/s, avg write rate: 21.058 MB/s
> 266614- system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
> 266615:2020-03-14 08:49:59.931 UTC LOG: automatic aggressive vacuum to prevent wraparound of table "pg_toast.pg_toast_24845": index scans: 0
> 266616- pages: 0 removed, 0 remain, 0 skipped due to pins, 0 skipped frozen
> 266617- tuples: 0 removed, 0 remain, 0 are dead but not yet removable, oldest xmin: 209635934
> 266618- buffer usage: 25 hits, 1 misses, 2 dirtied
> 266619- avg read rate: 8.005 MB/s, avg write rate: 16.009 MB/s
> 266620- system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
> 266621:2020-03-14 08:49:59.945 UTC LOG: automatic aggressive vacuum to prevent wraparound of table "pg_toast.pg_toast_27712": index scans: 0
> 266622- pages: 0 removed, 0 remain, 0 skipped due to pins, 0 skipped frozen
> 266623- tuples: 0 removed, 0 remain, 0 are dead but not yet removable, oldest xmin: 209635953
> [...]

Do you have any non-toast ones?

> > I wonder if what might be happening is that we somehow missed/failed
> > to update relfrozenxid and/or datfrozenxid. If you manually vacuum some
> > table in the oldest database, but that is *NOT* the oldest table itself,
> > does the problem "resolve" itself?
>
> I'll also add a pg_class snapshot for the next time we run the bench. I'm not sure
> if we'll be able to catch the 1h45 interval when the system stays alive
> after the issue though.

Could you just script something to stop the benchmark once the disk is
90% full or so?

Did you see any errors / fatals around the time autovacuum stopped
working?

Greetings,

Andres Freund
