Re: Re: [COMMITTERS] pgsql: Avoid extra locks in GetSnapshotData if old_snapshot_threshold <

From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Ants Aasma <ants(dot)aasma(at)eesti(dot)ee>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Noah Misch <noah(at)leadboat(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Re: [COMMITTERS] pgsql: Avoid extra locks in GetSnapshotData if old_snapshot_threshold <
Date: 2016-04-22 13:06:05
Message-ID: CACjxUsNVtTGJ9b+bqytYMpG7sY=aN_+4z0XXEagxpqU_HYDuCg@mail.gmail.com
Lists: pgsql-committers pgsql-hackers

On Thu, Apr 21, 2016 at 4:13 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:

> I have your test case running, and it is not immediately
> clear why old rows are not being vacuumed away.

I have not yet found why the vacuuming is not as aggressive as it
should be with this old_snapshot_threshold setting, but I left your
test case running overnight and found that it did eventually kick
in. So the question is why it was not nearly as aggressive as one
would expect.

From the server log:

kgrittn(at)Kevin-Desktop:~/pg/master$ grep -B2 -A3 'tuples: [1-9]' Debug/data/pg_log/postgresql.log
[2016-04-21 16:21:29.658 CDT] LOG: automatic vacuum of table
"kgrittn.public.high_throughput": index scans: 1
pages: 0 removed, 2759 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 94 removed, 159928 remain, 158935 are dead but not yet removable
buffer usage: 6005 hits, 0 misses, 8 dirtied
avg read rate: 0.000 MB/s, avg write rate: 0.090 MB/s
system usage: CPU 0.00s/0.08u sec elapsed 0.69 sec
--
[2016-04-21 16:55:31.971 CDT] LOG: automatic vacuum of table
"kgrittn.pg_catalog.pg_statistic": index scans: 1
pages: 0 removed, 23 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 2 removed, 515 remain, 128 are dead but not yet removable
buffer usage: 50 hits, 11 misses, 14 dirtied
avg read rate: 4.048 MB/s, avg write rate: 5.152 MB/s
system usage: CPU 0.00s/0.00u sec elapsed 0.02 sec
--
[2016-04-22 00:33:11.978 CDT] LOG: automatic vacuum of table
"kgrittn.pg_catalog.pg_statistic": index scans: 1
pages: 0 removed, 68 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 1016 removed, 409 remain, 22 are dead but not yet removable
buffer usage: 89 hits, 127 misses, 111 dirtied
avg read rate: 1.478 MB/s, avg write rate: 1.292 MB/s
system usage: CPU 0.00s/0.00u sec elapsed 0.67 sec
[2016-04-22 00:33:18.572 CDT] LOG: automatic vacuum of table
"kgrittn.public.high_throughput": index scans: 1
pages: 0 removed, 20196 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 292030 removed, 3941 remain, 3553 are dead but not yet removable
buffer usage: 68541 hits, 14415 misses, 20638 dirtied
avg read rate: 1.674 MB/s, avg write rate: 2.396 MB/s
system usage: CPU 0.23s/1.09u sec elapsed 67.29 sec
--
[2016-04-22 00:52:13.013 CDT] LOG: automatic vacuum of table
"kgrittn.public.high_throughput": index scans: 1
pages: 0 removed, 20233 remain, 0 skipped due to pins, 19575 skipped frozen
tuples: 8463 removed, 30533 remain, 28564 are dead but not yet removable
buffer usage: 8136 hits, 4 misses, 158 dirtied
avg read rate: 0.027 MB/s, avg write rate: 1.065 MB/s
system usage: CPU 0.00s/0.03u sec elapsed 1.15 sec
--
[2016-04-22 01:28:22.812 CDT] LOG: automatic vacuum of table
"kgrittn.pg_catalog.pg_statistic": index scans: 1
pages: 0 removed, 68 remain, 0 skipped due to pins, 44 skipped frozen
tuples: 26 removed, 760 remain, 108 are dead but not yet removable
buffer usage: 37 hits, 27 misses, 12 dirtied
avg read rate: 4.963 MB/s, avg write rate: 2.206 MB/s
system usage: CPU 0.00s/0.00u sec elapsed 0.04 sec
--
[2016-04-22 06:51:23.042 CDT] LOG: automatic vacuum of table
"kgrittn.pg_catalog.pg_statistic": index scans: 1
pages: 0 removed, 68 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 692 removed, 403 remain, 16 are dead but not yet removable
buffer usage: 90 hits, 109 misses, 76 dirtied
avg read rate: 1.646 MB/s, avg write rate: 1.148 MB/s
system usage: CPU 0.00s/0.00u sec elapsed 0.51 sec
[2016-04-22 06:52:45.174 CDT] LOG: automatic vacuum of table
"kgrittn.public.high_throughput": index scans: 1
pages: 0 removed, 28152 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 928116 removed, 14021 remain, 14021 are dead but not yet removable
buffer usage: 88738 hits, 33068 misses, 45857 dirtied
avg read rate: 1.811 MB/s, avg write rate: 2.511 MB/s
system usage: CPU 0.43s/1.93u sec elapsed 142.68 sec
--
[2016-04-22 06:53:23.665 CDT] LOG: automatic vacuum of table
"kgrittn.public.high_throughput": index scans: 1
pages: 0 removed, 28313 remain, 0 skipped due to pins, 27853 skipped frozen
tuples: 43 removed, 9002 remain, 8001 are dead but not yet removable
buffer usage: 9795 hits, 22 misses, 28 dirtied
avg read rate: 0.159 MB/s, avg write rate: 0.202 MB/s
system usage: CPU 0.00s/0.03u sec elapsed 1.08 sec
--
[2016-04-22 07:22:25.240 CDT] LOG: automatic vacuum of table
"kgrittn.public.high_throughput": index scans: 1
pages: 0 removed, 28313 remain, 0 skipped due to pins, 25627 skipped frozen
tuples: 51 removed, 149639 remain, 147733 are dead but not yet removable
buffer usage: 14227 hits, 18 misses, 15 dirtied
avg read rate: 0.089 MB/s, avg write rate: 0.074 MB/s
system usage: CPU 0.01s/0.10u sec elapsed 1.58 sec
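To make the trend easier to see, the "dead but not yet removable" counts can be pulled out of autovacuum log lines like the ones above with a short script. This is just a sketch assuming the exact log wording shown here; the helper name `summarize` is mine, not part of any PostgreSQL tooling:

```python
import re

# Matches lines of the form:
#   tuples: N removed, N remain, N are dead but not yet removable
PATTERN = re.compile(
    r"tuples: (\d+) removed, (\d+) remain, (\d+) are dead but not yet removable"
)

def summarize(log_text):
    """Return one (removed, remain, not_yet_removable) tuple per vacuum run."""
    return [tuple(map(int, m)) for m in PATTERN.findall(log_text)]

sample = """\
tuples: 94 removed, 159928 remain, 158935 are dead but not yet removable
tuples: 292030 removed, 3941 remain, 3553 are dead but not yet removable
"""
for removed, remain, dead in summarize(sample):
    print(f"removed={removed} remain={remain} dead_not_removable={dead}")
```

Run against the whole log, the interesting signal is how long the not-yet-removable count stays high between runs.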

From the output stream of the Python test script:

[00:32:00] Counted 1000 rows with max 1094 in high_throughput table
[00:32:05] High throughput table size @ 31240s. Size 161624kB Last vacuum 0:00:42.664285 ago
[00:32:10] Interfering query got error snapshot too old

[00:32:10] Waiting 3min to restart interfering query

[06:50:00] Counted 1000 rows with max 1176984 in high_throughput table
[06:50:05] High throughput table size @ 53920s. Size 225088kB Last vacuum 0:00:23.685084 ago
[06:50:10] Interfering query got error snapshot too old

[06:50:10] Waiting 3min to restart interfering query
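For anyone following along who wants to reproduce this, the "snapshot too old" errors above are governed by the old_snapshot_threshold GUC; a minimal configuration for this style of test would look something like the fragment below. The values are illustrative assumptions on my part, not the actual settings from Ants's test case:

```
# postgresql.conf (illustrative values, not the actual test settings)
old_snapshot_threshold = 1min    # age at which "snapshot too old" may be raised
autovacuum_naptime = 15s         # check for vacuum work frequently
log_autovacuum_min_duration = 0  # log every autovacuum run, as in the log above
```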

I don't see any evidence that it returned incorrect query results
at any point, and it did eventually limit bloat; the problem is
that it is not limiting bloat nearly as tightly as one would expect
in this particular test case.

I'm continuing to investigate.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
