From: Viral Shah <viralshah009(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Rene Romero Benavides <rene(dot)romero(dot)b(at)gmail(dot)com>, pgsql-sql(at)postgresql(dot)org
Subject: Re: pg_dump fails when a table is in ACCESS SHARE MODE
Date: 2020-05-02 01:06:23
Message-ID: CAEVFvu3t5Vtg6PTbY9aB3Sj6-p2GsduCPVGDL4gN_X16wqHUyA@mail.gmail.com
Lists: pgsql-sql
Hello Tom,
I have about 7400 tables in my database. When I ran a select on pg_locks
while a pg_dump was in progress, it returned about 7400 rows, all
originating from the one process id belonging to pg_dump (which was running
select pg_catalog.pg_get_statisticsobjdef() at the time).
Now, this does explain why I need to increase max_locks_per_transaction so
that the lock table can hold all the locks pg_dump acquires while the dump
is in progress. I am still trying to understand why
pg_get_statisticsobjdef() takes AccessShare locks on all the base tables at
once, leading to this issue.
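
For reference, the per-backend lock counts can be seen with a query along
these lines (a minimal sketch; pg_locks and pg_stat_activity are the
standard system views, and the join only adds the current query text):

  SELECT a.pid, l.mode, count(*) AS lock_count, a.query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid
  WHERE l.locktype = 'relation'
  GROUP BY a.pid, l.mode, a.query
  ORDER BY lock_count DESC;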
Thanks,
Viral Shah
Senior Data Analyst, Nodal Exchange LLC
viralshah009(at)gmail(dot)com | (240) 645 7548
On Fri, May 1, 2020 at 10:18 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Viral Shah <viralshah009(at)gmail(dot)com> writes:
> > Per the postgres documentation,
> > max_locks_per_transaction * (max_connections + max_prepared_transactions)
> > this formula determines the max number of objects that can be locked on
> > the database. Currently my database has the following values:
> > max_locks_per_transaction = 64 (default)
> > max_connections = 100
> > max_prepared_transactions = 0
> > Using these values in the above formula tells us that the postgres
> > server can/should handle 6400 locked objects at a time.
>
> Right.
>
> > What is surprising is why Postgres complains of insufficient locks per
> > transaction if only 10 processes (parallel jobs in pg_dump) are running
> > on the database while taking the dump.
>
> They're taking more than 6400 locks, evidently. How many tables are
> in your database? Have you tried looking into pg_locks while the dump
> is running?
>
> regards, tom lane
>
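
P.S. Spelling out the arithmetic from the quoted formula: the current
settings give 64 * (100 + 0) = 6400 lock-table slots, while a single
pg_dump transaction here needs roughly 7400 relation locks, so the lock
table overflows. A minimal sketch of the adjustment, assuming superuser
access (128 is only an illustrative value; the parameter takes effect only
after a server restart):

  -- current capacity: 64 * (100 + 0) = 6400 slots, fewer than ~7400 tables
  ALTER SYSTEM SET max_locks_per_transaction = 128;  -- 128 * 100 = 12800 slots
  -- restart the server for the new setting to take effect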