Re: table partitioning & max_locks_per_transaction

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Brian Karlak <zenkat(at)metaweb(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: table partitioning & max_locks_per_transaction
Date: 2009-10-11 02:56:52
Message-ID: 26276.1255229812@sss.pgh.pa.us
Lists: pgsql-performance

Brian Karlak <zenkat(at)metaweb(dot)com> writes:
> "out of shared memory HINT: You might need to increase
> max_locks_per_transaction"

You want to do what it says ...

> 1) We've already tuned postgres to use ~2GB of shared memory -- which
> is SHMMAX for our kernel. If I try to increase
> max_locks_per_transaction, postgres will not start because our shared
> memory is exceeding SHMMAX. How can I increase
> max_locks_per_transaction without having my shared memory requirements
> increase?

Back off shared_buffers a bit? 2GB is certainly more than enough
to run Postgres in.
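
For instance, something along these lines in postgresql.conf (the
numbers are only illustrative, and both settings need a server restart):

	shared_buffers = 1800MB            # backed off from 2GB
	max_locks_per_transaction = 256    # up from the default of 64

The lock table needs room for roughly max_locks_per_transaction *
(max_connections + max_prepared_transactions) entries at a couple
hundred bytes apiece, so even a generous increase there costs far less
shared memory than you get back by trimming shared_buffers a little.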

> 2) Why do I need locks for all of my subtables, anyways? I have
> constraint_exclusion on. The query planner tells me that I am only
> using three tables for the queries that are failing. Why are all of
> the locks getting allocated?

Because the planner has to look at all the subtables and make sure
that they in fact don't match the query. So it takes AccessShareLock
on each one, which is the minimum strength lock needed to be sure that
the table definition isn't changing underneath you. Without *some* lock
it's not really safe to examine the table at all.
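
As a quick illustration, you can watch those locks accumulate from a
second session while a query on the parent table sits in an open
transaction (table name made up):

	BEGIN;
	SELECT count(*) FROM measurements WHERE logdate = '2009-10-01';

	-- meanwhile, in another session:
	SELECT relation::regclass, mode
	  FROM pg_locks
	 WHERE mode = 'AccessShareLock' AND relation IS NOT NULL;

You'll see one AccessShareLock row per child table, even though
constraint exclusion keeps all but a few of them out of the final plan.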

regards, tom lane
