Analyse - max_locks_per_transaction - why?

From: Phil Endecott <spam_from_postgresql_general(at)chezphil(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Analyse - max_locks_per_transaction - why?
Date: 2004-11-10 15:22:41
Message-ID: 41923241.9040906@chezphil.org
Lists: pgsql-general

Dear PostgreSQL experts,

This is with version 7.4.2.

My database has grown a bit recently, mostly in number of tables but
also their size, and I started to see ANALYSE failing with this message:

WARNING: out of shared memory
ERROR: out of shared memory
HINT: You may need to increase max_locks_per_transaction.

So I increased max_locks_per_transaction from 64 to 200 and, after doing
a /etc/init.d/postgresql restart rather than a
/etc/init.d/postgresql reload, it seems to work again.
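
For reference, the change itself is just one line in postgresql.conf;
as far as I can tell this setting is only read at server start, which
would explain why a reload was not enough:

  # increased from the default of 64; needs a restart, not just a reload
  max_locks_per_transaction = 200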

Naively I imagined that ANALYSE looks at each table in turn,
independently. So why does it need more locks when there are more
tables? Isn't "ANALYSE" with no parameter equivalent to

for i in all_tables_in_database {
    ANALYSE i;
}

I'm working in a memory-poor environment (a user-mode-linux virtual
machine) and I'm a bit concerned about the memory overhead if I have to
keep increasing max_locks_per_transaction just to keep ANALYSE happy.

As an aside, what I really need in this particular case is to analyse
all of the tables in a particular schema. Having "ANALYSE schemaname"
or "ANALYSE schemaname.*" would be great. I presume that I can write a
function to get the same effect - has anyone already done that?
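
In case it is useful, this is roughly the sort of thing I had in mind:
a completely untested sketch, assuming plpgsql is installed in the
database, and with "analyse_schema" and the pg_class/pg_namespace query
being nothing more than my guess at how to do it:

  CREATE OR REPLACE FUNCTION analyse_schema(name) RETURNS integer AS '
  DECLARE
      schemaname ALIAS FOR $1;
      tab RECORD;
      n integer := 0;
  BEGIN
      -- find every ordinary table in the given schema
      FOR tab IN
          SELECT c.relname
          FROM pg_class c JOIN pg_namespace ns ON ns.oid = c.relnamespace
          WHERE ns.nspname = schemaname AND c.relkind = ''r''
      LOOP
          -- analyse this one table
          EXECUTE ''ANALYZE '' || quote_ident(schemaname) || ''.''
                               || quote_ident(tab.relname);
          n := n + 1;
      END LOOP;
      RETURN n;
  END;
  ' LANGUAGE plpgsql;

  -- e.g.  SELECT analyse_schema('myschema');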

Regards,

Phil Endecott.
