Re: pg_dump out of shared memory

From: tfo(at)alumni(dot)brown(dot)edu (Thomas F(dot) O'Connell)
To: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_dump out of shared memory
Date: 2004-06-21 15:07:00
Message-ID: 80c38bb1.0406210707.50894a15@posting.google.com
Lists: pgsql-general

tfo(at)alumni(dot)brown(dot)edu (Thomas F. O'Connell) wrote in message news:
> postgresql.conf just has the default of 1000 shared_buffers. The
> database itself has thousands of tables, some of which have rows
> numbering in the millions. Am I correct in thinking that, despite the
> hint, it's more likely that I need to up the shared_buffers?

So the answer here, verified by Tom Lane and by my own remedy to the
problem, is "no". Now I'm curious: why does pg_dump require that
max_connections * max_locks_per_transaction be greater than the number
of objects in the database? Or, if that's not the right assumption
about how pg_dump works, how does pg_dump obtain its locks, and why is
the resulting error that it runs out of shared memory? Is there a
portion of shared memory set aside for locks? What is the shared lock
table?
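
For reference, here is a rough sketch of the arithmetic as I
understand it: the shared lock table appears to hold on the order of
max_connections * max_locks_per_transaction entries, and pg_dump takes
an AccessShareLock on every table it dumps, so a database with
thousands of tables can exhaust the default of 64 locks per
transaction. The queries and setting below are only illustrative of
that sizing check, not an exact account of the server's bookkeeping:

    -- Count the plain tables pg_dump would need to lock in this database.
    SELECT count(*) AS tables_to_lock
      FROM pg_class
     WHERE relkind = 'r';

    -- Current lock-table sizing knobs.
    SHOW max_connections;
    SHOW max_locks_per_transaction;

    # postgresql.conf -- raise the lock table rather than shared_buffers.
    # 250 is an illustrative value, not a recommendation; changing this
    # setting requires a postmaster restart since it resizes shared memory.
    max_locks_per_transaction = 250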

-tfo
