From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Hans-Juergen Schoenig <postgres(at)cybertec(dot)at>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org, Zoltan Boszormenyi <zb(at)cybertec(dot)at>
Subject: Re: max_locks_per_transactions ...
Date: 2007-02-01 18:02:37
Message-ID: 20055.1170352957@sss.pgh.pa.us
Lists: pgsql-hackers
Hans-Juergen Schoenig <postgres(at)cybertec(dot)at> writes:
> i would suggest to replace the existing parameter by something else:
> - a switch to define the global size of the lock pool (e.g. "max_locks")
> - a switch which defines the upper limit for the current backend /
>   transaction
The problem with that is that it's pretty much guaranteed to break
pg_dump, as pg_dump always needs a lot of locks. We could perhaps
change pg_dump to increase its limit value (assuming that that's not a
privileged operation), but the fact that a counterexample is so handy
makes me doubt that this is a better design than what we have.
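[Editor's note: for readers outside the thread, the design being defended sizes one shared lock table rather than capping each backend. A rough sketch of that arithmetic, assuming the documented sizing formula; the function name is illustrative, not PostgreSQL code:]

```python
# Sketch of how the shared lock table is sized under the existing scheme.
# PostgreSQL allocates room for roughly
#   max_locks_per_transaction * (max_connections + max_prepared_transactions)
# lock slots, shared by all backends. No single backend is limited to
# max_locks_per_transaction, which is why pg_dump (which takes one lock
# per dumped table) keeps working as long as the pool as a whole is big
# enough -- and why a hard per-backend cap would break it.

def lock_table_capacity(max_locks_per_transaction: int,
                        max_connections: int,
                        max_prepared_transactions: int = 0) -> int:
    """Approximate number of object locks the shared table can track."""
    return max_locks_per_transaction * (
        max_connections + max_prepared_transactions)

# With 64 locks/transaction and 100 connections, a lone pg_dump can
# still lock on the order of 6400 relations, far more than 64:
print(lock_table_capacity(64, 100))  # -> 6400
```

Under a per-backend cap of 64, the same pg_dump would fail on any database with more than 64 tables unless it raised its own limit first.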
regards, tom lane