Tom Lane wrote:
> Josh Berkus <josh(at)agliodbs(dot)com> writes:
> > Alvaro,
> >> Have you messed with max_connections and/or max_locks_per_transaction
> >> while testing this? The lock table is sized to max_locks_per_xact times
> >> max_connections, and shared memory hash tables get slower when they are
> >> full. Of course, the saturation point would depend on the avg number of
> >> locks acquired per user, which would explain why you are seeing a lower
> >> number for some users and higher for others (simpler/more complex
> >> queries).
> > That's an interesting thought. Let me check lock counts and see if this is
> > possibly the case.
> AFAIK you'd get hard failures, not slowdowns, if you ran out of lock
> space entirely;
Well, if there is still shared memory available, the lock hash can
continue to grow, but it would slow down, according to this comment in
ShmemInitHash:
* max_size is the estimated maximum number of hashtable entries. This is
* not a hard limit, but the access efficiency will degrade if it is
* exceeded substantially (since it's used to compute directory size and
* the hash table buckets will get overfull).
For the lock hash tables this max_size is
(MaxBackends+max_prepared_xacts) * max_locks_per_xact.
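
To get a rough idea of how full the lock table is relative to that
estimate, something along these lines could be run (just a sketch; it
only compares the current pg_locks entry count against the same sizing
formula as above):

    -- rough check (sketch): current lock entries vs. estimated capacity
    SELECT
        (SELECT count(*) FROM pg_locks) AS current_locks,
        current_setting('max_locks_per_transaction')::int
          * (current_setting('max_connections')::int
             + current_setting('max_prepared_transactions')::int)
          AS estimated_capacity;
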
So maybe this overflow doesn't come up much in normal operation, and
thus isn't applicable to what Josh Berkus is reporting.
However, I was talking to Josh Drake yesterday and he told me that
pg_dump was spending a significant amount of time in LOCK TABLE when
there are lots of tables (say 300k).
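
For reference, pg_dump takes an ACCESS SHARE lock on every table it is
going to dump, inside a single transaction, so with ~300k tables that
means ~300k entries going into the same shared lock table (simplified
sketch; the table name here is made up):

    BEGIN;
    -- repeated once per dumped table
    LOCK TABLE public.some_table IN ACCESS SHARE MODE;
    ...
    COMMIT;
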
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.