On Thu, 2007-02-01 at 09:15 +0100, Hans-Juergen Schoenig wrote:
> Right now max_locks_per_transactions defines the average number of locks
> taken by a transaction. thus, shared memory is limited to
> max_locks_per_transaction * (max_connections + max_prepared_transactions).
> this is basically perfect. however, recently we have seen a couple of
> people having trouble with this. partitioned tables are becoming more
> and more popular so it is very likely that a single transaction can eat
> up a great deal of shared memory. some people having a lot of data
> create daily tables. if done for 3 years we already need some 1000
> locks per transaction.
> i wonder if it would make sense to split max_locks_per_transaction into
> two variables: max_locks (global size) and max_transaction_locks (local
> size). if set properly this would prevent "good" short running
> transactions from running out of shared memory when some "evil" long
> running transactions start to suck up shared memory.
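The sizing formula quoted above can be made concrete with a little arithmetic. A minimal sketch (the parameter values are illustrative defaults, not authoritative, and the "one lock per child table" assumption holds only when no partitions are pruned):

```python
# Shared lock table capacity, per the formula quoted above.
# Values below are illustrative defaults for the era, not authoritative.
max_locks_per_transaction = 64    # default
max_connections = 100             # default
max_prepared_transactions = 5     # assumed for illustration

lock_table_slots = max_locks_per_transaction * (
    max_connections + max_prepared_transactions)
print(lock_table_slots)  # 6720

# Daily partitions accumulated over three years: a scan that cannot
# prune any partitions needs roughly one lock per child table.
partitions = 3 * 365  # 1095 locks for a single unpruned scan
print(lock_table_slots // partitions)  # only 6 such scans fit
```

So with defaults, a handful of concurrent unpruned multi-partition scans can exhaust the shared lock table for everyone else, which is the failure mode described above.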
Do partitioned tables use a lock even when they are removed from the
plan as a result of constraint_exclusion? I thought not. So you have
lots of concurrent multi-partition scans.
I'm not sure I understand your suggestion. It sounds like you want to
limit the number of locks an individual backend can take, which simply
makes the partitioned queries fail, no?
Perhaps we should just set the default higher?
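A minimal postgresql.conf sketch of that workaround (the value 1024 is illustrative, not a recommended default; changing it requires a server restart since it resizes shared memory):

```
# Raise the per-transaction average so the shared lock table can
# absorb a few unpruned multi-partition scans.
max_locks_per_transaction = 1024    # default is 64
```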