> > Hm. Yes. Number of locks varies quite a lot (10-600). Now what to
> > investigate further? We do not use explicit locks in our functions;
> > we use quite simple update/delete statements (where key = something).
> > A sample (select * from pg_locks order by pid) is below.
> The sample doesn't show any lock issues (there are no processes waiting
> for ungranted locks). The thing that typically burns people is foreign
> key conflicts. In current releases, if you have a foreign key reference
> then an insert in the referencing table takes an exclusive row lock on
> the referenced (master) row --- which means that two inserts using the
> same foreign key value block each other.
> You can alleviate the issue by making all your foreign key checks
> deferred, but that just shortens the period of time the lock is held.
> There will be a real solution in PG 8.1, which has sharable row locks.
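For reference, the deferred-check workaround suggested above can be sketched as follows. The `messages`/`log` tables here are hypothetical, not from the original poster's schema:

```sql
-- Hypothetical schema illustrating the conflict: concurrent inserts into
-- "log" that reference the same master row each take an exclusive row
-- lock on that "messages" row (pre-8.1), so the second insert waits.
CREATE TABLE messages (msg_id integer PRIMARY KEY);
CREATE TABLE log (
    id      serial PRIMARY KEY,
    msg_id  integer REFERENCES messages (msg_id)
            DEFERRABLE INITIALLY IMMEDIATE
);

-- Deferring the check moves the FK lookup (and thus the row lock) to
-- commit time, shortening -- but not eliminating -- the lock window:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO log (msg_id) VALUES (42);
-- ... rest of the transaction's work happens without the FK lock ...
COMMIT;
```

Note that `SET CONSTRAINTS ALL DEFERRED` only affects constraints declared `DEFERRABLE`, so existing foreign keys may need to be recreated with that attribute.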
In that case our foreign key constraint should not be an issue, since it
is on msg_id, which is pretty much unique among concurrent transactions.
And I noticed that the "storms" happen along with higher write activity: if
bo in vmstat shows 25+ MB in 2 s, then most likely I will get a "storm" of
slow queries in the server log. How can I even out write activity? fsync=off, bgwriter
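One knob for smoothing write bursts in PostgreSQL of this era is the background writer. A sketch of the relevant postgresql.conf entries (8.0-style parameter names; the values are illustrative starting points, not tuned recommendations):

```ini
# postgresql.conf -- background writer settings (PG 8.0 era)
bgwriter_delay = 200       # ms to sleep between bgwriter rounds
bgwriter_percent = 2       # % of dirty buffers written per round
bgwriter_maxpages = 100    # cap on pages written per round
```

Writing dirty buffers out continuously in small batches, rather than letting them pile up until checkpoint, tends to flatten the I/O spikes that coincide with the query "storms" described above.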
And is it possible to log which query in a function takes the longest time?
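Two common approaches, sketched here as suggestions rather than anything from the thread: `log_min_duration_statement` logs the whole function call when it is slow, but it does not break timing down per statement inside the function, so for that one can instrument the plpgsql body by hand. The function and table names below are hypothetical:

```sql
-- postgresql.conf: log any statement (i.e. the whole function call)
-- that runs longer than 200 ms:
--   log_min_duration_statement = 200

-- To time individual steps inside a plpgsql function, one crude but
-- workable option is to emit timestamps around each statement:
CREATE OR REPLACE FUNCTION process_msg(p_msg_id integer) RETURNS void AS $$
BEGIN
    RAISE NOTICE 'before update: %', timeofday();
    UPDATE some_table SET processed = true WHERE msg_id = p_msg_id;
    RAISE NOTICE 'after update:  %', timeofday();
    RETURN;
END;
$$ LANGUAGE plpgsql;
```

The NOTICE lines show up in the server log (and in the client), so comparing consecutive timestamps identifies which statement inside the function is eating the time.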
Also, I do not know if it matters, but the PG database is on an ext3 partition with
(pgsql-performance mailing list)