> You really should try to find out why they get "stuck". Killing stuck clients isn't going to solve your problem (aside from the fact that you probably shouldn't be using kill -9 on them, that's like using a jackhammer on a jammed door).
Well, I didn't use kill -9; I used the pg_cancel_backend function.
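For reference, pg_cancel_backend is called from SQL and only cancels the backend's current query, leaving the session alive; pg_terminate_backend ends the whole session, which is still far cleaner than kill -9 (a SIGKILL forces the postmaster into crash recovery). The PID below is just a placeholder:

```sql
-- Cancel the running query of backend PID 12345; the session survives
SELECT pg_cancel_backend(12345);

-- Terminate the whole backend session (stronger, but still a clean shutdown
-- of that backend, unlike kill -9)
SELECT pg_terminate_backend(12345);
```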
> Some things to look into: Are those queries waiting on a lock by another daemon maybe? Are some of them keeping transactions open for a long time without committing them (or rolling them back)?
I'll take a look at that. It certainly would be simpler than attaching
a gdb session to the pid and getting a stacktrace.
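A quicker first check than gdb is pg_stat_activity, which shows how long each session's transaction has been open and whether it is waiting on a lock. The column names below (procpid, current_query, waiting) are the pre-9.2 names that match a 2010-era server; adjust for your version:

```sql
-- Sessions whose transaction has been open for more than 5 minutes
SELECT procpid, usename, waiting, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start < now() - interval '5 minutes'
ORDER BY xact_start;
```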
> I recall you were having another problem (with deleting records). This all smells like you either are waiting for locks on records or that the statistics used for query planning aren't reflecting the actual situation.
I am having some performance issues with the database. I am also
trying to clean a lot of records out of the system; once all the
records I want to delete are gone, perhaps the problem will go away. I
am also looking at how the application could be refactored to avoid
this particular DISTINCT query.
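As an illustration only (the actual query isn't shown in this thread, so the tables and columns below are hypothetical): when a DISTINCT exists solely to collapse duplicates introduced by a join, it can often be rewritten with EXISTS, which lets the planner stop at the first matching row instead of sorting or hashing the whole result to de-duplicate it:

```sql
-- Before: the join multiplies rows, DISTINCT collapses them afterwards
SELECT DISTINCT c.id, c.name
FROM customers c
JOIN orders o ON o.customer_id = c.id;

-- After: EXISTS never produces the duplicates in the first place
SELECT c.id, c.name
FROM customers c
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);
```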
> Have a look in pg_locks and check the query plans of some of your more problematic queries (see: explain analyse) to see what's going on there. Posting the results of those here would allow more eyes to look into your issues, in case it isn't obvious to you. There's some good documentation on these subjects too.
I did look at the EXPLAIN ANALYZE output, and basically Postgres is
saying the DISTINCT is killing me; if I remove it, the query is fine. I
didn't look at the locks because the queries are read-only, so I didn't
think they would be affected by locks, but I will look at them and post
the results.
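Worth noting that even read-only queries take locks (AccessShareLock on each table they read), so they can still block behind an ACCESS EXCLUSIVE lock held by, say, an ALTER TABLE or VACUUM FULL. A simple check for ungranted locks, again using the pre-9.2 pg_stat_activity column names:

```sql
-- Backends currently waiting on a lock, and what they are running
SELECT l.pid, l.locktype, l.mode, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;
```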