Re: performance config help

From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: Bob Dusek <redusek(at)gmail(dot)com>
Cc: Matthew Wakeling <matthew(at)flymine(dot)org>, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: performance config help
Date: 2010-01-13 05:01:51
Message-ID: 4B4D53BF.4020905@postnewspapers.com.au
Lists: pgsql-performance

On 13/01/2010 2:01 AM, Bob Dusek wrote:

> The connections to postgres are happening on the localhost. Our
> application server accepts connections from the network, and the
> application queries Postgres using a standard pg_pconnect on the localhost.

Well, that's a good reason to have all those CPUs - if your app server
runs on the same host as Pg.

> We've been logging long-running queries (200 ms). That's how we know
> Postgres is degrading. We don't see any queries showing up when we have
> 40 clients running. But, we start seeing quite a bit show up after that.

It might be informative to see how quickly query times increase with
client count. You can probably figure this out by progressively lowering
your query-time logging threshold.
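
For example, as a superuser you can drop the threshold for your own
session (a sketch; step the value down and watch how many statements
start showing up):

    -- log any statement in this session that takes longer than 50 ms
    -- (200 ms is the threshold you're using now)
    SET log_min_duration_statement = 50;

Setting it in postgresql.conf (plus a reload) applies it to all
connections instead.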

> Each of the concurrent clients does a series of selects, inserts,
> updates, and deletes. The requests would generally never update or
> delete the same rows in a table. However, the requests do generally
> write to the same tables. And, they are all reading from the same
> tables that they're writing to.

AFAIK none of that should interfere with each other, so long as they're
not working with the same sets of tuples.
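
For example (a sketch using a made-up "jobs" table), two sessions
updating different clients' rows won't block each other under MVCC:

    -- session A
    BEGIN;
    UPDATE jobs SET status = 'done' WHERE clientid = 1;

    -- session B, while A is still open: different rows, so this
    -- proceeds immediately without waiting for A
    BEGIN;
    UPDATE jobs SET status = 'done' WHERE clientid = 2;
    COMMIT;

    -- session A
    COMMIT;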

> For the inserts, I imagine they are
> blocking on access to the sequence that controls the primary keys for
> the insert tables.

I doubt it. Sequences are exempt from normal transactional rules for
exactly that reason. nextval(...) obtains the next value almost
instantly, and after that the sequence is unlocked and ready for the
next caller.
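
You can see the non-transactional behaviour directly (hypothetical
sequence name):

    BEGIN;
    SELECT nextval('foo_id_seq');  -- say this returns 41
    ROLLBACK;
    SELECT nextval('foo_id_seq');  -- returns 42: the rollback didn't
                                   -- give 41 back, and at no point did
                                   -- this block other callers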

> But, I'm not sure about locking beyond that. When we delete from the
> tables, we generally delete where "clientid=X", which deletes all of the
> rows that a particular client inserted (each client cleans up its own
> rows after it finishes what it's doing). Would that be blocking inserts
> on that table for other clients?

Now would be a good time to start watching pg_locks - rather than
guessing, try to actually observe if there's lock contention.
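
Something along these lines shows lock requests that haven't been
granted, i.e. sessions actually stuck waiting (a starting point only;
join against pg_stat_activity to see the queries involved):

    SELECT locktype, relation::regclass AS relation, mode, pid, granted
    FROM pg_locks
    WHERE NOT granted;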

I'd also consider looking into a connection pool so that, as the number
of clients on your app server increases, you can keep the number of
active Pg connections at the "sweet spot" for your server and maximise
overall throughput.
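
As a rough sketch, a PgBouncer configuration for that might look like
the following (every name and number here is a placeholder; the right
pool size is whatever concurrency your testing shows the box handles
best):

    [databases]
    mydb = host=127.0.0.1 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    pool_mode = transaction
    default_pool_size = 40
    max_client_conn = 200
    ; placeholder auth; use auth_type = md5 with an auth_file in practice
    auth_type = trust

Your pg_pconnect calls would then point at port 6432 instead of
connecting to Pg directly. Note that transaction pooling assumes you
don't rely on session state (temp tables, session GUCs) across
transactions.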

--
Craig Ringer
