On Wed, 2008-08-06 at 08:06 +0800, Craig Ringer wrote:
> Out of interest - why 1000 connections?
> Do you really expect to have 1000 jobs concurrently active and doing
> work? If you don't, then you'll be wasting resources and slowing
> down for no reason. There is a connection overhead in PostgreSQL -
> mostly related to database-wide locking and synchronization, but also
> some memory for each backend - that means you probably shouldn't run
> vastly more backends than you intend to have actively working.
> If you described your problem, perhaps someone could give you a useful
> answer. Your mention of pgpool suggests that you're probably using an
> app and running into connection-count limits, but I shouldn't have to
> guess that.
> Craig Ringer
This is actually a fantastic point. Have you considered using more than
one box to field the connections, and using some sort of replication or
a worker process to funnel the work into a single master database? I
don't know how feasible that is in your case, but it might work out
depending on what kind of application you're trying to write.
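To illustrate the worker-process idea: rather than opening 1000 backends, you can queue incoming jobs and let a small, fixed pool of workers (each holding one connection) drain the queue. Here's a minimal, hypothetical sketch in Python using only the standard library; the pool size and job count are made-up numbers, and the "connection" is simulated rather than a real PostgreSQL backend:

```python
import queue
import threading

NUM_CLIENTS = 1000   # simulated incoming requests
POOL_SIZE = 10       # actual concurrent "connections" to the database

requests = queue.Queue()
results = []
results_lock = threading.Lock()

def worker(worker_id):
    # Each worker reuses its single (simulated) connection for many jobs,
    # so the database only ever sees POOL_SIZE backends.
    while True:
        job = requests.get()
        if job is None:          # sentinel: shut this worker down
            requests.task_done()
            break
        with results_lock:
            results.append((worker_id, job))
        requests.task_done()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(POOL_SIZE)]
for t in threads:
    t.start()

# 1000 client requests get funneled through 10 workers.
for job in range(NUM_CLIENTS):
    requests.put(job)
for _ in threads:
    requests.put(None)           # one sentinel per worker
for t in threads:
    t.join()

print(len(results))              # all 1000 jobs served by only 10 workers
```

In practice a pooler like pgpool or PgBouncer does exactly this kind of multiplexing for you, so 1000 client connections need not mean 1000 PostgreSQL backends.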
Disclaimer: I work in data warehousing and we only have 45 concurrent
connections right now. OLTP and large connection counts aren't really
what I spend my days thinking about. ;-)