Re: Scalability in postgres

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Greg Smith" <gsmith(at)gregsmith(dot)com>, "James Mansion" <james(at)mansionfamily(dot)plus(dot)com>
Cc: "Flavio Henrique Araque Gurgel" <flavio(at)4linux(dot)com(dot)br>, "Fabrix" <fabrixio1(at)gmail(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Scalability in postgres
Date: 2009-06-03 14:09:04
Message-ID: 4A263DB0.EE98.0025.1@wicourts.gov
Lists: pgsql-performance

James Mansion <james(at)mansionfamily(dot)plus(dot)com> wrote:

> I'm sure most of us evaluating Postgres from a background in Sybase
> or SQLServer would regard 5000 connections as no big deal.

Sure, but the architecture of those products is based around all the
work being done by "engines" which try to establish affinity to
different CPUs and loop through the various tasks to be done. You
don't get a context-switch storm because you normally have the number
of engines set at or below the number of CPUs. The downside is that
they spend a lot of time spinning on the queues, checking whether
anything has become available to do -- which keeps them from playing
nice with other processes on the same box.
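
Very roughly, and purely as an illustration (this is invented C, not
anything from the actual Sybase or SQL Server code), the model looks
something like a few worker threads pinned to CPUs and busy-polling a
shared task queue:

/* engines.c -- gcc -pthread engines.c
 * Hypothetical sketch of the "engine" model: a fixed set of workers,
 * each pinned to one CPU, busy-polling a shared task queue.
 * Linux-specific because of pthread_setaffinity_np(). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#define N_ENGINES 4                     /* keep at or below the CPU count */
#define QUEUE_LEN 1024

typedef struct { void (*fn)(void *); void *arg; } task;

static task queue[QUEUE_LEN];
static int head, tail;                  /* protected by qlock */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

static int try_dequeue(task *out)
{
    int got = 0;
    pthread_mutex_lock(&qlock);
    if (head != tail) {                 /* queue not empty */
        *out = queue[head];
        head = (head + 1) % QUEUE_LEN;
        got = 1;
    }
    pthread_mutex_unlock(&qlock);
    return got;
}

static void demo_task(void *arg)
{
    printf("task %ld done\n", (long)arg);
}

static void *engine_main(void *arg)
{
    long id = (long)arg;

    /* Pin this engine to one CPU so it keeps its cache affinity. */
    cpu_set_t cpus;
    CPU_ZERO(&cpus);
    CPU_SET((int)id, &cpus);
    pthread_setaffinity_np(pthread_self(), sizeof(cpus), &cpus);

    for (;;) {
        task t;
        if (try_dequeue(&t))
            t.fn(t.arg);
        /* else: just loop and poll again.  This spinning is what burns
         * CPU when there is no work, and is why such engines don't
         * share a box gracefully with other processes. */
    }
    return NULL;
}

int main(void)
{
    pthread_t engines[N_ENGINES];
    for (long i = 0; i < N_ENGINES; i++)
        pthread_create(&engines[i], NULL, engine_main, (void *)i);

    /* Enqueue a few demo tasks; any number of clients could do this,
     * but only N_ENGINES threads ever run them. */
    pthread_mutex_lock(&qlock);
    for (long i = 0; i < 8; i++) {
        queue[tail] = (task){ demo_task, (void *)i };
        tail = (tail + 1) % QUEUE_LEN;
    }
    pthread_mutex_unlock(&qlock);

    sleep(1);                           /* let the engines drain the queue */
    return 0;
}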

If you do connection pooling and queue requests, you get the best of
both worlds. If that could be built into PostgreSQL, it would
probably reduce the number of posts requesting support for bad
configurations, and help with benchmarks that don't use proper
connection pooling for the product; but it wouldn't actually add any
capability that isn't already there if you do your homework...
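
For illustration only, here is a minimal C sketch of that pattern -- a
small, fixed set of pooled "connections" draining a bounded request
queue, with waiting clients blocking rather than each tying up its own
server backend.  The names and sizes are invented:

/* pool.c -- gcc -pthread pool.c
 * Hypothetical sketch: many client threads submit work, but only
 * POOL_SIZE "connections" ever exist; excess requests wait (blocking,
 * not spinning) in a bounded queue. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 8                     /* a handful of real database sessions */
#define QUEUE_LEN 1024

typedef struct { int client_id; const char *sql; } request;

static request queue[QUEUE_LEN];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

/* Called by any number of clients; blocks when the queue is full, so
 * the load presented to the database stays bounded no matter how many
 * clients connect to the pool. */
static void submit(request r)
{
    pthread_mutex_lock(&lock);
    while (count == QUEUE_LEN)
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = r;
    tail = (tail + 1) % QUEUE_LEN;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

/* Each pooled "connection" is one worker; think of it as holding a
 * single database session and running queued requests on it one after
 * another. */
static void *pooled_connection(void *arg)
{
    long conn_no = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);   /* sleep, don't spin */
        request r = queue[head];
        head = (head + 1) % QUEUE_LEN;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);

        /* stand-in for actually running r.sql on the pooled session */
        printf("conn %ld: client %d ran \"%s\"\n", conn_no, r.client_id, r.sql);
    }
    return NULL;
}

int main(void)
{
    pthread_t workers[POOL_SIZE];
    for (long i = 0; i < POOL_SIZE; i++)
        pthread_create(&workers[i], NULL, pooled_connection, (void *)i);

    /* Thousands of clients could call submit() concurrently; only
     * POOL_SIZE database sessions ever do the work. */
    for (int c = 0; c < 20; c++)
        submit((request){ .client_id = c, .sql = "SELECT 1" });

    sleep(1);                           /* let the demo drain */
    return 0;
}

A stand-alone pooler such as pgbouncer gives you essentially this
without writing any code, sized to the hardware rather than to the
number of clients.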

-Kevin
