Re: database tuning

From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: kelvan <kicmcewen(at)windowslive(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: database tuning
Date: 2007-12-11 00:36:32
Message-ID: Pine.GSO.4.64.0712101912340.14776@westnet.com
Lists: pgsql-performance

On Tue, 11 Dec 2007, kelvan wrote:

> I am going to blow up that Mac and burn Postgres, as I need a more
> powerful DBMS, one that can handle multithreading.

Someone pointed this out already, but I'll repeat: PostgreSQL has a
multi-process architecture that's fully capable of taking advantage of
multiple CPUs. Whether a multi-process model is better or worse than a
multi-threaded one is a popular subject to debate, but it's certainly not
true that switching to threads will always give a performance benefit, and
you shouldn't expect a large one; processes and threads are not that
different. As a simple example, benchmarks usually show that the
multi-process PostgreSQL scales better under high client loads than the
multi-threaded MySQL.
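
You can see the multi-process design for yourself: every client connection
gets its own operating system process. On any Unix-like system, OS X
included, something along these lines will count the active backends (the
exact process labels vary by platform and PostgreSQL version, so treat
this as a rough sketch):

    $ ps aux | grep 'postgres:' | grep -v grep | wc -l

Each of those processes can be scheduled onto a different CPU by the
kernel, which is where the multi-CPU scaling comes from.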

The only spot where PostgreSQL has a clear performance limitation is that
no single query can be usefully split across multiple processors. Since
you seem to be serving many users doing small tasks rather than running a
single large one, I wouldn't expect the scalability of the core database
code to be your problem.

> As it stands we cannot configure Postgres on a Mac to go over 200
> connections, for God knows what reason, but we have found ways around
> that using the Mac.

In a web application environment, there is no good reason to have that
many individual database connections, and you should treat the trouble you
had getting more than 200 of them going as a warning sign. The right way
to deal with this is not to work around it, but to use some sort of
connection pooling software instead. You might use something that does
PostgreSQL-level pooling, like PgBouncer
(https://developer.skype.com/SkypeGarage/DbProjects/PgBouncer), or you
could do higher-level caching with something like memcached
(http://www.danga.com/memcached/).
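
To give a rough idea of what pooling looks like, here is a minimal
PgBouncer configuration sketch. The database name, addresses, and pool
sizes are made-up illustration values; check the PgBouncer documentation
for the parameters your version actually supports:

    [databases]
    ; pool alias on the left, the real server on the right
    webdb = host=127.0.0.1 port=5432 dbname=webdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = userlist.txt
    ; hundreds of web clients share a small set of real connections
    max_client_conn = 1000
    default_pool_size = 20

Your application then connects to port 6432 instead of 5432, and PgBouncer
hands out the 20 real server connections as clients need them.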

> So as you can see I'll need as fast a read time for every query as
> possible. I am using a lot of codes using smallint and bit in my
> database and de-normalising everything to keep the connections down and
> the amount of data read down, but that can only do so much.

What you should be worried about here is how much of the database you can
cram into memory at once. Have you looked into using large amounts of
memory for shared_buffers? In your situation you should consider putting
multiple GB worth of memory there to hold data. Particularly with a
single disk, if you ever get to the point where you need to read from disk
regularly, you're not going to get anywhere close to your performance
goals.
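
As a sketch of what that means in postgresql.conf, assuming purely for
illustration a server with 8GB of RAM mostly dedicated to the database
(your numbers will differ):

    # illustrative values for a hypothetical 8GB machine
    shared_buffers = 2GB          # PostgreSQL's own buffer cache
    effective_cache_size = 6GB    # planner hint: shared_buffers + OS cache

Note that memory units like "2GB" require PostgreSQL 8.2 or later, and on
OS X you will also have to raise the kernel's shared memory limits (the
kern.sysv.shmmax family of sysctl settings) before the server will even
start with a shared_buffers value that large.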

--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD
