Re: Postgres scalability and performance on windows

From: Bill Moran <wmoran(at)collaborativefusion(dot)com>
To: "Gopal" <gopal(at)getmapping(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Postgres scalability and performance on windows
Date: 2006-11-24 00:24:54
Message-ID: 20061123192454.5b66e61d.wmoran@collaborativefusion.com
Lists: pgsql-general

On Thu, 23 Nov 2006 22:31:40 -0000
"Gopal" <gopal(at)getmapping(dot)com> wrote:

> Hi all,
>
> I have a postgres installation that's running under 70-80% CPU usage
> while an MSSQL7 installation did 'roughly' the same thing with 1-2%
> CPU load.
>
> Here's the scenario:
>
> 300 queries/second
> Server: Postgres 8.1.4 on win2k server
> CPU: Dual Xeon 3.6 GHz
> Memory: 4GB RAM
> Disks: 3 x 36GB, 15K RPM SCSI
>
> C# based web application calling postgres functions using npgsql 0.7.
> It's an almost completely read-only db apart from fortnightly updates.
>
> Table 1 - About 300,000 rows with simple rectangles
> Table 2 - 1 million rows
> Total size: 300MB
>
> Functions: Simple coordinate reprojection and intersection query +
> inner join of table1 and table2.
>
> I think I have all the right indexes defined and indeed the
> performance for queries under low loads is fast.
>
> ==========================================================================
>
> postgresql.conf has the following settings:
>
> max_connections = 150
> shared_buffers = 20000          # min 16 or max_connections*2, 8KB each

Considering you have 4G of RAM, you might want to allocate more than 160M to
shared buffers.
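As a rough illustration (the specific value below is a common rule of thumb, not something from this thread, so benchmark before adopting it), on 8.1 shared_buffers is an integer count of 8KB pages, so roughly a quarter of a 4GB machine would look like:

```
# postgresql.conf -- illustrative value only, not a recommendation
# from this thread.  On 8.1, shared_buffers is in 8KB pages, so
# 125000 pages ~= 1GB, about 1/4 of this machine's 4GB of RAM.
# Note: some Windows installations have reportedly done better with
# smaller values than POSIX systems do, so test against your workload.
shared_buffers = 125000
```

A restart is required for shared_buffers changes to take effect.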

> temp_buffers = 2000             # min 100, 8KB each
> max_prepared_transactions = 25  # can be 0 or more
> # note: increasing max_prepared_transactions costs ~600 bytes of shared
> # memory per transaction slot, plus lock space (see
> # max_locks_per_transaction).
> work_mem = 512                  # min 64, size in KB

Again, with 4G of RAM, you may get some benefit from more than 1/2M of
work space.
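For illustration (again, a sketch rather than a value from this thread): work_mem is allocated per sort or hash operation, per backend, so with max_connections = 150 it has to stay conservative, but 512KB is very small for a 4GB machine. Something like:

```
# Illustrative only.  work_mem (in KB on 8.1) is allocated per
# sort/hash operation per backend, so with max_connections = 150
# keep it modest.  4096 (4MB) is a plausible starting point for a
# read-mostly workload on a 4GB machine; 512 is almost certainly
# too small.
work_mem = 4096
```

work_mem can also be raised per-session with `SET work_mem = ...` for known-heavy queries, which is safer than a large global value.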

> SQL server caches all the data in memory, which is making it faster
> (uses about 1.2GB memory - which is fine).
>
> But postgres has everything spread across 10-15 processes, with each
> process using about 10-30MB - not nearly enough to cache all the
> data - so it ends up doing a lot of disk reads.

Allocate more shared buffers and PG will use it.

> I've read that postgres depends on the OS to cache the files; I wonder
> if this is not happening on windows.

Yes, but it can access data even faster if it's in the shared buffer
space. There are numerous write-ups on the Internet about this sort
of tuning.
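One way to see how much is actually being served from PostgreSQL's shared buffers (this is a generic diagnostic, not something from this thread, and on 8.1 it requires stats_block_level = on; note it cannot distinguish OS-cache hits from real disk reads):

```sql
-- Sums block-level I/O statistics across user tables.
-- buffer_hits = blocks found in shared_buffers;
-- blks_read   = blocks requested from the kernel (OS cache or disk).
SELECT sum(heap_blks_hit)  AS buffer_hits,
       sum(heap_blks_read) AS blks_read
FROM pg_statio_user_tables;
```

A low hit count relative to reads after a warm-up period suggests shared_buffers is too small for the working set.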

> In any case I cannot believe that having 15-20 processes running on
> windows helps. Why not spawn threads instead of processes, which might
> be far less expensive and more efficient? Is there any way of doing
> this?

Because every other OS (Linux, BSD, Solaris, etc) does very well with
multiple spawned processes. I expect that future versions of PG will
have some improvements to allow better performance on Windows, but you'll
be surprised how well it runs under a POSIX OS.

> My question is, should I just accept the performance I am getting as the
> limit on windows or should I be looking at some other params that I
> might have missed?

I have a feeling that some tuning would improve things for you.
