You probably need to determine whether the bottleneck is CPU or disk (should be
Having said that, assuming your application is insert/update-intensive, I would
- mount the ufs filesystems Pg uses *without* logging
- use the postgresql.conf setting wal_sync_method = fdatasync
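A sketch of the two changes above, assuming a Solaris 8 box with PGDATA on its own UFS slice (device names and mount point here are placeholders, not from the original post):

```shell
# Remount the filesystem holding PGDATA without UFS logging.
# mount_ufs on Solaris accepts the logging/nologging option pair.
mount -o remount,nologging /pgdata

# To make it persistent, the /etc/vfstab entry would carry the option
# in the mount-options column (devices are hypothetical):
#   /dev/dsk/c0t1d0s0  /dev/rdsk/c0t1d0s0  /pgdata  ufs  2  yes  nologging

# And in $PGDATA/postgresql.conf:
#   wal_sync_method = fdatasync
```

Note that running without UFS logging trades crash-recovery time for write throughput; PostgreSQL's own WAL still protects the database, but an fsck after a crash will take longer.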
These changes made my pgbench results improve by a factor of 4 (enough to catch
the big O maybe...)
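For reference, a typical pgbench run to reproduce this kind of before/after comparison might look like the following (scale factor, client count, and database name are illustrative, not from the post):

```shell
# Initialize a pgbench test database at scale factor 10
# (~1 million rows in the accounts table).
pgbench -i -s 10 bench

# Run 8 concurrent clients, 1000 transactions each, and report TPS.
pgbench -c 8 -t 1000 bench
```

Comparing the reported tps figures before and after the mount/fsync changes is how a factor-of-4 difference would show up.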
Then you will need to have a look at your other postgresql.conf parameters!
(posting this file to the list might be a plan)
Quoting Mischa Sandberg <ischamay(dot)andbergsay(at)activestateway(dot)com>:
> Our product (Sophos PureMessage) runs on a Postgres database.
> Some of our Solaris customers have Oracle licenses, and they've
> commented on the performance difference between Oracle and Postgresql
> on such boxes. In-house, we've noticed the 2:1 (sometimes 5:1)
> performance difference in inserting rows (mostly 2-4K), between
> Postgresql on Solaris 8 and on Linux, for machines with comparable
> CPU's and RAM.