Re: 1 or 2 servers for large DB scenario.

From: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>
To: "Matthew" <matthew(at)flymine(dot)org>
Cc: "Greg Smith" <gsmith(at)gregsmith(dot)com>, "David Brain" <dbrain(at)bandwidth(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: 1 or 2 servers for large DB scenario.
Date: 2008-01-27 10:47:30
Message-ID: 479C6142.9060502@enterprisedb.com
Lists: pgsql-performance

Matthew wrote:
> On Fri, 25 Jan 2008, Greg Smith wrote:
>> If you're seeing <100TPS you should consider if it's because you're
>> limited by how fast WAL commits can make it to disk. If you really
>> want good insert performance, there is no substitute for getting a
>> disk controller with a good battery-backed cache to work around that.
>> You could just put the WAL xlog directory on a RAID-1 pair of disks to
>> accelerate that, you don't have to move the whole database to a new
>> controller.
>
> Hey, you *just* beat me to it.
>
> Yes, that's quite right. My suggestion was to move the whole thing, but
> Greg is correct - you only need to put the WAL on a cached disc system.
> That'd be quite a bit cheaper, I'd imagine.
>
> Another case of that small SSD drive being useful, I think.
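[The relocation described above can be sketched as follows. This is a hedged ops sketch, not from the original mail: the data directory and mount point (/var/lib/pgsql/data, /mnt/wal) are assumptions; adjust for your installation. In 8.x the WAL lives in the pg_xlog subdirectory, and moving it plus a symlink is the usual approach.]

```shell
# Sketch: move only the WAL (pg_xlog) onto a separate battery-backed
# cached RAID-1 volume, assumed mounted at /mnt/wal.
# The server must be stopped while the directory is moved.
pg_ctl -D /var/lib/pgsql/data stop
mv /var/lib/pgsql/data/pg_xlog /mnt/wal/pg_xlog
ln -s /mnt/wal/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl -D /var/lib/pgsql/data start
```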

PostgreSQL 8.3 will have an "asynchronous commit" feature, which should
eliminate that bottleneck without new hardware, if you can accept the
loss of the last few transaction commits in case of a sudden power loss:

http://www.postgresql.org/docs/8.3/static/wal-async-commit.html
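[A minimal sketch of using the feature, not from the original mail: synchronous_commit is the real 8.3 setting, but the database name "testdb" and table "events" are illustrative assumptions. It can be set per session or even per transaction, so only the insert-heavy workload needs to opt out of synchronous commits.]

```shell
# Turn off synchronous commit for this session only, then insert.
# A crash may lose the last few commits, but cannot corrupt the database.
psql testdb <<'EOF'
SET synchronous_commit TO off;
INSERT INTO events (payload) VALUES ('fast insert, async commit');
EOF
```

It can also be enabled globally with synchronous_commit = off in postgresql.conf, but the per-session form limits the risk to the workload that needs it.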

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
