improving write performance for logging application

From: Steve Eckmann <eckmann(at)computer(dot)org>
To: pgsql-performance(at)postgresql(dot)org
Subject: improving write performance for logging application
Date: 2006-01-03 23:44:28
Message-ID: 43BB0C5C.8020207@computer.org
Lists: pgsql-performance

I have questions about how to improve the write performance of PostgreSQL for logging data from a real-time simulation. We found that MySQL 4.1.3 could log about 1480 objects/second using MyISAM tables or about 1225 objects/second using InnoDB tables, but PostgreSQL 8.0.3 could log only about 540 objects/second. (test system: quad-Itanium2, 8GB memory, SCSI RAID, GigE connection from simulation server, nothing running except system processes and database system under test)

We also found that we could improve MySQL performance significantly using MySQL's "INSERT" command extension allowing multiple value-list tuples in a single command; the rate for MyISAM tables improved to about 2600 objects/second. PostgreSQL doesn't support that language extension. Using the COPY command instead of INSERT might help, but since rows are being generated on the fly, I don't see how to use COPY without running a separate process that reads rows from the application and uses COPY to write to the database. The application currently has two processes: the simulation and a data collector that reads events from the sim (queued in shared memory) and writes them as rows to the database, buffering as needed to avoid lost data during periods of high activity. To use COPY I think we would have to split our data collector into two processes communicating via a pipe.
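Since COPY's text format is just tab-separated fields with a small backslash escape set, the data collector could format each event into a COPY line itself rather than spawning a separate process. A minimal sketch of that formatting step (field values here are illustrative; the actual event schema is whatever the collector already writes):

```python
def copy_escape(value):
    """Escape one field for PostgreSQL's COPY text format.

    COPY text format uses its own escape set, distinct from SQL
    string literals: backslash, tab, newline, and carriage return
    must be backslash-escaped, and NULL is spelled \\N.
    """
    if value is None:
        return r"\N"
    s = str(value)
    # Escape the backslash first so later escapes aren't doubled.
    return (s.replace("\\", "\\\\")
             .replace("\t", "\\t")
             .replace("\n", "\\n")
             .replace("\r", "\\r"))

def copy_line(fields):
    """Render one row as a single COPY text-format line."""
    return "\t".join(copy_escape(f) for f in fields) + "\n"
```

Lines built this way can be streamed to the server from the same process with libpq's PQputCopyData after issuing `COPY tablename FROM STDIN`, which would avoid the extra pipe-connected process described above.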

Query performance is not an issue: we found that when suitable indexes are added PostgreSQL is fast enough on the kinds of queries our users make. The crux is writing rows to the database fast enough to keep up with the simulation.

Are there general guidelines for tuning the PostgreSQL server for this kind of application? The suggestions I've found include disabling fsync (done), increasing the value of wal_buffers, and moving the WAL to a different disk, but these aren't likely to produce the 3x improvement that we need. On the client side I've found only two suggestions: disable autocommit and use COPY instead of INSERT. I think I've effectively disabled autocommit by batching up to several hundred INSERT commands in each PQexec() call, and it isn't clear that COPY is worth the effort in our application.
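For what it's worth, the batching approach can be sketched as building one multi-statement string per batch. libpq already runs a multi-statement PQexec string as a single implicit transaction, so the explicit BEGIN/COMMIT below is redundant but makes the intent visible. Table and column names are placeholders, and the literal quoting shown is only adequate for trusted, internally generated data:

```python
def quote_literal(value):
    """Render a value as a SQL literal.

    Doubles embedded single quotes; sufficient for trusted numeric
    and plain-text data, not a general defense against untrusted input.
    """
    if value is None:
        return "NULL"
    if isinstance(value, (int, float)):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"

def batch_insert_sql(table, columns, rows):
    """Build one multi-statement string wrapping N single-row INSERTs
    in an explicit transaction, suitable for a single PQexec() call."""
    stmts = ["BEGIN"]
    collist = ", ".join(columns)
    for row in rows:
        vals = ", ".join(quote_literal(v) for v in row)
        stmts.append(f"INSERT INTO {table} ({collist}) VALUES ({vals})")
    stmts.append("COMMIT")
    return ";\n".join(stmts) + ";"
```

A batch of a few hundred rows formatted this way costs one client/server round trip and one commit, which is the same effect as disabling autocommit and committing per batch.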

Thanks.
