

From: Steven Bradley <sbradley(at)llnl(dot)gov>
To: pgsql-interfaces(at)postgresql(dot)org
Subject: Performance
Date: 1999-06-23 22:05:09
Message-ID:
Lists: pgsql-interfaces
I'm having some problems achieving adequate performance from Postgres for a
real-time event logging application.  The way I'm interfacing to the
database may be the problem:

I have simplified the problem down to a single (non-indexed) table with
about a half-dozen columns (int4, timestamp, varchar, etc.). I wrote a
quick and dirty C program which uses the libpq interface to INSERT records
into the table in real-time. The best performance I could achieve was on
the order of 15 inserts per second. What I need is something much closer
to 100 inserts per second.

I wanted to use a prepared SQL statement, but it turns out that Postgres
runs the query through the parser-planner-executor cycle on each
iteration. There is no way to prevent this.

The next thing I thought of doing was to "bulk load" several records in one
INSERT through the use of array processing. Do any of the Postgres
interfaces support this? (By arrays, I don't mean array columns in the
table.)

I'm currently running Postgres 6.4.2. I've heard that 6.5 has improved
performance; does anyone have any idea what the performance improvement
is?
Is it unrealistic to expect Postgres to insert on the order of 100 records
per second on a Pentium 400 MHz/SCSI class machine running Linux? (Solaris
on a comparable platform has about half the performance.)

Thanks in advance...

Steven Bradley
Lawrence Livermore National Laboratory


