
Need for speed

From: "John huttley" <john(at)mwk(dot)co(dot)nz>
To: "pgsql-interfaces" <pgsql-interfaces(at)postgresql(dot)org>
Subject: Need for speed
Date: 2002-09-08 23:39:46
Message-ID: 005601c25791$03ca28d0$041f1fac@termsrvr
Lists: pgsql-interfaces
Hello, I'm writing an application in C and speed is critical.

I'm doing inserts into 2 tables, both with only a few char fields.
1 insert using 'INSERT'
5 inserts using 'COPY'

and there is a BEGIN and COMMIT around them.
There are as few indexes as I can have and still query effectively.
I cannot drop and recreate the indexes, since there will be asynchronous
queries running against the tables at the same time.
I can't get more than 97 of these transactions per second.
There will be 100 million transactions or more; presently there are 150,000.
So I would like a 10x speed increase!

Postmaster shows 50-65% CPU usage, system idle is 10-15% (700 MHz Athlon, 386 MB RAM).

vmstat shows about 20 blocks in and 1500 blocks out per second.

The disk is a 10000RPM SCSI.

How can I get more speed?
Should I try to get more inserts into each transaction? (Do the BEGIN/COMMIT
from a separate thread on a timer?)
Do you think it's worth the effort to make that remaining INSERT into a COPY?
Would managing a pool of asynchronous calls improve the speed?
What about this fast path thing? It looks very complicated and under-documented.

Any advice gratefully received,



