Need for speed

From: "John huttley" <john(at)mwk(dot)co(dot)nz>
To: "pgsql-interfaces" <pgsql-interfaces(at)postgresql(dot)org>
Subject: Need for speed
Date: 2002-09-08 23:39:46
Message-ID: 005601c25791$03ca28d0$041f1fac@termsrvr
Lists: pgsql-interfaces

Hello, I'm writing an application in C and speed is critical.

I'm doing inserts into two tables, both with only a few char fields:
1 insert using 'INSERT'
5 inserts using 'COPY'

with a BEGIN and COMMIT around the whole batch.
There are as few indexes as I can have and still query effectively.
I cannot drop and recreate the indexes, since there will be asynchronous
queries.
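Since five of the six inserts already go through COPY, each row has to be sent in COPY's text format: tab-separated fields, newline-terminated, with literal tabs, newlines, and backslashes escaped. A minimal sketch of that escaping in C (the function name `copy_format_row` and the simplistic buffer handling are my own, not from this mail; the escaped row would then be handed to libpq's PQputline):

```c
#include <string.h>

/* Sketch: format one row into COPY text format.  Fields are
 * tab-separated, the row ends with '\n', and literal tab, newline,
 * and backslash characters are backslash-escaped.  Returns the row
 * length, or -1 if buf is too small. */
static int copy_format_row(char *buf, size_t bufsize,
                           const char **fields, int nfields)
{
    size_t pos = 0;
    for (int i = 0; i < nfields; i++) {
        if (i > 0) {
            if (pos + 1 >= bufsize) return -1;
            buf[pos++] = '\t';            /* field separator */
        }
        for (const char *p = fields[i]; *p; p++) {
            char esc = 0;
            switch (*p) {
                case '\t': esc = 't';  break;
                case '\n': esc = 'n';  break;
                case '\\': esc = '\\'; break;
            }
            if (esc) {
                if (pos + 2 >= bufsize) return -1;
                buf[pos++] = '\\';        /* escape special chars */
                buf[pos++] = esc;
            } else {
                if (pos + 1 >= bufsize) return -1;
                buf[pos++] = *p;
            }
        }
    }
    if (pos + 2 >= bufsize) return -1;
    buf[pos++] = '\n';                    /* row terminator */
    buf[pos] = '\0';
    return (int) pos;
}
```

The same formatter can be reused for the one remaining INSERT if it is converted to COPY as well.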

I can't get more than 97 of these transactions per second.
There will be 100 million transactions or more; presently there are 150,000.
So I would like a 10x speed increase!

Postmaster shows 50-65% CPU usage and the system is 10-15% idle (700 MHz
Athlon, 386 MB RAM).

vmstat shows about 20 blocks/s in and 1500 blocks/s out.

The disk is a 10,000 RPM SCSI drive.

How can I get more speed?
Should I try to get more inserts into each transaction (doing BEGIN/COMMIT
in a separate thread, on a timer)?
Do you think it's worth the effort to turn that remaining INSERT into a COPY?
Would managing a pool of asynchronous calls improve the speed?
What about this fast-path thing? It looks very complicated and
under-documented.
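On the first question, one way to avoid a separate timer thread entirely is to check thresholds on every insert and commit when either a row count or an elapsed-time limit is hit. A minimal sketch in C (the `Batcher` type, its field names, and the thresholds are hypothetical illustrations; the caller would issue the actual COMMIT/BEGIN pair via PQexec when told to):

```c
#include <time.h>

/* Sketch: batch many inserts per transaction without a timer thread. */
typedef struct {
    int    pending;      /* rows since the last commit */
    int    max_rows;     /* commit after this many rows... */
    double max_seconds;  /* ...or after this much wall-clock time */
    time_t last_commit;
} Batcher;

static void batcher_init(Batcher *b, int max_rows, double max_seconds)
{
    b->pending = 0;
    b->max_rows = max_rows;
    b->max_seconds = max_seconds;
    b->last_commit = time(NULL);
}

/* Call after each row is queued; returns 1 when the caller should issue
 * COMMIT followed by a fresh BEGIN (e.g. via PQexec), 0 otherwise. */
static int batcher_note_insert(Batcher *b)
{
    b->pending++;
    if (b->pending >= b->max_rows ||
        difftime(time(NULL), b->last_commit) >= b->max_seconds) {
        b->pending = 0;
        b->last_commit = time(NULL);
        return 1;
    }
    return 0;
}
```

The time limit bounds how long a row can sit uncommitted when traffic is slow, while the row limit keeps transactions from growing without bound under load.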

Any advice gratefully received,

Regards

John
