Hello, I'm writing an application in C and speed is critical.
I'm doing inserts into 2 tables, both with only a few char fields.
1 insert using 'INSERT'
5 inserts using 'COPY'
and there is a BEGIN and COMMIT around them.
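For reference, the transaction shape described above might look like this in libpq (a sketch only, not tested here: the connection string, table names `t1`/`t2`, and column values are placeholders; the COPY rows go through `PQputline`/`PQendcopy`, which is the COPY interface libpq provides):

```c
/* Sketch: one INSERT plus COPY rows, all inside one BEGIN/COMMIT.
   Compile with: cc file.c -lpq */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "INSERT INTO t1 VALUES ('a','b')"));

    /* COPY sends many rows in one round trip */
    PQclear(PQexec(conn, "COPY t2 FROM stdin"));
    PQputline(conn, "x\ty\n");          /* tab-separated fields     */
    PQputline(conn, "p\tq\n");
    PQputline(conn, "\\.\n");           /* end-of-data marker       */
    PQendcopy(conn);

    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}
```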
There are as few indexes as I can have and still query effectively.
I cannot drop and recreate the indexes, since there will be asynchronous
queries running against the tables at the same time.
I can't get more than 97 of these transactions per second.
There will be 100 million transactions or more; presently there are 150,000.
So I would like a 10x speed increase!
Postmaster shows 50-65% CPU usage, system idle is 10-15% (700MHz Athlon, 386MB RAM).
vmstat shows about 20 blocks in and 1500 blocks out per second.
The disk is a 10000RPM SCSI.
How can I get more speed?
Should I try to get more inserts into each transaction? (i.e. issue
BEGIN/COMMIT from a separate thread on a timer)
Do you think it's worth the effort to make that remaining INSERT into a COPY?
Would managing a pool of asynchronous calls improve the speed?
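On the asynchronous question: libpq's nonblocking API (`PQsendQuery`/`PQgetResult`) at least lets the client format the next batch while the server works on the current statement, even over a single connection. A rough sketch (not tested here; conninfo and table name are placeholders):

```c
/* Sketch: overlap client-side work with server-side execution
   using libpq's asynchronous query API. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder */
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    PQsendQuery(conn, "INSERT INTO t1 VALUES ('a','b')");

    /* ... format the next batch of rows here while the
       server processes the statement ... */

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);               /* blocks only for the tail */

    PQfinish(conn);
    return 0;
}
```

A pool of connections each driven this way would add parallelism on top of the overlap, at the cost of more bookkeeping.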
What about this fast-path interface? It looks very complicated and
under-documented.
Any advice gratefully received,
Posted to the pgsql-interfaces mailing list.