I took this discussion to the Postgres performance list and the conclusion there was that it is a
client-side JDBC issue, so I am posting it here.
I have a batch application that writes approximately 4 million rows into a narrow
(2-column) table. I am using JDBC addBatch/executeBatch with auto-commit
turned off and a batch size of 1000. So far, Postgres takes
roughly five times as long (280 sec) as Oracle (60 sec). This is on a Linux
server with a Xeon Woodcrest 5310 processor and plenty of memory. I have
played with many parameters on the server side and they have little effect.
I am sure Postgres is a very capable server and this is
not a database server issue. Someone mentioned:
"I actually went and looked at the JDBC api and realized 'addBatch' means to
run multiple stmts at once, not batch
inserting. femski, your best bet is to lobby the JDBC folks to build
support for 'copy' into the driver for faster bulk loads (or help out in
that regard). "
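For reference, the write loop looks roughly like this (a minimal sketch; the table and column names are placeholders, not my actual schema):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsertSketch {
    static final int BATCH_SIZE = 1000;

    // Number of executeBatch() calls needed for rowCount rows.
    static int flushCount(int rowCount, int batchSize) {
        return (rowCount + batchSize - 1) / batchSize;
    }

    // Insert pairs (a, b) into a hypothetical two-column table using
    // addBatch/executeBatch with auto-commit off, flushing every BATCH_SIZE rows.
    static void load(Connection conn, int[][] rows) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO narrow_t (a, b) VALUES (?, ?)")) {
            int pending = 0;
            for (int[] row : rows) {
                ps.setInt(1, row[0]);
                ps.setInt(2, row[1]);
                ps.addBatch();
                if (++pending == BATCH_SIZE) {
                    ps.executeBatch(); // each batch still runs row-by-row INSERTs server-side
                    pending = 0;
                }
            }
            if (pending > 0) {
                ps.executeBatch();     // flush the final partial batch
            }
        }
        conn.commit();
    }
}
```

With 4 million rows and a batch size of 1000, that is 4000 executeBatch calls, each carrying 1000 individual INSERT statements.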
Based on other responses I am convinced this is indeed the problem, and I
think it is a pretty serious limitation.
Why doesn't the Postgres JDBC driver use "copy" for faster bulk inserts?
What is the best way to speed up bulk inserts now or in the near future?
(I want to use standard JDBC.)
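For what it's worth, here is what a COPY-based path might look like. The row formatting below follows PostgreSQL's COPY text format (tab-separated fields, backslash escapes); the CopyManager call shown in the comment is an assumption about a driver-side COPY API (available in patched/newer drivers), not something the stock driver offers today:

```java
public class CopyLoadSketch {
    // Escape one field for PostgreSQL's COPY text format:
    // backslash, tab, and newline must be backslash-escaped.
    static String escapeField(String s) {
        return s.replace("\\", "\\\\").replace("\t", "\\t").replace("\n", "\\n");
    }

    // Join fields into one COPY text line (tab-separated, newline-terminated).
    static String toCopyRow(String... fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append('\t');
            sb.append(escapeField(fields[i]));
        }
        return sb.append('\n').toString();
    }

    // With a driver that exposes a COPY API (assumption: the
    // org.postgresql.copy.CopyManager class found in patched/newer drivers),
    // the whole load becomes a single streamed statement instead of
    // 4 million individual INSERTs:
    //
    //   org.postgresql.copy.CopyManager cm =
    //       ((org.postgresql.PGConnection) conn).getCopyAPI();
    //   cm.copyIn("COPY narrow_t (a, b) FROM STDIN",
    //             new java.io.StringReader(buf.toString())); // buf holds toCopyRow() lines
}
```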
Sent from the PostgreSQL - jdbc mailing list archive at Nabble.com.