Alan Stange wrote:
> Hello all,
> We have the same performance problems with bulk data inserts from jdbc
> as well. We used batches as well but made sure that each statement in
> the batch was large ~128KB and inserted on many rows at a time. This
> cut down on the number of round trips to the postgresql server.
Yes, I did the same by putting together many inserts into a single statement,
and in fact it halved the time required to perform the inserts. Still, it
takes too much time anyway: 1 minute for insertion versus 5 seconds to read the
same data back.
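As a sketch of the multi-row approach described above (packing many rows into
one INSERT so a single round trip carries a large statement), one could build
the statement text like this. The table and column names are hypothetical, and
the quoting here is deliberately naive; real code should use parameter
placeholders or proper escaping:

```java
import java.util.List;
import java.util.StringJoiner;

public class MultiRowInsert {
    // Build one INSERT statement covering many rows, so a single
    // round trip to the server carries many rows' worth of data.
    static String buildInsert(String table, String[] columns, List<String[]> rows) {
        StringJoiner values = new StringJoiner(", ");
        for (String[] row : rows) {
            StringJoiner tuple = new StringJoiner(", ", "(", ")");
            for (String v : row) {
                // Naive quoting for the sketch only.
                tuple.add("'" + v.replace("'", "''") + "'");
            }
            values.add(tuple.toString());
        }
        return "INSERT INTO " + table + " (" + String.join(", ", columns)
                + ") VALUES " + values;
    }

    public static void main(String[] args) {
        String sql = buildInsert("test_table", new String[]{"id", "name"},
                List.of(new String[]{"1", "alice"}, new String[]{"2", "bob"}));
        System.out.println(sql);
        // prints: INSERT INTO test_table (id, name) VALUES ('1', 'alice'), ('2', 'bob')
    }
}
```

Growing the statement toward the ~128KB size mentioned above is then just a
matter of how many rows are passed per call.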
> In addition to a) and b) below, I'd add that the read size off the
> sockets is too small. It's a few KB currently and this should
> definitely be bumped up to a larger number.
In fact I've tried bumping the 8 KB value that's hardwired in the code
to 16, 64, and 128 KB but saw no improvement on a 100 Mbit switched LAN...
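For what it's worth, the knob being discussed is the size of the buffer the
driver reads through on its socket stream, which governs how much each read()
can pull in at once. This is only a sketch of the standard Java API for
choosing that size; the 64 KB figure is an arbitrary illustration, not a
recommendation, and it says nothing about where the driver actually wires it in:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class ReadSizeSketch {
    // Wrap a raw (e.g. socket) input stream in a BufferedInputStream
    // with an explicit buffer size instead of the 8 KB default.
    static InputStream buffered(InputStream raw) {
        return new BufferedInputStream(raw, 64 * 1024);
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a socket stream, just to exercise the wrapper.
        InputStream in = buffered(new ByteArrayInputStream(new byte[]{7, 8, 9}));
        System.out.println("first byte: " + in.read());
        // prints: first byte: 7
    }
}
```

If the bottleneck is elsewhere (network, server-side work, or the text
encoding of the data), enlarging this buffer alone would not show up in
benchmarks, which may explain the null result on the 100 Mbit LAN.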
> We're running on a gigE network and see about 50MB/s data rates coming
> off the server (using a 2GB shared memory region). This sounds nice,
> but one has to keep in mind that the data is binary encoded in text.
> Anyway, count me in to work on the jdbc client as well (in my limited
> time). To start, I have a couple of local performance hacks for which
> I should submit proper patches.
I'm eager to have a look at them :-)