
Re: performance for high-volume log insertion

From: Kris Jurka <books(at)ejurka(dot)com>
To: Thomas Kellerer <spam_eater(at)gmx(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: performance for high-volume log insertion
Date: 2009-04-26 17:07:56
Message-ID: Pine.BSO.4.64.0904261301200.19948@leary.csoft.net
Lists: pgsql-performance

On Thu, 23 Apr 2009, Thomas Kellerer wrote:

> Out of curiosity I did some tests through JDBC.
>
> Using a single-column (integer) table, re-using a prepared statement 
> took about 7 seconds to insert 100000 rows with JDBC's batch interface 
> and a batch size of 1000
>

As a note for non-JDBC users, the JDBC driver's batch interface allows 
executing multiple statements in a single network roundtrip.  libpq has no 
equivalent, so keep that in mind when comparing the two.
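To make the batch interface concrete, here is a minimal sketch of the 
pattern Thomas describes (100000 single-column inserts, batches of 1000). 
The connection URL, credentials, and table name `log_entries` are 
hypothetical; adjust them for your environment.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsert {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details -- replace with your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO log_entries (id) VALUES (?)")) {
                final int batchSize = 1000;
                for (int i = 0; i < 100000; i++) {
                    ps.setInt(1, i);
                    ps.addBatch();              // queue the row client-side
                    if ((i + 1) % batchSize == 0) {
                        ps.executeBatch();      // send the queued rows together
                    }
                }
                ps.executeBatch();              // flush any remainder
            }
            conn.commit();
        }
    }
}
```

Reusing one PreparedStatement and committing once at the end avoids 
per-row parse and commit overhead, which is where most of the speedup 
over single-row inserts comes from.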

> I also played around with batch size. Going beyond 200 didn't make a big 
> difference.
>

Regardless of the batch size passed to the JDBC driver, the driver breaks 
it up into internal sub-batches of 256 statements to send to the server.  
It does this to avoid network deadlocks from sending too much data to the 
server without reading any in return.  If the driver were written 
differently it could handle this better and send the full batch size, but 
at the moment that's not possible, and we're hoping the gains beyond this 
size aren't too large.

Kris Jurka

