
Re: getGeneratedKeys method

From: Ceki Gulcu <cekgul(at)yahoo(dot)com>
To: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: getGeneratedKeys method
Date: 2004-05-26 16:27:51
Message-ID:
Lists: pgsql-jdbc

Thanks for all the replies received so far. I really
appreciate it.

One important point I failed to mention in my
original mail was that we are using a prepared
statement to perform the initial batch of inserts
via executeBatch, followed by another prepared
statement invoking executeBatch, and a third
prepared statement used the same way.

Your suggestions imply the use of regular
statements as opposed to prepared statements. I
don't know whether the performance boost is due to
the use of prepared statements or more to the
"batching", that is, regrouping the SQL statements
in one transaction.

If you are interested, you can find the actual code at

DBAppender is the current implementation, which
works with PostgreSQL, MySQL, Oracle, DB2 and MS
SQL. With pooled connections, it takes under 10
milliseconds to persist a single event. Without
pooling, it's around 50 milliseconds.

With DBAppender2, which uses prepared statements,
we can get the figure down to as low as 1
millisecond. However, although the performance is
much better, the results are incorrect.

If the performance improvement is linked to the
use of prepared statements, then there isn't much
I can do. However, if regular statements would
also yield a performance improvement, I could try
the techniques suggested in previous mails in this
thread.
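For reference, the pattern we rely on with the databases that do support it is the standard JDBC getGeneratedKeys one. Roughly, as a sketch (table and column names are simplified, and the Connection is assumed to come from our pool):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Sketch of the batched getGeneratedKeys pattern described above.
// Table/column names are simplified; "con" is assumed to come from a pool.
public class GeneratedKeysSketch {
    static final String INSERT_SQL =
        "INSERT INTO logging_event (event_message) VALUES (?)";

    static List<Long> insertAndFetchIds(Connection con, List<String> messages)
            throws SQLException {
        List<Long> ids = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(
                INSERT_SQL, Statement.RETURN_GENERATED_KEYS)) {
            for (String m : messages) {
                ps.setString(1, m);
                ps.addBatch();      // queue the row, don't execute yet
            }
            ps.executeBatch();      // send the whole batch at once
            // On drivers that support it, this yields one key per batched row.
            try (ResultSet keys = ps.getGeneratedKeys()) {
                while (keys.next()) {
                    ids.add(keys.getLong(1));
                }
            }
        }
        return ids;
    }
}
```

The returned event_ids are then reused as foreign keys when batching the logging_event_property and logging_event_exception rows.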

Anyway, thank you for your assistance. It has been
very helpful.

Ceki Gülcü

     For log4j documentation consider "The complete
log4j manual" 
     ISBN: 2970036908 

--- Oliver Jowett <oliver(at)opencloud(dot)com> wrote:
> Ceki Gulcu wrote:
> > For entries in the logging_event_property and
> > logging_event_exception tables to be meaningful, we
> > absolutely need the generated event_id each time we
> > insert a new logging_event row. We are able to do this
> > by following each logging_event insertion with a query
> > to the database asking for the event_id of the last
> > inserted event. This works for multiple database
> > systems.
> >
> > However, we have discovered that batching multiple
> > insertions gives a very significant boost in
> > performance.
> Note that as it stands this isn't true for the
> postgresql driver; executeBatch() actually just runs
> each query synchronously. Fixing this is at the top of
> my todo list.
> > Thus, we would like to insert say 50 logging_event
> > rows, then multiple logging_event_property rows and
> > then multiple logging_event_exception rows. We are
> > using the JDBC getGeneratedKeys method to obtain the
> > event_ids. Unfortunately, this function is not
> > implemented in Postgresql.
> >
> > Looking at the archives it seems that this
> > functionality requires support from the database back
> > end which does not exist currently. Is that correct?
> It'd probably be possible to implement support for
> returning the OID of 
> inserted rows in a batch update. You can already get
> at that OID for 
> individual inserts via a postgresql-specific
> interface, 
> PGStatement.getLastOID(). However, the OID doesn't
> make a great key 
> anyway, and I think the general move is towards
> omitting OIDs from 
> tables altogether.
> Anything else is going to be harder as the server
> does not provide much 
> in the way of metadata back from a query, and the
> JDBC driver does 
> little parsing of the SQL queries.
> If you don't mind doing DB-specific implementations,
> and you have all of 
> the data for a particular event available at once,
> you might want to 
> look at using CREATE SEQUENCE for the event ID (I
> guess you're already 
> doing this if you want autogenerated keys) and use
> something like:
>    INSERT INTO logging_event(nextval('id'), ...)
>    INSERT INTO logging_event_property(currval('id'), "property 1", ...)
>    INSERT INTO logging_event_property(currval('id'), "property 2", ...)
>    INSERT INTO logging_event_exception(currval('id'), "line 1", ...)
>    INSERT INTO logging_event_exception(currval('id'), "line 2", ...)
> You'd be able to batch those INSERTs so long as you
> keep them grouped 
> correctly.
> This is safe to do even if multiple connections are
> using the sequence 
> concurrently -- see 
> -O
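Oliver's nextval/currval suggestion can be sketched as a plain statement batch. A rough illustration (the sequence and table names here are assumptions; in real code each string would be queued via Statement.addBatch() and sent with executeBatch()):

```java
import java.util.ArrayList;
import java.util.List;

// Builds the grouped INSERT statements for one logging event, following the
// nextval/currval technique quoted above. Sequence/table names are illustrative,
// and values are inlined only for readability.
public class SequenceBatchSketch {
    static List<String> buildEventInserts(String message, List<String> properties) {
        List<String> batch = new ArrayList<>();
        // nextval() draws a fresh id; every later currval() in this session
        // reuses it, so the grouping stays correct even when other connections
        // are using the same sequence concurrently.
        batch.add("INSERT INTO logging_event VALUES (nextval('event_id_seq'), '"
                + message + "')");
        for (String p : properties) {
            batch.add("INSERT INTO logging_event_property VALUES (currval('event_id_seq'), '"
                    + p + "')");
        }
        return batch;
    }

    public static void main(String[] args) {
        // One event with two properties -> three grouped INSERTs.
        buildEventInserts("an event", List.of("p1", "p2"))
                .forEach(System.out::println);
    }
}
```

The one invariant to preserve is ordering: each event's child-row INSERTs must stay grouped immediately after the parent INSERT that calls nextval().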


