Re: Re: JDBC Performance

From: Gunnar Rønning <gunnar(at)candleweb(dot)no>
To: "Keith L(dot) Musser" <kmusser(at)idisys(dot)com>
Cc: "PGSQL-General" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Re: JDBC Performance
Date: 2000-09-29 20:16:25
Message-ID: x6og17c5t2.fsf@thor.candleweb.no
Lists: pgsql-general

"Keith L. Musser" <kmusser(at)idisys(dot)com> writes:

> I'm thinking caching byte arrays on a per-connection basis is the way to
> go.
>
> However, how much difference do you expect this to make? How many byte
> arrays to you allocate and destroy per SQL statement? And how big are
> the arrays? How much memory will they occupy per open connection?
>

The current algorithm is greedy and never frees anything, so how many
arrays are cached depends on the size of the result set. A result set
requires one byte array for every value in every column.
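For reference, the kind of cache I have in mind looks roughly like this.
This is only a minimal sketch, not the actual driver code; the class and
method names are made up:

    import java.util.ArrayList;
    import java.util.HashMap;

    // One instance per Connection: hands out byte arrays by requested
    // size and, being greedy, never frees anything it has cached.
    public class ByteArrayCache {
        // maps array length -> list of free arrays of that length
        private final HashMap sizeToArrays = new HashMap();

        public synchronized byte[] allocate(int size) {
            Integer key = new Integer(size);
            ArrayList free = (ArrayList) sizeToArrays.get(key);
            if (free != null && !free.isEmpty()) {
                // reuse a cached array instead of allocating a new one
                return (byte[]) free.remove(free.size() - 1);
            }
            return new byte[size]; // cache miss: allocate fresh
        }

        public synchronized void release(byte[] array) {
            Integer key = new Integer(array.length);
            ArrayList free = (ArrayList) sizeToArrays.get(key);
            if (free == null) {
                free = new ArrayList();
                sizeToArrays.put(key, free);
            }
            free.add(array); // greedy: kept forever, never discarded
        }
    }

The point is simply that a result set returns its arrays to the cache
when it is closed, so the next query on the same connection allocates
almost nothing.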

> Will this really make a big difference?

My web application improved its throughput/execution speed by 50%. I think
that is quite good, considering that JDBC is not the only bottleneck in my
application. I also saw a complete shift in where the JDBC part of the
application spent its time. Earlier the most significant part was the
allocation of byte arrays; in the new implementation that part is reduced
dramatically, and the new bottlenecks are byte-to-char conversions (done
when you retrieve values from the result set) and reading data from the
database. I don't think the reading can be made much faster, though
cursored results could help in situations where you don't actually need
the entire result set. But cursors might also add overhead for other
queries; I know too little about cursors in Postgres yet to make any
qualified statement on that.
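If anyone wants to experiment, explicit cursors from JDBC look roughly
like this. This is a sketch only, not something I have measured; the
database, table and cursor names are made up:

    import java.sql.*;

    public class CursorFetch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection con = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");
            con.setAutoCommit(false); // cursors only live inside a transaction
            Statement st = con.createStatement();
            st.execute("DECLARE mycursor CURSOR FOR SELECT * FROM bigtable");
            // pull only the first 100 rows instead of the whole result set
            ResultSet rs = st.executeQuery("FETCH 100 FROM mycursor");
            while (rs.next()) {
                // getString() is where the byte-to-char conversion happens
                System.out.println(rs.getString(1));
            }
            st.execute("CLOSE mycursor");
            con.commit();
            con.close();
        }
    }

Whether the extra round trips for FETCH outweigh the savings will depend
on how much of the result set you actually consume.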

Regards,

Gunnar
