Re: RowCount && UseDeclareFetch Performance

From: Ramesh Reddy <rareddy(at)redhat(dot)com>
To: BGoebel <b(dot)goebel(at)prisma-computer(dot)de>
Cc: pgsql-odbc(at)postgresql(dot)org
Subject: Re: RowCount && UseDeclareFetch Performance
Date: 2011-07-05 17:09:42
Message-ID: 1309885782.6267.18.camel@localhost.localdomain
Lists: pgsql-odbc

I recently went through the same issue. Here are my observations.

When "UseDeclareFetch=0" used and you are dealing with large result
sets, then memory usage of driver goes way up and contributes to
slowness. The response back to client application happens after a long
time. The reason for this is, the driver fetches all the rows in the
resultset to client host machine before returning a single row to the
calling application. i.e. the driver is either storing the results in
memory or in some kind of temp file until it gathers all the results.

When "UseDeclareFetch=1" and "Fetch=xxx" defined, then it uses postgres
cursor for queries, then fetches "xxx" result rows at time, and returns
the results to the calling application. Thus the client application gets
initial set of results faster. So, here since the driver is fetching
incrementally it does not know the row count.
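
For illustration, here is a minimal sketch (not from this thread; the DSN
name, credentials, and table are placeholders) of setting these keywords
through SQLDriverConnect, and of the -1 that SQLRowCount reports once the
driver is running in cursor mode:

/* Minimal sketch: connect with UseDeclareFetch=1 and a large Fetch size,
 * then show that SQLRowCount reports -1 in cursor mode.
 * DSN name, credentials, and table name are placeholders. */
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env;
    SQLHDBC dbc;
    SQLHSTMT stmt;
    SQLLEN rows;
    SQLCHAR connStr[] = "DSN=pgtest;UID=postgres;PWD=secret;"
                        "UseDeclareFetch=1;Fetch=10000;";

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER) SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    SQLDriverConnect(dbc, NULL, connStr, SQL_NTS,
                     NULL, 0, NULL, SQL_DRIVER_NOPROMPT);

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *) "SELECT * FROM big_table", SQL_NTS);
    SQLRowCount(stmt, &rows);
    printf("SQLRowCount: %ld\n", (long) rows);   /* -1 when a cursor is used */

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}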

I used "UseDeclareFetch=1" with prepare statement to get to the some of
the resultset metadata, then re-executed with non-prepared to make use
of the cursors to fetch the data. Use "Fetch=10000" or some thing large
so that you are not making too many round trips for data. Also make sure
there is absolutely no debug/trace logs turned on.
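
To make that two-step approach concrete, here is a rough sketch (not
Ramesh's actual code; the query, table, and error handling are simplified
placeholders): prepare once to read the column metadata, then execute the
same query non-prepared on a fresh statement so the driver's DECLARE/FETCH
cursor streams the rows in Fetch-sized batches.

/* Sketch of the prepare-for-metadata, then execute-for-data approach
 * described above.  Assumes an open connection `dbc` made with
 * UseDeclareFetch=1;Fetch=10000; the table name is a placeholder. */
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

void describe_then_fetch(SQLHDBC dbc)
{
    SQLHSTMT meta, data;
    SQLSMALLINT ncols, i;
    SQLCHAR query[] = "SELECT * FROM big_table";

    /* Step 1: prepare only -- read column metadata without pulling rows. */
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &meta);
    SQLPrepare(meta, query, SQL_NTS);
    SQLNumResultCols(meta, &ncols);
    for (i = 1; i <= ncols; i++) {
        SQLCHAR name[128];
        SQLSMALLINT namelen, type, digits, nullable;
        SQLULEN size;
        SQLDescribeCol(meta, i, name, sizeof(name), &namelen,
                       &type, &size, &digits, &nullable);
        printf("column %d: %s\n", i, name);
    }
    SQLFreeHandle(SQL_HANDLE_STMT, meta);

    /* Step 2: execute non-prepared on a fresh statement; with
     * UseDeclareFetch=1 the driver fetches Fetch-sized batches through
     * a server-side cursor as SQLFetch is called. */
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &data);
    SQLExecDirect(data, query, SQL_NTS);
    while (SQL_SUCCEEDED(SQLFetch(data))) {
        /* SQLBindCol/SQLGetData on the columns goes here */
    }
    SQLFreeHandle(SQL_HANDLE_STMT, data);
}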

Hope this helps.

Ramesh..

On Tue, 2011-07-05 at 09:44 -0700, BGoebel wrote:
> I am trying to optimize performance on larger SELECT result sets:
>
> If I connect with *...UseDeclareFetch=0...*, SQLGetDiagField(...
> SQL_DIAG_CURSOR_ROW_COUNT...) or SQLRowCount delivers the number of rows
> found by the SELECT. Reading the row count is needed by my ODBC
> wrapper.
>
> BUT: The SELECT needs MUCH more time (vs. UseDeclareFetch=1) and a lot of
> memory (~100 MByte) is eaten up until the cursor is closed. As far as I
> understand, the whole result set is read by the client, which may also be a
> problem on slow connections to the server. Therefore the number of cached
> rows should be controlled by *Fetch=XXX* assigned to the connection string.
> But whatever I assign to "Fetch", the return time for the SELECT and the
> memory usage are nearly the same.
>
> Now I have tried *...UseDeclareFetch=1...*. The return time is MUCH better.
>
> BUT: Now SQLRowCount returns -1.
>
> One idea: executing SELECT ..., (SELECT count(*) WHERE {MyConditions}) AS
> __ROWCOUNT WHERE {MyConditions}. I suppose that would double the time for a
> complicated evaluation.
> Any ideas how to get the row count?
> Or how to optimize SELECTs with UseDeclareFetch=0?
>
> I'm quite a newbie to PostgreSQL, and I am nearly sure that I have
> overlooked something.
>
>
> Any help would be greatly welcome!
>
>
>
> --
> View this message in context: http://postgresql.1045698.n5.nabble.com/RowCount-UseDeclareFetch-Performance-tp4553904p4553904.html
> Sent from the PostgreSQL - odbc mailing list archive at Nabble.com.
>
