Re: query overhead

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andy Halsall <halsall_andy(at)hotmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: query overhead
Date: 2012-07-13 16:15:07
Message-ID: 3893.1342196107@sss.pgh.pa.us
Lists: pgsql-performance

Andy Halsall <halsall_andy(at)hotmail(dot)com> writes:
> I've written an immutable stored procedure that takes no parameters and returns a fixed value, to try to determine the round-trip overhead of a query to PostgreSQL. The call to the SP is made using libpq. Everything is local, over UNIX domain sockets.

> Client measurements suggest ~150-200 microseconds to call the SP and get the answer back.

That doesn't sound out of line for what you're doing, which appears to
include parsing/planning a SELECT command. Some of that overhead could
probably be avoided by using a prepared statement instead of a plain
query. Or you could try using the "fast path" API (see libpq's PQfn)
to invoke the function directly without any SQL query involved.
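Purely as an illustration (this is not Andy's code, and get_fixed_value() is a
stand-in for whatever the real SP is called), the prepared-statement route
looks roughly like this in libpq:

    /* Sketch: call a no-argument function through a prepared statement so
     * the SELECT is parsed and planned only once per connection.
     * get_fixed_value() is a hypothetical name; substitute the actual SP. */
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* Empty conninfo: connect with defaults over the local
         * UNIX domain socket, as in the setup being measured. */
        PGconn *conn = PQconnectdb("");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* Parse and plan the query once ... */
        PGresult *res = PQprepare(conn, "call_sp",
                                  "SELECT get_fixed_value()", 0, NULL);
        if (PQresultStatus(res) != PGRES_COMMAND_OK) {
            fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
            PQclear(res);
            PQfinish(conn);
            return 1;
        }
        PQclear(res);

        /* ... then every subsequent call skips the parse/plan step. */
        for (int i = 0; i < 1000; i++) {
            res = PQexecPrepared(conn, "call_sp", 0, NULL, NULL, NULL, 0);
            if (PQresultStatus(res) != PGRES_TUPLES_OK) {
                fprintf(stderr, "exec failed: %s", PQerrorMessage(conn));
                PQclear(res);
                break;
            }
            PQclear(res);
        }

        PQfinish(conn);
        return 0;
    }
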

Really, however, the way to make things fly is to get rid of the round
trip overhead in the first place by migrating more of your application
logic into the stored procedure. I realize that that might require
pretty significant rewrites, but if you can't tolerate per-query
overheads in the 100+ usec range, that's where you're going to end up.
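To make the round-trip point concrete (the function names and the three-step
sequence below are invented for the example, and error handling is omitted),
the saving comes from replacing several small calls with one server-side call:

    /* Illustrative only: what matters is the number of client/server
     * round trips, not the SQL itself. */
    #include <libpq-fe.h>

    /* Before: three round trips, each paying the ~100+ usec per-query cost. */
    static void three_round_trips(PGconn *conn)
    {
        PQclear(PQexec(conn, "SELECT lookup_customer(42)"));
        PQclear(PQexec(conn, "SELECT update_balance(42, 10)"));
        PQclear(PQexec(conn, "SELECT log_activity(42)"));
    }

    /* After: one round trip into a server-side function (e.g. PL/pgSQL)
     * that performs all three steps itself. */
    static void one_round_trip(PGconn *conn)
    {
        PQclear(PQexec(conn, "SELECT process_customer(42, 10)"));
    }
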

If you don't like any of those answers, maybe Postgres isn't the
solution for you. You might consider an embeddable database such
as SQLite.

regards, tom lane
