Re: query overhead

From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: Andy Halsall <halsall_andy(at)hotmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: query overhead
Date: 2012-07-16 14:43:57
Message-ID: 500428AD.5060400@ringerc.id.au
Lists: pgsql-performance

On 07/16/2012 06:13 PM, Andy Halsall wrote:
> Thanks for the responses. I take the points - the times we're dealing
> with are very small. Sorry but I'm a bit confused by the suggestions
> around function types / prepared statements, but probably haven't been
> clear in my question: I'm invoking a PL/PgSQL function from libpq, for
> example the no_op mentioned in first post does:
>
> CREATE OR REPLACE FUNCTION sp_select_no_op() RETURNS integer AS
> '
> begin
> return 1;
> end
> '
> language 'plpgsql' IMMUTABLE;
>
> My understanding was that the plan for this would be prepared once and
> reused. So no advantage in a prepared statement? Also no advantage in
> making this a plain SQL function as these don't get cached?
AFAIK SQL functions don't get cached plans - though I'm not 100% sure
about that. They can be a lot cheaper for wrapping simple operations,
though, since simple SQL functions can be inlined into the calling query.
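
For example, a plain SQL version of your no-op (untested sketch, just to
show the shape) would be:

CREATE OR REPLACE FUNCTION sp_select_no_op() RETURNS integer AS
$$
    SELECT 1;
$$
LANGUAGE sql IMMUTABLE;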

I'm just questioning why you're going immediately to PL/PgSQL - or
stored procs at all. It might be a bigger hammer than you need.

What sorts of operations will your application be performing? Is there
any reason it can't directly use simple INSERT, UPDATE, DELETE and
SELECT statements, possibly with PREPARE and EXECUTE at libpq level?

If you're trying to go as fast as humanly possible while emulating an
ISAM-like access model with lots of small, fast accesses, then PQprepare
of simple SELECT/INSERT/UPDATE/DELETE statements, followed by
PQexecPrepared for each call, is likely to be hard to beat.
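
The SQL-level equivalent of that pattern looks roughly like this (table
and column names are invented here, purely to show the shape);
PQprepare / PQexecPrepared do the same thing through the protocol without
re-sending the statement text each time:

-- prepare once per connection
PREPARE get_rec (integer) AS
    SELECT * FROM my_records WHERE id = $1;

-- then execute it as many times as you like with different keys
EXECUTE get_rec(42);
EXECUTE get_rec(43);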

If you're working with ISAM-like access though, cursors may well be very
helpful for you. It's a pity for your app that Pg doesn't support
cursors that see changes committed after cursor creation, since these
are ideal when emulating ISAM "next record" / "previous record" access
models. They're still suitable for tasks where you know the app doesn't
need to see concurrently modified data, though.
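
As a rough sketch of the navigation pattern (again with an invented table
name), a scrollable cursor lets you walk forward and backward within one
transaction:

BEGIN;
DECLARE rec_cur SCROLL CURSOR FOR
    SELECT * FROM my_records ORDER BY id;
FETCH NEXT FROM rec_cur;   -- "next record"
FETCH PRIOR FROM rec_cur;  -- "previous record"
-- ... more FETCH / MOVE calls as the app navigates ...
COMMIT;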

Can you show a typical sequence of operations for your DB?

Also, out of interest, are you migrating from a traditional shared-file
ISAM-derived database system, or something funkier?

>
> An embedded database such as SQLite is a good idea except that we'll be
> multi-process and my understanding is that they lock the full database
> on any write, which is off-putting.

Write concurrency in SQLite is miserable, yeah, but you get very fast
shared-access reads as a trade-off and it's much closer to your app's
old DB design. It depends a lot on your workload.

--
Craig Ringer
