Re: query overhead

From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: Andy Halsall <halsall_andy(at)hotmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: query overhead
Date: 2012-07-14 03:28:26
Message-ID: 5000E75A.6030108@ringerc.id.au
Lists: pgsql-performance

On 07/11/2012 07:46 PM, Andy Halsall wrote:
>
> I've written an Immutable stored procedure that takes no parameters
> and returns a fixed value to try and determine the round trip overhead
> of a query to PostgreSQL. Call to sp is made using libpq. We're all
> local and using UNIX domain sockets.
>
PL/PgSQL or SQL stored proc? There's a definite calling overhead for
PL/PgSQL compared to plain SQL functions. SQL functions in turn cost
more than a direct statement.

These costs aren't big. They're massively outweighed by any kind of disk
access or any non-trivial query. They start to add up if you have a lot
of procs that wrap a simple "SELECT * FROM x WHERE x.id = $1" though.
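
For a rough sense of the difference, something like the following can be
timed in psql with \timing. This is only a sketch: the function names are
made up, and the exact numbers will vary by machine and version. Giving
the functions a parameter stops the planner from constant-folding the
IMMUTABLE call down to a single evaluation.

    CREATE FUNCTION noop_sql(i integer) RETURNS integer
        LANGUAGE sql IMMUTABLE AS $$ SELECT $1; $$;

    CREATE FUNCTION noop_plpgsql(i integer) RETURNS integer
        LANGUAGE plpgsql IMMUTABLE AS $$ BEGIN RETURN i; END; $$;

    \timing
    SELECT count(noop_sql(g))     FROM generate_series(1, 100000) g;
    SELECT count(noop_plpgsql(g)) FROM generate_series(1, 100000) g;

Simple SQL functions like noop_sql can also be inlined by the planner,
which is part of why they're cheaper than PL/PgSQL.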

> Client measures are suggesting ~150-200 microseconds to call sp and
> get the answer back
0.15 to 0.2 milliseconds?

That's ... um ... fast. Presumably that's during a loop where your no-op
is run repeatedly without connection setup costs, etc.
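
One way to sanity-check that number without writing your own libpq loop
is pgbench with a custom script. A sketch, reusing the hypothetical
noop_sql from above (the database name is illustrative):

    $ echo "SELECT noop_sql(1);" > noop.sql
    $ pgbench -n -f noop.sql -c 1 -T 10 mydb

With a single client the reported tps is effectively 1/latency, so
5000 tps would mean roughly 200 microseconds per round trip.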

>
> ping to loopback returns in ~20 microseconds (I assume domain sockets
> are equivalent).
UNIX domain sockets are typically at least as fast as TCP over loopback,
with somewhat lower overhead.
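
If you want to compare the two transports directly: libpq uses a UNIX
domain socket when host is a directory path and TCP when it's a name or
address, e.g. (paths are illustrative):

    $ psql "host=/var/run/postgresql dbname=test"    # UNIX domain socket
    $ psql "host=127.0.0.1 dbname=test"              # TCP over loopback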

> So it looks like a local no-op overhead of at least 150 microseconds
> which would leave us struggling.
> Could someone please let me know if this is usual and if so where the
> time's spent?
> Short of getting a faster server, is there anything I can do to
> influence this?

I'm not sure how much a faster server would help with single-query
response time. It'll help with throughput under many parallel queries,
but it's unlikely to dramatically speed up a single query, especially a
tiny, lightweight one.

The Xeon 7040:
http://ark.intel.com/products/27226/Intel-Xeon-Processor-7040-(4M-Cache-3_00-GHz-667-MHz-FSB)
is not the newest beast out there, but it's not exactly slow.

Honestly, PostgreSQL's focus is on performance with bigger units of
work, not on being able to return a response to a tiny query in
microseconds. If you are converting an application that has microsecond
response time requirements and hammers its database with millions of
tiny queries, PostgreSQL is probably not going to be your best bet.

If you're able to adapt the app to use set-oriented access patterns
instead of looping, e.g. instead of (pseudocode):

customer_ids = [ ... array from somewhere ... ]
for customer_id in customer_ids:
    c = SELECT * FROM customer WHERE customer_id = :customer_id
    if c.some_var:
        UPDATE customer SET some_other_var = 't'
            WHERE customer_id = :customer_id

you can do:

UPDATE customer SET some_other_var = [expression] WHERE [expression]

then you'll get much better results from Pg.
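
For the loop above, that rewrite might look like this (a sketch; the
column names are invented, and :customer_ids stands for an array
parameter):

    UPDATE customer
    SET some_other_var = 't'
    WHERE some_var
      AND customer_id = ANY (:customer_ids);

One statement means one round trip and one plan, and the executor scans
the table once instead of doing 2N individual lookups.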

--
Craig Ringer
