client performance v.s. server statistics

From: Zhou Han <zhouhan(at)gmail(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: client performance v.s. server statistics
Date: 2012-02-15 03:59:36
Message-ID: CADtzDCkR8O5OJ5bsVs8Lmr3yoi=uuA8GbO-QuNDsnrX9--WubA@mail.gmail.com
Lists: pgsql-hackers pgsql-performance

Hi,

I am investigating a performance problem encountered after porting an old
embedded DB to PostgreSQL. Since the system is real-time sensitive, we are
concerned about per-query cost. In our environment a sequential scan
(select * from ...) over a table with tens of thousands of records costs 1 -
2 seconds, regardless of whether we measure through the ODBC driver or via
the "timing" result shown in the psql client (which in turn relies on
libpq). However, EXPLAIN ANALYZE, and the statistics in the
pg_stat_statements view, report the query as costing less than 100 ms.
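For reference, the two measurements being compared look like this in psql (big_table is a placeholder name, not our actual table):

```sql
-- Server-side execution time only; the result rows are discarded,
-- not sent to the client:
EXPLAIN ANALYZE SELECT * FROM big_table;

-- Wall-clock time as seen by the client, including protocol transfer
-- and libpq row handling:
\timing on
SELECT * FROM big_table;
```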

So, is the client interface (ODBC, libpq) cost mainly due to TCP? Do
pg_stat_statements and EXPLAIN ANALYZE include the cost of copying tuples
from shared buffers into the result set?
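As a rough illustration of how much raw transfer and client-side buffering alone can cost, here is a minimal loopback sketch. The row count and row size are made-up stand-ins for "tens of thousands of records", not measurements from PostgreSQL or its wire protocol:

```python
import socket
import threading
import time

# Hypothetical volume: ~50,000 rows of ~100 bytes each, roughly the
# scale of "select * from ..." on a table with tens of thousands of rows.
ROWS = 50_000
ROW = b"x" * 100

def server(listener):
    # Accept one connection and stream all rows, then close.
    conn, _ = listener.accept()
    for _ in range(ROWS):
        conn.sendall(ROW)
    conn.close()

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

start = time.perf_counter()
client = socket.create_connection(("127.0.0.1", port))
received = 0
while True:
    chunk = client.recv(65536)
    if not chunk:
        break
    received += len(chunk)
client.close()
elapsed = time.perf_counter() - start
print(f"transferred {received} bytes over loopback in {elapsed:.3f}s")
```

Even on localhost the transfer is not free, and a real client additionally pays for protocol parsing and per-row conversion in the driver, none of which shows up in EXPLAIN ANALYZE.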

Could you experts share your views on this big gap? Any suggestions for
optimization?

P.S. Our original embedded DB provided a "fastpath" interface that reads
records directly from shared memory, giving extremely fast real-time
access (at the cost, of course, of features such as consistency).

Best regards,
Han
