Improve efficiency of dblink by using libpq's new row processor API.
This patch provides a test case for libpq's row processor API.
contrib/dblink can deal with very large result sets by dumping them into
a tuplestore (which can spill to disk) --- but until now, the intermediate
storage of the query result in a PGresult meant memory bloat for any large
result. Now we use a row processor to convert the data to tuple form and
dump it directly into the tuplestore.
A limitation is that this only works for plain dblink() queries, not
dblink_send_query() followed by dblink_get_result(). In the latter
case we don't know the desired tuple rowtype soon enough. While hackish
workarounds are possible, a different user-level API would
probably be a better answer.
Kyotaro Horiguchi, reviewed by Marko Kreen and Tom Lane
contrib/dblink/dblink.c | 421 ++++++++++++++++++++++++++++++++++++++--------
doc/src/sgml/dblink.sgml | 20 ++-
2 files changed, 366 insertions(+), 75 deletions(-)