Re: patch: SQL/MED(FDW) DDL

From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Shigeru HANADA <hanada(at)metrosystems(dot)co(dot)jp>, Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, SAKAMOTO Masahiko <sakamoto(dot)masahiko(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: patch: SQL/MED(FDW) DDL
Date: 2010-10-05 15:15:00
Message-ID: 4CAB40F4.8030500@enterprisedb.com
Lists: pgsql-hackers

On 05.10.2010 17:56, Robert Haas wrote:
> On Tue, Oct 5, 2010 at 10:41 AM, Tom Lane<tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> (I'd also say that your performance estimate is miles in advance of any
>> facts; but even if it's true, the caching ought to be inside the FDW,
>> because we have no clear idea of what it will need to cache.)
>
> I can't imagine how an FDW could possibly be expected to perform well
> without some persistent local data storage. Even assuming the remote
> end is PG, to return a cost it's going to need the contents of
> pg_statistic cached locally, for each remote table. Do you really
> think it's going to work to incur that overhead once per table per
> backend startup?

It doesn't seem completely out of the question to me. Sure, it's
expensive, but it's only incurred the first time a remote table is
accessed in a session. Local persistent storage would be nice, but many
applications might prefer not to use it anyway, to ensure that fresh
statistics are used.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
