Re: Async execution of postgres_fdw.

From: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
To: mkellycs(at)gmail(dot)com
Cc: ashutosh(dot)bapat(at)enterprisedb(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Async execution of postgres_fdw.
Date: 2015-01-21 07:24:48
Message-ID: 20150121.162448.23571876.horiguchi.kyotaro@lab.ntt.co.jp
Lists: pgsql-hackers

Hello, thank you for looking at this, and sorry that the last patch
was buggy so that the adaptive fetch size did not work.

The attached is the fixed patch. It apparently improves performance
for the test case shown in the previous mail, in which the average
tuple length is about 140 bytes.
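
For readers following along, here is a minimal sketch of one way an
adaptive fetch size could be derived from the average tuple length
observed so far, under a fixed per-batch memory budget. The function,
constants, and bounds are illustrative assumptions of mine, not code
taken from the attached patch:

    #include <stdio.h>

    /* Hypothetical bounds, echoing the numbers discussed below. */
    #define MIN_FETCH_SIZE   100     /* conservative floor */
    #define MAX_FETCH_SIZE   10000   /* hard cap against width misestimates */

    /*
     * Pick the next fetch size so that one batch stays within
     * mem_budget bytes, given the average tuple length seen so far.
     */
    static int
    next_fetch_size(double mem_budget, double avg_tuple_len)
    {
        int n;

        if (avg_tuple_len <= 0)
            return MIN_FETCH_SIZE;   /* no tuples seen yet */

        n = (int) (mem_budget / avg_tuple_len);
        if (n < MIN_FETCH_SIZE)
            n = MIN_FETCH_SIZE;
        if (n > MAX_FETCH_SIZE)
            n = MAX_FETCH_SIZE;
        return n;
    }

    int
    main(void)
    {
        /* 1MB budget at the ~140-byte tuples above => ~7489 rows. */
        printf("%d\n", next_fetch_size(1024.0 * 1024.0, 140.0));
        return 0;
    }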

21 Jan 2015 05:22:34 +0000, Matt Kelly <mkellycs(at)gmail(dot)com> wrote in <CA+KcUkg4cvDLf4v0M9_rVv_ZuAsG1oDHPj_YvczJa6w2nSkwNQ(at)mail(dot)gmail(dot)com>
> I'm trying to compare v5 and v6 on my laptop right now. Apparently my
> laptop is quite a bit faster than your machine, because the tests
> complete in roughly 3.3 seconds.
>
> I added more data and didn't see anything other than noise. (Then
> again, the queries were dominated by the disk sort, so I should retry
> with larger work_mem.) I'll try it again when I have more time to
> play with it. I suspect the benefits would be clearer over a network.
>
> Larger than default work_mem, yes, but I think one of the prime use
> cases for the fdw is more warehouse-style situations (PostgresXL
> style use cases). In those cases, work_mem might reasonably be set to
> 1GB. Then even if you have 10KB rows you can fetch a hundred thousand
> rows and still stay within work_mem. A simpler change would be to
> vary it with respect to work_mem.

Agreed about the nature of the typical workload for postgres_fdw.
But I think the server itself, including postgres_fdw, should not
crash even on a sudden explosion of tuple length. From that
standpoint a fetch size of 100 seems safe enough, 1000 seems
suspicious, and 10000 looks dangerous: if tuples suddenly grow to
1MB each, a batch of 100 claims about 100MB, while a batch of 10000
claims about 10GB.

> Half-baked idea: I know it's the wrong time in the execution phase,
> but if you are using remote estimates for cost, there should also be
> a row width estimate, which I believe is based on pg_statistic and
> its mean column width.

It reduces the chance of claiming an unexpected amount of memory,
but the chance still remains.
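
If the planner's width estimate were used, a natural role for it
would be seeding the very first batch, before any tuples have been
observed; later batches could then switch to the measured average.
A hedged sketch, reusing the hypothetical next_fetch_size() and
MIN_FETCH_SIZE from the earlier sketch:

    /*
     * Hypothetical: seed the first fetch from the planner's row-width
     * estimate (ultimately derived from pg_statistic's mean column
     * widths) before any tuples have been observed.
     */
    static int
    initial_fetch_size(double mem_budget, int plan_width)
    {
        if (plan_width <= 0)
            return MIN_FETCH_SIZE;   /* no usable estimate: safe floor */
        return next_fetch_size(mem_budget, (double) plan_width);
    }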

> It's actually a pity that there is no way to set fetch sizes based on
> "give me as many tuples as will fit in less than x amount of memory",
> because that is almost always exactly what you want. Even when
> writing application code, I've never actually wanted precisely 10,000
> rows; I've always wanted "a reasonable size chunk that could fit into
> memory" and then backed my way into how many rows I wanted. If we
> were to extend FETCH to support syntax like FETCH FORWARD '10MB' FROM
> ...; then we would eliminate the need to estimate the value on the
> fly.

I hadn't thought about that. It makes sense, at least to me :) There
would be many cases in which the *amount* of data is more crucial
than the number of rows. I'll work on it.
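
Until FETCH itself learns a byte-denominated form, a client (or the
fdw) can approximate one by resizing each batch from the bytes the
previous batch actually occupied. A rough libpq sketch, assuming a
cursor named c has already been declared in an open transaction and
omitting error handling; the 100/10000 bounds are the illustrative
numbers from above:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    fetch_by_bytes(PGconn *conn, size_t byte_budget)
    {
        int fetch_rows = 100;        /* conservative first batch */

        for (;;)
        {
            char      cmd[64];
            PGresult *res;
            size_t    bytes = 0;
            int       ntup, nfld, i, j;

            snprintf(cmd, sizeof(cmd), "FETCH FORWARD %d FROM c", fetch_rows);
            res = PQexec(conn, cmd);
            ntup = PQntuples(res);
            nfld = PQnfields(res);

            /* Measure roughly what this batch occupied. */
            for (i = 0; i < ntup; i++)
                for (j = 0; j < nfld; j++)
                    bytes += PQgetlength(res, i, j);

            /* ... hand the rows to the caller here ... */

            PQclear(res);
            if (ntup == 0)
                break;               /* cursor exhausted */

            /* Resize the next batch toward the byte budget. */
            fetch_rows = (int) (byte_budget / (bytes / ntup + 1));
            if (fetch_rows < 100)
                fetch_rows = 100;
            if (fetch_rows > 10000)
                fetch_rows = 10000;
        }
    }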

> The async stuff, however, is a huge improvement over the last time I
> played with the fdw. The two active postgres processes were easily
> consuming a core and a half of CPU. I think it's not worth tying
> these two things together. It's probably worth making these two
> separate discussions and separate patches.

Yes, they can be separated, and they should be. I'll split them
after this.

> - Matt Kelly
>
> *Just sanity-checking myself: shutting down the server, applying a
> different patch, running 'make clean install' in postgres_fdw, and
> then restarting the server should obviously be sufficient to make
> sure it's running the new code, because that is all linked at
> runtime, right?

Yes, that's enough, and I did the same. This patch touches only
postgres_fdw.

regards,

--
Kyotaro Horiguchi
NTT Open Source Software Center

Attachment: 0001-Asynchronous-execution-of-postgres_fdw-v7.patch (text/x-patch, 41.9 KB)
