Re: surprisingly expensive join planning query

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: surprisingly expensive join planning query
Date: 2019-12-02 22:54:11
Message-ID: 18669.1575327251@sss.pgh.pa.us
Lists: pgsql-hackers

Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> writes:
>> (Speaking of which, I don't quite see why this would have been a problem
>> once you got past geqo_threshold; the context resets that GEQO does
>> should've kept things under control.)

> Not sure I follow. geqo_threshold is 12 by default, and the OOM issues
> are happening way before that.

Ah, right. But would the peak memory usage keep growing with more than 12
rels?
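
For reference, a minimal sketch (not from this thread) of how geqo_threshold
gates that: GEQO only takes over once the FROM list has at least
geqo_threshold items, and below it the standard exhaustive search runs
without the GEQO context resets mentioned above. The lowered value is just
an illustrative choice.

    SHOW geqo_threshold;      -- 12 by default
    SET geqo_threshold = 8;   -- hypothetical setting: push GEQO in earlier,
                              -- to check whether its context resets cap the
                              -- peak memory usage for the problem query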

> It might be that one reason why this example is so bad is that the CTEs
> have *exactly* the same size, so the different join orders are bound to
> be costed exactly the same, I think.

Hmm. I didn't really look into exactly why this example is so awful.
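
Purely as a hypothetical illustration of the shape being discussed (not the
actual query from the thread): a handful of identical CTE references joined
together, so that every join order gets the same cost estimate.

    WITH c AS (SELECT i FROM generate_series(1, 1000) i)
    SELECT count(*)
    FROM c a
    JOIN c b USING (i)
    JOIN c d USING (i)
    JOIN c e USING (i);   -- all inputs are identical, so the different join
                          -- orders are bound to be costed exactly the same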

regards, tom lane
