Re: Planner debug views

From: Qingqing Zhou <zhouqq(dot)postgres(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Tatsuo Ishii <ishii(at)postgresql(dot)org>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Planner debug views
Date: 2015-07-28 18:19:36
Message-ID: CAJjS0u1ycvbH+590=mFgyYp8iAFUbpC7cvXfN3oOanESgC_zQA@mail.gmail.com

On Mon, Jul 27, 2015 at 8:20 PM, Alvaro Herrera
<alvherre(at)2ndquadrant(dot)com> wrote:
>
> I think this is a pretty neat idea, but I'm not sure this user interface
> is a good one. Why not have a new option for EXPLAIN, so you would call
> "EXPLAIN (planner_stuff=on)" and it returns this as a resultset?

Thank you for the feedback.

Yeah, I agree that piggybacking on EXPLAIN sounds like a better interface. One
good thing about a GUC is that it is global, so it is visible deep inside the
planner - for example in add_path(), where we add the tracking of discarded
paths. If we go through EXPLAIN, we would either have to borrow another global
variable or add a flag to several planner data structures so that it can
penetrate that deep.
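
To illustrate what I mean (the GUC name debug_planner_paths and the helper
report_discarded_path() below are placeholders I made up, not the actual
patch):

    /* Hypothetical global, set through the normal GUC machinery. */
    bool        debug_planner_paths = false;

    /*
     * Called from add_path() at the point where a dominated path is about
     * to be thrown away.  Because the GUC is a plain global, it is visible
     * here without threading a flag through PlannerInfo or the Path nodes.
     */
    static void
    maybe_report_discarded(Path *old_path)
    {
        if (debug_planner_paths)
            report_discarded_path(old_path);    /* placeholder helper */
    }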

Another thing is that with a GUC we can mark it internal (PGC_INTERNAL), so
its compatibility maintenance can be relaxed, which seems appropriate for an
advanced option like this.
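
Roughly, the guc.c entry could follow the other boolean GUCs (again, the name,
group and description below are only illustrative):

    {
        {"debug_planner_paths", PGC_INTERNAL, DEVELOPER_OPTIONS,
            gettext_noop("Dumps planner path information for debugging."),
            NULL
        },
        &debug_planner_paths,
        false,
        NULL, NULL, NULL
    },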

> This idea of creating random CSV files seems odd and inconvenient in the
> long run. For instance it fails if you have two sessions doing it
> simultaneously; you could tack the process ID at the end of the file
> name to prevent that problem, but then the foreign table breaks each
> time.

The reason to use a CSV file is something of a compromise. We do have other
options, such as passing the data to pgstat or persisting it in shared memory
or a heap, but they all have their own issues. Any suggestions here?

The file name is not random; it is fixed, so we can create the foreign table
once and use it afterwards - I actually want to push the definitions into
system_views.sql. The file is opened with O_APPEND, the same way as the log
files, so concurrent writes are serialized. Reads could be problematic,
though, since there is no atomicity guarantee between reads and writes. This
is, however, a general issue with file_fdw, as the file is outside the control
of the core. We should expect a query to return format errors under concurrent
read/write, and a retry should resolve the issue.
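
For reference, the write side is just the usual append-only pattern; a minimal
sketch (the helper name is illustrative and error handling is trimmed):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /*
     * Append one complete CSV line.  With O_APPEND each write() goes to
     * the current end of file, so lines from concurrent backends do not
     * clobber each other.  A concurrent reader may still see a line that
     * is only partially written, which is the format-error-and-retry case
     * described above.
     */
    static int
    append_csv_line(const char *path, const char *line)
    {
        int         fd;
        ssize_t     rc;

        fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0600);
        if (fd < 0)
            return -1;

        rc = write(fd, line, strlen(line));
        close(fd);

        return (rc < 0) ? -1 : 0;
    }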

Thanks,
Qingqing
