From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Add more information_schema columns
Date: 2018-02-06 07:15:49
Message-ID: 20180206071549.GB74355@paquier.xyz
Lists: pgsql-hackers
On Mon, Feb 05, 2018 at 08:59:31PM -0500, Peter Eisentraut wrote:
> Here is a patch that fills in a few more information schema columns, in
> particular those related to the trigger transition tables feature.
It is unfortunate that this cannot be backpatched. Here are a few
comments; the logic and the definitions look correct to me.
> - CAST(null AS cardinal_number) AS action_order,
> -- XXX strange hacks follow
> + CAST(rank() OVER (PARTITION BY n.oid, c.oid, em.num, (t.tgtype & 1 & 66) ORDER BY t.tgname) AS cardinal_number) AS action_order,
Better to use parentheses for (t.tgtype & 1 & 66) perhaps? You may want
to add a comment that this filters per row/statement first, and then by
after/before/instead-of, which is what the 1 and the 66 are for.
> - CAST(null AS sql_identifier) AS action_reference_old_table,
> - CAST(null AS sql_identifier) AS action_reference_new_table,
> + CAST(tgoldtable AS sql_identifier) AS action_reference_old_table,
> + CAST(tgnewtable AS sql_identifier) AS action_reference_new_table,
> +SELECT trigger_name, event_manipulation, event_object_schema,
> event_object_table, action_order, action_condition,
> action_orientation, action_timing, action_reference_old_table,
> action_reference_new_table FROM information_schema.triggers ORDER BY
> 1, 2;
Writing those SQL queries across multiple lines would make them easier
to read.
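For instance, the regression-test query above could be reflowed along
these lines (same query, just laid out for readability):

```sql
SELECT trigger_name, event_manipulation, event_object_schema,
       event_object_table, action_order, action_condition,
       action_orientation, action_timing,
       action_reference_old_table, action_reference_new_table
FROM information_schema.triggers
ORDER BY 1, 2;
```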
--
Michael