From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: John R Pierce <pierce(at)hogranch(dot)com>
Cc: PostgreSQL mailing lists <pgsql-general(at)postgresql(dot)org>
Subject: Re: Logging at schema level
Date: 2017-07-21 06:27:15
Message-ID: CAB7nPqTivtm+DgkzV5XZg_wdoqj4Z1rmjz=pp4h=8A1r6QqdZA@mail.gmail.com
Lists: pgsql-general
On Fri, Jul 21, 2017 at 8:21 AM, John R Pierce <pierce(at)hogranch(dot)com> wrote:
> If you have per-schema logging, where should that get logged?
>
> You could implement per-DATABASE logging if you A) add the database name
> to the log_line_prefix, and B) feed your logs to a program that understands
> this and splits them out into one log file per database. You could also do
> this on a per-user basis. But a schema is something very dynamic: it is a
> namespace within a database, and a single query can touch multiple schemas.
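The per-database splitting described in the quote above can be sketched roughly as follows. Both the assumed log_line_prefix format ('%m [%p] %d: ', i.e. timestamp, pid, database name) and the helper name split_by_database are illustrative choices, not something specified in this thread:

```python
import re
from collections import defaultdict

# Assumes log_line_prefix = '%m [%p] %d: ' -- timestamp, pid, database name.
# The prefix layout and this helper are illustrative, not from the thread.
LINE_RE = re.compile(r'^\S+ \S+ \S+ \[\d+\] (\w+): (.*)$')

def split_by_database(lines):
    """Group log lines into per-database buckets keyed on the %d field."""
    buckets = defaultdict(list)
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            buckets[m.group(1)].append(m.group(2))
    return dict(buckets)
```

In a real deployment the buckets would be flushed to one file per database; the same pattern works per user by logging %u in the prefix instead.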
Personally, I read that as logging the query N times, once per schema, if
it touches N schemas, which makes the exercise part of parsing. I think it
would actually be possible to use the parser hook to achieve that, as you
need extra lookups for WITH clauses and such.
--
Michael