Re: Logging parallel worker draught

From: "Imseih (AWS), Sami" <simseih(at)amazon(dot)com>
To: Benoit Lobréau <benoit(dot)lobreau(at)dalibo(dot)com>, "Alvaro Herrera" <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Melanie Plageman <melanieplageman(at)gmail(dot)com>
Subject: Re: Logging parallel worker draught
Date: 2023-10-15 17:48:51
Message-ID: D04977E3-9F54-452C-A4C4-CDA67F392BD1@amazon.com
Lists: pgsql-hackers

> I believe both cumulative statistics and logs are needed. Logs excel in
> pinpointing specific queries at precise times, while statistics provide
> a broader overview of the situation. Additionally, I often encounter
> situations where clients lack pg_stat_statements and can't restart their
> production promptly.

I agree that logging will be very useful here.
Cumulative stats/pg_stat_statements can be handled in a separate discussion.

> log_temp_files exhibits similar behavior when a query involves multiple
> on-disk sorts. I'm uncertain whether this is something we should or need
> to address. I'll explore whether the error message can be made more
> informative.

> [local]:5437 postgres@postgres=# SET work_mem to '125kB';
> [local]:5437 postgres@postgres=# SET log_temp_files TO 0;
> [local]:5437 postgres@postgres=# SET client_min_messages TO log;
> [local]:5437 postgres@postgres=# WITH a AS ( SELECT x FROM
> generate_series(1,10000) AS F(x) ORDER BY 1 ) , b AS (SELECT x FROM
> generate_series(1,10000) AS F(x) ORDER BY 1 ) SELECT * FROM a,b;
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.20", size
> 122880 => First sort
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.19", size 140000
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.23", size 140000
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.22", size
> 122880 => Second sort
> LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp138850.21", size 140000

That is true.

Users should also be able to control whether they want this logging overhead.
The best answer is a new GUC that is OFF by default.
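
For illustration only, such a GUC could be wired up roughly like this (the name
log_parallel_worker_draught is just a placeholder, and this is a sketch of a
guc_tables.c entry, not a worked-out patch):

/* e.g. in src/backend/access/transam/parallel.c, declared extern in a header */
bool        log_parallel_worker_draught = false;

/* entry in the ConfigureNamesBool[] array in src/backend/utils/misc/guc_tables.c */
{
    {"log_parallel_worker_draught", PGC_SUSET, LOGGING_WHAT,
        gettext_noop("Logs information about parallel worker usage."),
        NULL
    },
    &log_parallel_worker_draught,
    false,
    NULL, NULL, NULL
},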

I am also not sure we want to log the draught case only. I think it's important
not only to see which operations hit a parallel worker draught, but also to log
operations that are using 100% of their planned workers.
This will help the DBA tune queries that are eating up the parallel workers.
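
To sketch the idea (assuming a GUC like the placeholder above; again, not a
worked-out patch), the check could sit where the workers are launched, for
example right after LaunchParallelWorkers() in ExecGather()/ExecGatherMerge(),
using the counts that ParallelContext already keeps:

/* rough sketch: log both the draught case and the fully-served case */
if (log_parallel_worker_draught)
{
    if (pcxt->nworkers_launched < pcxt->nworkers_to_launch)
        ereport(LOG,
                (errmsg("%d parallel workers planned, only %d launched",
                        pcxt->nworkers_to_launch,
                        pcxt->nworkers_launched)));
    else if (pcxt->nworkers_launched > 0)
        ereport(LOG,
                (errmsg("all %d planned parallel workers launched",
                        pcxt->nworkers_launched)));
}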

Regards,

Sami
