Re: BUG #17254: Crash with 0xC0000409 in pg_stat_statements when pg_stat_tmp\pgss_query_texts.stat exceeded 2GB.

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Juan José Santamaría Flecha <juanjo(dot)santamaria(at)gmail(dot)com>
Cc: egashira(dot)yusuke(at)fujitsu(dot)com, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: BUG #17254: Crash with 0xC0000409 in pg_stat_statements when pg_stat_tmp\pgss_query_texts.stat exceeded 2GB.
Date: 2021-10-30 16:26:43
Message-ID: 840436.1635611203@sss.pgh.pa.us
Lists: pgsql-bugs

Juan José Santamaría Flecha <juanjo(dot)santamaria(at)gmail(dot)com> writes:
> Now, with 100% more patch attached.

That seems like a pretty poor solution. It will cause pg_stat_statements
to fail altogether as soon as the stats file exceeds 1GB. (Admittedly,
failing is better than crashing, but not by that much.) Worse, it causes
that to happen on EVERY platform, not only Windows where the problem is.

I think instead, we need to turn the subsequent one-off read() call into a
loop that reads no more than INT_MAX bytes at a time. It'd be possible
to restrict that to Windows, but probably no harm in doing it the same
way everywhere.

A different line of thought is that maybe we shouldn't be letting the
file get so big in the first place. Letting every backend have its
own copy of a multi-gigabyte stats file is going to be problematic,
and not only on Windows. It looks like the existing logic just considers
the number of hash table entries, not their size ... should we rearrange
things to keep a running count of the space used?
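A running count of that kind could be as simple as the following sketch. The struct, function names, and the 1GB threshold are all hypothetical, chosen only to illustrate the idea of pairing the existing entry-count check with a byte-total check that triggers garbage collection when the stored query texts grow too large:

```c
#include <stddef.h>

/* Illustrative sketch only -- not pg_stat_statements' actual code.
 * Alongside the existing entry-count limit, maintain a running byte
 * total of stored query texts, and consider garbage collection when
 * either limit is exceeded.
 */
#define TEXT_SIZE_LIMIT (1024L * 1024L * 1024L)  /* assumed 1GB cap */

typedef struct StatsState
{
    long    num_entries;    /* hash table entries, as counted today */
    long    text_bytes;     /* running total of query-text bytes */
} StatsState;

static void
entry_added(StatsState *s, size_t query_len)
{
    s->num_entries++;
    s->text_bytes += (long) query_len;
}

static void
entry_removed(StatsState *s, size_t query_len)
{
    s->num_entries--;
    s->text_bytes -= (long) query_len;
}

static int
need_gc(const StatsState *s, long max_entries)
{
    /* trigger cleanup on either count or size pressure */
    return s->num_entries > max_entries ||
           s->text_bytes > TEXT_SIZE_LIMIT;
}
```

The incremental bookkeeping is cheap (two additions per insert/remove), so the size check adds essentially no overhead to the existing count-based logic.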

regards, tom lane
