From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Peter Geoghegan <pg(at)heroku(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Less than ideal error reporting in pg_stat_statements
Date: 2015-09-25 15:51:36
Message-ID: CABRT9RCniVK2zkOpvo=v9dBRQyLpM+Znw3bAhDd6FuGUvPjicg@mail.gmail.com
Lists: pgsql-hackers
On Wed, Sep 23, 2015 at 3:01 AM, Peter Geoghegan <pg(at)heroku(dot)com> wrote:
> I think that the real problem here is that garbage collection needs to
> deal with OOM more appropriately.
+1
I've also been seeing lots of log messages saying "LOG: out of
memory" on a server that's hosting development databases. I put off
debugging this until now because it didn't seem to have any adverse
effects on the system.
The query texts file on my system is currently 5.1GB (!). I don't know
how it grew that large -- under normal circumstances we don't run any
enormous queries, but perhaps our application bugs during development
triggered it.
The configuration on this system is pg_stat_statements.max = 10000 and
pg_stat_statements.track = all.
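For reference, this is how those settings look in postgresql.conf
(shared_preload_libraries is required for the module to be loaded at all):

```ini
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000   # max number of distinct statements tracked
pg_stat_statements.track = all   # also track statements inside functions
```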
----
The comment near gc_qtexts says:
* This won't be called often in the typical case, since it's likely that
* there won't be too much churn, and besides, a similar compaction process
* occurs when serializing to disk at shutdown or as part of resetting.
* Despite this, it seems prudent to plan for the edge case where the file
* becomes unreasonably large, with no other method of compaction likely to
* occur in the foreseeable future.
[...]
* Load the old texts file. If we fail (out of memory, for instance) just
* skip the garbage collection.
So, as I understand it: if the system runs low on memory for an
extended period, and/or the file grows beyond 1 GB (MaxAllocSize),
garbage collection stops entirely, meaning the file starts leaking
disk space until someone intervenes manually.
It's very frustrating when debugging aids cause further problems on a
system. If the in-line compaction doesn't materialize (or it's decided
not to backport it), I would propose instead adding a check to
pgss_store() to avoid growing the file beyond MaxAllocSize (or perhaps
an even lower limit). Surely dropping some statistics is better than
this pathology.
Regards,
Marti