From: Arne Roland <A(dot)Roland(at)index(dot)de>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Subject: Re: Enforce work_mem per worker
I did read parts of the last one back then. But thanks for the link, I plan to reread the thread as a whole.
From what I can tell, the discussions there are attempts by very smart people to (at least partially) solve the problem of memory allocation without sacrificing too much on the runtime front. That problem is very hard.
What I am mostly trying to do is provide a reliable way of preventing the operational hazards of OOM and the like, e.g. massive kernel buffer eviction. I don't want to touch the planner, which is always complex and tends to introduce weird side effects.
With that approach we can't hope to prevent the issue from occurring in general. I'm much more concerned with containing it if it happens.
In the case where there is only a single pass, which holds for a lot of queries, my suggested approach would even help the offender.
But my main goal is something else. I can't explain to my clients why changed statistics due to autovacuum suddenly lead to OOM. They would be right to question Postgres' qualification for any serious production system.
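For context, the hazard described above stems from work_mem being a per-operation, per-process limit rather than a per-query budget: each sort or hash node in each backend gets its own allowance, and a parallel hash join's shared hash table may grow to roughly work_mem times the number of participating processes. A hypothetical session sketch (table names are made up for illustration):

```sql
-- work_mem caps each sort/hash operation in each process separately,
-- so a parallel plan can legitimately use many multiples of it.
SET work_mem = '256MB';
SET max_parallel_workers_per_gather = 4;

-- If the planner (e.g. after autovacuum refreshed the statistics)
-- switches to a parallel hash join, the leader plus 4 workers each
-- contribute a work_mem budget to the shared hash table: this one
-- node alone may use up to about 5 x 256MB, not the 256MB an
-- operator might expect from the setting.
EXPLAIN (ANALYZE)
SELECT *
FROM orders o                       -- hypothetical tables
JOIN order_items i USING (order_id);
```

A plan with several such nodes multiplies the exposure further, which is why a changed plan after a statistics refresh can tip a previously stable workload into OOM.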