On Thu, Sep 29, 2011 at 8:59 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Thu, Sep 29, 2011 at 9:39 AM, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
>> Like I said, this doesn't really come up that often, but the 'real'
>> solution in terms of postgres is probably some kind of upper bound on
>> the amount of cache memory used, plus some intelligence in the cache
>> implementation. This is tricky stuff though; so far no credible
>> proposals have been made, and the demand for the feature is not very
>> high.
> We (i.e. $EMPLOYER) have a customer who ran into this problem (i.e.
> relcache/syscache memory usage shooting through the roof) in testing,
> so I'm somewhat motivated to see if we can't come up with a fix. I am
> fairly sure that was on a 64-bit build, so the issue wasn't just that
> they didn't have enough address space. It seems that we used to have
> some kind of LRU algorithm to prevent excessive memory usage, but we
> ripped it out because it was too expensive (see commit
> 8b9bc234ad43dfa788bde40ebf12e94f16556b7f). I don't have a brilliant
> idea at the moment, but I wonder if we could come up with something
> that's cheap enough to manage that it doesn't materially affect
> performance in normal cases, but just kicks in when things get really
> out of control.
> A trivial algorithm would be - if you're about to run out of memory,
> flush all the caches; or evict 10% of the entries at random. Of
> course, the problem with anything like this is that it's hard to know
> when you're about to run out of memory before you actually do, and any
> hard-coded limit you care to set will sometimes be wrong. So maybe
> that's not the right approach. At the same time, I don't think that
> simply hoping the user has enough memory is an adequate answer.
> One thing to consider is that in some cases a user may plan to do
> something like touch every table in the database exactly once and then
> exit. In that case, if we knew in advance what the user's intentions
> were, we'd want to use an MRU eviction algorithm rather than LRU.
> Again, we don't know that in advance. But in such a use case it's
> reasonable for the user to expect that the amount of backend-private
> memory used for caching will not grow without bound.
I think this (cache memory usage) is a reasonable setting for a GUC.
Maybe if you keep it very simple, say only activate cache cleanup when
the limit is exceeded, you have more freedom to dump cache entries
using fancier methods, like a calculated benefit. You'd probably have
to expose another knob to guarantee a maximum cache sweep runtime,
though.
Perhaps even user visible cache management features (an extension of
DISCARD?) could be exposed...
Hm, what might make this complicated is that you'd probably want all
the various caches to live under the same umbrella with a central
authority making decisions about what stays and what goes.
On Thu, Sep 29, 2011 at 9:22 AM, MauMau <maumau307(at)gmail(dot)com> wrote:
> * reduce shared_buffers
> * run somefunc() and VACUUM in different psql sessions
> * process 100,000 tables in multiple psql sessions
That's a start. Don't be afraid to reset the connection after
somefunc() and at appropriate times from the 'processors'.