On Sat, Feb 14, 2009 at 8:23 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Alexander Staubo <alex@bengler.no> writes:
>> Wow -- I set this to 10GB (5GB for shared buffers + another 5GB for
>> cache), and today's average write frequency went from 20MB/sec to just
>> 1MB/sec. The documentation suggests that effective_cache_size is only
>> used for query planning in conjunction with indexes. So how come it
>> affects write access?
> It *is* only used for query planning. A plausible theory is that you
> caused some join queries to change from hash or merge joining involving
> a temporary hash or sort file to an index nestloop that doesn't use any
> temporary storage. If the required hash or sort storage exceeded
> work_mem, which you have set to just 10MB, that would have created some
> write traffic.
Interesting. Is there any statistic available that can tell me whether
work_mem is being exceeded? The tools for monitoring exactly what
PostgreSQL is doing -- especially on a production box -- are fairly
limited, particularly since Linux does not yet have anything close to
DTrace in functionality.
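
For what it's worth, a couple of options that might help here (a sketch, assuming a reasonably recent PostgreSQL; table and column names below are made up): log_temp_files makes the server log every temporary file it creates, which is exactly what a sort or hash spilling past work_mem produces, and EXPLAIN ANALYZE reports whether a sort ran in memory or on disk:

```sql
-- In postgresql.conf (reloadable without a restart):
-- 0 logs every temp file; a positive value (in kB) logs only files
-- at least that large.
log_temp_files = 0

-- Per-query check; "some_table" and "some_column" are placeholders.
-- A sort that fits in work_mem reports something like
--   "Sort Method: quicksort  Memory: ...kB"
-- while one that spilled reports
--   "Sort Method: external merge  Disk: ...kB".
EXPLAIN ANALYZE SELECT * FROM some_table ORDER BY some_column;
```

The log_temp_files route has the advantage of catching spills from real production traffic rather than only the queries you think to test.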
> Did you happen to notice whether your queries got faster or slower when
> you did this? Watching only aggregate write traffic is a pretty limited
> view of what is happening in your database.
Unfortunately we don't log SQL queries at the moment. We do log
application response times, but they look roughly the same as before
the change. I could revert the effective_cache_size setting, turn on
SQL logging for a while, then reapply the change and compare.
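
If it helps, logging can be scoped to just the slow queries rather than everything, which keeps the overhead and log volume manageable on a production box (a sketch; the 200 ms threshold is an arbitrary example):

```sql
-- In postgresql.conf: log any statement running longer than 200 ms,
-- together with its duration (0 would log every statement).
log_min_duration_statement = 200
```

The setting takes effect on a config reload (e.g. `SELECT pg_reload_conf();` as superuser), so it can be switched on and off around the effective_cache_size change without restarting the server.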