
Re: Slow Query- Bad Row Estimate

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Pam Ozer" <pozer(at)automotive(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Slow Query- Bad Row Estimate
Date: 2010-10-29 21:39:56
Lists: pgsql-performance
"Ozer, Pam" <pozer(at)automotive(dot)com> wrote:
> Is more information needed?
Table layouts of the tables involved (including indexes) would be
interesting.  A description of the machine would be useful,
including OS, CPUs, RAM, and disk system.
I know you said you might have trouble changing the config, but some
of these seem problematic.
> shared_buffers = 500MB
> effective_cache_size = 1000MB
> max_connections = 100
> temp_buffers = 100MB
So you will allow up to 10GB to be tied up in space reserved for
temporary tables, but only expect to cache 1GB of your database? 
That hardly seems optimal.
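The arithmetic behind that concern, as a quick sketch (figures taken straight from the quoted settings):

```python
# Settings quoted above, in MB.
temp_buffers_mb = 100        # per-connection cap on temp-table buffers
max_connections = 100
effective_cache_size_mb = 1000

# Worst case: every connection fills its temp_buffers allowance.
worst_case_temp_mb = temp_buffers_mb * max_connections
print(worst_case_temp_mb)    # 10000 MB, i.e. roughly 10GB

# ...versus only 1GB the planner is told to expect as cache.
print(worst_case_temp_mb / effective_cache_size_mb)  # 10x the cache estimate
```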
> work_mem = 100MB
That could be another 10GB or more in work memory at any moment, if
each connection was running a query which needed one work_mem
allocation -- and a single query can actually use several work_mem
allocations (one per sort or hash step), so it can go higher.
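The same back-of-the-envelope math for work_mem (purely illustrative; since one query can hold several work_mem allocations, this figure is a floor, not a ceiling):

```python
work_mem_mb = 100        # work_mem = 100MB
max_connections = 100    # max_connections = 100

# If every connection runs a query using one work_mem allocation:
floor_mb = work_mem_mb * max_connections
print(floor_mb)          # 10000 MB, roughly another 10GB in flight
```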
> seq_page_cost = 1.0
> random_page_cost = 1.1
> cpu_tuple_cost = 0.1
> cpu_index_tuple_cost = 0.05
> cpu_operator_cost = 0.01
Those settings are OK if the active portion of the database is fully
cached.  Is it?
> default_statistics_target = 1000
If plan times get long with complex queries, you might want to back
that off; otherwise, OK.
> autovacuum_max_workers = 1
That seems like a bad idea.  Allowing multiple workers helps reduce
bloat and improve statistics.  If autovacuum is affecting
performance, you would be better off tweaking the autovacuum
cost-based delay settings than reducing the number of workers.
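A sketch of what cost-based throttling might look like, keeping multiple workers (the values below are illustrative defaults, not recommendations tuned for this machine):

```
autovacuum_max_workers = 3            # the default; allows parallel cleanup
autovacuum_vacuum_cost_delay = 20ms   # pause between cost-limited chunks of work
autovacuum_vacuum_cost_limit = 200    # work allowed before each pause
```

Raising the delay or lowering the limit slows each worker down without giving up the bloat and statistics benefits of running several of them.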
