Re: Slow Query- Bad Row Estimate

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "Pam Ozer" <pozer(at)automotive(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Slow Query- Bad Row Estimate
Date: 2010-10-29 21:39:56
Message-ID: 4CCAF8DC0200002500036FD5@gw.wicourts.gov
Lists: pgsql-performance

"Ozer, Pam" <pozer(at)automotive(dot)com> wrote:

> Is more information needed?

Table layouts of the tables involved (including indexes) would be
interesting. A description of the machine would be useful,
including OS, CPUs, RAM, and disk system.
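
If it's easier, psql's \d+ output for each table captures the columns
and indexes in one shot; for instance (the table name here is only a
placeholder):

  -- in psql: show columns, indexes, and storage options for one table
  \d+ listings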

I know you said you might have trouble changing the config, but some
of these seem problematic.

> shared_buffers = 500MB
> effective_cache_size = 1000MB
> max_connections = 100
> temp_buffers = 100MB

So you will allow up to 10GB (100 connections x 100MB of temp_buffers
each) to be tied up in space reserved for temporary tables, but only
expect to cache 1GB of your database? That hardly seems optimal.

> work_mem = 100MB

That could be another 10GB or more (100 connections x 100MB) in work
memory at any moment, if each connection was running a query which
needed one work_mem allocation; a single complex query can use
several such allocations, one per sort or hash step.

> seq_page_cost = 1.0
> random_page_cost = 1.1
> cpu_tuple_cost = 0.1
> cpu_index_tuple_cost = 0.05
> cpu_operator_cost = 0.01

Those settings are OK if the active portion of the database is fully
cached. Is it?
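
One rough way to check is the shared buffer hit ratio in
pg_stat_database; for example:

  -- approximate shared-buffer hit ratio for the current database
  SELECT blks_hit::float / nullif(blks_hit + blks_read, 0) AS hit_ratio
    FROM pg_stat_database
   WHERE datname = current_database();

That only counts hits in shared_buffers, not in the OS cache, so treat
it as a rough lower bound rather than a definitive answer.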

> default_statistics_target = 1000

If plan times get long with complex queries, you might want to back
that off; otherwise, OK.
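
If the bad estimates come from just a column or two, another option is
to keep the global target modest and raise it only where it matters;
for example (table and column names are placeholders):

  -- raise the statistics target for one problem column, then re-analyze
  ALTER TABLE listings ALTER COLUMN make_id SET STATISTICS 1000;
  ANALYZE listings;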

> autovacuum_max_workers = 1

That seems like a bad idea. Allowing multiple workers helps reduce
bloat and improve statistics. If autovacuum is affecting
performance, you would be better off tweaking the autovacuum cost
limits.
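
For example, something along these lines in postgresql.conf keeps
several workers but throttles their I/O; the exact numbers are only
illustrative and would need tuning for your hardware:

  # keep multiple autovacuum workers, but limit their I/O impact
  autovacuum_max_workers = 3
  autovacuum_vacuum_cost_delay = 20ms
  autovacuum_vacuum_cost_limit = 200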

-Kevin
