Re: Shouldn't we have a way to avoid "risky" plans?

From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Claudio Freire <klaussfreire(at)gmail(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Shouldn't we have a way to avoid "risky" plans?
Date: 2011-04-20 01:50:40
Message-ID: 4DAE3BF0.2060806@agliodbs.com
Lists: pgsql-performance

On 4/19/11 7:29 AM, Robert Haas wrote:
> Another thought is that we might want to consider reducing
> autovacuum_analyze_scale_factor. The root of the original problem
> seems to be that the table had some data churn but not enough to cause
> an ANALYZE. Now, if the data churn is random, auto-analyzing after
> 10% churn might be reasonable, but a lot of data churn is non-random,
> and ANALYZE is fairly cheap.

I wouldn't reduce the global defaults for PostgreSQL; this is something
you do on specific tables.

For example, on very large tables I've been known to set
autovacuum_analyze_scale_factor to 0 and autovacuum_analyze_threshold
to 5000 as per-table storage parameters.
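
To sketch what I mean (the table name here is just a placeholder), the
per-table override looks like this; with the scale factor at 0,
autoanalyze kicks in after roughly 5000 changed rows no matter how big
the table gets:

    -- hypothetical table name; tune the threshold to the table's churn
    ALTER TABLE big_history_table SET (
        autovacuum_analyze_scale_factor = 0,
        autovacuum_analyze_threshold = 5000
    );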

And don't assume that analyzing is always cheap. If you have an 800GB
table, most of which is very cold data, and the statistics target is
set to 5000 for some columns, ANALYZE has to pull a large sample from
many of those older, rarely-touched blocks, and that can take a while.
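
For reference, that kind of setting looks roughly like this (table and
column names are made up). ANALYZE samples on the order of
300 * statistics_target rows, so a target of 5000 means around 1.5
million rows read from blocks scattered all over the table:

    -- hypothetical names; raises the sample ANALYZE takes for this column
    ALTER TABLE big_history_table
        ALTER COLUMN customer_id SET STATISTICS 5000;
    ANALYZE big_history_table (customer_id);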

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
