
Re: Thoughts on statistics for continuously advancing columns

From: Chris Browne <cbbrowne(at)acm(dot)org>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Thoughts on statistics for continuously advancing columns
Date: 2009-12-30 21:15:05
Lists: pgsql-hackers
jd(at)commandprompt(dot)com ("Joshua D. Drake") writes:
> On the other hand ANALYZE also:
> 1. Uses lots of memory
> 2. Lots of processor
> 3. Can take a long time
> We normally don't notice because most sets won't incur a penalty. We got
> a customer who has a single table that is over 1TB in size... We notice.
> Granted that is the extreme, but it would only take a quarter of that
> size (which is common) to start seeing issues.

I find it curious that ANALYZE *would* take a long time to run.

After all, its sampling strategy means that, barring having SET
STATISTICS to some ghastly high number, it shouldn't need to do
materially more work to analyze a 1TB table than is required to analyze
a 1GB table.
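That property of sampling can be illustrated with a minimal reservoir-sampling sketch (this is not PostgreSQL's actual sampler, which is block-based; the point is only that memory and sample size stay fixed no matter how many rows stream past):

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream of
    unknown length, using O(k) memory regardless of stream size."""
    sample = []
    for i, row in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k rows.
            sample.append(row)
        else:
            # Replace a reservoir slot with probability k/(i+1),
            # which keeps every row equally likely to be retained.
            j = random.randrange(i + 1)
            if j < k:
                sample[j] = row
    return sample

# Whether the "table" has a million rows or a billion, the
# reservoir holds exactly k rows.
sample = reservoir_sample(range(1_000_000), 30_000)
```

The per-row work is constant, so the sample-collection cost grows only linearly with the number of rows scanned, and the statistics computation itself operates on a fixed-size sample.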

With the out-of-the-box (which may have changed without my notice ;-))
default of 10 bars in the histogram, it should search for 30K rows,
which, while not "free," doesn't get enormously more expensive as tables
grow.

Rules of the Evil Overlord #179. "I will not outsource core functions."

