Re: ANALYZE sampling is too good

From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Greg Stark <stark(at)mit(dot)edu>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: ANALYZE sampling is too good
Date: 2013-12-05 23:50:58
Message-ID: 52A11162.8090104@agliodbs.com
Lists: pgsql-hackers

On 12/03/2013 03:30 PM, Greg Stark wrote:
> It means if your table is anywhere up to 240MB you're effectively
> doing a full table scan and then throwing out nearly all the data
> read.
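
For anyone following along, here is my own back-of-the-envelope on where
that 240MB comes from; it assumes the default statistics target of 100
(which, as far as I recall, makes ANALYZE aim for 300 * 100 = 30,000
sample rows) and the standard 8KB page size:

    # Rough arithmetic only, not the actual analyze.c code path.  Row-level
    # sampling tends to pull roughly one row per page, so any table up to
    # about sample_rows pages gets most of its pages read.
    stats_target = 100                 # default_statistics_target
    sample_rows = 300 * stats_target   # 30,000 rows, if I recall the constant right
    page_size = 8 * 1024               # 8KB heap pages

    print(sample_rows * page_size / 1_000_000)   # ~245 MB, i.e. the ~240MB above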

There are lots of issues with our random sampling approach for ANALYZE.
This is why, back in our Greenplum days, Simon proposed changing to a
block-based sampling approach, where we would sample random *pages*
instead of random *rows*. That would allow us to do things like sample
5% of the rows while reading only 5% of the table, although we might
have to play some games with OS and filesystem read-ahead to make sure
of that. In addition to
solving the issue you cite above, it would let us get MUCH more accurate
estimates for very large tables, where currently we sample only about
0.1% of the table.
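
To make the rows-versus-pages distinction concrete, here is a minimal
sketch in Python over a toy list-of-lists "heap" (not the actual
analyze.c logic, and the names are mine): row sampling touches a page
for nearly every row it picks, while block sampling reads only the pages
it picks and keeps every row on them.

    import random

    def row_sample(pages, n_rows):
        # Current-style row sampling: pick random rows from the whole table.
        # With many rows per page, most picks land on different pages, so the
        # number of pages read is a large fraction of the table.
        all_rows = [(p, i) for p, page in enumerate(pages) for i in range(len(page))]
        picks = random.sample(all_rows, min(n_rows, len(all_rows)))
        sample = [pages[p][i] for p, i in picks]
        return sample, len({p for p, _ in picks})

    def block_sample(pages, fraction):
        # Block sampling: pick random *pages* and keep every row on them,
        # so a 5% sample of pages means reading only ~5% of the table.
        n_pages = max(1, int(len(pages) * fraction))
        picked = random.sample(range(len(pages)), n_pages)
        return [row for p in picked for row in pages[p]], n_pages

    # Toy table: 3,000 pages of 100 rows each.
    pages = [[(p, i) for i in range(100)] for p in range(3000)]
    print(row_sample(pages, 3000)[1])     # a 3,000-row sample touches ~1,900 of 3,000 pages
    print(block_sample(pages, 0.05)[1])   # a 5% block sample reads just 150 pages

The catch, of course, is that rows taken from the same page are not
independent draws, which is exactly the skew issue addressed next.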

There are fairly well-researched algorithms for block-based sampling
which correct for the skew introduced by looking at consecutive rows in
a block. In general, a minimum sample size of 5% is required, and the
error is no worse than with our current system. However, the idea was
shot down at the time, partly, I think, because other hackers didn't
get the math.
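
To illustrate the "correct for the skew" part, without claiming to
reproduce the specific published algorithms (this is just textbook
cluster sampling, and the function is hypothetical): compute the
statistic per sampled page, treat each page as one cluster, and derive
the error bar from the spread of the per-page values, so correlated
rows within a page widen the error estimate rather than silently
biasing it.

    import random
    from statistics import mean, stdev

    def estimate_fraction(pages, pred, fraction=0.05, seed=0):
        # Estimate the fraction of rows satisfying pred from a block sample,
        # treating each sampled page as one cluster (and assuming pages hold
        # roughly equal row counts, for simplicity).
        rng = random.Random(seed)
        n_pages = max(2, int(len(pages) * fraction))
        picked = rng.sample(range(len(pages)), n_pages)
        per_page = [sum(pred(r) for r in pages[p]) / len(pages[p]) for p in picked]
        # The standard error comes from the spread across pages, so values
        # that cluster within pages make the estimate honestly less certain.
        return mean(per_page), stdev(per_page) / len(per_page) ** 0.5

When the table is physically clustered on the column of interest, the
per-page values scatter widely and the error bar grows; loosely, that is
why a larger raw sample (the 5% figure above) is needed to match the
error you would get from the same number of independently chosen rows.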

I believe that both Oracle and MSSQL use block-based sampling, but of
course, I don't know which specific algo they use.

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
