
hist boundary duplicates bug in head and 8.3

From: "Nathan Boley" <npboley(at)gmail(dot)com>
To: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: hist boundary duplicates bug in head and 8.3
Date: 2009-01-06 01:15:46
Lists: pgsql-hackers
For heavy tailed distributions, it is possible for analyze to
duplicate histogram boundaries.

Here is the output from a test against HEAD; I've attached the test data.

=# create table bug(f float);
CREATE TABLE
=# copy bug from '/tmp/test_data.txt';
COPY 100000
=# analyze bug;
ANALYZE
=# select histogram_bounds from pg_stats where tablename='bug';

(1 row)

Analyze assumes that if a value's sample count is less than
samplesize/num_buckets (maxmincount, computed near line 2170 in
commands/analyze.c), then it is safe to include it in the histogram.
However, because the histogram only contains non-MCVs, this threshold
is too weak: a value can pass it and still span more than one bucket
of the smaller, non-MCV portion of the sample, so it gets selected as
more than one histogram boundary, producing the duplicates.
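The failure mode can be sketched numerically. This is an illustrative
toy, not the analyze.c code: it picks equi-depth boundaries at evenly
spaced ranks, ignores MCV filtering for simplicity, and uses made-up
data mimicking a heavy-tailed sample.

```python
num_buckets = 10

# 50,000 distinct small values plus 50,000 copies of one heavy value.
sample = sorted([i / 100000.0 for i in range(50000)] + [0.5] * 50000)

# Equi-depth boundaries: values at evenly spaced ranks.
n = len(sample)
bounds = [sample[(i * (n - 1)) // num_buckets] for i in range(num_buckets + 1)]

print(bounds)
# The run of 0.5s covers several rank positions, so 0.5 is picked as a
# boundary more than once -- the duplicated-boundary bug in miniature.
print(len(bounds) - len(set(bounds)))  # -> 4 duplicated entries
```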

As far as I can see, there are 4 solutions:

1) track all of the distinct values
This wouldn't be *too* expensive in analyze, especially considering we
are already tracking all of the sampled values. However, it opens up
the possibility of huge MCV lists in the worst case. To see this,
consider a distribution in which the most common value is 20% of the
table, the next mcv is 20% of the remaining entries, and so on. Then,
for any stats target greater than 5, every value would overrun a
histogram boundary, leading to an MCV list that contains every
distinct value in the sample.
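The runaway behavior of that worst case can be checked with a short
loop (numbers are illustrative): relative to the remaining non-MCV
sample, each successive value's frequency is always 20%, which exceeds
1/num_buckets whenever num_buckets > 5, so every value keeps
qualifying as an MCV.

```python
num_buckets = 10
remaining = 1.0     # fraction of the sample not yet absorbed by MCVs
mcvs = 0
cap = 50            # safety cap for the demo

while mcvs < cap:
    freq = 0.2 * remaining            # next value: 20% of what's left
    if freq / remaining <= 1.0 / num_buckets:
        break                         # would stop qualifying -- never happens
    mcvs += 1
    remaining -= freq

print(mcvs)  # -> 50: the loop only stops at the safety cap
```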

2) reduce number_of_bins if values exist with frequency greater than 1/nbins
This would fix the bug, but at the cost of reducing the utility of the
histogram (it would introduce a large skew into the ndistinct
distribution, which is assumed to be uniform over non-MCVs).

3) use variable width histogram bins over all values.
This is probably the cleanest solution, but the most invasive.

4) Fix the binary search in ineqsel to correctly find the boundaries,
even with duplicates
This would also be relatively clean, but is the assumption that
histogram boundaries are strictly increasing relied upon anywhere else
besides ineqsel?
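A hypothetical sketch of option 4 (not the actual ineqsel code): with
duplicated boundaries, a plain binary search can land anywhere inside
the run of equal values, but bisect_left/bisect_right bracket the run
deterministically, so the zero-width buckets between them can be
treated as holding no mass.

```python
import bisect

bounds = [0.0, 0.2, 0.5, 0.5, 0.5, 0.8, 1.0]   # made-up bounds with dups

const = 0.5
lo = bisect.bisect_left(bounds, const)    # first index with bounds[i] >= const
hi = bisect.bisect_right(bounds, const)   # first index with bounds[i] >  const

print(lo, hi)  # -> 2 5: indices 2..4 hold the duplicated 0.5 boundaries
# Buckets whose two edges both equal const are zero-width; a
# selectivity estimate can count only the full buckets below index lo.
```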

I've attached a patch that is a compromise between 1 and 2. It puts a
hard limit on the number of MCVs at 2x the stats target, and then, if
there are still values with too high a frequency, it reduces the
number of histogram buckets.
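A hedged sketch of that compromise (function and variable names are
mine, not the patch's): promote values to MCVs while they are too
frequent for one bucket, never beyond the 2x cap, then shrink the
bucket count if a leftover value would still span more than one
bucket.

```python
def plan_stats(value_counts, samplesize, stats_target):
    counts = sorted(value_counts, reverse=True)
    num_buckets = stats_target
    num_mcvs = 0
    # Take values as MCVs while too frequent for one bucket, up to 2x cap.
    while (num_mcvs < len(counts)
           and num_mcvs < 2 * stats_target
           and counts[num_mcvs] > samplesize / num_buckets):
        num_mcvs += 1
    # If a remaining value would still span more than one histogram
    # bucket, reduce the bucket count instead.
    rest = samplesize - sum(counts[:num_mcvs])
    while (num_mcvs < len(counts)
           and num_buckets > 1
           and counts[num_mcvs] > rest / num_buckets):
        num_buckets -= 1
    return num_mcvs, num_buckets

print(plan_stats([60000, 25000, 15000], 100000, 2))  # -> (1, 1)
```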


Attachment: test_data.txt.gz
Description: application/x-gzip (91.1 KB)
Attachment: hist_bndry_bug_fix.patch
Description: text/x-patch (3.3 KB)

