O(samplesize) tuple sampling, proof of concept

From: Manfred Koizar <mkoi-pg(at)aon(dot)at>
To: pgsql-patches(at)postgresql(dot)org
Subject: O(samplesize) tuple sampling, proof of concept
Date: 2004-04-05 17:50:09
Message-ID: 350370dstvq8fri59g7c819udraeja6549@email.aon.at
Lists: pgsql-patches

Here is the first preview of the two-stage tuple sampling as discussed
on -hackers last week. It is for review only, please do not apply.

As I didn't have a current CVS checkout when I worked on the patch, it
is based on 7.4.2 sources. I hope this is ok for the purpose of
reviewing.

The old sampling method is still there. I have added a GUC variable
sampling_method to make testing and benchmarking easier.

Sampling_method 0 is the old method; it now has an additional debug
message telling us how many pages and tuples have been accessed.

Sampling_method 1 is the new method (sample rows out of a sample of
blocks). Once a block is selected for inspection, all tuples in that
block are accessed to get a good estimate of the live:dead row
ratio.

Because I was afraid that method 1 might be too expensive in terms of
CPU cycles, I implemented a small variation that skips tuples without
checking them for visibility; this is sampling_method 2.
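
To make the discussion easier to follow, here is a rough standalone
sketch of what "sample rows out of a sample of blocks" means. It is
not code from the attached patch: the block selection, the
tuple_is_visible() stub, and all constants are made up for
illustration, and the real code has to deal with heap access rules
and the Vitter function properly.

/*
 * sample_sketch.c -- standalone illustration of the two-stage idea.
 * Stage 1 picks a simple random sample of blocks; stage 2 looks only
 * at those blocks and reservoir-samples their live rows.  The
 * relation layout, tuple_is_visible() stub and all constants are
 * invented for this demo.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <time.h>

#define NBLOCKS         20000   /* pretend relation size in pages */
#define ROWS_PER_BLOCK  50      /* pretend tuples per page */
#define SAMPLESIZE      3000    /* target number of sample rows and blocks */

typedef struct { int block; int offset; } SampleRow;

/* uniform random integer in [0, n); good enough for a demo */
static long
random_below(long n)
{
    return (long) ((double) rand() / ((double) RAND_MAX + 1.0) * n);
}

/* stand-in for a real visibility check; about 90% of tuples are "live" */
static bool
tuple_is_visible(int block, int offset)
{
    (void) block; (void) offset;
    return rand() % 10 != 0;
}

int
main(void)
{
    static int  sample_blocks[SAMPLESIZE];
    static SampleRow rows[SAMPLESIZE];
    int     nblocks = 0, nrows = 0;
    long    live = 0, dead = 0;

    srand((unsigned) time(NULL));

    /* Stage 1: every block is chosen with equal probability */
    for (int b = 0; b < NBLOCKS && nblocks < SAMPLESIZE; b++)
        if (random_below(NBLOCKS - b) < SAMPLESIZE - nblocks)
            sample_blocks[nblocks++] = b;

    /* Stage 2: never touch a block outside the block sample */
    for (int i = 0; i < nblocks; i++)
    {
        int     b = sample_blocks[i];

        for (int off = 0; off < ROWS_PER_BLOCK; off++)
        {
            /*
             * sampling_method 1 inspects every tuple of a selected
             * block, so the live:dead ratio can be estimated;
             * sampling_method 2 would skip unwanted tuples without
             * this visibility check to save CPU cycles.
             */
            if (!tuple_is_visible(b, off))
            {
                dead++;
                continue;
            }
            live++;

            /* simple reservoir sampling over the live rows */
            if (nrows < SAMPLESIZE)
            {
                rows[nrows].block = b;
                rows[nrows].offset = off;
                nrows++;
            }
            else if (random_below(live) < SAMPLESIZE)
            {
                long    j = random_below(SAMPLESIZE);

                rows[j].block = b;
                rows[j].offset = off;
            }
        }
    }

    printf("visited %d of %d blocks, kept %d sample rows\n",
           nblocks, NBLOCKS, nrows);
    printf("live:dead = %ld:%ld, estimated live rows = %.0f\n",
           live, dead, (double) live / nblocks * NBLOCKS);
    return 0;
}

The point of stage 1 is that the scan never touches more than
samplesize blocks, which is where the constant running time for large
relations in the results below comes from.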

Tests:

\timing
SET client_min_messages = debug2;
SET sampling_method = 1; ANALYSE tablename;
...

tenk1 in the regression database is too small to give reliable results.
I made a new table

CREATE TABLE x AS SELECT * FROM tenk1;

and repeatedly

INSERT INTO x SELECT * FROM x;

There were also some relatively small UPDATEs. A second set of tests
was performed on a table with much smaller tuples.

Results: Up to a relation size of a few thousand pages it is hard to
get consistent timings. What you get is dominated by whether the scan
has to pay the write-back costs of a previous update or benefits from
finding its pages in shared buffers or the OS cache.
For 15000 pages and above (tests were done with the default sample size
of 3000) things start to look reasonable and repeatable.

For small relations with fewer than samplesize pages the new method (both
variants) accesses all pages and therefore *must* be slower than the old
method. At 1000 pages there is only a difference of approximately 1%;
at 3000 pages the difference reaches its maximum of about 15%. We can
sell this by pointing to the better quality of the statistics.

Above 3000 pages the new method looks increasingly better, because it
never reads more than samplesize pages. Depending on tuple size, the
point where the old method accesses 3000 pages lies between 3500 and
4000 pages.

Between 3000 and 8000 pages the old method still seems to be faster, but
differences between runs of the same method are not much smaller than
those between runs of different methods.

At 15000 pages all methods are approximately equally fast. Above this
relation size, running times for the new methods stay constant, while
those for the old method continue to grow with the number of pages.

The full boring list of test results is at
http://www.pivot.at/pg/SamplePerf.sxc.

Comments are welcome. If I see any indication that a patch of this kind
will be accepted, I'll prepare a new version based on current sources.
This will contain only one sampling method and it will clean up the
Vitter function a bit. Are there any spots in the documentation that
should be updated?

Servus
Manfred

Attachment: unknown_filename (text/plain, 11.5 KB)
