Currently, we use the correlation statistic to estimate the I/O cost of an
index scan. However, this approach has some problems.
I worked with Nathan Boley to come up with what we think is a better
metric for measuring this cost. It is based on the number of times in
the ordered sample that you have to physically backtrack (i.e. the data
value increases, but the physical position is earlier).
For example, if the table's physical order is
6 7 8 9 10 1 2 3 4 5
then the new metric will see one backtrack -- after reading 5 it
backtracks to the beginning to read 6. The old code would have produced
a correlation near -0.5. The new code will yield a "correlation" of 8/9.
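The actual patch is C code inside the planner; the following is just an
illustrative Python sketch (names are mine, not from the patch) that
reproduces both numbers for the example above: the old Pearson correlation
of about -0.5, and the new backtrack-based "correlation" of 8/9.

```python
def backtrack_metric(physical_order):
    """Proposed metric: fraction of adjacent pairs, taken in value order,
    that do NOT require physically backtracking (i.e. the value increases
    but the physical position is earlier)."""
    # physical position of each value once the sample is sorted by value
    positions = [pos for _, pos in sorted(
        (val, pos) for pos, val in enumerate(physical_order))]
    backtracks = sum(1 for a, b in zip(positions, positions[1:]) if b < a)
    return (len(positions) - 1 - backtracks) / (len(positions) - 1)

def pearson(xs, ys):
    """Old metric: plain Pearson correlation of value vs. position."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

table = [6, 7, 8, 9, 10, 1, 2, 3, 4, 5]
print(backtrack_metric(table))                 # one backtrack in 9 pairs -> 8/9
print(pearson(table, list(range(len(table))))) # roughly -0.52
```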
The old code would have produced a run_cost of 1/4*min_IO_cost +
3/4*max_IO_cost, while the new code produces a run_cost of
8/9*min_IO_cost + 1/9*max_IO_cost.
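To make the cost difference concrete, here is a small sketch of the two
interpolations (assuming, for illustration only, min_IO_cost = 1.0 and
max_IO_cost = 10.0; the old code interpolates by the squared correlation,
the new code by the backtrack metric directly):

```python
def run_cost_old(correlation, min_io, max_io):
    # old behavior: interpolate between min and max by correlation squared,
    # so correlation near -0.5 gives roughly 1/4*min + 3/4*max
    csquared = correlation * correlation
    return max_io + csquared * (min_io - max_io)

def run_cost_new(metric, min_io, max_io):
    # proposed behavior: interpolate directly by the backtrack metric,
    # so 8/9 gives 8/9*min + 1/9*max
    return max_io + metric * (min_io - max_io)

min_io, max_io = 1.0, 10.0
print(run_cost_old(-0.515, min_io, max_io))  # about 7.6, near 1/4*min + 3/4*max
print(run_cost_new(8 / 9, min_io, max_io))   # exactly 8/9*min + 1/9*max = 2.0
```

For this table the new metric charges close to the minimum I/O cost, which
matches the intuition that a scan with a single backtrack is nearly
sequential, while the old squared-correlation weighting charges close to
the maximum.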
Attached is a patch. It replaces the computation of "correlation" but keeps
the name (even though that is now a misnomer); we can rename it if needed.