
new correlation metric

From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Cc: npboley(at)gmail(dot)com
Subject: new correlation metric
Date: 2008-10-26 08:38:02
Message-ID: 1225010283.21241.33.camel@localhost.localdomain
Lists: pgsql-hackers
Currently, we use correlation to estimate the I/O costs of an index
scan. However, this has some problems:

http://archives.postgresql.org/pgsql-hackers/2008-01/msg01084.php

I worked with Nathan Boley to come up with what we think is a better
metric for measuring this cost. It is based on the number of times in
the ordered sample that you have to physically backtrack (i.e. the data
value increases, but the physical position is earlier).

For example, if the table's physical order is

6 7 8 9 10 1 2 3 4 5

then the new metric will see one backtrack -- after reading 5 it
backtracks to the beginning to read 6. The old code would have produced
a correlation near -0.5. The new code yields a "correlation" of 8/9:
one backtrack among the nine adjacent pairs in the value-ordered sample.
Because the old code interpolates on the square of the correlation, it
would have produced a run_cost of about 1/4*min_IO_cost +
3/4*max_IO_cost, while the new code produces a run_cost of
8/9*min_IO_cost + 1/9*max_IO_cost.

Attached is a patch. We replaced the "correlation" computation but left
the name the same (even though it's now a misnomer). We can change the
name if needed.

Comments?

Regards,
	Jeff Davis

Attachment: correlation.diff.gz
Description: application/x-gzip (1.8 KB)

