
Re: TB-sized databases

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Gregory Stark <stark(at)enterprisedb(dot)com>
Cc: "Simon Riggs" <simon(at)2ndquadrant(dot)com>, "Csaba Nagy" <nagy(at)ecircle-ag(dot)com>, "Bill Moran" <wmoran(at)collaborativefusion(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: TB-sized databases
Date: 2007-11-29 15:45:31
Message-ID: 13851.1196351131@sss.pgh.pa.us
Lists: pgsql-performance
Gregory Stark <stark(at)enterprisedb(dot)com> writes:
> "Simon Riggs" <simon(at)2ndquadrant(dot)com> writes:
>> Tom's previous concerns were along the lines of "How would you know what
>> to set it to?", given that the planner costs are mostly arbitrary numbers.

> Hm, that's only kind of true.

The units are not the problem.  The problem is that you are staking
non-failure of your application on the planner's estimates being
pretty well in line with reality.  Not merely in line enough that
it picks a reasonably cheap plan, but in line enough that if it
thinks plan A is 10x more expensive than plan B, then the actual
ratio is indeed somewhere near 10.

Given that this list spends all day every day discussing cases where the
planner is wrong, I'd have to think that that's a bet I wouldn't take.

You could probably avoid this risk by setting the cutoff at something
like 100 or 1000 times what you really want to tolerate, but how
useful is it then?
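The arithmetic behind that objection can be sketched out. The following is a hypothetical illustration (plain Python, not PostgreSQL code; the function names are invented for this example): if each plan's cost estimate can be off by some factor in either direction, the range of true cost ratios consistent with an estimated ratio grows with the square of that factor.

```python
# Hypothetical sketch of the estimate-error argument above.
# Assumption (not from the source): each plan's estimate may be wrong
# by up to error_factor in either direction, independently.

def true_ratio_bounds(estimated_ratio, error_factor):
    """Range of actual cost ratios (A/B) consistent with the estimate.

    Worst case: A is overestimated by error_factor while B is
    underestimated by error_factor, or vice versa -- so the true
    ratio can differ from the estimate by error_factor squared.
    """
    low = estimated_ratio / (error_factor ** 2)
    high = estimated_ratio * (error_factor ** 2)
    return low, high

# Planner thinks plan A is 10x more expensive than plan B, but each
# estimate may itself be off by 10x:
low, high = true_ratio_bounds(10, 10)
# low = 0.1, high = 1000: plan A could really be 10x *cheaper* than B,
# so the 10x estimated ratio is a weak basis for refusing to run a query.

# The padding Tom describes: to reliably reject only plans costlier than
# some tolerable cost, the cutoff must absorb that squared error margin.
def padded_cutoff(tolerable_cost, safety_factor=100):
    return tolerable_cost * safety_factor
```

This is why a cutoff padded by 100x or 1000x stops protecting you from anything except truly pathological plans.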

			regards, tom lane

