
Re: TB-sized databases

From: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
To: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Cc: Gregory Stark <stark(at)enterprisedb(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Bill Moran <wmoran(at)collaborativefusion(dot)com>
Subject: Re: TB-sized databases
Date: 2007-11-29 16:54:32
Message-ID: 1196355273.31315.43.camel@PCD12478
Lists: pgsql-performance
On Thu, 2007-11-29 at 10:45 -0500, Tom Lane wrote:
> Given that this list spends all day every day discussing cases where the
> planner is wrong, I'd have to think that that's a bet I wouldn't take.
> 
> You could probably avoid this risk by setting the cutoff at something
> like 100 or 1000 times what you really want to tolerate, but how
> useful is it then?

It would still be useful in the sense that if the planner is producing
wrong estimates, you must correct it somehow: raise the statistics
target, rewrite the query, or do some other tweaking, but you should do
something. An error is sometimes better than gradually degrading
performance caused by, for example, a statistics target that is too
low. So if the error is thrown because of a wrong estimate, it is still
a valid error, raising a signal that the DBA has to do something about
it.
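
For illustration, a minimal sketch of the kind of fix meant here
(my_table and my_column are made-up names, and 1000 is an arbitrary
target):

    -- Raise the per-column sample size the planner works from
    ALTER TABLE my_table ALTER COLUMN my_column SET STATISTICS 1000;
    -- Recollect statistics so the planner actually sees the new target
    ANALYZE my_table;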

It's still true that if the planner estimates too low, no error will be
raised and the query will consume the resources anyway. But that's
exactly what we have now, so it wouldn't be a regression of any kind...
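
To make the comparison concrete: the estimate in question is the cost
that EXPLAIN already reports, and the proposed cutoff would be checked
against it. The setting below is hypothetical, named here only to
illustrate the idea under discussion:

    -- The top-line planner cost is what the cutoff would compare against
    EXPLAIN SELECT * FROM my_table WHERE my_column = 42;
    --   Seq Scan on my_table  (cost=0.00..1693.00 rows=1 width=8)

    -- Hypothetical setting, not an existing GUC:
    SET statement_cost_limit = 100000;  -- error if estimated cost exceeds this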

Cheers,
Csaba.



