
Re: TB-sized databases

From: "Trevor Talbot" <quension(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: TB-sized databases
Date: 2007-11-30 10:15:45
Message-ID: 90bce5730711300215x3ff6c68ekf1c55ce799e48578@mail.gmail.com
Lists: pgsql-performance
On 11/29/07, Gregory Stark <stark(at)enterprisedb(dot)com> wrote:
> "Simon Riggs" <simon(at)2ndquadrant(dot)com> writes:
> > On Wed, 2007-11-28 at 14:48 +0100, Csaba Nagy wrote:

> >> In fact an even more useful option would be to ask the planner to throw an
> >> error if the expected cost exceeds a certain threshold...

> > Tom's previous concerns were along the lines of "How would you know what to
> > set it to?", given that the planner costs are mostly arbitrary numbers.

> Hm, that's only kind of true.

> Obviously few people know how long such a page read takes, but surely you would
> just run a few sequential reads of large tables and set the limit to some
> multiple of whatever you find.
>
> This isn't going to be precise to the level of being able to avoid executing any
> query which will take over 1000ms. But it is going to be able to catch
> unconstrained cross joins or large sequential scans or such.
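
To make that concrete, the calibration could be as simple as the sketch below
(big_table stands in for any suitably large table; the figures are hypothetical):

  -- Time a full sequential read; count(*) forces a seq scan and avoids
  -- shipping rows to the client, so the runtime reflects the read itself.
  -- Suppose EXPLAIN ANALYZE reports a total cost of ~23000 units and an
  -- actual time of ~1.8 s:
  EXPLAIN ANALYZE SELECT count(*) FROM big_table;
  -- A ceiling set at a few multiples of 23000 cost units would then reject
  -- an unconstrained cross join or an enormous scan before it runs for hours.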

Isn't that what statement_timeout is for? Since this is entirely based
on estimates, using arbitrary fuzzy numbers for this seems fine to me;
precision isn't really the goal.
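
For reference, a minimal illustration of that existing knob (the one-second
value and the role name are just examples):

  -- Abort any statement in this session that runs longer than one second.
  SET statement_timeout = 1000;   -- value is in milliseconds

  -- Or make it the default for a hypothetical reporting role:
  ALTER ROLE reporting SET statement_timeout = 1000;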
