Re: TB-sized databases

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Bill Moran" <wmoran(at)collaborativefusion(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: TB-sized databases
Date: 2007-11-28 13:55:02
Message-ID: 871waah3ux.fsf@oxford.xeocode.com
Lists: pgsql-performance

"Bill Moran" <wmoran(at)collaborativefusion(dot)com> writes:

> In response to Matthew <matthew(at)flymine(dot)org>:
>
>> On Tue, 27 Nov 2007, Pablo Alcaraz wrote:
>> > it would be nice to do something with selects so we can retrieve a rowset
>> > from huge tables using criteria with indexes, without falling back to
>> > running a full scan.
>>
>> You mean: Be able to tell Postgres "Don't ever do a sequential scan of
>> this table. It's silly. I would rather the query failed than have to wait
>> for a sequential scan of the entire table."
>>
>> Yes, that would be really useful, if you have huge tables in your
>> database.
>
> Is there something wrong with:
> set enable_seqscan = off
> ?

This does roughly the opposite of what you would actually want here. What you
want is that if you give it a query which would be best satisfied by a
sequential scan, it should throw an error, since you've obviously made a
mistake in the query.

What this setting actually does is force such a query to use an even *slower*
method, such as a large index scan. And in cases where there isn't any other
method, it goes ahead and does the sequential scan anyway.
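
For illustration, a minimal sketch of that behaviour (the table and column
names here are made up, and the exact plans and cost numbers will vary by
installation):

    -- enable_seqscan = off does not forbid sequential scans; it only
    -- attaches a very large cost penalty to them, so the planner avoids
    -- them whenever any other plan is possible.
    SET enable_seqscan = off;

    -- Indexed predicate: the planner switches to the index scan.
    EXPLAIN SELECT * FROM big_table WHERE indexed_col = 42;

    -- Unindexed predicate: there is no alternative plan, so you still
    -- get a sequential scan (just with an inflated cost estimate)
    -- rather than an error.
    EXPLAIN SELECT * FROM big_table WHERE unindexed_col = 42;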

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's PostGIS support!
