From: Pablo Alcaraz <pabloa(at)laotraesquina(dot)com(dot)ar>
To: pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: TB-sized databases
Date: 2007-11-28 14:15:11
Message-ID: 474D77EF.8090605@laotraesquina.com.ar
Lists: pgsql-performance
Matthew wrote:
> On Tue, 27 Nov 2007, Pablo Alcaraz wrote:
>
>> it would be nice to do something with SELECTs so we can recover a row set
>> on huge tables using criteria with indexes, without falling back to a full
>> scan.
>>
>
> You mean: Be able to tell Postgres "Don't ever do a sequential scan of
> this table. It's silly. I would rather the query failed than have to wait
> for a sequential scan of the entire table."
>
> Yes, that would be really useful, if you have huge tables in your
> database.
>
Thanks. That would be nice too. I want Postgres to not fall back so easily
to a sequential scan when a field is indexed. Even if it concludes that
the index is *huge* and does not fit in RAM, I want PostgreSQL to use the
index anyway, because the table is *more than huge* and a sequential scan
would take hours.

I'll put some examples in a follow-up mail.
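For what it's worth, the closest existing knob I know of is the planner's
enable_seqscan setting, which discourages (but does not strictly forbid)
sequential scans for the current session. A minimal sketch, using a
hypothetical table and column name:

```sql
-- Discourage the planner from choosing a sequential scan in this session.
-- Note: this only inflates the estimated cost of seq scans; the planner
-- will still use one when no other plan (e.g. an index scan) is possible.
SET enable_seqscan = off;

-- Hypothetical huge table with an index on its "id" column; EXPLAIN
-- shows which plan the planner would now pick.
EXPLAIN SELECT * FROM huge_table WHERE id BETWEEN 1000 AND 2000;

-- Restore the default afterwards.
RESET enable_seqscan;
```

This is not the hard "fail instead of scanning" behaviour discussed above,
only a cost-based nudge, but it is often enough to force an index scan on
a table this size.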
Regards
Pablo