
Re: TB-sized databases

From: Pablo Alcaraz <pabloa(at)laotraesquina(dot)com(dot)ar>
To: pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: TB-sized databases
Date: 2007-11-28 14:15:11
Lists: pgsql-performance

Matthew wrote:
> On Tue, 27 Nov 2007, Pablo Alcaraz wrote:
>> it would be nice to do something with selects so we could retrieve a
>> row set from huge tables using indexed criteria without falling back
>> to a full scan.
> You mean: Be able to tell Postgres "Don't ever do a sequential scan of
> this table. It's silly. I would rather the query failed than have to wait
> for a sequential scan of the entire table."
> Yes, that would be really useful, if you have huge tables in your
> database.

Thanks. That would be nice too. I want Postgres not to fall back so
easily to a sequential scan when a field is indexed. If it concludes
that the index is *huge* and does not fit in RAM, I want PostgreSQL to
use the index anyway, because the table is *more than huge* and a
sequential scan would take hours.
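
For what it is worth, a partial workaround today is to disable
sequential scans at the session level, which makes the planner avoid
them whenever any other plan is available. A minimal sketch (the table
and column names, big_table and id, are invented for illustration):

    -- Strongly discourage sequential scans for this session only.
    -- Note: this does not forbid them outright; if no index-based
    -- plan exists, the planner will still use a seq scan.
    SET enable_seqscan = off;

    -- Verify the chosen plan before running the real query.
    EXPLAIN SELECT * FROM big_table WHERE id = 12345;

    -- Abort any statement running longer than 60 seconds (value in
    -- milliseconds), approximating "fail rather than wait hours".
    SET statement_timeout = 60000;

    -- Restore the defaults afterwards.
    RESET enable_seqscan;
    RESET statement_timeout;

Neither setting is the hard "never seq scan this table" switch
discussed above, but together they come reasonably close in practice.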

I'll put some examples in a follow-up mail.



