Re: TB-sized databases

From: Pablo Alcaraz
Subject: Re: TB-sized databases
Date:
Msg-id: 474D77EF.8090605@laotraesquina.com.ar
In reply to: Re: TB-sized databases  (Matthew <matthew@flymine.org>)
List: pgsql-performance
Matthew wrote:
> On Tue, 27 Nov 2007, Pablo Alcaraz wrote:
>
>> it would be nice to do something with selects so we can recover a rowset
>> on huge tables using a criteria with indexes without fall running a full
>> scan.
>>
>
> You mean: Be able to tell Postgres "Don't ever do a sequential scan of
> this table. It's silly. I would rather the query failed than have to wait
> for a sequential scan of the entire table."
>
> Yes, that would be really useful, if you have huge tables in your
> database.
>

Thanks. That would be nice too. I would like Postgres not to fall back so
easily to a sequential scan when a field is indexed. Even if it concludes
that the index is *huge* and does not fit in RAM, I want PostgreSQL to use
the index anyway, because the table is *more than huge* and a sequential
scan will take hours.
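
For what it is worth, the closest thing I know of today is to discourage
the planner per session rather than per table. This is only a minimal
sketch: the table and column names below are made up for illustration,
while enable_seqscan and statement_timeout are real PostgreSQL settings.

    -- Assumes a hypothetical table "big_table" with an index on "customer_id".
    SET enable_seqscan = off;        -- planner strongly prefers index paths where one exists
    SET statement_timeout = 600000;  -- abort any statement still running after 10 minutes (in ms)

    EXPLAIN SELECT * FROM big_table WHERE customer_id = 42;

    RESET enable_seqscan;
    RESET statement_timeout;

The drawback is that these settings affect every table touched in the
session, which is why a per-table rule would be much nicer for the
TB-sized case.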

I'll put some examples in a follow-up mail.

Regards

Pablo
