Re: Dealing with big tables

From: Merlin Moncure
Subject: Re: Dealing with big tables
Date:
Msg-id: b42b73150712031414y7b66a5cn5987e0a26e6b617@mail.gmail.com
In reply to: Re: Dealing with big tables ("Mindaugas" <ml@kilimas.com>)
List: pgsql-performance
On Dec 2, 2007 7:35 AM, Mindaugas <ml@kilimas.com> wrote:
>   I'd like this query to respond under a specific time limit. 5s now is OK but 50s later for 10000 rows is too slow.
>
>   Also off list I was pointed out about covering indexes in MySQL. But they are not supported in PostgreSQL, aren't they?

The PostgreSQL architecture does not allow that...some of the MVCC
info is stored in the tuple, so every lookup has to visit the heap.
An all-index access strategy would only help if all the information
being looked up was in the index, and that probably isn't practical
for a really big table anyway (I think said feature is overrated).
What would help, at least temporarily, would be to cluster the table
on your index so your lookups have some locality.  Just be aware that
CLUSTER is a one-shot deal...if there are changes to the table, it
will gradually fall out of order.  Otherwise it would be an ordered
table, which is what everyone wants...some of the big commercial
databases have this.
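For what it's worth, a minimal sketch of the clustering idea (table and
index names are made up for illustration; the `CLUSTER ... USING`
spelling is the modern syntax, older releases wrote `CLUSTER index ON
table`):

```sql
-- One-time physical reorder of the heap to match the index order,
-- so rows that are adjacent in the index are adjacent on disk:
CLUSTER big_table USING big_table_lookup_idx;

-- Because CLUSTER is one-shot, re-run it periodically after heavy
-- write activity; a bare CLUSTER reuses the remembered index.
CLUSTER big_table;

-- Refresh planner statistics after the reorder:
ANALYZE big_table;
```

Note that CLUSTER takes an exclusive lock and rewrites the whole table,
so on a big table you'd schedule it during a quiet window.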

In any case, you should still at least post an EXPLAIN ANALYZE to make
sure something funky isn't going on.  Barring that, I would be looking
at some type of caching to optimize around the problem...perhaps look
at the table design?
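Something along these lines, run in psql against the slow query (the
table, column, and literal values here are placeholders, since the
actual query wasn't posted):

```sql
-- EXPLAIN ANALYZE actually executes the query and reports the real
-- row counts and timings for each plan node, which is what tells you
-- whether the index is being used and where the time goes:
EXPLAIN ANALYZE
SELECT *
FROM big_table
WHERE account_id = 42
ORDER BY created_at DESC
LIMIT 10000;
```

Posting that output to the list makes it possible to see whether you
are getting an index scan with poor locality, a sequential scan, or a
bad row estimate.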

merlin
