Re: Performance issues when the number of records are around 10 Million

From: Kevin Grittner
Subject: Re: Performance issues when the number of records are around 10 Million
Date:
Msg-id: 4BEA6D2902000025000315F9@gw.wicourts.gov
In response to: Re: Performance issues when the number of records are around 10 Million  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses: Re: Performance issues when the number of records are around 10 Million
List: pgsql-performance
venu madhav <venutaurus539@gmail.com> wrote:

>> > If the records are more in the interval,
>>
>> How do you know that before you run your query?
>>
> I calculate the count first.

This and other comments suggest that the data is totally static
while this application is running.  Is that correct?

> If I generate all the pages at once, to retrieve all the 10 M
> records at once, it would take a much longer time

Are you sure of that?  It seems to me that it's going to read all
ten million rows once for the count and again for the offset.  It
might actually be faster to make a single pass over them and build
the pages as you go.
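
As a rough illustration (the table and column names below are just
placeholders I'm making up, not anything from your schema), the
pattern you described amounts to two full scans for every page:

    -- First scan: count all matching rows to size the pagination.
    SELECT count(*)
      FROM event_log
     WHERE ts BETWEEN '2010-05-01' AND '2010-05-12';

    -- Second scan: OFFSET still has to read and discard every row
    -- that comes before the requested page.
    SELECT *
      FROM event_log
     WHERE ts BETWEEN '2010-05-01' AND '2010-05-12'
     ORDER BY ts, id
     LIMIT 50 OFFSET 5000000;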

Also, you didn't address the issue of storing enough information on
the page to read off either edge in the desired sequence with just a
LIMIT and no offset.  "Last page" or "page up" would need to reverse
the direction on the ORDER BY.  This would be very fast if you have
appropriate indexes.  Your current technique can never be made very
fast.
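
A minimal sketch of what I mean, again with made-up names and with
:first_ts/:first_id and :last_ts/:last_id standing in for bind
parameters your application would remember from the page it just
displayed:

    -- "Page down": seek past the last row shown; no OFFSET needed.
    SELECT *
      FROM event_log
     WHERE (ts, id) > (:last_ts, :last_id)
     ORDER BY ts, id
     LIMIT 50;

    -- "Page up" (or "last page", with no WHERE clause at all):
    -- reverse the ORDER BY, seek back from the first row shown,
    -- then restore the display order in an outer query.
    SELECT *
      FROM (SELECT *
              FROM event_log
             WHERE (ts, id) < (:first_ts, :first_id)
             ORDER BY ts DESC, id DESC
             LIMIT 50) AS page
     ORDER BY ts, id;

With an index on (ts, id), each of these reads only the rows it
actually returns.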

-Kevin
