Re: browsing table with 2 million records

From: Scott Marlowe
Subject: Re: browsing table with 2 million records
Date:
Msg-id: 1130360798.2872.57.camel@state.g2switchworks.com
In reply to: browsing table with 2 million records  (aurora <aurora00@gmail.com>)
List: pgsql-performance
On Wed, 2005-10-26 at 15:41, aurora wrote:
> I am running PostgreSQL 7.4 on FreeBSD. The main table has 2 million
> records (we would like to get to at least 10 million or more). It is
> mainly a FIFO structure, with maybe 200,000 new records coming in each
> day that displace the older records.
>
> We have a GUI that lets the user browse through the records page by page,
> about 25 records at a time. (Don't ask me why, but we have to have this
> GUI.) This translates to something like
>
>   select count(*) from table   <-- to give feedback about the DB size
>   select * from table order by date limit 25 offset 0
>
> The tables seem properly indexed, and vacuum and analyze are run regularly.
> Still, these very basic SQL statements take up to a minute to run.
>
> I read some recent messages saying that select count(*) needs a table
> scan in PostgreSQL. That's disappointing, but I can accept an
> approximation if there is some way to get one. But how can I optimize
> select * from table order by date limit x offset y? A one-minute
> response time is not acceptable.

Have you run your script without the select count(*) part and timed it?
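If an approximate count is acceptable, one common trick (a sketch only, not
something suggested in this thread) is to read the planner's row estimate
from pg_class instead of scanning the table. It is only as fresh as the last
vacuum/analyze, and 'yourtable' is a placeholder for the real table name:

  -- estimate maintained by vacuum/analyze; cheap, but not exact
  select reltuples::bigint as approx_rows
  from pg_class
  where relname = 'yourtable';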

What does

explain analyze select * from table order by date limit 25 offset 0

say?

Is date indexed?
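If it isn't, a btree index on the sort column is the first thing to try, and
remembering the last date shown on the previous page lets you avoid large
OFFSETs entirely. This is a sketch under assumptions: 'mytable', the index
name, and the literal timestamp are placeholders, and it assumes date is
unique enough to page on (otherwise you need a tiebreaker column):

  create index mytable_date_idx on mytable (date);

  -- next page: start from the last date already shown instead of using
  -- OFFSET, so the index scan can begin at the right row rather than
  -- reading and discarding y rows first
  select * from mytable
  where date > '2005-10-25 15:41:00'
  order by date
  limit 25;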
