Re: browsing table with 2 million records
| From | Mark Lewis |
|---|---|
| Subject | Re: browsing table with 2 million records |
| Date | |
| Msg-id | 1130360385.1156.12.camel@archimedes |
| In reply to | browsing table with 2 million records (aurora <aurora00@gmail.com>) |
| Responses | Re: browsing table with 2 million records |
| List | pgsql-performance |
Do you have an index on the date column? Can you post an EXPLAIN ANALYZE for the slow query?

-- Mark Lewis

On Wed, 2005-10-26 at 13:41 -0700, aurora wrote:
> I am running PostgreSQL 7.4 on FreeBSD. The main table has 2 million
> records (we would like to do at least 10 million or more). It is mainly
> a FIFO structure, with maybe 200,000 new records coming in each day that
> displace the older records.
>
> We have a GUI that lets users browse through the records page by page,
> about 25 records at a time. (Don't ask me why, but we have to have this
> GUI.) This translates to something like
>
>     select count(*) from table  <-- to give feedback about the DB size
>     select * from table order by date limit 25 offset 0
>
> The tables seem properly indexed, and vacuum and analyze are run
> regularly. Still, these very basic SQL statements take up to a minute
> to run.
>
> I read in some recent messages that select count(*) needs a table scan
> in PostgreSQL. That's disappointing, but I can accept an approximation
> if there is some way to get one. But how can I optimize
> select * from table order by date limit x offset y? A one-minute
> response time is not acceptable.
>
> Any help would be appreciated.
>
> Wy
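For the count(*) feedback, a common workaround is to read the planner's estimate instead of scanning the table. A minimal sketch, assuming the table is named `mytable` (a placeholder, not from the original post); `reltuples` is only as fresh as the last VACUUM or ANALYZE:

```sql
-- Approximate row count from planner statistics; avoids the full table
-- scan that SELECT count(*) requires. Accuracy depends on how recently
-- VACUUM/ANALYZE last ran on the table.
SELECT reltuples::bigint AS approximate_rows
FROM pg_class
WHERE relname = 'mytable';
```

For the paging query, the usual fix is keyset ("seek") pagination: the client remembers where the previous page ended and the next query starts there, instead of making the server scan and discard all the rows an OFFSET skips. A sketch under the assumption of an index on `(date, id)`; the table, column names, and boundary values are placeholders:

```sql
-- Fetch the page that follows the row (date = '2005-10-26', id = 12345),
-- i.e. the last row the client saw on the previous page. With an index on
-- (date, id) each page touches only ~25 rows, whereas
-- ORDER BY date LIMIT 25 OFFSET n must walk past all n earlier rows first.
SELECT *
FROM mytable
WHERE date > '2005-10-26'
   OR (date = '2005-10-26' AND id > 12345)
ORDER BY date, id
LIMIT 25;
```

The `id` tiebreaker matters because `date` alone may not be unique; without it, rows that share a date could be skipped or repeated across page boundaries.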