Re: browsing table with 2 million records

From: Alex Turner
Subject: Re: browsing table with 2 million records
Date:
Msg-id: 33c6269f0510261428s1d320fc6w5e0ea16cfa333175@mail.gmail.com
In reply to: Re: browsing table with 2 million records  ("Joshua D. Drake" <jd@commandprompt.com>)
List: pgsql-performance
You could also create your own index, so to speak: a table that
simply contains a list of primary keys and an order-value field that
you can use as your offset.  This can be kept in sync with the master
table using triggers pretty easily.  2 million rows is not very much
if you only have an integer pkey and an integer order value, and you
can then join it against the main table.

create table my_index_table (
primary_key_value int,
order_val int,
primary key (primary_key_value));

create index my_index_table_order_val_i on my_index_table (order_val);

select * from main_table a, my_index_table b where b.order_val>=25 and
b.order_val<50 and a.primary_key_id=b.primary_key_value;
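
Maintaining order_val row by row is the tricky part, since an insert in
the middle of the sort order would shift every later rank.  A simpler
variant is to rebuild the ordering in bulk, e.g. from a periodic job;
this is only a sketch, and the main table's "id" and "date" columns are
assumed names:

```sql
-- Rebuild the pagination index in one pass (PostgreSQL 8.4+ for
-- row_number()); main_table, id, and date are hypothetical names.
truncate my_index_table;
insert into my_index_table (primary_key_value, order_val)
select id, row_number() over (order by date)
from main_table;
```

A trigger-based version handles deletes cheaply (just delete the
matching index row) but has to renumber on insert, which is why the
bulk rebuild is often the easier trade-off.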

If the data updates a lot, though, this won't work as well, as the
index table will require frequent updates to a potentially large
number of records (although a small number of pages, so it still won't
be horrible).
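
For the count(*) feedback in the GUI, an approximate answer is
available from the planner's statistics without any table scan; a
sketch, where "main_table" is an assumed name and accuracy depends on
how recently the table was vacuumed or analyzed:

```sql
-- Approximate row count from the system catalogs; no table scan.
select reltuples::bigint as approx_rows
from pg_class
where relname = 'main_table';
```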

Alex Turner
NetEconomist

On 10/26/05, Joshua D. Drake <jd@commandprompt.com> wrote:
>
> > We have a GUI that lets the user browse through the records page by
> > page at about 25 records at a time. (Don't ask me why, but we have to
> > have this GUI). This translates to something like
> >
> >   select count(*) from table   <-- to give feedback about the DB size
>
> Do you have an integer field that is an ID that increments? E.g. serial?
>
> >   select * from table order by date limit 25 offset 0
>
> You could use a cursor.
>
> Sincerely,
>
> Joshua D. Drake
>
>
> >
> > Tables seem properly indexed, with vacuum and analyze run regularly.
> > Still, these very basic SQLs take up to a minute to run.
> >
> > I read some recent messages saying that select count(*) needs a table
> > scan in PostgreSQL. That's disappointing, but I can accept an
> > approximation if there is some way to get one. But how can I optimize
> > select * from table order by date limit x offset y? A one-minute
> > response time is not acceptable.
> >
> > Any help would be appreciated.
> >
> > Wy
> >
> >
> --
> The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
> Managed Services, Shared and Dedicated Hosting
> Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 5: don't forget to increase your free space map settings
>
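
A cursor-based version of the paging query, following Joshua's
suggestion above, might look like this; it must run inside a single
transaction, and the table and cursor names are illustrative:

```sql
-- Server-side cursor paging; the cursor keeps its position between
-- FETCHes for as long as the transaction stays open.
begin;
declare page_cur scroll cursor for
    select * from main_table order by date;
move absolute 25 in page_cur;   -- skip past the rows before the page
fetch 25 from page_cur;         -- returns rows 26..50
commit;
```

This avoids re-sorting the whole table for every page, but it ties the
page state to an open transaction, which may not fit a stateless GUI.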
