Re: Performance with very large tables

From: Shoaib Mir
Subject: Re: Performance with very large tables
Date:
Msg-id: bf54be870701150324j3ec5126blcb02c362c73dbff6@mail.gmail.com
In response to: Re: Performance with very large tables  (Richard Huxton <dev@archonet.com>)
Responses: Re: Performance with very large tables  (Richard Huxton <dev@archonet.com>)
List: pgsql-general
You can also opt for partitioning the tables; that way a SELECT will only read the data from the required partition.
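As a rough sketch (table, column, and partition names are hypothetical): in the 8.2 series, partitioning is done with table inheritance plus CHECK constraints, and the planner can skip irrelevant partitions when constraint_exclusion is enabled:

```sql
-- Parent table; children hold the actual rows, one per month.
CREATE TABLE logs (id bigint, logdate date, payload text);

CREATE TABLE logs_2007_01 (
    CHECK (logdate >= DATE '2007-01-01' AND logdate < DATE '2007-02-01')
) INHERITS (logs);

CREATE TABLE logs_2007_02 (
    CHECK (logdate >= DATE '2007-02-01' AND logdate < DATE '2007-03-01')
) INHERITS (logs);

-- With constraint exclusion on, this query only scans logs_2007_01.
SET constraint_exclusion = on;
SELECT * FROM logs WHERE logdate = DATE '2007-01-15';
```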

--------------
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 1/15/07, Richard Huxton <dev@archonet.com> wrote:
Jan van der Weijde wrote:
> Hello all,
>
> one of our customers is using PostgreSQL with tables containing millions
> of records. A simple 'SELECT * FROM <table>'  takes way too much time in
> that case, so we have advised him to use the LIMIT and OFFSET clauses.

That won't reduce the time to fetch millions of rows.
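To spell out why (table and column names hypothetical): each LIMIT/OFFSET page is a fresh query, and OFFSET still has to read and discard all the skipped rows, so later pages get progressively slower:

```sql
-- Page 20,001 of 50 rows each: the server must still produce and
-- throw away the first 1,000,000 rows before returning anything.
SELECT * FROM big_table ORDER BY id LIMIT 50 OFFSET 1000000;
```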

It sounds like your customer doesn't want millions of rows at once, but
rather a few rows quickly and then to fetch more as required. For this
you want to use a cursor. You can do this via SQL, or perhaps via your
database library.

In SQL:
http://www.postgresql.org/docs/8.2/static/sql-declare.html
http://www.postgresql.org/docs/8.2/static/sql-fetch.html
In pl/pgsql:
http://www.postgresql.org/docs/8.2/static/plpgsql-cursors.html
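A minimal SQL-level session (cursor and table names hypothetical) looks like this; note that a cursor must live inside a transaction unless declared WITH HOLD:

```sql
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;

FETCH 100 FROM big_cur;   -- first 100 rows, returned quickly
FETCH 100 FROM big_cur;   -- next 100, resuming where we left off

CLOSE big_cur;
COMMIT;
```

Unlike LIMIT/OFFSET, each FETCH continues from the cursor's current position instead of re-running the query from the start.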

HTH
--
   Richard Huxton
   Archonet Ltd

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend
