Performance with very large tables

From: Jan van der Weijde
Subject: Performance with very large tables
Date:
Msg-id: 4B9C73D1EB78FE4A81475AE8A553B3C67DC532@exch-lei1.attachmate.com
Responses: Re: Performance with very large tables (Richard Huxton <dev@archonet.com>)
Re: Performance with very large tables (Bruno Wolff III <bruno@wolff.to>)
List: pgsql-general
Hello all,
 
one of our customers is using PostgreSQL with tables containing millions of records. A simple 'SELECT * FROM <table>' takes far too long in that case, so we have advised him to use the LIMIT and OFFSET clauses. However, he now has a concurrency problem: records deleted, added, or updated in one process shift the OFFSET position seen by another process, so that records are either skipped or read twice.
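
To illustrate, a minimal sketch of that paging scheme and how it goes wrong; the table name 'orders', its 'id' column, and the page size of 1000 are my own placeholders, not the customer's actual schema:

-- Process A fetches the first page:
SELECT * FROM orders ORDER BY id LIMIT 1000 OFFSET 0;

-- Meanwhile, process B deletes a row that sorted into that page:
DELETE FROM orders WHERE id = 42;

-- Process A fetches the next page; every remaining row has shifted up
-- by one position, so one row is silently skipped (a concurrent INSERT
-- would instead cause a row to be read a second time):
SELECT * FROM orders ORDER BY id LIMIT 1000 OFFSET 1000;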
The solution to that problem is to use transactions with isolation level SERIALIZABLE, but holding a transaction open around a loop that reads millions of records is far from ideal, I think.
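
A minimal sketch of that workaround, using the same hypothetical table: all pages are read inside one transaction, so they see a single consistent snapshot, at the cost of keeping the transaction open for the whole scan:

BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM orders ORDER BY id LIMIT 1000 OFFSET 0;
SELECT * FROM orders ORDER BY id LIMIT 1000 OFFSET 1000;
-- ... and so on for the remaining pages ...
COMMIT;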
Does anyone have a suggestion for this problem? Is there, for instance, an alternative to LIMIT/OFFSET so that SELECT on large tables performs well?
 
Thank you for your help
 
Jan van der Weijde
 
