Re: very very slow inserts into very large table

From: Samuel Gendler
Subject: Re: very very slow inserts into very large table
Date:
Msg-id: CAEV0TzDnzop=dZMwynDzPhukiNkksZfW7dzN4O-co43ZTWchNw@mail.gmail.com
In reply to: Re: very very slow inserts into very large table  (Mark Thornton <mthornton@optrak.com>)
Responses: Re: very very slow inserts into very large table  (Jon Nelson <jnelson+pgsql@jamponi.net>)
           Re: very very slow inserts into very large table  (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-performance
On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton <mthornton@optrak.com> wrote:

> Every insert updates four indexes, so at least 3 of those will be in random order. The indexes don't fit in memory, so all those updates will involve reading most of the relevant b-tree pages from disk (or at least the leaf level). A total of 10ms of random read from disk (per inserted row) wouldn't surprise me ... which adds up to more than 10 days for your 93 million rows.
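A quick sanity check of that arithmetic (the 10 ms per row is a back-of-envelope guess, not a measurement):

```sql
-- Back-of-envelope only: ~10 ms of random read per inserted row,
-- 93 million rows, 86,400 seconds per day.
SELECT 93000000 * 0.010 / 86400.0 AS days;  -- ≈ 10.8 days
```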

Which is the long way of saying that you will likely benefit from partitioning that table into a number of smaller tables, especially if queries on it tend to touch only a subset of the data that maps to a small number of partitions. At the very least, inserts will be faster, because each individual index will be smaller. And if queries can be constrained to a subset of partitions, you'll also see improved select performance.
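As a rough sketch of what that might look like (table and column names are hypothetical; the declarative syntax shown requires PostgreSQL 10 or later — older versions would use inheritance with CHECK constraints and an insert trigger instead):

```sql
-- Hypothetical schema: range-partition by month so that each
-- partition's indexes are small enough to stay cached in memory.
CREATE TABLE events (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    text
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2012_07 PARTITION OF events
    FOR VALUES FROM ('2012-07-01') TO ('2012-08-01');

-- An index on the parent is created on each partition individually,
-- so every per-partition b-tree stays far smaller than one
-- monolithic index over the whole table.
CREATE INDEX ON events (created_at);
```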

--sam


In the pgsql-performance list, by message date:

Previous
From: Craig Ringer
Date:
Message: Re: query overhead
Next
From: Jon Nelson
Date:
Message: Re: very very slow inserts into very large table