Re: Big data INSERT optimization - ExclusiveLock on extension of the table

From: pinker
Subject: Re: Big data INSERT optimization - ExclusiveLock on extension of the table
Date:
Msg-id: 1471559195448-5917136.post@n5.nabble.com
In reply to: Re: Big data INSERT optimization - ExclusiveLock on extension of the table  (Jim Nasby <Jim.Nasby@BlueTreble.com>)
Responses: Re: Re: Big data INSERT optimization - ExclusiveLock on extension of the table  (Jim Nasby <Jim.Nasby@BlueTreble.com>)
List: pgsql-performance

> 1. rename table t01 to t02
OK...
> 2. insert into t02 1M rows in chunks for about 100k
Why not just insert into t01??

Because of CPU utilization; it speeds up when the load is divided into chunks.
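
As a minimal sketch of what "insert in chunks of about 100k" might look like (the source table `src` and the sequential key `id` are assumed names for illustration, not the poster's actual schema):

```sql
-- Hypothetical sketch: loading ~1M rows into the staging table in
-- ~100k-row chunks instead of one big INSERT. Each statement can be
-- issued from a separate session so the work is spread across CPUs
-- and each transaction stays short.
INSERT INTO t02 SELECT * FROM src WHERE id >      0 AND id <= 100000;
INSERT INTO t02 SELECT * FROM src WHERE id > 100000 AND id <= 200000;
-- ... and so on, one chunk per statement, up to id <= 1000000.
```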

> 3. from t01 (previously loaded table) insert data through stored procedure
But you renamed t01 so it no longer exists???
> to b01 - this happens parallel in over a dozen sessions
b01?

That's another table, a permanent one.

> 4. truncate t01
Huh??

The data were inserted into permanent storage, so the temporary table can be
truncated and reused.

OK, maybe the process itself is not so important; let's say the table is
loaded, then the data are fetched and reloaded into another table through a
stored procedure (with its own logic), then the table is truncated and the
process starts again. The most important part is that ExclusiveLocks are held
for ~1-5 s.
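
For clarity, that cycle could be sketched roughly as follows. This is only an illustration under assumed names (staging table t01, permanent table b01, and a hypothetical stored procedure `load_batch()`); it is not the poster's actual code.

```sql
-- Hypothetical sketch of the staged-load cycle; t01, b01 and
-- load_batch() are placeholder names.

-- 1. staging table t01 has just been bulk-loaded in ~100k-row chunks

-- 2. reload the data into the permanent table through a stored
--    procedure (run in parallel from over a dozen sessions):
SELECT load_batch();   -- applies its own logic, writes rows into b01

-- 3. once the rows are safely in b01, empty the staging table so the
--    next load can reuse it:
TRUNCATE t01;

-- The relation-extension locks discussed in this thread are visible
-- in pg_locks while the parallel INSERTs run:
SELECT locktype, relation::regclass, mode, granted
FROM pg_locks
WHERE locktype = 'extend';
```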

--
View this message in context:
http://postgresql.nabble.com/Big-data-INSERT-optimization-ExclusiveLock-on-extension-of-the-table-tp5916781p5917136.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.

