Re: Bulk Insert into PostgreSQL

From: Srinivas Karthik V
Subject: Re: Bulk Insert into PostgreSQL
Date:
Msg-id: CAEfuzeSHg9d3C6F5SyJRSQDFXyh-xhqjp91=n+2B-rN_oomSOQ@mail.gmail.com
In reply to: Re: Bulk Insert into PostgreSQL  (Don Seiler <don@seiler.us>)
Responses: Re: Bulk Insert into PostgreSQL  (Craig Ringer <craig@2ndquadrant.com>)
RE: Bulk Insert into PostgreSQL  ("Tsunakawa, Takayuki" <tsunakawa.takay@jp.fujitsu.com>)
List: pgsql-hackers
I was using the COPY command to load. Removing the primary key constraint on the table and then loading it helps a lot. In fact, a 400 GB table was loaded and the primary key constraint added back in around 15 hours. Thanks for the wonderful suggestions.
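The workflow described above can be sketched in SQL as follows. This is a minimal illustration, not from the original thread: the table name, columns, and file path are all hypothetical.

```sql
-- Hypothetical table and file path, for illustration only.

-- 1. Create the table WITHOUT the primary key constraint.
CREATE TABLE measurements (
    id       bigint NOT NULL,
    reading  double precision,
    taken_at timestamptz
);

-- 2. Bulk-load with COPY. The server-side COPY ... FROM reads a file
--    on the database server; use psql's \copy for a client-side file.
COPY measurements FROM '/data/measurements.csv' WITH (FORMAT csv);

-- 3. Add the primary key afterwards. This builds the unique index in
--    one pass over the loaded data instead of maintaining it row by
--    row during the load.
ALTER TABLE measurements ADD PRIMARY KEY (id);
```

Raising maintenance_work_mem for the session before step 3 can further speed up the index build behind the constraint.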

Regards,
Srinivas Karthik

On 28 Jun 2018 2:07 a.m., "Don Seiler" <don@seiler.us> wrote:
On Wed, Jun 27, 2018 at 6:25 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:


Other parameters are set to default value. Moreover, I have specified the primary key constraint during table creation. This is the only possible index being created before data loading and I am sure there are no other indexes apart from the primary key column(s).

When doing initial bulk data loads, I would suggest not applying ANY constraints or indexes on the table until after the data is loaded. Especially unique constraints/indexes; those will slow things down A LOT.
 

The main factor is using COPY instead of INSERTs.


+1 to COPY.
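For reference, a sketch of the two approaches being compared (table and file names are hypothetical, continuing the illustration above):

```sql
-- Row-by-row INSERTs: each statement costs a parse, plan, and
-- round trip, which dominates at bulk-load volumes.
INSERT INTO measurements (id, reading, taken_at)
VALUES (1, 20.5, '2018-06-27 12:00:00+00');

-- COPY streams the entire file in a single statement; psql's \copy
-- variant reads the file on the client side.
\copy measurements FROM 'measurements.csv' WITH (FORMAT csv)
```

Batching many rows into one multi-row INSERT narrows the gap, but COPY remains the standard tool for loads of this size.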


--
Don Seiler
www.seiler.us

In the pgsql-hackers list, by date sent:

Previous
From: Alvaro Herrera
Date:
Message: Re: Explain buffers wrong counter with parallel plans
Next
From: Alvaro Herrera
Date:
Message: Re: pgsql: Fix "base" snapshot handling in logical decoding