RE: Bulk Insert into PostgreSQL
From | Tsunakawa, Takayuki
Subject | RE: Bulk Insert into PostgreSQL
Date | |
Msg-id | 0A3221C70F24FB45833433255569204D1FA27379@G01JPEXMBYT05
In reply to | Re: Bulk Insert into PostgreSQL (Srinivas Karthik V <skarthikv.iitb@gmail.com>)
Responses | Re: Bulk Insert into PostgreSQL
List | pgsql-hackers
From: Srinivas Karthik V [mailto:skarthikv.iitb@gmail.com]
> I was using copy command to load. Removing the primary key constraint on
> the table and then loading it helps a lot. In fact, a 400GB table was loaded
> and the primary constraint was added in around 15 hours. Thanks for the
> wonderful suggestions.

400 GB / 15 hours = 7.6 MB/s

That looks too slow. I experienced a similar slowness. While one of our users tried to INSERT (not COPY) a billion records, they reported that INSERTs slowed down by 10 times or so after inserting about 500 million records. Periodic pstack runs on Linux showed that the backend was busy in btree operations. I didn't pursue the cause due to other business, but there might be something to be improved.

Regards
Takayuki Tsunakawa
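[Editor's note: the drop-constraint-then-load approach quoted above can be sketched as follows. The table name, key column, and file path are hypothetical, for illustration only.]

```sql
-- Sketch: bulk load with the primary key dropped, then rebuild it once.
-- "measurements", "id", and the file path are placeholder names.
BEGIN;
ALTER TABLE measurements DROP CONSTRAINT measurements_pkey;
COPY measurements FROM '/path/to/data.csv' WITH (FORMAT csv);
-- Re-adding the constraint builds the btree in one bulk pass,
-- instead of maintaining it row by row during the load.
ALTER TABLE measurements ADD PRIMARY KEY (id);
COMMIT;
```

Raising maintenance_work_mem for the session (e.g. `SET maintenance_work_mem = '1GB';`) can speed up the index build in the ADD PRIMARY KEY step.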