Re: LOCK TABLE & speeding up mass data loads
From | Shridhar Daithankar |
---|---|
Subject | Re: LOCK TABLE & speeding up mass data loads |
Date | |
Msg-id | 3E354CFB.32732.A38120C@localhost |
In reply to | Re: LOCK TABLE & speeding up mass data loads (Ron Johnson <ron.l.johnson@cox.net>) |
Responses | Re: LOCK TABLE & speeding up mass data loads |
 | Re: LOCK TABLE & speeding up mass data loads |
List | pgsql-performance |
On 27 Jan 2003 at 3:08, Ron Johnson wrote:

> Here's what I'd like to see:
>
> COPY table [ ( column [, ...] ) ]
>     FROM { 'filename' | stdin }
>     [ [ WITH ]
>           [ BINARY ]
>           [ OIDS ]
>           [ DELIMITER [ AS ] 'delimiter' ]
>           [ NULL [ AS ] 'null string' ] ]
>     [ COMMIT EVERY ... ROWS WITH LOGGING ]  <<<<<<<<<<<<<
>     [ SKIP ... ROWS ]                       <<<<<<<<<<<<<
>
> This way, if I'm loading 25M rows, I can have it commit every, say,
> 1000 rows, and if it pukes 1/2 way thru, then when I restart the
> COPY, it can SKIP past what's already been loaded, and proceed apace.

IIRC, there is a hook in psql's \copy (not the PostgreSQL COPY command itself) for how many rows you would like per transaction. I remember benchmarking that and concluding that doing the whole COPY in one transaction is the fastest way of doing it. I don't have a PostgreSQL installation handy (I'm on Linux at the moment), but this is definitely possible.

Bye
Shridhar

--
I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-) (Andrew Tanenbaum to Linus Torvalds)
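For readers arriving from the archives: the COMMIT EVERY / SKIP behaviour Ron proposes can be approximated from the client side. Below is a minimal sketch, not code from the thread, assuming psycopg2, a hypothetical table bigtable, and a hypothetical tab-separated input file data.tsv; it also assumes the table was empty when the load began, so a row count can stand in for "rows already loaded".

    import io
    import psycopg2  # assumption: any PostgreSQL client library would do

    BATCH = 1000  # commit every 1000 rows, as in Ron's example

    conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
    cur = conn.cursor()

    # Find out how far a previous, interrupted run got. This is the
    # poor man's "SKIP ... ROWS": it assumes the table started empty
    # and the input file order is stable across runs.
    cur.execute("SELECT count(*) FROM bigtable")  # hypothetical table
    already_loaded = cur.fetchone()[0]

    def flush(rows):
        # COPY one batch in, then commit -- the "COMMIT EVERY n ROWS" part.
        cur.copy_from(io.StringIO("".join(rows)), "bigtable")
        conn.commit()

    batch = []
    with open("data.tsv") as f:  # hypothetical tab-separated input
        for lineno, line in enumerate(f):
            if lineno < already_loaded:
                continue  # skip rows loaded by the earlier, failed run
            batch.append(line)
            if len(batch) == BATCH:
                flush(batch)
                batch = []
        if batch:
            flush(batch)

    conn.close()

Note the trade-off Shridhar benchmarked: each commit costs you, so one big transaction is fastest when nothing goes wrong; the batching above only buys restartability. The count(*) restart trick is fragile; a real implementation would record the file offset in a side table inside the same transaction as each batch.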