Re: LOCK TABLE & speeding up mass data loads
From | Ron Johnson |
---|---|
Subject | Re: LOCK TABLE & speeding up mass data loads |
Date | |
Msg-id | 1043661264.9231.8.camel@haggis |
In reply to | Re: LOCK TABLE & speeding up mass data loads ("Shridhar Daithankar" <shridhar_daithankar@persistent.co.in>) |
Responses | Re: LOCK TABLE & speeding up mass data loads |
List | pgsql-performance |
On Mon, 2003-01-27 at 03:45, Shridhar Daithankar wrote:
> On 27 Jan 2003 at 3:08, Ron Johnson wrote:
> >
> > Here's what I'd like to see:
> > COPY table [ ( column [, ...] ) ]
> >     FROM { 'filename' | stdin }
> >     [ [ WITH ]
> >         [ BINARY ]
> >         [ OIDS ]
> >         [ DELIMITER [ AS ] 'delimiter' ]
> >         [ NULL [ AS ] 'null string' ] ]
> >     [COMMIT EVERY ... ROWS WITH LOGGING]   <<<<<<<<<<<<<
> >     [SKIP ... ROWS]                        <<<<<<<<<<<<<
> >
> > This way, if I'm loading 25M rows, I can have it commit every, say,
> > 1000 rows, and if it pukes 1/2 way thru, then when I restart the
> > COPY, it can SKIP past what's already been loaded, and proceed apace.
>
> IIRC, there is a hook to \copy, not the postgreSQL command copy, for how
> many transactions you would like to see.

I'll have to look into that.

> I remember to have benchmarked that and concluded that doing copy in one
> transaction is the fastest way of doing it.

Boy Scout motto: Be prepared!! (Serves me well as a DBA.)

So it takes a little longer. In case of failure, the time would be more
than made up.

Also, wouldn't the WAL grow hugely if many millions of rows were
inserted in one txn?

--
+---------------------------------------------------------------+
| Ron Johnson, Jr.     mailto:ron.l.johnson@cox.net              |
| Jefferson, LA USA    http://members.cox.net/ron.l.johnson      |
|                                                                |
| "Fear the Penguin!!"                                           |
+---------------------------------------------------------------+
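[Editor's note: the proposed COMMIT EVERY ... ROWS / SKIP ... ROWS options were never added to COPY, but the behavior can be approximated client-side. A minimal sketch of the batching/skip logic, with the actual per-batch COPY calls omitted (the function name `batches` and the example data are illustrative, not from the thread):]

```python
# Client-side emulation of the proposed COMMIT EVERY ... ROWS and
# SKIP ... ROWS options: chunk the input into fixed-size batches,
# first discarding rows a previous, interrupted run already committed.
# Each yielded batch would be fed to its own COPY in its own transaction.
from itertools import islice

def batches(rows, batch_size, skip=0):
    """Yield lists of at most batch_size rows, skipping the first `skip`."""
    it = iter(rows)
    # SKIP ... ROWS: advance past rows the failed load already committed.
    next(islice(it, skip, skip), None)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            break
        yield batch  # COMMIT EVERY batch_size ROWS: one txn per batch

# Example: resume a 10-row load after 4 rows were already committed.
rows = [f"row{i}" for i in range(10)]
loaded = [b for b in batches(rows, batch_size=3, skip=4)]
# loaded == [['row4', 'row5', 'row6'], ['row7', 'row8', 'row9']]
```

The restart cost is then just re-scanning (not re-inserting) the already-loaded prefix of the input file, which is what the SKIP clause in the proposal was after.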