Re: Better copy/import
From | Gary Stainburn |
---|---|
Subject | Re: Better copy/import |
Date | |
Msg-id | 01072315090906.27033@gary.ringways.co.uk |
In reply to | Better copy/import (Steven Lane <stevelcmc@mindspring.com>) |
List | pgsql-admin |
Hi Steve,

I don't know of a direct method, e.g. turning off the 'die' behaviour of COPY, but when I was in a similar situation I found it easier to just load the file in vi and clean the data first. Admittedly I only had a couple of hundred lines though.

Alternatively, you could write a small parser in your favourite language (Perl). Just turn on auto-commit, prepare the insert, and then execute the prepared insert once per line. Write any line that gets a bad result to a separate file. You can probably just fix that file in vi and re-run those lines through the same script.

Gary

On Monday 23 July 2001 2:24 pm, Steven Lane wrote:
> Hello all:
>
> Sorry for the bad subject line on the last version of this post.
>
> I'm trying to load about 10M rows of data into a simple postgres table. The
> data is straightforward and fairly clean, but does have glitches every few
> tens of thousands of rows. My problem is that when COPY hits a bad row it
> just aborts, leaving me to go back, delete or clean up the row and try
> again.
>
> Is there any way to import records that could just skip the bad ones and
> notify me which ones they are? Loading this much data is pretty
> time-consuming, especially when I keep having to repeat it to find each new
> bad row. Is there a better way?
>
> -- sgl
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster

--
Gary Stainburn

This email does not contain private or confidential material as it may be snooped on by interested government parties for unknown and undisclosed purposes - Regulation of Investigatory Powers Act, 2000
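[Editor's note: a minimal sketch of the per-line insert approach described above, using Perl with DBI/DBD::Pg. The table name (mytable), column list, tab-delimited input file (data.txt) and connection details are placeholders, not taken from the original thread; adjust them to your own schema.]

#!/usr/bin/perl -w
# Sketch of the "insert one row at a time, log the failures" approach.
# Assumes a tab-delimited file and a two-column target table.
use strict;
use DBI;

my $dbh = DBI->connect("dbi:Pg:dbname=mydb", "user", "password",
                       { AutoCommit => 1,   # each insert commits on its own
                         RaiseError => 0,   # don't die on a bad row
                         PrintError => 0 })
    or die $DBI::errstr;

my $sth = $dbh->prepare("INSERT INTO mytable (col1, col2) VALUES (?, ?)");

open my $in,  '<', 'data.txt'     or die "data.txt: $!";
open my $bad, '>', 'bad_rows.txt' or die "bad_rows.txt: $!";

while (my $line = <$in>) {
    chomp $line;
    my @fields = split /\t/, $line;
    # A failed execute returns false instead of aborting the whole load;
    # save the offending line (plus the error) so it can be fixed in vi
    # and re-run through this same script.
    unless ($sth->execute(@fields)) {
        print $bad "$line\t# " . $sth->errstr . "\n";
    }
}

close $in;
close $bad;
$dbh->disconnect;

Because AutoCommit is on and RaiseError/PrintError are off, a bad row only fails its own INSERT; the good rows around it are already committed, and the bad ones end up in bad_rows.txt where they can be edited and fed back through the same script.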