Re: Populating large tables with occasional bad values

From: John T. Dow
Subject: Re: Populating large tables with occasional bad values
Date:
Msg-id: 200806111658.m5BGwRZX057718@web2.nidhog.com
In reply to: Re: Populating large tables with occasional bad values (Craig Ringer <craig@postnewspapers.com.au>)
Responses: Re: Populating large tables with occasional bad values
           Re: Populating large tables with occasional bad values
List: pgsql-jdbc
Latency it is.

I just had no idea it would add up so fast. I guess I was assuming you could pump a lot of data over the Internet
without realizing the overhead once the data is broken down into little chunks. (As a rough illustration: at, say,
40 ms per round trip, a million single-row inserts is 40,000 seconds, over eleven hours of latency alone.)

I'm not sure what the best solution is. I do this rarely, usually when first loading the data from the legacy system.
When ready to go live, my (remote) client will send the data, I'll massage it for loading, then load it to their
(remote) postgres server. This usually takes place over a weekend, but last time it was an evening that lasted until 4AM.

If I did this regularly, three options seem easiest.

1 - Load locally to get clean data and then COPY. This requires the server to have local access to the file to be
copied, and if the server is hosted by an ISP, it depends on them whether you can do this easily.
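A minimal sketch of option 1, assuming the cleaned file has already been placed somewhere the server can read (the table and path names here are made up):

```sql
-- Server-side COPY: the postgres backend itself reads the file,
-- so '/srv/data/accounts.csv' must be a path on the server machine.
COPY accounts FROM '/srv/data/accounts.csv' CSV;
```

If the hosting ISP won't allow server-side file access, psql's \copy runs the same load from the client side, streaming the file over the one connection instead of issuing per-row inserts.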

2 - Send the data to the client to run the Java app to insert over their LAN (this only works if the database server is
local to them and not at an ISP).

3 - If the only problem is duplicate keys, load into a special table without the constraint, issue update commands to
rewrite the keys as needed, then select/insert to the correct table.
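Option 3 might look something like this (table, column, and file names are hypothetical, and the actual key-rewriting UPDATE depends entirely on the data):

```sql
-- Staging table with the same columns but no primary key:
-- LIKE copies column definitions, not constraints, by default.
CREATE TABLE accounts_load (LIKE accounts);

-- Bulk load with no constraint checks to trip over.
COPY accounts_load FROM '/srv/data/accounts.csv' CSV;

-- Rewrite the offending keys however the data requires, e.g.:
-- UPDATE accounts_load SET id = ... WHERE ...;

-- Now move the cleaned rows into the real table and tidy up.
INSERT INTO accounts SELECT * FROM accounts_load;
DROP TABLE accounts_load;
```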

Thanks

John

