Re: optimising data load

From John Taylor
Subject Re: optimising data load
Date
Msg-id 02052216350900.03723@splash.hq.jtresponse.co.uk
In reply to optimising data load  (John Taylor <postgres@jtresponse.co.uk>)
List pgsql-novice
On Wednesday 22 May 2002 16:29, Patrick Hatcher wrote:
> Dump the records from the other dbase to a text file and then use the COPY
> command for Pg.  I update tables nightly with 400K+ records and it only
> takes 1-2 mins.  You should drop and re-add your indexes and then do a
> vacuum analyze.
>
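For reference, the nightly cycle described above might look something like this (a sketch only; the table, column, and index names here are invented, and COPY FROM a file path runs server-side):

```sql
-- Sketch of the drop-index / COPY / re-index cycle described above.
-- "orders" and "orders_customer_idx" are hypothetical names.
DROP INDEX orders_customer_idx;

-- Bulk-load the text dump produced by the other database.
COPY orders FROM '/tmp/orders.txt';

-- Rebuild the index once, instead of maintaining it per inserted row.
CREATE INDEX orders_customer_idx ON orders (customer_id);

-- Refresh planner statistics after the load.
VACUUM ANALYZE orders;
```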

I'm looking into that at the moment, and I'm getting some very variable results.
For some tables this approach is easy.

However, for some tables the data doesn't arrive in the right format, so I need to
perform some queries to get the right values to use when populating.

In this situation I'm not sure whether I should drop the indexes to make the inserts faster,
or keep them to make the selects faster.
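One way to sidestep that trade-off (a sketch, with invented table and column names) is to COPY into an unindexed staging table and then do the lookup in a single set-based INSERT ... SELECT, so the per-row SELECTs disappear and the main table's indexes only matter for the final insert:

```sql
-- Hypothetical staging approach: load raw rows fast, transform once.
CREATE TEMP TABLE stock_load (product_code text, qty integer);

-- No indexes on the staging table, so this COPY is as fast as possible.
COPY stock_load FROM '/tmp/stock.txt';

-- Resolve the codes to the ids the real table needs in one query,
-- instead of issuing one SELECT per incoming row.
INSERT INTO stock (product_id, qty)
SELECT p.id, s.qty
FROM stock_load s
JOIN products p ON p.code = s.product_code;
```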


Thanks
JohnT
