Re: Alternative to drop index, load data, recreate index?
| From | Jason L. Buberel |
|---|---|
| Subject | Re: Alternative to drop index, load data, recreate index? |
| Date | |
| Msg-id | 46E968BB.8070800@buberel.org |
| In reply to | Re: Alternative to drop index, load data, recreate index? (hubert depesz lubaczewski <depesz@depesz.com>) |
| List | pgsql-general |
Depesz,
Thank you for the suggestion. I thought I had read up on that tool earlier, but had somehow managed to forget about it when starting this phase of my investigation.
Needless to say, I can confirm the claims made on the project homepage when using very large data sets.
- Loading 1.2M records into an indexed table:
- pg_bulkload: 5m 29s
- COPY: 53m 20s
These results were obtained using pg-8.2.4 with pg_bulkload-2.2.0.
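(For anyone trying to reproduce this: pg_bulkload is driven by a small control file rather than a COPY statement. The sketch below is only illustrative; the table and file names are made up, and the exact keywords may differ between pg_bulkload releases, so check the docs for your version:

```
TABLE = bigtable
INFILE = /tmp/bigtable.csv
```

The loader is then invoked with the control file as its argument, e.g. `pg_bulkload bigtable.ctl`.)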
-jason
hubert depesz lubaczewski wrote:
On Mon, Sep 10, 2007 at 05:06:35PM -0700, Jason L. Buberel wrote:
> I am considering moving to date-based partitioned tables (each table = one month-year of data, for example). Before I go that far - are there any other tricks I can or should be using to speed up my bulk data loading?

did you try pgbulkload? (http://pgbulkload.projects.postgresql.org/)

depesz
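(For context, date-based partitioning on PostgreSQL 8.2 is done with table inheritance plus CHECK constraints; a minimal sketch, with hypothetical table and column names:

```sql
-- Parent table holds no rows itself; children inherit its columns.
CREATE TABLE measurements (
    logdate date NOT NULL,
    value   numeric
);

-- One child per month, with a CHECK constraint so the planner can
-- skip irrelevant partitions when constraint_exclusion is enabled.
CREATE TABLE measurements_2007_09 (
    CHECK (logdate >= DATE '2007-09-01' AND logdate < DATE '2007-10-01')
) INHERITS (measurements);

SET constraint_exclusion = on;

-- Bulk loads then target the child table directly, e.g.:
-- COPY measurements_2007_09 FROM '/tmp/sep.csv' WITH CSV;
```

Loading into a freshly created, not-yet-indexed child and building the indexes afterwards is what makes this approach attractive for bulk loads.)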