Re: Better way to bulk-load millions of CSV records into postgres?

From: Josh Berkus
Subject: Re: Better way to bulk-load millions of CSV records into postgres?
Date:
Msg-id: 200205211639.25237.josh@agliodbs.com
In reply to: Better way to bulk-load millions of CSV records into postgres?  (Ron Johnson <ron.l.johnson@cox.net>)
List: pgsql-novice
Ron,

> Currently, I've got a python script using pyPgSQL that
> parses the CSV record, creates a string that is a big
> "INSERT INTO VALUES (...)" command, then execute()s it.

What's wrong with the COPY command?
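
As a minimal sketch, assuming a table named mytable whose columns match the
CSV layout and a file path the server can read:

    COPY mytable FROM '/path/to/data.csv' WITH DELIMITER ',';

(On 7.2 the delimiter clause is spelled USING DELIMITERS ',' instead.)  Note
that COPY reads the file on the server and requires superuser privileges;
psql's \copy meta-command does the same thing from the client side.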

> top shows that this method uses postmaster with ~70% CPU
> utilization, and python with ~15% utilization.
>
> Still, it's only inserting ~190 recs/second.  Is there a
> better way to do this, or am I constrained by the hardware?

This sounds pretty good for an ATA system. Upgrading to SCSI RAID would also
improve your performance.
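
If you'd rather keep everything in the Python script, here is a minimal
sketch that streams the CSV file through COPY FROM STDIN.  It uses the
psycopg2 driver as a stand-in (I haven't checked whether pyPgSQL exposes
COPY), assumes a PostgreSQL release with CSV mode, and the table name, file
name, and connection parameters are placeholders:

    import psycopg2

    # Placeholder connection parameters.
    conn = psycopg2.connect(dbname="mydb", user="me")
    cur = conn.cursor()

    # Stream the whole file through one COPY command instead of
    # issuing a separate INSERT for every record.
    with open("data.csv") as f:
        cur.copy_expert("COPY mytable FROM STDIN WITH CSV", f)

    conn.commit()
    conn.close()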

-Josh Berkus
