Re: LOCK TABLE & speeding up mass data loads

From: Curt Sampson
Subject: Re: LOCK TABLE & speeding up mass data loads
Date:
Msg-id: Pine.NEB.4.51.0301271921270.393@angelic.cynic.net
In reply to: Re: LOCK TABLE & speeding up mass data loads  (Ron Johnson <ron.l.johnson@cox.net>)
List: pgsql-performance
On Mon, 27 Jan 2003, Ron Johnson wrote:

> > I don't see how the amount of data manipulation makes a difference.
> > Where you now issue a BEGIN, issue a COPY instead. Where you now INSERT,
> > just print the data for the columns, separated by tabs. Where you now
> > issue a COMMIT, end the copy.
>
> Yes, create an input file for COPY.  Great idea.

That's not quite what I was thinking of. Don't create an input file,
just send the commands directly to the server (if your API supports it).
If worst comes to worst, you could maybe open up a subprocess for a psql
and write to its standard input.
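
For concreteness, here is a minimal sketch of that subprocess approach
in Python, assuming a hypothetical table mytable(id, name) in a
database mydb; psql reads the COPY row data from its own standard
input until end-of-file:

    import subprocess

    rows = [(1, "alice"), (2, "bob")]   # stand-in for your real data source

    # psql executes the COPY command and then reads the row data from
    # its standard input, which we hold the write end of.
    psql = subprocess.Popen(
        ["psql", "-d", "mydb", "-c", "COPY mytable (id, name) FROM STDIN"],
        stdin=subprocess.PIPE,
        text=True,
    )
    for row_id, name in rows:
        # One line per row, columns separated by tabs.
        psql.stdin.write(f"{row_id}\t{name}\n")
    psql.stdin.close()                  # EOF ends the COPY data
    psql.wait()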

> However, if I understand you correctly, then if I want to be able
> to not have to roll back and re-run a complete COPY (which may
> entail millions of rows), then I'd have to have thousands of separate
> input files (which would get processed sequentially).

Right.

But you can probably commit much less often than every 1,000 rows.
Batches of 10,000 or 100,000 would probably be more practical.
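
To make that concrete, here's one way it might look, again only a
sketch under the same hypothetical mytable/mydb assumptions (the
row_source() generator is a made-up stand-in for your data): each
batch gets its own COPY, and hence its own transaction, so a failure
costs at most one batch of re-work rather than the whole load.

    import subprocess

    BATCH_SIZE = 10_000

    def row_source():
        # Hypothetical stand-in for wherever your rows actually come from.
        for i in range(25_000):
            yield i, f"name{i}"

    def copy_batch(batch):
        # Each call is one COPY, i.e. one transaction on the server.
        psql = subprocess.Popen(
            ["psql", "-d", "mydb", "-c", "COPY mytable (id, name) FROM STDIN"],
            stdin=subprocess.PIPE,
            text=True,
        )
        for row_id, name in batch:
            psql.stdin.write(f"{row_id}\t{name}\n")
        psql.stdin.close()
        psql.wait()

    batch = []
    for row in row_source():
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            copy_batch(batch)
            batch = []
    if batch:
        copy_batch(batch)               # flush the final partial batch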

cjs
--
Curt Sampson  <cjs@cynic.net>   +81 90 7737 2974   http://www.netbsd.org
    Don't you know, in this new Dark Age, we're all light.  --XTC
