Re: how to monitor the progress of really large bulk operations?

From: Pavel Stehule
Subject: Re: how to monitor the progress of really large bulk operations?
Date:
Msg-id CAFj8pRDff+Uc__n0a46kv8Nw1FYFEBpwetq4M=Tpf=s_J6anuA@mail.gmail.com
In response to: how to monitor the progress of really large bulk operations?  ("Mike Sofen" <msofen@runbox.com>)
Responses: Re: how to monitor the progress of really large bulk operations?  (Pavel Stehule <pavel.stehule@gmail.com>)
List: pgsql-general
Hi

2016-09-27 23:03 GMT+02:00 Mike Sofen <msofen@runbox.com>:

Hi gang,

On PG 9.5.1 on Linux, I'm running some large ETL operations, migrating data from a legacy MySQL system into PG, upwards of 250m rows in a single transaction (it's on a big box). It's always a two-step operation: first, extract the raw MySQL data and pull it onto the target box into staging tables that match the source; second, read the landed dataset and transform it into the final formats, linking to newly generated ids, compressing big subsets into jsonb documents, etc.
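A minimal sketch of what such a staging-to-final transform step can look like; the schema, table, and column names here (stg.legacy_orders, final.customer_orders) are hypothetical and not taken from the original message:

-- Sketch only: hypothetical staging schema mirroring the legacy MySQL source.
CREATE SCHEMA IF NOT EXISTS stg;
CREATE SCHEMA IF NOT EXISTS final;

CREATE TABLE IF NOT EXISTS stg.legacy_orders (
    legacy_id   bigint,
    customer_id bigint,
    item_name   text,
    qty         integer,
    price       numeric
);

-- Final table: new surrogate id, detail rows compressed into a jsonb document.
CREATE TABLE IF NOT EXISTS final.customer_orders (
    order_doc_id bigserial PRIMARY KEY,
    customer_id  bigint NOT NULL,
    items        jsonb  NOT NULL
);

-- Transform step: read the landed dataset and aggregate each customer's
-- rows into a single jsonb document (jsonb_agg and jsonb_build_object
-- are both available from 9.5).
INSERT INTO final.customer_orders (customer_id, items)
SELECT s.customer_id,
       jsonb_agg(jsonb_build_object(
           'legacy_id', s.legacy_id,
           'item',      s.item_name,
           'qty',       s.qty,
           'price',     s.price))
FROM stg.legacy_orders s
GROUP BY s.customer_id;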

While I could break it into smaller chunks, that hasn't been necessary, and it wouldn't eliminate my need: a way to view the state of a transaction in flight, to see how many rows have been read or inserted (is that even possible for an in-flight transaction?), to see memory allocations across the various PG processes, etc.
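Not part of the original exchange, but one common way to peek at such a load from a second session on 9.5 is sketched below; the target table name is the hypothetical one used above:

-- Sketch only: run from a separate session while the bulk transaction is open.

-- 1. What is the long-running backend doing, and for how long has its
--    transaction been open?
SELECT pid,
       now() - xact_start AS xact_runtime,
       state,
       query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;

-- 2. Rough progress proxy: the target table's on-disk size grows as rows
--    are inserted, even before the transaction commits.  Dividing by an
--    estimated average row width (here assumed ~512 bytes) gives a very
--    approximate count of rows loaded so far.
SELECT pg_size_pretty(pg_total_relation_size('final.customer_orders')) AS current_size,
       pg_total_relation_size('final.customer_orders') / 512 AS rough_rows_loaded;

The per-table statistics counters (such as n_tup_ins in pg_stat_user_tables) are generally only updated once the loading transaction finishes, so they are of little use for a single huge in-flight transaction; per-backend memory use is easiest to watch at the OS level (top or ps on the backend's pid).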

Possible or a hallucination?

Mike Sofen (Synthetic Genomics)


Regards

Pavel
