Re: Improving performance of merging data between tables

From: Maxim Boguk
Subject: Re: Improving performance of merging data between tables
Date:
Msg-id: CAK-MWwSt8JA67EiGhAzO7vHhjfUd=2Pm8=oKqGkFw9BZq3EjVA@mail.gmail.com
In reply to: Re: Improving performance of merging data between tables  (Pawel Veselov <pawel.veselov@gmail.com>)
Responses: Re: Improving performance of merging data between tables  (Pawel Veselov <pawel.veselov@gmail.com>)
List: pgsql-general


On Wed, Dec 31, 2014 at 11:10 AM, Pawel Veselov <pawel.veselov@gmail.com> wrote:
 
[skipped] 

2) try pg_stat_statements, setting "pg_stat_statements.track = all".  see:
http://www.postgresql.org/docs/9.4/static/pgstatstatements.html

I have used this to profile some functions, and it worked pretty well. Mostly I use it on a test box, but I once ran it on the live server, which was scary, but it worked great.
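For reference, a minimal sketch of that setup (assumes superuser access; changing shared_preload_libraries requires a server restart):

    # postgresql.conf
    shared_preload_libraries = 'pg_stat_statements'
    pg_stat_statements.track = all

    -- after the restart, in the target database:
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- statements ranked by cumulative execution time (milliseconds)
    SELECT query, calls, total_time, rows
      FROM pg_stat_statements
     ORDER BY total_time DESC
     LIMIT 20;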

That looks promising. Turned it on, waiting for the next "quiet time" when I can restart the server.

I have to say this turned out to be a bit of a disappointment for this use case. It only measures the total time spent in a call, so operations that spent a long time waiting on some lock end up at the top. That's still useful, but it would be great if total_time were reported alongside a wait_time (and an io_time as well, since I also see operations that simply have to fetch a lot of data).

1) pg_stat_statements provides information about the io_time of each statement, but you need to have track_io_timing enabled for that.
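A sketch of what that looks like on 9.4 (assumes superuser access; blk_read_time/blk_write_time stay at zero until track_io_timing is enabled):

    -- enable I/O timing; takes effect on config reload
    ALTER SYSTEM SET track_io_timing = on;
    SELECT pg_reload_conf();

    -- statements with the most time spent on block I/O (milliseconds)
    SELECT query, calls, total_time,
           blk_read_time + blk_write_time AS io_time
      FROM pg_stat_statements
     ORDER BY io_time DESC
     LIMIT 20;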

2) Regarding locking: I suggest enabling log_lock_waits and setting deadlock_timeout to, say, 100ms (just for testing purposes); then any lock wait longer than 100ms will be logged with some useful additional information.
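Something along these lines in postgresql.conf (or via ALTER SYSTEM); note that deadlock_timeout also controls how long the deadlock detector waits before running, so 100ms is meant only as a temporary diagnostic value:

    # log every lock wait that lasts longer than deadlock_timeout
    log_lock_waits = on
    deadlock_timeout = 100ms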

PS: your setup looks pretty complicated and is hard to analyze without seeing all the involved table structures, the transaction/query flow, and (especially) the source code of the involved procedures.

PPS: by the way, please check the database logs for deadlock messages; your setup around the "and then call a pgsql function to merge the data from its tables into the common tables" part could easily be deadlock-prone.
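Besides grepping the server log for "deadlock detected", the per-database deadlock counter gives a quick overview (available since 9.2):

    -- cumulative deadlock count per database since the last stats reset
    SELECT datname, deadlocks
      FROM pg_stat_database
     ORDER BY deadlocks DESC;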

PPPS: and one last suggestion: after the application has finished writing all the data into its own tables, it should run ANALYZE on those tables (otherwise you may get weird/inefficient plans during the last stage).
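A sketch of the idea, using hypothetical per-application table names:

    -- after the bulk load into the application's own tables,
    -- refresh planner statistics before the merge step runs
    ANALYZE app_own_hourly;   -- hypothetical table name
    ANALYZE app_own_daily;    -- hypothetical table name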



--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/

Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678

LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b
Skype: maxim.boguk
Jabber: maxim.boguk@gmail.com
МойКруг: http://mboguk.moikrug.ru/

"People problems are solved with people.
If people cannot solve the problem, try technology.
People will then wish they'd listened at the first stage."

