Re: Enhancing Memory Context Statistics Reporting

From: Rahila Syed
Subject: Re: Enhancing Memory Context Statistics Reporting
Date:
Msg-id: CAH2L28uayhv+AxgPLThexJ21NA8j7XFiYqu6rgsZSSNosvPjvg@mail.gmail.com
In reply to: Re: Enhancing Memory Context Statistics Reporting  (Tomas Vondra <tomas@vondra.me>)
Replies: Re: Enhancing Memory Context Statistics Reporting
List: pgsql-hackers

> I think something is not quite right, because if I try running a simple
> pgbench script that does pg_get_process_memory_contexts() on PIDs of
> random postgres process (just like in the past), I immediately get this:

Thank you for testing. This issue occurs when a process that previously attached
to a DSA area to publish its own context statistics tries to attach to it again while
querying statistics from another backend. Previously, I was not detaching at the end
of publishing the statistics; the backend now detaches from the area once its
statistics are published. The fix is included in the updated patch.
 
> Perhaps the backends need to synchronize creation of the DSA?

This has been implemented in the patch.
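
One common way to do this (a sketch only, with assumed struct and field names
rather than the patch's actual definitions) is to keep the dsa_handle in shared
memory behind an LWLock, so exactly one backend creates the area and all others
attach to it:

#include "postgres.h"

#include "storage/lwlock.h"
#include "utils/dsa.h"

/* Illustrative shared-memory state; names are assumptions. */
typedef struct MemCtxReportingState
{
	LWLock		lock;		/* protects handle */
	dsa_handle	handle;		/* DSA_HANDLE_INVALID until created */
} MemCtxReportingState;

/*
 * Create the DSA area on first use, otherwise attach to the existing one.
 * The lock ensures only one backend performs dsa_create().
 */
static dsa_area *
attach_or_create_stats_area(MemCtxReportingState *state, int tranche_id)
{
	dsa_area   *area;

	LWLockAcquire(&state->lock, LW_EXCLUSIVE);
	if (state->handle == DSA_HANDLE_INVALID)
	{
		area = dsa_create(tranche_id);
		dsa_pin(area);			/* keep the area alive across detaches */
		state->handle = dsa_get_handle(area);
	}
	else
		area = dsa_attach(state->handle);
	LWLockRelease(&state->lock);

	return area;
}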
 
> Sounds good. Do you have any measurements how much this reduced the size
> of the entries written to the DSA? How many entries will fit into 1MB of
> shared memory?

The size of the entries has approximately halved after dynamically allocating the
strings and the datum array.
Also, I previously allocated the memory for all contexts as one large chunk from
the DSA; I have now split it into smaller allocations, one per context. The integer
counters are still allocated in a single chunk for all contexts, but the size of that
chunk will not exceed approximately 128 bytes * total_num_of_contexts, and the
average total number of contexts is in the hundreds.
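
To make the allocation split concrete, here is a minimal sketch with assumed type
and field names (not the patch's definitions): the fixed-size counters for all
contexts go into one chunk, while each variable-length string gets its own small
per-context allocation.

#include "postgres.h"

#include "utils/dsa.h"

/*
 * Illustrative layout only.  The fixed-size counters for all contexts are
 * placed in a single DSA chunk of sizeof(MemCtxEntry) * total_num_of_contexts
 * (on the order of 128 bytes per context), while variable-length data is
 * allocated per context and referenced through dsa_pointer.
 */
typedef struct MemCtxEntry
{
	int64		totalspace;
	int64		nblocks;
	int64		freespace;
	int64		freechunks;
	int64		usedspace;
	dsa_pointer name;		/* NUL-terminated string, per-context chunk */
	dsa_pointer ident;		/* NUL-terminated string, per-context chunk */
	dsa_pointer path;		/* datum array for the context path, per-context */
} MemCtxEntry;

/* One chunk holds the fixed-size entries for all contexts. */
static MemCtxEntry *
allocate_entry_array(dsa_area *area, int nctx, dsa_pointer *dp_out)
{
	*dp_out = dsa_allocate(area, sizeof(MemCtxEntry) * nctx);
	return (MemCtxEntry *) dsa_get_address(area, *dp_out);
}

/* Each context's name is a separate, small allocation. */
static void
store_entry_name(dsa_area *area, MemCtxEntry *entry, const char *name)
{
	entry->name = dsa_allocate(area, strlen(name) + 1);
	strcpy((char *) dsa_get_address(area, entry->name), name);
}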

PFA the updated and rebased patches.

Thank you,
Rahila Syed
