Re: Get memory contexts of an arbitrary backend process

From: Kasahara Tatsuhito
Subject: Re: Get memory contexts of an arbitrary backend process
Date:
Msg-id: CAP0=ZV+XcPnX+m9ROG07hs=DqwQCOLAojgvYCRogw=bFAv1+Eg@mail.gmail.com
In reply to: Re: Get memory contexts of an arbitrary backend process  (torikoshia <torikoshia@oss.nttdata.com>)
Responses: Re: Get memory contexts of an arbitrary backend process  (torikoshia <torikoshia@oss.nttdata.com>)
List: pgsql-hackers
Hi,

On Thu, Dec 10, 2020 at 10:48 AM torikoshia <torikoshia@oss.nttdata.com> wrote:
>
> On 2020-12-04 19:16, torikoshia wrote:
> > On 2020-12-03 10:36, Tom Lane wrote:
> >> Fujii Masao <masao.fujii@oss.nttdata.com> writes:
> >>> I'm starting to study how this feature behaves. At first, when I
> >>> executed
> >>> the following query, the function never returned. ISTM that since
> >>> the autovacuum launcher cannot respond to the request of memory
> >>> contexts dump, the function keeps waiting infinitely. Is this a bug?
> >>> Probably we should exclude non-backend processes from the target
> >>> processes to dump? Sorry if this was already discussed.
> >>
> >>>      SELECT pg_get_backend_memory_contexts(pid) FROM
> >>> pg_stat_activity;
> >
> > Thanks for trying it!
> >
> > It was not discussed explicitly, and I was going to do it later
> > as commented.
> >
> >> +            /* TODO: Check also whether backend or not. */
> >
> >>
> >> FWIW, I think this patch is fundamentally unsafe.  It's got a
> >> lot of the same problems that I complained about w.r.t. the
> >> nearby proposal to allow cross-backend stack trace dumping.
> >> It does avoid the trap of thinking that it can do work in
> >> a signal handler, but instead it supposes that it can do
> >> work involving very high-level objects such as shared hash tables
> >> in anyplace that might execute CHECK_FOR_INTERRUPTS.  That's
> >> never going to be safe: the only real expectation the system
> >> has is that CHECK_FOR_INTERRUPTS is called at places where our
> >> state is sane enough that a transaction abort can clean up.
> >> Trying to do things like taking LWLocks is going to lead to
> >> deadlocks or worse.  We need not even get into the hard questions,
> >> such as what happens when one process or the other exits
> >> unexpectedly.
> >
> > Thanks for reviewing!
> >
> > I may misunderstand something, but the dumper works not at
> > CHECK_FOR_INTERRUPTS but during the client read, i.e.,
> > ProcessClientReadInterrupt().
> >
> > Is it also unsafe?
> >
> >
> > BTW, since there was a comment that the shared hash table
> > used too much memory, I'm now rewriting this patch not to use
> > the shared hash table but a simpler static shared memory struct.
>
> Attached a rewritten patch.
Thanks for updating the patch.

But when I applied the patch to the current HEAD and ran make, I
got an error due to duplicate OIDs.
You need to rebase the patch.

> Accordingly, I also slightly modified the basic design as below.
>
> ---
> # Communication flow between the dumper and the requester
> - (1) When requesting memory context dumping, the requestor changes
> the state in the shared memory struct from 'ACCEPTABLE' to
> 'REQUESTING'.
> - (2) The requestor sends the signal to the dumper process and waits
> on the latch.
> - (3) When the dumper notices the signal, it changes the state to
> 'DUMPING'.
> - (4) When the dumper completes dumping, it changes the state to
> 'DONE' and sets the latch.
> - (5) The requestor reads the dump file and shows it to the user.
> Finally, the requestor removes the dump file and resets the shared
> memory state to 'ACCEPTABLE'.
>
> # Query cancellation
> - When the requestor cancels dumping, e.g. signaling using ctrl-C,
> the requestor changes the state of the shared memory to 'CANCELING'.
> - The dumper checks the state when it tries to change the state to
> 'DONE' at (4), and if the state is 'CANCELING', it initializes the
> dump file and resets the shared memory state to 'ACCEPTABLE'.
>
> # Cleanup dump file and the shared memory
> - In the normal case, the dumper removes the dump file and resets
> the shared memory entry as described in (5).
> - When something like query cancellation or process termination
> happens on the dumper after (1) and before (3), in other words,
> the state is 'REQUESTING', the requestor does the cleanup.
> - When something happens on the dumper or the requestor after (3)
> and before (4), in other words, the state is 'DUMPING', the dumper
> does the cleanup. Specifically, if the requestor cancels the query,
> it just changes the state to 'CANCELING' and the dumper notices it
> and cleans up things later.
> OTOH, when the dumper fails to dump, it cleans up the dump file and
> resets the shared memory state.
> - When something happens on the requestor after (4), i.e., the state
> is 'DONE', the requestor does the cleanup.
> - In the case of receiving SIGKILL or power failure, all dump files
> are removed in the crash recovery process.
> ---
If the dumper is terminated before it dumps, the requestor will appear
to enter an infinite loop, because the status of mcxtdumpShmem will not change.
The following are the steps to reproduce.

 - session1
   BEGIN; LOCK TABLE t;
   - session2
     SELECT * FROM t; -- wait
     - session3
       select pg_get_target_backend_memory_contexts(<pid of session2>); -- wait
 - session1
   select pg_terminate_backend(<pid of session2>); -- kill session2
     - session3 waits forever.

Therefore, you may need to set mcxtdumpShmem->dump_status to
MCXTDUMPSTATUS_CANCELING or some other status before the dumper terminates.

Also, although I have not been able to reproduce it, I believe that
with the current design, if the requestor disappears right after the
dumper dumps the memory information, the dump file will remain.
Since the current design appears to allow only one requestor per
instance, it might be a good idea for the requestor to delete any
remaining dump files when it requests a dump.

The following are comments on the code.

+   proc = BackendPidGetProc(dst_pid);
+
+   if (proc == NULL)
+   {
+       ereport(WARNING,
+               (errmsg("PID %d is not a PostgreSQL server process", dst_pid)));
+
+       return (Datum) 1;
+   }
Currently, the background writer, checkpointer, and WAL writer are
auxiliary processes.
Therefore, if we specify the PID of one of these processes to
pg_get_target_backend_memory_contexts(),
"PID xxxx is not a PostgreSQL server process" would be output.
This confuses the user.
How about using AuxiliaryPidGetProc() to identify these processes?

+               ereport(INFO,
+                   (errmsg("The request has failed and now PID %d is requsting dumping.",
+                       mcxtdumpShmem->src_pid)));
+
+               LWLockRelease(McxtDumpLock);
You can release LWLock before ereport.

+   Assert(mcxtdumpShmem->dump_status = MCXTDUMPSTATUS_REQUESTING);
typo?
It might be "mcxtdumpShmem->dump_status == MCXTDUMPSTATUS_REQUESTING".

+           ereport(INFO,
+                   (errmsg("PID %d is looked up by another process", pid)));
This message is not accurate.
In the current design, only one session can request a dump at a time,
so when there are multiple requests this message is output even if the
request is for a PID that is not being looked up.

+   if (fpout == NULL)
+   {
+       ereport(LOG,
+               (errcode_for_file_access(),
+                errmsg("could not write memory context file \"%s\": %m",
+                       dumpfile)));
It would be better to use "dump file" instead of "memory context file".

+static void
+McxtReqKill(int code, Datum arg)
The function McxtReqKill looks like it should have a different name,
since it doesn't actually kill anything.
How about McxtReqCleanup or something similar?

Best regards,

> >> I also find the idea that this should be the same SQL function
> >> as pg_get_backend_memory_contexts to be a seriously bad decision.
> >> That means that it's not possible to GRANT the right to examine
> >> only your own process's memory --- with this proposal, that means
> >> granting the right to inspect every other process as well.
> >>
> >> Beyond that, the fact that there's no way to restrict the capability
> >> to just, say, other processes owned by the same user means that
> >> it's not really safe to GRANT to non-superusers anyway.  Even with
> >> such a restriction added, things are problematic, since for example
> >> it would be possible to inquire into the workings of a
> >> security-definer function executing in another process that
> >> nominally is owned by your user.
> >
> > I'm going to change the function name and restrict the executor to
> > superusers. Is it enough?
>
> In the attached patch, I changed the function name to
> pg_get_target_backend_memory_contexts() for now.
>
>
> Regards,



-- 
Tatsuhito Kasahara
kasahara.tatsuhito _at_ gmail.com


