<p dir="ltr">On Sep 3, 2015 10:14 PM, "Pavel Stehule" <<a
href="mailto:pavel.stehule@gmail.com">pavel.stehule@gmail.com</a>> wrote:<br /> >>><br /> >>> Please find attached a v3.<br /> >>><br /> >>> It uses a shared memory queue and also has the ability to capture plans nested deeply in the call stack. Not sure about using the executor hook, since this is not an extension...<br /> >>><br /> >>> The LWLock is used around initializing/cleaning the shared struct and the message queue; the IO synchronization is handled by the message queue itself.<br /> >><br /> >> I am not entirely happy with this design. Only one EXPLAIN PID/GET STATUS can be executed per server at a time - I remember a lot of queries that don't handle CANCEL (i.e. interrupts) well, and this can be unfriendly. I cannot say whether it is good enough for a first iteration. This is functionality meant for diagnostics when you have an overloaded server, and that risk looks too high to me. The idea of a receive slot could address this risk well (and could be reused elsewhere). The difference from this code should not be too big - although it is not trivial - it needs to work with PGPROC. The opinion of our multiprocess experts would be interesting. Maybe I am too careful.<p dir="ltr">Sorry, but I still don't see how the slots help this issue - could you please elaborate?<p dir="ltr">>> Other smaller issues:<br />
>><br /> >> * probably sending line by line is useless - shm_mq_send can pass bigger data when nowait = false<p dir="ltr">I'm not sending it like that because of the message size - I just find it more convenient. If you think it can be problematic, it's easy to do this as before, by splitting the lines on the receiving side.<p dir="ltr">>> * pg_usleep(1000L); - it is related to a single-point resource<p dir="ltr">But not a highly concurrent one.<p dir="ltr">-<br /> Alex