Re: shared-memory based stats collector

From: Kyotaro HORIGUCHI
Subject: Re: shared-memory based stats collector
Date:
Msg-id: 20180926.095509.182252925.horiguchi.kyotaro@lab.ntt.co.jp
In response to: Re: shared-memory based stats collector  (Andres Freund <andres@anarazel.de>)
Responses: Re: shared-memory based stats collector  (Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>)
List: pgsql-hackers
Hello. Thank you for the comments.

At Thu, 20 Sep 2018 10:37:24 -0700, Andres Freund <andres@anarazel.de> wrote in
<20180920173724.5w2n2nwkxtyi4azw@alap3.anarazel.de>
> Hi,
> 
> On 2018-09-20 09:55:27 +0200, Antonin Houska wrote:
> > I've spent some time reviewing this version.
> > 
> > Design
> > ------
> > 
> > 1. Even with your patch the stats collector still uses an UDP socket to
> >    receive data. Now that the shared memory API is there, shouldn't the
> >    messages be sent via shared memory queue? [1] That would increase the
> >    reliability of message delivery.
> > 
> >    I can actually imagine backends inserting data into the shared hash tables
> >    themselves, but that might make them wait if the same entries are accessed
> >    by another backend. It should be much cheaper just to insert message into
> >    the queue and let the collector process it. In future version the collector
> >    can launch parallel workers so that writes by backends do not get blocked
> >    due to full queue.
> 
> I don't think either of these is right. I think it's crucial to get rid
> of the UDP socket, but I think using a shmem queue is the wrong
> approach. Not just because postgres' shm_mq is single-reader/writer, but
> also because it's plainly unnecessary.  Backends should attempt to
> update the shared hashtable, but acquire the necessary lock
> conditionally, and leave the pending updates of the shared hashtable to
> a later time if they cannot acquire the lock.

Ok, I just intended to avoid reading many bytes from a file, and I
thought the writer side could be resolved later.

Currently, locks on the shared stats table are acquired by the dshash
mechanism in a partition-wise manner. The number of partitions is
currently fixed at 2^7 = 128, but writes to the same table conflict
with each other regardless of the number of partitions. As a first
step, I'm going to add a conditional-locking capability to
dshash_find_or_insert, and have each backend hold a queue of its
pending updates, along the lines of the sketch below.
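To illustrate the direction, here is a minimal C sketch of how a
backend could flush its queued updates opportunistically. It assumes a
hypothetical dshash_find_or_insert_nowait() variant (the
conditional-locking capability mentioned above), a shared_tabstats
dshash table, and a simplified PendingTabStat structure; these names
are illustrative and not taken from the actual patch.

#include "postgres.h"
#include "lib/dshash.h"
#include "nodes/pg_list.h"
#include "pgstat.h"

/* Backend-local record of counters not yet applied to shared memory. */
typedef struct PendingTabStat
{
    Oid             tableoid;           /* target table */
    PgStat_Counter  tuples_inserted;
    PgStat_Counter  tuples_updated;
    /* ... other counters ... */
} PendingTabStat;

static List *pending_stats = NIL;       /* queue of pending updates */
static dshash_table *shared_tabstats;   /* attached elsewhere (assumed) */

static void
flush_pending_stats(void)
{
    ListCell   *lc;
    List       *remaining = NIL;

    foreach(lc, pending_stats)
    {
        PendingTabStat *pending = (PendingTabStat *) lfirst(lc);
        PgStat_StatTabEntry *entry;
        bool        found;

        /*
         * Try to take the partition lock without waiting (hypothetical
         * nowait variant).  If another backend holds it, keep the
         * update queued and retry at the next flush opportunity.
         */
        entry = dshash_find_or_insert_nowait(shared_tabstats,
                                             &pending->tableoid, &found);
        if (entry == NULL)
        {
            remaining = lappend(remaining, pending);
            continue;
        }

        entry->tuples_inserted += pending->tuples_inserted;
        entry->tuples_updated += pending->tuples_updated;
        dshash_release_lock(shared_tabstats, entry);
        pfree(pending);
    }

    list_free(pending_stats);
    pending_stats = remaining;
}

With this shape a backend never blocks on a busy partition: a
contended entry simply stays in the local queue and is applied on a
later attempt.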

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center


