RE: [Patch] Optimize dropping of relation buffers using dlist

From: k.jamison@fujitsu.com
Subject: RE: [Patch] Optimize dropping of relation buffers using dlist
Date:
Msg-id: OSBPR01MB2341CFE353B077565B1AD139EF710@OSBPR01MB2341.jpnprd01.prod.outlook.com
In reply to: Re: [Patch] Optimize dropping of relation buffers using dlist  (Konstantin Knizhnik <k.knizhnik@postgrespro.ru>)
Responses: Re: [Patch] Optimize dropping of relation buffers using dlist  (Konstantin Knizhnik <knizhnik@garret.ru>)
List: pgsql-hackers
On Wednesday, July 29, 2020 4:55 PM, Konstantin Knizhnik wrote:
> On 17.06.2020 09:14, k.jamison@fujitsu.com wrote:
> > Hi,
> >
> > Since the last posted version of the patch fails, attached is a rebased version.
> > Written upthread were performance results and some benefits and challenges.
> > I'd appreciate your feedback/comments.
> >
> > Regards,
> > Kirk Jamison

> As far as I understand, this patch can provide a significant improvement in
> performance only in the case of recovery of truncates of a large number of tables. You
> have added a shared hash of relation buffers, and certainly it adds some extra
> overhead. According to your latest results this overhead is quite small. But it will
> be hard to prove that there will be no noticeable regression on some workloads.

Thank you for taking a look at this.

Yes, one of the aims is to speed up recovery of truncations, but at the same time the
patch also improves the execution of autovacuum, vacuum, and relation/index truncation.
I showed pgbench results above for different types of workloads,
but I am not sure whether those are convincing enough...

> I wonder if you have considered the case of a local hash (maintained only during
> recovery)?
> If there is after-crash recovery, then there will be no concurrent access to shared
> buffers and this hash will be up-to-date.
> In the case of a hot-standby replica we can use some simple invalidation (just one flag
> or counter which indicates that the buffer cache was updated).
> This hash can also be constructed on demand when DropRelFileNodeBuffers is
> called the first time (so we have to scan all buffers once, but subsequent drop
> operations will be fast).
> 
> I have not thought much about it, but it seems to me that as far as this problem
> only affects recovery, we do not need a shared hash for it.
> 

The idea of the patch is to mark the relation buffers to be dropped after scanning
the whole shared buffers once, store them in a dlist maintained in shared memory,
and traverse that dlist on the next scan.
But I understand the point that this is expensive and may cause overhead, which is why
I tried to define a macro to limit the number of pages that we can cache, for cases where
the lookup cost can be problematic (i.e. too many pages in a relation).

#define BUF_ID_ARRAY_SIZE 100

int buf_id_array[BUF_ID_ARRAY_SIZE];
int forknum_indexes[BUF_ID_ARRAY_SIZE];

In DropRelFileNodeBuffers:
do
{
    /* fetch up to BUF_ID_ARRAY_SIZE cached buffer IDs of the relation */
    nbufs = CachedBlockLookup(..., forknum_indexes, buf_id_array, lengthof(buf_id_array));

    for (i = 0; i < nbufs; i++)
    {
        /* invalidate each returned buffer */
        ...
    }
} while (nbufs == lengthof(buf_id_array));
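
(Each call fills at most BUF_ID_ARRAY_SIZE entries, and the loop repeats while the
array comes back full, so relations with more cached pages are simply processed in
fixed-size batches instead of needing an unbounded array.)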


Perhaps the patch adds too much complexity, so we may want to keep it simpler, or commit it piece by piece?
I will look further into your suggestion of maintaining a local hash only during recovery.
Thank you for the suggestion.
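
To make sure I understand the idea, below is a very rough sketch of such a
recovery-local hash; all names (recovery_buf_hash, RecoveryRelBufEntry,
recovery_remember_buffer) are hypothetical and not code from the patch. The table
would live in the local memory of the startup process and could be filled either as
replay touches buffers or on demand the first time DropRelFileNodeBuffers is called,
mapping a RelFileNode to the buffer IDs known to belong to that relation:

#include "postgres.h"

#include "storage/relfilenode.h"
#include "utils/hsearch.h"

/* Hypothetical sketch only: one entry per relation seen during recovery. */
typedef struct RecoveryRelBufEntry
{
    RelFileNode rnode;          /* hash key */
    int        *buf_ids;        /* palloc'd array of buffer IDs */
    int         nbufs;          /* number of entries used */
    int         maxbufs;        /* allocated size of buf_ids */
} RecoveryRelBufEntry;

static HTAB *recovery_buf_hash = NULL;

static void
recovery_buf_hash_init(void)
{
    HASHCTL     ctl;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(RelFileNode);
    ctl.entrysize = sizeof(RecoveryRelBufEntry);

    recovery_buf_hash = hash_create("recovery relation buffers",
                                    1024,   /* initial size estimate */
                                    &ctl,
                                    HASH_ELEM | HASH_BLOBS);
}

/* Remember that buf_id currently holds a page of relation rnode. */
static void
recovery_remember_buffer(RelFileNode rnode, int buf_id)
{
    RecoveryRelBufEntry *entry;
    bool        found;

    if (recovery_buf_hash == NULL)
        recovery_buf_hash_init();

    entry = hash_search(recovery_buf_hash, &rnode, HASH_ENTER, &found);
    if (!found)
    {
        entry->maxbufs = 16;
        entry->nbufs = 0;
        entry->buf_ids = palloc(entry->maxbufs * sizeof(int));
    }

    if (entry->nbufs >= entry->maxbufs)
    {
        entry->maxbufs *= 2;
        entry->buf_ids = repalloc(entry->buf_ids,
                                  entry->maxbufs * sizeof(int));
    }
    entry->buf_ids[entry->nbufs++] = buf_id;
}

DropRelFileNodeBuffers would then look up the relation's entry and invalidate only
those buffers. As you mentioned, the hot-standby case would additionally need the
simple invalidation flag/counter, since regular backends can evict and load buffers
concurrently.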

Regards,
Kirk Jamison
