Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

From: Masahiko Sawada
Subject: Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager
Date:
Msg-id: CAD21AoBZwRjH_OcCp3SZ+KsWeqGA_HMkFeJPrg9EfKr+d+TQ2g@mail.gmail.com
In reply to: Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager  (Amit Kapila <amit.kapila16@gmail.com>)
           Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
On Fri, Apr 27, 2018 at 4:25 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Apr 26, 2018 at 3:10 PM, Andres Freund <andres@anarazel.de> wrote:
>>> I think the real question is whether the scenario is common enough to
>>> worry about.  In practice, you'd have to be extremely unlucky to be
>>> doing many bulk loads at the same time that all happened to hash to
>>> the same bucket.
>>
>> With a bunch of parallel bulkloads into partitioned tables that really
>> doesn't seem that unlikely?
>
> It increases the likelihood of collisions, but probably decreases the
> number of cases where the contention gets really bad.
>
> For example, suppose each table has 100 partitions and you are
> bulk-loading 10 of them at a time.  It's virtually certain that you
> will have some collisions, but the amount of contention within each
> bucket will remain fairly low because each backend spends only 1% of
> its time in the bucket corresponding to any given partition.
>
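
To make the quoted argument concrete, here is a minimal sketch of the
kind of fixed-size mapping being discussed: every relation's extension
lock lives in one of N_RELEXTLOCK_ENTS array slots chosen by hashing the
relation's identity, so two unrelated relations can land in the same
slot. The identifiers and hash constants below (other than
N_RELEXTLOCK_ENTS itself) are illustrative, not the patch's actual code.

#define N_RELEXTLOCK_ENTS 1024

typedef unsigned int Oid;   /* as in PostgreSQL; declared here to keep the
                             * sketch self-contained */

/* Pick the extension-lock slot for a relation by hashing its identity. */
static unsigned int
relextlock_slot(Oid dbid, Oid relid)
{
    unsigned int h = dbid * 0x9e3779b1U ^ relid * 0x85ebca6bU;

    return h % N_RELEXTLOCK_ENTS;
}

Two relations collide whenever their hashes agree modulo 1024, which is
quite likely to happen somewhere with many concurrent bulk loads into
partitioned tables; but, as argued above, each backend extends any one
partition only a small fraction of the time, so the extra contention per
slot stays modest.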

I'd like to share another performance evaluation result, comparing
current HEAD with current HEAD plus the v13 patch (N_RELEXTLOCK_ENTS = 1024).

Type of table: normal table, unlogged table
Number of child tables: 16, 64 (all tables are located in the same tablespace)
Number of clients: 32
Number of trials: 100
Duration: 180 seconds for each trial

The server hardware is an Intel Xeon 2.4GHz (160 cores with HT), 256GB
RAM, and a 1.5TB NVMe SSD.
Each client loads 10kB of random data across all partitioned tables.
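
A simplified sketch of the per-client load loop is below (the table name,
column layout, and payload generation here are only for illustration; the
actual driver may differ in details):

/*
 * Simplified per-client load loop (illustration only).  Each iteration
 * inserts a ~10kB payload into a partitioned table, so inserts spread
 * across the partitions and each insert can trigger a relation extension.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <libpq-fe.h>

#define PAYLOAD_SIZE 10240              /* ~10kB of random data per row */

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");
    static char payload[PAYLOAD_SIZE + 1];
    static char query[PAYLOAD_SIZE + 128];
    time_t      end = time(NULL) + 180; /* 180-second test duration */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* fill the payload with random printable characters */
    for (int i = 0; i < PAYLOAD_SIZE; i++)
        payload[i] = 'a' + rand() % 26;
    payload[PAYLOAD_SIZE] = '\0';

    /* "parent" is assumed to be a table partitioned on "id", so random
     * ids distribute the inserts across all child tables */
    while (time(NULL) < end)
    {
        PGresult   *res;

        snprintf(query, sizeof(query),
                 "INSERT INTO parent (id, payload) VALUES (%d, '%s')",
                 rand(), payload);
        res = PQexec(conn, query);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}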

Here is the result.

 childs |   type   | target  |  avg_tps   | diff with HEAD
--------+----------+---------+------------+------------------
     16 | normal   | HEAD    |   1643.833 |
     16 | normal   | Patched |  1619.5404 |      0.985222
     16 | unlogged | HEAD    |  9069.3543 |
     16 | unlogged | Patched |  9368.0263 |      1.032932
     64 | normal   | HEAD    |   1598.698 |
     64 | normal   | Patched |  1587.5906 |      0.993052
     64 | unlogged | HEAD    |  9629.7315 |
     64 | unlogged | Patched | 10208.2196 |      1.060073
(8 rows)

For normal tables, loading TPS decreased by 1% ~ 2% with this patch,
whereas for unlogged tables it increased by 3% ~ 6%. In the 64-child-table
case there were collisions between two relations at 0 ~ 5 relation
extension lock slots, but this didn't seem to affect the TPS. (With 64
relations hashed uniformly into 1024 slots, about two colliding pairs are
expected on average, so that is roughly in line with expectation.)

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

