RE: Speed up transaction completion faster after many relations are accessed in a transaction

From: Tsunakawa, Takayuki
Subject: RE: Speed up transaction completion faster after many relations are accessed in a transaction
Date:
Msg-id: 0A3221C70F24FB45833433255569204D1FBF2C2B@G01JPEXMBYT05
In response to: Re: Speed up transaction completion faster after many relations are accessed in a transaction  (Andres Freund <andres@anarazel.de>)
Responses: Re: Speed up transaction completion faster after many relations are accessed in a transaction  ('Andres Freund' <andres@anarazel.de>)
List: pgsql-hackers
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> On the whole I don't think there's an adequate case for committing
> this patch.

From: Andres Freund [mailto:andres@anarazel.de]
> On 2019-04-05 23:03:11 -0400, Tom Lane wrote:
> > If I reduce the number of partitions in Amit's example from 8192
> > to something more real-world, like 128, I do still measure a
> > performance gain, but it's ~ 1.5% which is below what I'd consider
> > a reproducible win.  I'm accustomed to seeing changes up to 2%
> > in narrow benchmarks like this one, even when "nothing changes"
> > except unrelated code.
> 
> I'm not sure it's actually that narrow these days. With all the
> partitioning improvements happening, the numbers of locks commonly held
> are going to rise. And while 8192 partitions is maybe on the more
> extreme side, it's a workload with only a single table, and plenty
> workloads touch more than a single partitioned table.

I would be happy if I could dismiss such a many-partitions use case as narrow or impractical and ignore it, but it's not
narrow. Two of our customers are actually requesting such usage: one uses 5,500 partitions and is trying to migrate
from a commercial database on Linux, and the other requires 200,000 partitions to migrate from a legacy database on a
mainframe. At first I thought that so many partitions indicated a bad application design, but their reasoning sounded
valid (or at least I can't insist it's bad).  PostgreSQL is now expected to handle such huge workloads.
 


From: Andres Freund [mailto:andres@anarazel.de]
> I'm not sure I'm quite that concerned. For one, a good bit of that space
> was up for grabs until the recent reordering of LOCALLOCK and nobody
> complained. But more importantly, I think commonly the amount of locks
> around is fairly constrained, isn't it? We can't really have that many
> concurrently held locks, due to the shared memory space, and the size of
> a LOCALLOCK isn't that big compared to say relcache entries.  We also
> probably fairly easily could win some space back - e.g. make nLocks 32
> bits.

+1



From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> I'd also point out that this is hardly the only place where we've
> seen hash_seq_search on nearly-empty hash tables become a bottleneck.
> So I'm not thrilled about attacking that with one-table-at-time patches.
> I'd rather see us do something to let hash_seq_search win across
> the board.
> 
> I spent some time wondering whether we could adjust the data structure
> so that all the live entries in a hashtable are linked into one chain,
> but I don't quite see how to do it without adding another list link to
> struct HASHELEMENT, which seems pretty expensive.

I think the linked list of LOCALLOCKs approach is natural, simple, and good.  In Jim Gray's classic book
"Transaction Processing: Concepts and Techniques", we can find the following sentence in "8.4.5 Lock Manager Internal
Logic."  The sample implementation code in the book uses a similar linked list to remember and release a transaction's
acquired locks.
 

"All the locks of a transaction are kept in a list so they can be quickly found and released at commit or rollback."

And handling this issue with the LOCALLOCK linked list is more natural than resizing the hash table.  We just want
to find all grabbed locks quickly, so a linked list fits; a hash table is a mechanism for finding a particular item
quickly.  So it was simply wrong to use the hash table to iterate over all grabbed locks.  Also, the hash table grew
large because some operation in the session needed it, and subsequent operations in the same session may need it again,
so shrinking the hash table would not really relieve us.
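To illustrate the idea, here is a minimal, self-contained sketch (not PostgreSQL's actual code; LockTable, grab_lock, and release_all are invented names for this example): each held lock sits in a hash bucket for O(1) lookup and is also threaded onto an intrusive linked list of all held locks, so release-at-commit walks only the locks actually held, independent of how large the (mostly empty) table has grown.

```c
#include <stdlib.h>

#define NBUCKETS 1024           /* table may be large and mostly empty */

typedef struct LocalLock
{
    unsigned int lockid;        /* stand-in for a real lock tag */
    int          nLocks;        /* how many times this lock is held */
    struct LocalLock *hash_next;    /* chain within one hash bucket */
    struct LocalLock *held_next;    /* list threading ALL held locks */
} LocalLock;

typedef struct LockTable
{
    LocalLock *buckets[NBUCKETS];
    LocalLock *held_head;       /* head of the all-held-locks list */
    int        nheld;           /* number of distinct locks held */
} LockTable;

/* Look up the lock by hashing; on first acquisition, also thread the
 * entry onto the all-held-locks list. */
LocalLock *
grab_lock(LockTable *t, unsigned int lockid)
{
    unsigned int b = lockid % NBUCKETS;
    LocalLock *ll;

    for (ll = t->buckets[b]; ll != NULL; ll = ll->hash_next)
        if (ll->lockid == lockid)
        {
            ll->nLocks++;       /* already held: just bump the count */
            return ll;
        }

    ll = calloc(1, sizeof(LocalLock));
    ll->lockid = lockid;
    ll->nLocks = 1;
    ll->hash_next = t->buckets[b];
    t->buckets[b] = ll;

    ll->held_next = t->held_head;   /* thread onto the held list */
    t->held_head = ll;
    t->nheld++;
    return ll;
}

/* Release everything at commit/abort by walking the held list:
 * O(number of locks held), not O(NBUCKETS) as a full table scan
 * (hash_seq_search) would be. */
void
release_all(LockTable *t)
{
    LocalLock *ll = t->held_head;

    while (ll != NULL)
    {
        LocalLock  *next = ll->held_next;
        LocalLock **p;

        for (p = &t->buckets[ll->lockid % NBUCKETS]; *p != ll;
             p = &(*p)->hash_next)
            ;
        *p = ll->hash_next;     /* unlink from the bucket chain */
        free(ll);
        t->nheld--;
        ll = next;
    }
    t->held_head = NULL;
}
```

This keeps the hash table for what it is good at (finding one lock fast) while the list answers the only other question the lock manager asks in bulk: "which locks do I hold right now?"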
 


Regards
Takayuki Tsunakawa







