SSI SLRU strategy choices

From Kevin Grittner
Subject SSI SLRU strategy choices
Date
Msg-id 4D1A0C130200002500038C44@gw.wicourts.gov
Responses Re: SSI SLRU strategy choices  (Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>)
List pgsql-hackers
I'm now deep enough into the SLRU techniques to see what my options
are for storing the data appropriate for SLRU.  This consists of
uint64 commitSeqNo (which is overkill enough that I'd be comfortable
stealing a bit or two from the high end in SLRU usage) which needs
to be associated with an xid.  The xids would have gaps, since we
only need to track committed serializable transactions which still
matter because of a long-running transaction and weren't subject to
early cleanup based on previously posted rules.  These will be
looked up by xid.
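In other words, what needs to be stored boils down to a sparse map
from xid to commit sequence number, something like this (sketch
only; the type and function names here are made up for
illustration):

    /* Sketch, not patch code: the logical association to store. */
    typedef uint64 SerCommitSeqNo;   /* a bit or two at the top is spare */

    /* For each committed serializable xact which still matters,
     * remember its commit sequence number, keyed by xid. */
    SerCommitSeqNo SerXidGetCommitSeqNo(TransactionId xid);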
The options I see are:
(1)  Store the xid and commitSeqNo in each SLRU entry -- with
alignment, that's 16 bytes per entry.  Simple, but requires a
sequential search for the xid.  Wouldn't scale well.
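For illustration, an option (1) entry would look roughly like this
(sketch; the struct name is made up):

    /* Option (1) sketch: both fields stored in every entry. */
    typedef struct SerXidEntry
    {
        TransactionId   xid;           /* 4 bytes */
        /* 4 bytes of alignment padding here */
        uint64          commitSeqNo;   /* 8 bytes */
    } SerXidEntry;                     /* 16 bytes with alignment */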
(2)  Use 8 byte SLRU entries and map the xid values over the SLRU
space, with each spot allowing two different xid values.  At first
blush that looks good, because transaction ID wrap-around techniques
mean that the two values for any one spot couldn't be active at the
same time.  The high bit could flag that the xid is "present" with
the rest of the bits being from the commitSeqNo.  The problem is
that the SLRU code appears to get confused about there being
wrap-around when the SLRU space is half-full, so we would get into
trouble if we burned through more than 2^30 transactions during one
long-running serializable read-write transaction.  I still like
this option best, falling back to killing the long-running
transaction at that point.
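The bit layout I have in mind for an option (2) entry is roughly
this (sketch; the macro names are made up and none of this is
tested):

    /* Option (2) sketch: the slot is derived from the xid, so only
     * the commitSeqNo plus a "present" flag needs to be stored. */
    #define SERXID_PRESENT       UINT64CONST(0x8000000000000000)

    /* With 2^31 slots, exactly two xids share each slot; the
     * wrap-around rules keep both from being live at once. */
    #define SerXidToSlot(xid)      ((xid) & 0x7FFFFFFF)

    #define SerXidMakeEntry(csn)   (SERXID_PRESENT | ((csn) & ~SERXID_PRESENT))
    #define SerXidEntryPresent(e)  (((e) & SERXID_PRESENT) != 0)
    #define SerXidEntryGetCSN(e)   ((e) & ~SERXID_PRESENT)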
(3)  Use two SLRU spaces.  You'd look up randomly into the first one
based on xid, and get a position in the second one which would hold
the commitSeqNo, which would be assigned to sequential slots.  This
would potentially allow us to burn through more transactions because
some are likely to be subject to early cleanup.  The marginal
extension of the failure point doesn't seem like it merits the extra
complexity.
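Roughly, an option (3) lookup would go through two levels (sketch;
both helper names are hypothetical):

    /* Option (3) sketch: an xid-keyed SLRU yields a position into a
     * second SLRU whose slots are assigned sequentially. */
    static uint64
    SerXidGetCommitSeqNo(TransactionId xid)
    {
        uint32  pos = SerXidSlruGetPosition(xid);   /* first space */
        return SerSeqNoSlruGetCSN(pos);             /* second space */
    }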
(4)  Change SLRU to tolerate more entries.  At most this raises the
number of transactions we can burn through during a long-running
transaction from 2^30 to 2^31.  That hardly seems worth the
potential to destabilize a lot of critical code.
Does (2) sound good to anyone else?  Other ideas?  Does it sound
like I'm totally misunderstanding anything?
-Kevin

