On Sat, Mar 19, 2016 at 1:41 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Tue, Mar 1, 2016 at 9:43 AM, Aleksander Alekseev
> <a.alekseev@postgrespro.ru> wrote:
> >
> > So answering your question - it turned out that we _can't_ reduce
> > NUM_FREELISTS this way.
>
> That's perplexing. I would have expected that with all of the mutexes
> packed in back-to-back like this, we would end up with a considerable
> amount of false sharing. I don't know why it ever helps to have an
> array of bytes all in the same cache line of which each one is a
> heavily-trafficked mutex. Anybody else have a theory?
>
> One other thing that concerns me mildly is the way we're handling
> nentries. It should be true, with this patch, that the individual
> nentries sum to the right value modulo 2^32. But I don't think
> there's any guarantee that the values are positive any more, and in
> theory after running long enough one of them could manage to overflow
> or underflow.
>
In theory, can't nentries overflow even without the patch, after running for a very long time? I think that with the patch it is more prone to overflow, because we start borrowing from other free lists as well.
> So at a very minimum I think we need to remove the
> Assert() the value is not negative. But really, I wonder if we
> shouldn't rework things a little more than that.
>
> One idea is to jigger things so that we maintain a count of the total
> number of entries that doesn't change except when we allocate, and
> then for each freelist partition we maintain the number of entries in
> that freelist partition. So then the size of the hash table, instead
> of being sum(nentries) is totalsize - sum(nfree).
>
To me, your idea sounds much better than the current code, in terms of making the free-list concept easier to understand as well. So, +1 for changing the code in this way.