Re: hash_create API changes (was Re: speedup tidbitmap patch: hash BlockNumber)
From: Tom Lane
Subject: Re: hash_create API changes (was Re: speedup tidbitmap patch: hash BlockNumber)
Date:
Msg-id: 4814.1419097879@sss.pgh.pa.us
In reply to: Re: hash_create API changes (was Re: speedup tidbitmap patch: hash BlockNumber) (Andres Freund <andres@2ndquadrant.com>)
Responses: Re: hash_create API changes (was Re: speedup tidbitmap patch: hash BlockNumber)
List: pgsql-hackers
Andres Freund <andres@2ndquadrant.com> writes:
> On 2014-12-19 22:03:55 -0600, Jim Nasby wrote:
>> What I am thinking is not using all of those fields in their raw form
>> to calculate the hash value. IE: something analogous to:
>> hash_any(SharedBufHash, (rot(forkNum, 2) | dbNode) ^ relNode) << 32 | blockNum)
>>
>> perhaps that actual code wouldn't work, but I don't see why we couldn't
>> do something similar... am I missing something?

> I don't think that'd improve anything. Jenkins' hash does have quite
> good mixing properties; I don't believe that the above would improve the
> quality of the hash.

I think what Jim is suggesting is to intentionally degrade the quality
of the hash in order to let it be calculated a tad faster.  We could do
that, but I doubt it would be a win, especially in systems with lots of
buffers.  IIRC, when we put in Jenkins hashing to replace the older
homebrew hash function, it improved performance even though the hash
itself was slower.

			regards, tom lane