Re: [HACKERS] Automatically setting work_mem
From | Simon Riggs
---|---
Subject | Re: [HACKERS] Automatically setting work_mem
Date |
Msg-id | 1143013699.24487.553.camel@localhost.localdomain
Responses | Re: [HACKERS] Automatically setting work_mem
 | Re: [HACKERS] Automatically setting work_mem
List | pgsql-patches
On Tue, 2006-03-21 at 17:47 -0500, Tom Lane wrote:
> I'm fairly unconvinced about Simon's underlying premise --- that we
> can't make good use of work_mem in sorting after the run building phase
> --- anyway.

We can make good use of memory, but there does come a point in the final
merge where more memory is of no further benefit. That point seems to be
at about 256 blocks per tape; patch enclosed for testing. (256 blocks per
tape roughly doubles performance over 32 blocks at that stage; at the
default 8KB block size, 256 blocks is 2MB per tape.) That is never the
case during run building - there, more is always better.

> If we cut back our memory usage

[Simon inserts the words: "too far"]

> then we'll be forcing a
> significantly more-random access pattern to the temp file(s) during
> merging, because we won't be able to pre-read as much at a time.

Yes, that's right. If we have 512MB of memory, that gives us enough for
2000 tapes, yet the run-building phase might produce only a few runs.
There's just no way that all 512MB of memory is needed to optimise the
performance of reading in a few tapes at the time of the final merge.

I'm suggesting we always keep 2MB per active tape, or the full allocation,
whichever is lower. In the above example that could release over 500MB of
memory, which, more importantly, can be reused by subsequent sorts if/when
they occur.

Enclosed are two patches:

1. mergebuffers.patch allows measurement of the effects of different
   merge buffer sizes; the current default is 32 blocks.

2. reassign2.patch implements the two kinds of resource
   deallocation/reassignment proposed above.

Best Regards, Simon Riggs
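[Editorial note: the following is a minimal C sketch, for illustration only, of the retention rule described in the mail - keep 2MB of pre-read buffer per active tape, or the full allocation, whichever is lower. It is not the enclosed patch; the function and constant names are hypothetical, and the real logic lives in tuplesort.c / logtape.c.]

```c
#include <stddef.h>

#define BLCKSZ          8192    /* default PostgreSQL block size */
#define PREREAD_BLOCKS  256     /* pre-read blocks per tape, per the mail */
#define PER_TAPE_MEM    ((size_t) PREREAD_BLOCKS * BLCKSZ)   /* = 2MB */

/*
 * How much of the sort's memory allocation to retain once the final
 * merge begins; the remainder could be released for reuse by later sorts.
 */
static size_t
merge_retained_memory(size_t full_allocation, int active_tapes)
{
    size_t wanted = (size_t) active_tapes * PER_TAPE_MEM;

    /* Keep 2MB per active tape, but never more than we were given. */
    return (wanted < full_allocation) ? wanted : full_allocation;
}
```

With 512MB of work_mem and, say, 7 active tapes in the final merge, this would retain 14MB and leave roughly 498MB free for subsequent sorts, matching the "over 500MB" figure above.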