Re: Sort memory not being released
From | Jim C. Nasby
Subject | Re: Sort memory not being released
Date |
Msg-id | 20030617212500.GO40542@flake.decibel.org
In reply to | Re: Sort memory not being released (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: Sort memory not being released (Tom Lane <tgl@sss.pgh.pa.us>)
List | pgsql-general
On Tue, Jun 17, 2003 at 10:45:39AM -0400, Tom Lane wrote:
> Martijn van Oosterhout <kleptog@svana.org> writes:
> > For large allocations glibc tends to mmap() which does get unmapped.
> > There's a threshold of 4KB I think. Of course, thousands of allocations
> > for a few bytes will never trigger it.
>
> But essentially all our allocation traffic goes through palloc, which
> bunches small allocations together. In typical scenarios malloc will
> only see requests of 8K or more, so we should be in good shape on this
> front.
>
> (Not that this is very relevant to Jim's problem, since he's not using
> glibc...)

Maybe it would be helpful to describe why I noticed this...

I've been doing some things that require very large sorts. I generally have very few connections, though, so I thought I'd set sort_mem to about 1/3 of my memory. My thought was that it's better to suck down a ton of memory and blow out the disk cache if it means we can avoid hitting the disk for a sort at all. Of course, I wasn't planning on sucking down a bunch of memory and holding on to it. :)

I've read through the sort code, and it seems that the pre-buffering once you go to disk will probably hurt with a huge sort_mem setting, since the data could be double or even triple buffered (in memtuples[], in pgsql's shared buffers, and by the OS).

I think a more ideal scenario (which I've been meaning to email hackers about) would be something like this:

- If the OS is running low on free physical memory, a sort will use less than sort_mem, in an attempt to avoid swapping.
- sort_mem is the maximum amount of sort memory a single sort (or maybe a single connection) can take.
- If sort_mem is over X size, then use only Y for pre-buffering. (How much does a large sort_mem help if you have to spill to disk?)
If it's pretty clear that the sort won't fit in memory (because sort_mem or system free memory is low), I think it might help if tuplesort just went to disk right away, instead of waiting until all the memory was used up; but again, I'm not sure how the sort algorithm works when it goes to tape.

This should mean that you can set the system up to allow very large sorts before spilling to disk... if there aren't a lot of sorts sucking down memory, a large sort will be able to avoid overflowing to disk, which is obviously a huge performance gain. If the system is busy/memory-bound, though, sorts will overflow to disk rather than using swap space, which I'm sure would be a lot worse.

--
Jim C. Nasby (aka Decibel!) jim@nasby.net
Member: Triangle Fraternity, Sports Car Club of America
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"