Re: [HACKERS] \dt and disk access
| From | The Hermit Hacker |
|---|---|
| Subject | Re: [HACKERS] \dt and disk access |
| Date | |
| Msg-id | cf07c0e477c425ad9cbc9a105508e02e |
| In reply to | [HACKERS] \dt and disk access (Bruce Momjian <maillist@candle.pha.pa.us>) |
| List | pgsql-hackers |
On Sat, 21 Jun 1997, Bruce Momjian wrote:
> I will wait for 6.1p1, and then apply the fix. I plan to just use
> qsort() on an array of tuple pointers if the size of the ORDER BY result
> is under 256k bytes, which is only 1/2 of the default shared buffer
> size. I will run tests to see if there is major speed improvement at
> that size. If not, I will knock it down to 128k or 64k, just to be safe
> that it will not swamp the shared pool on a busy system. I may even key
> the trigger value on the number of shared buffers allocated. This has
> got to be faster than what it does now.
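The quoted idea — qsort() an array of tuple pointers when the result is small enough, otherwise fall back to the existing disk sort — can be sketched roughly as below. This is only an illustration, not the actual PostgreSQL code: the `Tuple` struct, the 256k `SORT_MEM_LIMIT`, and `sort_if_small()` are all hypothetical names invented here.

```c
#include <stdlib.h>

#define SORT_MEM_LIMIT (256 * 1024)     /* hypothetical 256k trigger value */

/* Hypothetical stand-in for a real tuple. */
typedef struct { int key; char payload[28]; } Tuple;

/* qsort() comparator over an array of Tuple pointers. */
static int cmp_tuple_ptr(const void *a, const void *b)
{
    const Tuple *ta = *(Tuple *const *) a;
    const Tuple *tb = *(Tuple *const *) b;
    return (ta->key > tb->key) - (ta->key < tb->key);
}

/* Sort in memory only when the ORDER BY result fits under the limit.
 * Returns 1 on success, 0 when the caller must fall back to the
 * existing disk-based sort path. */
int sort_if_small(Tuple **ptrs, size_t ntuples)
{
    if (ntuples * sizeof(Tuple) > SORT_MEM_LIMIT)
        return 0;
    qsort(ptrs, ntuples, sizeof(Tuple *), cmp_tuple_ptr);
    return 1;
}
```

Sorting an array of pointers rather than the tuples themselves keeps qsort()'s swaps cheap, which is presumably part of why the approach "has got to be faster".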
Why use shared memory for this? Why not use mmap() for it? Come
to think of it, mmap()'ing it would have better scalability, no? If you
already know the result size (i.e. 256k), you could have the code try
to mmap() that amount of memory to do the sort in. If the mmap() fails,
revert to using a file...now the result size of the SELECT doesn't have a
compiled-in "limit" on the size of memory used for the sort; it's restricted
only by the amount of memory on the machine (i.e. if I double the RAM, I
should be able to have double the results sorted in memory instead of on
disk).
Add a flag on top of that, like -B, that states the *max* result
size to do an in-memory sort with...or, rather, the *max* to try to do one
with.
Marc G. Fournier
Systems Administrator @ hub.org
primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org