Re: Physical sites handling large data

From: Tom Lane
Subject: Re: Physical sites handling large data
Msg-id: 29036.1032104867@sss.pgh.pa.us
In reply to: Re: Physical sites handling large data  (Ericson Smith <eric@did-it.com>)
Responses: Re: Physical sites handling large data  ("Shridhar Daithankar" <shridhar_daithankar@persistent.co.in>)
List: pgsql-general
Ericson Smith <eric@did-it.com> writes:
> Using the bigmem kernel and RH7.3, we were able to set Postgresql shared
> memory to 3.2Gigs (out of 6GB Ram). Does this mean that Postgresql will
> only use the first 2Gigs?

I think you are skating on thin ice there --- there must have been some
integer overflows in the shmem size calculations.  It evidently worked
as an unsigned result, but...
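
For illustration only (not PG's actual code), the boundary involved is
roughly this: a ~3.2 GB request is bigger than a signed 32-bit int can
represent, but still below the 4 GB unsigned wraparound point, which is
why it can appear to "work" by accident:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* hypothetical request: ~3.2 GB of shared memory */
        unsigned long request = 3200UL * 1024 * 1024;   /* 3,355,443,200 bytes */

        printf("INT_MAX  = %d\n", INT_MAX);    /* 2,147,483,647: signed 32-bit limit */
        printf("UINT_MAX = %u\n", UINT_MAX);   /* 4,294,967,295: unsigned 32-bit limit */
        printf("request  = %lu\n", request);   /* over INT_MAX, under UINT_MAX */

        return 0;
    }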

IIRC we have an open bug report from someone who tried to set
shared_buffers so large that the shmem size would have been ~5GB;
the overflowed size request was ~1GB and then it promptly dumped
core from trying to access memory beyond that.  We need to put in
some code to detect overflows in those size calculations.
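
A rough sketch of what happens there, and of the kind of check being
suggested, assuming a 32-bit size calculation and PG's default 8 kB block
size (the variable names here are made up, not PG's real ones):

    #include <stdio.h>
    #include <stdint.h>

    #define BLCKSZ 8192                          /* PG's default block size */

    int main(void)
    {
        uint64_t nbuffers = 655360;              /* ~5 GB worth of 8 kB buffers */
        uint64_t wanted = nbuffers * BLCKSZ;     /* 5,368,709,120 bytes */
        uint32_t got = (uint32_t) wanted;        /* what a 32-bit size keeps: */
                                                 /* 1,073,741,824 bytes (~1 GB) */

        printf("wanted = %llu bytes\n", (unsigned long long) wanted);
        printf("got    = %u bytes\n", got);

        /* the suggested sanity check: refuse a truncated request */
        if ((uint64_t) got != wanted)
            fprintf(stderr, "shared memory size overflows 32 bits\n");

        return 0;
    }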

In any case, pushing PG's shared memory to 50% of physical RAM is
completely counterproductive.  See past discussions (mostly on
-hackers and -admin if memory serves) about appropriate sizing of
shared buffers.  There are different schools of thought about this,
but I think everyone agrees that a shared-buffer pool that's roughly
equal to the size of the kernel's disk buffer cache is a waste of
memory.  One should be much bigger than the other.  I personally think
it's appropriate to let the kernel cache do most of the work, and so
I favor a shared_buffers setting of just a few thousand.
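
For a concrete sense of scale (the value here is only an example): with the
default 8 kB block size, a setting of "a few thousand" buffers amounts to
only a few tens of megabytes of shared memory:

    # postgresql.conf -- shared_buffers is a count of 8 kB buffers
    shared_buffers = 4096        # 4096 * 8 kB = 32 MB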

            regards, tom lane
