Re: Very poor performance loading 100M of sql data using copy

From: Greg Smith
Subject: Re: Very poor performance loading 100M of sql data using copy
Date:
Msg-id: Pine.GSO.4.64.0804291149450.8414@westnet.com
In response to: Re: Very poor performance loading 100M of sql data using copy  (John Rouillard <rouilj@renesys.com>)
List: pgsql-performance
On Tue, 29 Apr 2008, John Rouillard wrote:

> So swap the memory usage from the OS cache to the postgresql process.
> Using 1/4 as a guideline it sounds like 600,000 (approx 4GB) is a
> better setting. So I'll try 300000 to start (1/8 of memory) and see
> what it does to the other processes on the box.

That is potentially a good setting.  Just be warned that when you do hit a
checkpoint with a high setting here, you can end up with a lot of data in
memory that needs to be written out, and under 8.2 that can cause an ugly
spike in disk writes.  The reason I usually threw out 30,000 as a
suggested starting figure is that most caching disk controllers can buffer
at least 256MB of writes to keep that situation from getting too bad.
Try it out and see what happens; just be warned that this is the possible
downside of setting shared_buffers too high, so you might want to ease
into that more gradually (particularly if this system is shared with
other apps).
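For reference, the figures being discussed are shared_buffers counts in
8kB pages (the default block size). A quick sketch of the arithmetic
behind the numbers above -- the function name here is just illustrative:

```python
BLOCK_SIZE = 8192  # PostgreSQL's default block size in bytes

def buffers_to_bytes(n_buffers, block_size=BLOCK_SIZE):
    """Convert a shared_buffers page count to a size in bytes."""
    return n_buffers * block_size

# 30,000 buffers -- the conservative starting figure suggested above
print(buffers_to_bytes(30_000) / 2**20)   # ~234 MB, fits under a 256MB controller cache

# 300,000 buffers -- the 1/8-of-memory starting point John proposes
print(buffers_to_bytes(300_000) / 2**30)  # ~2.3 GB
```

This also shows why 30,000 pairs naturally with a 256MB write cache: the
entire buffer pool could, in the worst case, be dirtied and flushed at a
checkpoint without overwhelming the controller.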
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD
