Re: exceptionally large UPDATE

From Vick Khera
Subject Re: exceptionally large UPDATE
Date
Msg-id AANLkTikQRvni6ZmeHFJ7KBi-5aYsHYk8N-c1BZ-tJVZ7@mail.gmail.com
In reply to Re: exceptionally large UPDATE  (Ivan Sergio Borgonovo <mail@webthatworks.it>)
Responses Re: exceptionally large UPDATE  (Ivan Sergio Borgonovo <mail@webthatworks.it>)
List pgsql-general
On Thu, Oct 28, 2010 at 1:06 PM, Ivan Sergio Borgonovo
<mail@webthatworks.it> wrote:
> What I'm planning to do is:
> max_connections = 5
> shared_buffers = 240M
> work_mem = 90MB
> maintenance_work_mem = 1GB
> max_fsm_pages = 437616
> max_fsm_relations = 1200
> checkpoint_segments = 70
> default_statistics_target = 30
> #log_min_duration_statement = 1000
>

default_statistics_target = 100 is the default in newer Postgres (8.4 and
later), and with good reason; try that instead of 30.
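
If it helps, here is a quick sketch of the two ways you could do that (the
table and column names are just placeholders):

  # postgresql.conf: applies to every column without an explicit target
  default_statistics_target = 100

  -- or raise it only for the columns that matter, then re-analyze:
  ALTER TABLE items ALTER COLUMN description SET STATISTICS 100;
  ANALYZE items;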

If you boost your checkpoint_segments, also increase checkpoint_timeout and
set checkpoint_completion_target to something like 0.8 (depending on how
fast your disks are) to smooth out your I/O, i.e. keep it from bursting at
each checkpoint.  Is 5 connections really enough for you?
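
On the checkpoint side, that could look roughly like this in
postgresql.conf (the numbers are only illustrative, tune them to your
hardware):

  checkpoint_segments = 70
  checkpoint_timeout = 15min            # up from the 5min default
  checkpoint_completion_target = 0.8    # spread checkpoint writes over more of the interval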

And like I said before, you can set the work_mem and/or
maintenance_work_mem on a per-connection basis as needed, so for your
big update you can increase those values just during that work without
affecting the rest of the system.
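
For example, something like this in the session that runs the big update
(the values are only illustrative):

  BEGIN;
  SET LOCAL work_mem = '256MB';
  SET LOCAL maintenance_work_mem = '1GB';
  -- run the big UPDATE (and any index rebuilds) here
  COMMIT;

SET LOCAL only lasts until COMMIT/ROLLBACK; a plain SET would last for the
whole session.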
