Re: exceptionally large UPDATE

From  Vick Khera
Subject  Re: exceptionally large UPDATE
Date
Msg-id  AANLkTi==_bprDq-yhYCuRBUuESuAh0nHAFom4bvpFuhM@mail.gmail.com
In response to  exceptionally large UPDATE  (Ivan Sergio Borgonovo <mail@webthatworks.it>)
Responses  Re: exceptionally large UPDATE  (Ivan Sergio Borgonovo <mail@webthatworks.it>)
List  pgsql-general
On Wed, Oct 27, 2010 at 10:26 PM, Ivan Sergio Borgonovo
<mail@webthatworks.it> wrote:
> I'm increasing maintenance_work_mem to 180MB just before recreating
> the gin index. Should it be more?
>

You can do this on a per-connection basis; no need to alter the config
file.  At the psql prompt (or via your script) just execute the query

SET maintenance_work_mem = '180MB';

If you've got the RAM, just use more of it.  I'd suspect your server
has plenty of it, so use it!  When I reindex, I often give it 1 or 2
GB.  If you can fit the whole table into that much space, you're going
to go really really fast.
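To make that concrete, here is a minimal sketch of what such a session could look like.  The index name is hypothetical, and `SET LOCAL` is used so the bump lasts only for the enclosing transaction rather than the whole connection:

```sql
-- Sketch only: index name is made up; adjust the memory value to your RAM.
BEGIN;
-- SET LOCAL scopes the change to this transaction; it reverts on COMMIT.
SET LOCAL maintenance_work_mem = '1GB';
-- REINDEX INDEX is allowed inside a transaction block (unlike REINDEX DATABASE).
REINDEX INDEX my_gin_index;
COMMIT;
```

A plain `SET` (without `LOCAL`) would also work and lasts until the connection closes; either way the server's config file is untouched.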

Also, if you are going to update that many rows you may want to
increase your checkpoint_segments.  Increasing that helps a *lot* when
you're loading big data, so I would expect updating big data to be
helped as well.  I suppose it depends on how wide your rows are.  1.5
million rows is really not all that big unless you have lots and lots
of text columns.
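As a rough sketch, the relevant postgresql.conf change might look like the fragment below (the values are assumptions to illustrate the idea, not recommendations; tune them to your disk and workload, and note that this setting requires a server reload):

```
# postgresql.conf sketch -- values below are illustrative assumptions
checkpoint_segments = 32    # default is 3; each WAL segment is 16MB,
                            # so fewer, larger checkpoints during the bulk UPDATE
```

Fewer forced checkpoints means less repeated full-page writing to WAL while the big UPDATE churns through the table.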
