Re: significant vacuum issues - looking for suggestions

From: Decibel!
Subject: Re: significant vacuum issues - looking for suggestions
Date:
Msg-id: 20070827210041.GN54309@nasby.net
In reply to: Re: significant vacuum issues - looking for suggestions  (Bill Moran <wmoran@collaborativefusion.com>)
Responses: Re: significant vacuum issues - looking for suggestions  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
           Re: significant vacuum issues - looking for suggestions  (Kevin Kempter <kevin@kevinkempterllc.com>)
List: pgsql-performance
On Fri, Aug 24, 2007 at 04:41:44PM -0400, Bill Moran wrote:
> In response to Kevin Kempter <kevin@kevinkempterllc.com>:
>
> > Hi List;
> >
> > I've just started working with a client that has been running Postgres (with
> > no DBA) for a few years. They're running  version 8.1.4 on 4-way dell boxes
> > with 4Gig of memory on each box attached to RAID-10 disk arrays.
> >
> > Some of their key config settings are here:
> > shared_buffers = 20480
> > work_mem = 16384
> > maintenance_work_mem = 32758
>
> Before you do any of those other things, bump shared_buffers to about
> 120000 and maintenance_work_mem to 250000 or so -- unless this box
> has other applications on it using significant amounts of those 4G of
> RAM.  You may find that these changes alone are enough to get vacuum
> to complete.  You'll need to restart the server for the shared_buffers
> setting to take effect.
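
For reference, in postgresql.conf terms that suggestion would look
roughly like this (assuming the box is dedicated to Postgres; on 8.1
shared_buffers is counted in 8kB pages and maintenance_work_mem in kB):

    shared_buffers = 120000          # ~940MB of buffer cache; needs a restart
    maintenance_work_mem = 250000    # ~244MB; no restart required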

For the really bloated table, you might need to go even higher than
250000 for maintenance_work_mem. IIRC vacuum needs 6 bytes per dead
tuple, so 250000 only covers about 43M dead rows... with 5M dead pages,
that works out to fewer than 10 dead rows per page, which is unlikely.
Keep in mind that if you run VACUUM VERBOSE, you'll be able to see
whether vacuum runs out of maintenance_work_mem, because you'll see
multiple passes through all the indexes.
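
To put rough numbers on that (just back-of-the-envelope math, assuming
the 6 bytes per dead tuple figure is right):

    SELECT 250000 * 1024 / 6;               -- ~42.6M dead tuples fit in 250000kB
    SELECT (250000 * 1024 / 6) / 5000000.0; -- ~8.5 dead tuples/page over 5M dead pages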

You could also potentially use this to your benefit. Set maint_work_mem
low enough so that vacuum will have to start its cleaning pass after
only an hour or so... depending on how big/bloated the indexes are on
the table, it might take another 2-3 hours to clean everything. I
believe that as soon as you see it start on the indexes a second time
you can kill it... you'll have wasted some work, but more importantly
you'll have actually vacuumed part of the table.
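
Something along these lines; this is only a sketch, "bloated_table" is
a placeholder, and the right memory value depends on how fast vacuum
gets through the heap:

    -- session-level override, no restart needed (value is in kB on 8.1)
    SET maintenance_work_mem = 65536;  -- 64MB: dead-tuple list fills after ~11M rows
    VACUUM VERBOSE bloated_table;
    -- once the verbose output shows it going through the indexes a second
    -- time, the part of the heap scanned so far has already been cleaned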

But all of that's a moot point if they're running the default free space
map settings, which are way, way, way too conservative in 8.1. If you've
got one table with 5M dead pages, you probably want to set max_fsm_pages
to at least 50000000 as a rough guess, at least until this is under
control. Keep in mind that that equates to about 286M of shared memory,
though.
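
In postgresql.conf terms that would be something like the following
(it needs a restart in 8.1, and max_fsm_relations may need raising too
if they have a lot of tables and indexes; IIRC its default is 1000):

    max_fsm_pages = 50000000      # ~6 bytes per page slot => roughly 286MB of shared memory
    #max_fsm_relations = 1000     # raise this if they have more relations than that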

As for your pg_dump idea... why not just do a CREATE TABLE AS SELECT *
FROM bloated_table? That would likely be much faster than messing around
with pg_dump.
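
Roughly like this (just a sketch -- the names are made up, you'd have
to recreate indexes, constraints and grants on the new table, and
you'd want to block writes to the old one while it runs):

    BEGIN;
    LOCK TABLE bloated_table IN ACCESS EXCLUSIVE MODE;
    CREATE TABLE bloated_table_new AS SELECT * FROM bloated_table;
    -- recreate indexes, constraints and grants on bloated_table_new here
    ALTER TABLE bloated_table RENAME TO bloated_table_old;
    ALTER TABLE bloated_table_new RENAME TO bloated_table;
    COMMIT;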

What kind of disk hardware is this running on? A good RAID 10 array with
write caching should be able to handle a 200G database fairly well; at
least better than what I'm hearing it's doing now.
--
Decibel!, aka Jim Nasby                        decibel@decibel.org
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
