Re: limiting performance impact of wal archiving.

From: Laurent Laborde
Subject: Re: limiting performance impact of wal archiving.
Msg-id: 8a1bfe660911100852l5573ab30ne91c6d8937dc836b@mail.gmail.com
In reply to: Re: limiting performance impact of wal archiving.  (Greg Smith <greg@2ndquadrant.com>)
List: pgsql-performance
On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith <greg@2ndquadrant.com> wrote:
> Laurent Laborde wrote:
>>
>> It is on a separate array which does everything but tablespace (on a
>> separate array) and indexspace (another separate array).
>>
>
> On Linux, the types of writes done to the WAL volume (where writes are
> constantly being flushed) require the WAL volume not be shared with anything
> else for that to perform well.  Typically you'll end up with other things
> being written out too, because the OS can't selectively flush just the WAL
> data.  The whole "write barriers" implementation should fix that, but in
> practice rarely does.
>
> If you put many drives into one big array, somewhere around 6 or more
> drives, at that point you might put the WAL on that big volume too and be OK
> (presuming a battery-backed cache which you have).  But if you're carving up
> array sections so finely for other purposes, it doesn't sound like your WAL
> data is on a big array.  Mixed onto a big shared array or single dedicated
> disks (RAID1) are the two WAL setups that work well, and if I have a bunch
> of drives I personally always prefer a dedicated drive mainly because it
> makes it easy to monitor exactly how much WAL activity is going on by
> watching that drive.

On the "new" slave i have 6 disk in raid-10 and 2 disk in raid-1.
I tought about doing the same thing with the master.
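Out of curiosity: for the "watch the dedicated WAL drive" part, is something
like this roughly what you mean? (Just a sketch; I'm assuming the RAID-1 pair
shows up as a single block device, e.g. sdb.)

  # extended per-device stats every 5 seconds (values in kB);
  # watch w/s, wkB/s and %util on whatever device holds the WAL
  iostat -dxk 5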


>> Well, actually, I also changed the configuration to synchronous_commit=off.
>> It probably was *THE* problem with checkpoint and archiving :)
>>
>
> This is basically turning off the standard WAL implementation for one where
> you'll lose some data if there's a crash.  If you're OK with that, great; if
> not, expect to lose some number of transactions if the server ever goes down
> unexpectedly when configured like this.

I have 1 spare dedicated to hot standby, doing nothing but waiting for
the master to fail, plus 2 spare candidates for cluster mastering.

In theory, I could even disable fsync and all the "safety" features on the master.
In practice, I'd rather not have to use Slony's failover capabilities
if I can avoid it :)
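Also, if I read the docs correctly, synchronous_commit doesn't have to be
global: it can be turned off per transaction, so only the writes I'm willing
to lose run with it off. Something like this (sketch only):

  BEGIN;
  SET LOCAL synchronous_commit TO off;  -- only this transaction risks being lost on a crash
  -- ... non-critical writes here ...
  COMMIT;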

> Generally if checkpoints and archiving are painful, the first thing to do is
> to increase checkpoint_segments to a very high amount (>100), increase
> checkpoint_timeout too, and push shared_buffers up to be a large chunk of
> memory.

shared_buffers is 2GB.
I'll reread the documentation about checkpoint_segments.
Thx.
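For the archives, this is what I understood your advice to translate to in
postgresql.conf (the numbers are just my starting point to experiment with,
not a recommendation; changing shared_buffers still needs a restart):

  shared_buffers = 2GB          # already set
  checkpoint_segments = 128     # well above 100, as suggested, to spread checkpoints out
  checkpoint_timeout = 30min    # up from the 5min default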

> Disabling synchronous_commit should be a last resort if your
> performance issues are so bad you have no choice but to sacrifice some data
> integrity just to keep things going, while you rearchitect to improve
> things.
>
>> e.g.: historically, we use JFS with LVM on Linux, from the good old days
>> when IO wasn't a problem.
>> I heard that ext3 is no better for PostgreSQL. What else? XFS?
>>
>
> You never want to use LVM under Linux if you care about performance.  It
> adds a bunch of overhead that drops throughput no matter what, and it's
> filled with limitations.  For example, I mentioned write barriers being one
> way to interleave WAL writes with other types, without having to write the
> whole filesystem cache out.  Guess what: they don't work at all if you're
> using LVM.  Much like using virtual machines, LVM is an approach
> only suitable for low to medium performance systems where your priority is
> easier management rather than speed.

*doh* !!
Everybody told me "nooo! LVM is OK, no perceptible overhead, etc."
Are you 100% sure about LVM? I'll happily trash it :)
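Before trashing it, I'll double-check which of our filesystems actually sit
on LVM, probably with something like this (assuming the LVM2 tools are
installed):

  lvs                  # list logical volumes
  mount | grep mapper  # filesystems mounted from /dev/mapper (i.e. on LVM)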

> Given the current quality of Linux code, I hesitate to use anything but ext3
> because I consider that just barely reliable enough even as the most popular
> filesystem by far.  JFS and XFS have some benefits to them, but none so
> compelling to make up for how much less testing they get.  That said, there
> seem to be a fair number of people happily running high-performance
> PostgreSQL instances on XFS.

Thx for the info :)

--
ker2x
