Re: wal-size limited to 16MB - Performance issue for subsequent backup

From: Jeff Janes
Subject: Re: wal-size limited to 16MB - Performance issue for subsequent backup
Date:
Msg-id CAMkU=1w8H_NhWfaTFJVx2Ya53e6dH1iwEYzBRh2oxtZtuuxs3A@mail.gmail.com
In reply to: wal-size limited to 16MB - Performance issue for subsequent backup  (jesper@krogh.cc)
List: pgsql-hackers
On Mon, Oct 20, 2014 at 12:03 PM, <jesper@krogh.cc> wrote:
Hi.

One of our "production issues" is that the system generates lots of
WAL files — "lots" being 151952 files over the last 24h, which is about
2.4TB worth of WAL. I wouldn't say that is an issue by itself,
and the system does indeed work fine. We do subsequently gzip the files to
limit actual disk usage, which shrinks them to roughly 30-50% of their original size.

That being said, along comes the backup, scheduled once a day, which tries to
read off these WAL files. To the backup this looks like "an awful lot of
small files": our backup utilizes a single thread to read those files
and levels off at 30-40MB/s from a 21-drive RAID 50 of
rotating drives, which is quite bad. That causes a daily incremental run
to take on the order of 24h. Differentials, which pick up larger deltas, and
fulls are even worse.

Why not have archive_command (which gets the files while they are still cached) put the files directly into their final destination on the backup server?
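As a rough illustration of that suggestion (hostname and path below are placeholders, not anything from the thread), archive_command can ship each segment straight to the archive host while it is still in the page cache:

```
# postgresql.conf — hypothetical example only
archive_mode = on
archive_command = 'rsync -a %p backupserver:/wal_archive/%f'
```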


Suggestions are welcome. An archive_command/restore_command pair that could
combine/split WAL segments might be the easiest workaround, but how about
crash-safety?

I think you would just have to combine them by looking at the file name and seeking to a specific spot in the large file (rather than just appending to it), so that if the archive_command fails and gets rerun, it will still end up in the correct place.  I don't see what other crash-safety issues you would have beyond the ones you already have.  You would want to do the compression after combining, not before, so that all segments are of predictable size.

It should be pretty easy as long as you want your combined files to consist of either 16 or 256 (or 255 in older versions) WAL files.

You would have to pass through directly any files not matching the filename pattern of ordinary WAL files.
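The approach above can be sketched as follows. This is not Jeff's code, just a minimal illustration of the seek-by-filename idea: the segment number is taken from the low 8 hex digits of the WAL file name, the combined file is named after the first segment in its group, and the helper seeks to a fixed offset so a rerun after a failure is idempotent. `GROUP`, `WAL_SEG_SIZE`, and the `.combined` naming are assumptions for the sketch.

```python
import os
import re
import shutil

WAL_SEG_SIZE = 16 * 1024 * 1024           # default WAL segment size
GROUP = 16                                # segments per combined file
WAL_NAME = re.compile(r'^[0-9A-F]{24}$')  # timeline + log + seg, all hex

def archive(src_path, dest_dir):
    name = os.path.basename(src_path)
    if not WAL_NAME.match(name):
        # .history/.backup files etc. are passed through unchanged
        shutil.copy(src_path, os.path.join(dest_dir, name))
        return
    segno = int(name[16:], 16)            # low 8 hex digits of the name
    first = segno - segno % GROUP         # first segment in this group
    combined = '%s%08X.combined' % (name[:16], first)
    offset = (segno % GROUP) * WAL_SEG_SIZE
    # O_CREAT without O_TRUNC: never clobber already-archived segments
    fd = os.open(os.path.join(dest_dir, combined),
                 os.O_WRONLY | os.O_CREAT, 0o600)
    with os.fdopen(fd, 'wb') as out:
        out.seek(offset)                  # overwrite in place, not append
        with open(src_path, 'rb') as seg:
            shutil.copyfileobj(seg, out)
```

Because the write always lands at the offset computed from the file name, rerunning the command after a crash simply rewrites the same 16MB region.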

Cheers,

Jeff
