Re: BUG #7521: Cannot disable WAL log while using pg_dump

From: Gezeala M. Bacuño II
Subject: Re: BUG #7521: Cannot disable WAL log while using pg_dump
Date:
Msg-id: CAJKO3mXPD0JSK8pMF42zvGZtB8cLC7-cwEAMVYK+JhKYsaY61A@mail.gmail.com
In response to: Re: BUG #7521: Cannot disable WAL log while using pg_dump  (Pavan Deolasee <pavan.deolasee@gmail.com>)
Responses: Re: BUG #7521: Cannot disable WAL log while using pg_dump  (Pavan Deolasee <pavan.deolasee@gmail.com>)
Re: BUG #7521: Cannot disable WAL log while using pg_dump  (Pavan Deolasee <pavan.deolasee@gmail.com>)
List: pgsql-bugs
On Thu, Sep 6, 2012 at 8:48 PM, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:
>
>
> On Fri, Sep 7, 2012 at 2:43 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
> wrote:
>>
>> Excerpts from Boy de Laat's message of Thu Sep 06 17:24:35 -0300 2012:
>> >
>> > At the time my backup starts I see a lot of WAL logs being generated?
>>
>> I guess we'd need to see what the generated WAL logs are, either with
>> xlogdump or XLOG_DEBUG turned on ...
>>
>
> Can a long-running pg_dump transaction have some ill effect on the amount of
> WAL generation? It can definitely cause unusual additional bloat if the
> database is being continuously updated during that period, but I'm not sure
> it can cause additional WAL.
>
> The OP probably needs to tell us what the size of the database is, what the
> access pattern looks like, how long it takes to dump the entire database, and
> how many more WAL files are generated during this process compared to the
> average rate.
>
> Thanks,
> Pavan
>

DB size: 3400509331216 bytes (3.09 TB)
A full-database pg_dump takes around 20 to 21 hours with options "-Fc -Z 1"
pg_dump output size: 391161548000 bytes (364.3 GB)
checkpoint_segments = 80
#checkpoint_timeout = 5min (default)
#checkpoint_completion_target = 0.5 (default)
version: PostgreSQL 8.4.12
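
For reference, the dump and the size figures above boil down to commands of
this shape (a sketch; the database name and output path are placeholders, not
the real ones):

# full-database dump, custom format, compression level 1 -- the "-Fc -Z 1" above
pg_dump -Fc -Z 1 -f /backup/mydb.dump mydb

# database size in bytes (the 3400509331216 figure)
psql -At -c "SELECT pg_database_size(current_database());" mydb

# size of the finished dump file in bytes (the 391161548000 figure)
ls -l /backup/mydb.dump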

From pg_controldata right after starting the instance:
Latest checkpoint location:           4441/5E681F38
Prior checkpoint location:            4441/5E67D140
Latest checkpoint's REDO location:    4441/5E681F38
Latest checkpoint's TimeLineID:       1
Latest checkpoint's NextXID:          0/419543166
Latest checkpoint's NextOID:          653512568
Latest checkpoint's NextMultiXactId:  107873
Latest checkpoint's NextMultiOffset:  219841

From pg_controldata after pg_dump:
Latest checkpoint location:           4450/7A14F280
Prior checkpoint location:            4450/7A14E018
Latest checkpoint's REDO location:    4450/7A14F280
Latest checkpoint's TimeLineID:       1
Latest checkpoint's NextXID:          0/419543166
Latest checkpoint's NextOID:          653512568
Latest checkpoint's NextMultiXactId:  107873
Latest checkpoint's NextMultiOffset:  219841
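
Doing the arithmetic on the two REDO locations (assuming the pre-9.3 WAL
layout, where each xlogid covers 0xFF000000 bytes because the last 16 MB
segment of every 4 GB logical file is skipped), the dump window produced on
the order of 64 GB of WAL:

# bytes of WAL between 4441/5E681F38 and 4450/7A14F280
echo $(( (0x4450 - 0x4441) * 0xFF000000 + (0x7A14F280 - 0x5E681F38) ))
# 64637162312 bytes, i.e. roughly 3850 segments of 16 MB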

Background:
The cluster used for the backup was cloned from a zfs snapshot (taken
while the db is running, without issuing pg_start_backup/pg_stop_backup).
WAL is replayed and the cluster usually starts up in a minute or so.
After pg_dump finishes, the clone's zfs USED property is 285G -- that is
how much the clone's blocks have diverged from the snapshot over the
course of the pg_dump run. Yesterday's backup clone was 280G.
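
The clone workflow is roughly the following (dataset names here are
illustrative, not the real ones):

# snapshot the live data directory while PostgreSQL is running
zfs snapshot tank/pgdata@nightly

# clone the snapshot and start a throwaway instance on it for pg_dump
zfs clone tank/pgdata@nightly tank/pgdata-backup

# after the dump, USED shows how far the clone has diverged (the 285G above)
zfs get used tank/pgdata-backup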

Aside from pg_dump, a cron job runs 'select * from
pg_stat_activity' every 9 minutes. The instance is cloned solely for
backup purposes, gets no other modifications, and has autovacuum disabled.
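
That cron job is just a crontab entry of roughly this shape (schedule
notation and database name are illustrative):

# poll pg_stat_activity roughly every 9 minutes on the clone
*/9 * * * *  psql -X -c "select * from pg_stat_activity" mydb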


