Re: pg_dump and XID limit

From Vladimir Rusinov
Subject Re: pg_dump and XID limit
Date
Msg-id AANLkTikZjY0i2Rxoe1-NdyzBwN2yiD4DnCM+fTXH-z_8@mail.gmail.com
In reply to Re: pg_dump and XID limit  (Elliot Chance <elliotchance@gmail.com>)
Responses Re: pg_dump and XID limit  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List pgsql-admin


On Wed, Nov 24, 2010 at 12:59 PM, Elliot Chance <elliotchance@gmail.com> wrote:
> Elliot Chance <elliotchance@gmail.com> writes:
>> This is a hypothetical problem but not an impossible situation. Just curious about what would happen.
>
>> Let's say you have an OLTP server that keeps very busy on a large database. In this large database you have one or more tables on super-fast storage like a Fusion-io card which is handling (for the sake of argument) 1 million transactions per second.
>
>> Even though only one or a few tables are using almost all of the IO, pg_dump has to export a consistent snapshot of all the tables to somewhere else every 24 hours. But because it's such a large dataset (or perhaps just network congestion), the daily backup takes 2 hours.
>
>> Here's the question: during those 2 hours more than 4 billion transactions could have occurred, so what's going to happen to your backup and/or database?
>
> The DB will shut down to prevent wraparound once it gets 2 billion XIDs
> in front of the oldest open snapshot.
>
>                       regards, tom lane

> Wouldn't that mean at some point it would be advisable to use 64-bit transaction IDs? Or would that change too much of the codebase?
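
To put rough numbers on Tom's point first (a quick Python sketch; the 1 million TPS and 2-hour figures are the hypothetical ones from this thread, and the ~2 billion headroom is the limit Tom describes):

# Back-of-the-envelope arithmetic for the scenario above: how long can a
# 1M-TPS workload run before an open pg_dump snapshot forces an
# anti-wraparound shutdown?

XID_SPACE = 2 ** 32         # 32-bit transaction IDs
HEADROOM = XID_SPACE // 2   # ~2 billion XIDs usable in front of the
                            # oldest open snapshot before shutdown
TPS = 1_000_000             # assumed transactions per second
DUMP_SECONDS = 2 * 60 * 60  # the 2-hour backup window

consumed = TPS * DUMP_SECONDS
print(f"XIDs consumed during the dump: {consumed:,}")   # 7,200,000,000
print(f"Headroom before shutdown:      {HEADROOM:,}")   # 2,147,483,648
print(f"Time until shutdown: ~{HEADROOM / TPS / 60:.0f} minutes")  # ~36

So at that rate the dump would outlive the available XID headroom roughly three times over, and the server would stop accepting transactions about 36 minutes in.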

I think it would be advisable not to use pg_dump under such a load. Use filesystem- or storage-level snapshots instead.
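
For illustration, here is a minimal Python sketch of that approach, assuming an LVM-backed data directory; the volume group, snapshot name and size, and the superuser name below are invented, and pg_start_backup()/pg_stop_backup() are the exclusive backup-mode calls (renamed pg_backup_start()/pg_backup_stop() in PostgreSQL 15):

import subprocess

def psql(sql: str) -> None:
    """Run a single SQL statement via psql as the cluster superuser."""
    subprocess.run(["psql", "-U", "postgres", "-c", sql], check=True)

# Put the cluster into backup mode so the snapshot is safe to restore
# from even though writes continue while it is taken.
psql("SELECT pg_start_backup('nightly-fs-snapshot');")
try:
    # Take an atomic snapshot of the logical volume holding the data
    # directory (device and names here are hypothetical).
    subprocess.run(
        ["lvcreate", "--snapshot", "--size", "10G",
         "--name", "pgdata_snap", "/dev/vg0/pgdata"],
        check=True,
    )
finally:
    # Always leave backup mode, even if the snapshot failed.
    psql("SELECT pg_stop_backup();")

# /dev/vg0/pgdata_snap can now be mounted read-only and copied off-box
# at leisure, then dropped with lvremove. Unlike pg_dump, this holds no
# long-lived snapshot transaction open, so XID consumption is not a problem.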

--
Vladimir Rusinov
http://greenmice.info/
