Discussion: pg_dump and XID limit

pg_dump and XID limit

From:
Elliot Chance
Date:
Hi,

This is a hypothetical problem but not an impossible situation. Just curious about what would happen.

Let's say you have an OLTP server that stays very busy with a large database. In this large database you have one or more
tables on super-fast storage, like a Fusion-io card, handling (for the sake of argument) 1 million transactions
per second.

Even though only one or a few tables are using almost all of the I/O, pg_dump has to export a consistent snapshot of all
the tables to somewhere else every 24 hours. But because it's such a large dataset (or perhaps just because of network congestion)
the daily backup takes 2 hours.

Here's the question: during those 2 hours, more than 4 billion transactions could have occurred, so what's going to happen
to your backup and/or database?

- Elliot
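
A nightly dump of the kind described might be no more than a cron-driven sketch like the following (the database name and backup path are made up for illustration):

    #!/bin/sh
    # Nightly logical backup of a hypothetical database "bigdb".
    # pg_dump runs inside a single transaction, so the dump is consistent
    # as of the moment it starts, no matter how long it takes to finish.
    pg_dump -Fc -f /backups/bigdb_$(date +%Y%m%d).dump bigdb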

Re: pg_dump and XID limit

From:
Tom Lane
Date:
Elliot Chance <elliotchance@gmail.com> writes:
> This is a hypothetical problem but not an impossible situation. Just curious about what would happen.

> Let's say you have an OLTP server that stays very busy with a large database. In this large database you have one or more tables on super-fast storage, like a Fusion-io card, handling (for the sake of argument) 1 million transactions per second.

> Even though only one or a few tables are using almost all of the I/O, pg_dump has to export a consistent snapshot of all the tables to somewhere else every 24 hours. But because it's such a large dataset (or perhaps just because of network congestion) the daily backup takes 2 hours.

> Here's the question: during those 2 hours, more than 4 billion transactions could have occurred, so what's going to happen to your backup and/or database?

The DB will shut down to prevent wraparound once it gets 2 billion XIDs
in front of the oldest open snapshot.

            regards, tom lane
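
One way to watch how far a cluster is from that limit is to check the age of each database's oldest transaction ID; a rough monitoring sketch:

    #!/bin/sh
    # Show how many XIDs each database's oldest unfrozen rows lag behind
    # the current transaction counter. PostgreSQL forces anti-wraparound
    # vacuums as this age grows and stops accepting new transactions well
    # before the ~2 billion mark mentioned above.
    psql -c "SELECT datname, age(datfrozenxid) AS xid_age
             FROM pg_database
             ORDER BY xid_age DESC;"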

Re: pg_dump and XID limit

From:
Elliot Chance
Date:
On 24/11/2010, at 5:07 PM, Tom Lane wrote:

> Elliot Chance <elliotchance@gmail.com> writes:
>> This is a hypothetical problem but not an impossible situation. Just curious about what would happen.
>
>> Let's say you have an OLTP server that stays very busy with a large database. In this large database you have one or more tables on super-fast storage, like a Fusion-io card, handling (for the sake of argument) 1 million transactions per second.
>
>> Even though only one or a few tables are using almost all of the I/O, pg_dump has to export a consistent snapshot of all the tables to somewhere else every 24 hours. But because it's such a large dataset (or perhaps just because of network congestion) the daily backup takes 2 hours.
>
>> Here's the question: during those 2 hours, more than 4 billion transactions could have occurred, so what's going to happen to your backup and/or database?
>
> The DB will shut down to prevent wraparound once it gets 2 billion XIDs
> in front of the oldest open snapshot.
>
>             regards, tom lane

Wouldn't that mean at some point it would be advisable to be using 64-bit transaction IDs? Or would that change too much
of the codebase?

Re: pg_dump and XID limit

From:
Vladimir Rusinov
Date:


On Wed, Nov 24, 2010 at 12:59 PM, Elliot Chance <elliotchance@gmail.com> wrote:
> Elliot Chance <elliotchance@gmail.com> writes:
>> This is a hypothetical problem but not an impossible situation. Just curious about what would happen.
>
>> Let's say you have an OLTP server that stays very busy with a large database. In this large database you have one or more tables on super-fast storage, like a Fusion-io card, handling (for the sake of argument) 1 million transactions per second.
>
>> Even though only one or a few tables are using almost all of the I/O, pg_dump has to export a consistent snapshot of all the tables to somewhere else every 24 hours. But because it's such a large dataset (or perhaps just because of network congestion) the daily backup takes 2 hours.
>
>> Here's the question: during those 2 hours, more than 4 billion transactions could have occurred, so what's going to happen to your backup and/or database?
>
> The DB will shut down to prevent wraparound once it gets 2 billion XIDs
> in front of the oldest open snapshot.
>
>                       regards, tom lane

> Wouldn't that mean at some point it would be advisable to be using 64-bit transaction IDs? Or would that change too much of the codebase?

I think it would be advisable not to use pg_dump under such a load. Use fs- or storage-level snapshots instead.

--
Vladimir Rusinov
http://greenmice.info/
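
A rough sketch of that approach, assuming the data directory sits on a hypothetical LVM volume and using the backup functions available in PostgreSQL of that era:

    #!/bin/sh
    # Storage-level backup sketch; /dev/vg0/pgdata is a made-up volume name.
    # pg_start_backup/pg_stop_backup bracket the base backup so the snapshot
    # can be restored cleanly with WAL replay, however long the copy takes.
    psql -c "SELECT pg_start_backup('nightly-snapshot');"
    lvcreate --snapshot --size 10G --name pgdata_snap /dev/vg0/pgdata
    psql -c "SELECT pg_stop_backup();"
    # Mount the snapshot volume, copy it off at leisure, then drop it:
    #   lvremove /dev/vg0/pgdata_snap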

Re: pg_dump and XID limit

From:
"Kevin Grittner"
Date:
Vladimir Rusinov <vladimir@greenmice.info> wrote:

> I think it would be advisable not to use pg_dump on such load.

Agreed.

> Use fs- or storage-level snapshots instead.

Or PITR backup techniques.  Or hot/warm standby.  Or streaming
replication.  Or one of the many good trigger-based replication
products.  Just about any of the alternatives would be better than
pg_dump in the described scenario.  If you leave any transaction
open for hours during a heavy OLTP transaction load, you're probably
going to get unpleasant bloat in your database.

-Kevin
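
For the PITR route, the core of it is continuous WAL archiving plus periodic base backups; a minimal configuration sketch ($PGDATA is assumed to point at the data directory, the /archive path is made up, and parameter names are as of PostgreSQL 9.0):

    #!/bin/sh
    # Append minimal WAL-archiving settings to postgresql.conf.
    # With archiving in place, a base backup plus the archived WAL lets
    # you restore to any point in time without holding a dump snapshot
    # open for hours.
    {
        echo "wal_level = archive"
        echo "archive_mode = on"
        echo "archive_command = 'cp %p /archive/%f'"
    } >> "$PGDATA/postgresql.conf"
    # archive_mode and wal_level take effect only after a server restart.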

Re: pg_dump and XID limit

From:
Tom Lane
Date:
Elliot Chance <elliotchance@gmail.com> writes:
> Wouldn't that mean at some point it would be advisable to be using 64-bit transaction IDs? Or would that change too much of the codebase?

It's not so much "too much of the codebase" as "nobody wants another 8
bytes added to per-row overhead".  Holding a transaction open for
circa 2G transactions would be disastrous anyway, because of table bloat
from inability to reclaim dead rows.

            regards, tom lane
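
One way to catch such long-held snapshots before they do damage is to watch transaction start times in pg_stat_activity; a sketch (column names as in recent PostgreSQL releases; older ones used procpid and current_query):

    #!/bin/sh
    # List backends whose current transaction has been open the longest.
    # Transactions like these hold back VACUUM and cause the table bloat
    # described above.
    psql -c "SELECT pid, usename, now() - xact_start AS xact_age, state
             FROM pg_stat_activity
             WHERE xact_start IS NOT NULL
             ORDER BY xact_start;"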