Discussion: Out of space making backup
When making a backup of my database I run out of space. I tell it to
put the backup file on my K: drive, which has tons of free space, but
during the backup PostgreSQL creates a temporary folder on my C: drive
where it initially writes data. I don't have enough space on my C:
drive for all the data, so I get an out-of-space error. Is there a way
to have PostgreSQL use one of my drives with lots of free space for
the temporary folder? I'm using version 8.3.9-1. Thanks.
Farhan Malik wrote:
> When making a backup of my database I run out of space. I tell it to
> put the backup file on my K: drive, which has tons of free space, but
> during the backup PostgreSQL creates a temporary folder on my C: drive
> where it initially writes data. I don't have enough space on my C:
> drive for all the data, so I get an out-of-space error. Is there a
> way to have PostgreSQL use one of my drives with lots of free space
> for the temporary folder? I'm using version 8.3.9-1.

What tool are you using for this backup process?
Farhan Malik <malikpiano@gmail.com> writes:
> When making a backup of my database I run out of space. I tell it to
> put the backup file on my K: drive, which has tons of free space, but
> during the backup PostgreSQL creates a temporary folder on my C: drive
> where it initially writes data. I don't have enough space on my C:
> drive for all the data, so I get an out-of-space error. Is there a
> way to have PostgreSQL use one of my drives with lots of free space
> for the temporary folder? I'm using version 8.3.9-1.

Reading between the lines, I suspect you are trying to use 'tar' output
format, which does have a need to make temp files that can be large.
If I guessed right, I'd suggest using 'custom' format instead. There
really is no advantage to tar format, and several disadvantages besides
this one.

			regards, tom lane
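[For anyone following along: the format switch Tom refers to is pg_dump's
-F option. A sketch of both invocations — the database name "mydb" and the
K: paths are placeholders, not from the thread:]

```shell
# tar format (-F t): the tar archiver stages table data in temp files
# before assembling the archive, which is what fills the C: drive here.
pg_dump -F t -f K:\backup.tar mydb

# custom format (-F c): compressed, written directly to the output file
# with no large intermediate temp files, and selectively restorable:
pg_dump -F c -f K:\backup.dump mydb
pg_restore -d mydb K:\backup.dump
```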
> Reading between the lines, I suspect you are trying to use 'tar' output
> format, which does have a need to make temp files that can be large.
> If I guessed right, I'd suggest using 'custom' format instead. There
> really is no advantage to tar format, and several disadvantages besides
> this one.
>
> 			regards, tom lane

That sounds right. The error I get from the software is:

2009/12/25 10:21:40.812: [00001EA8][ThreadBackupRestore] Restore Error:
pg_dump: [tar archiver] could not write to output file: No space left
on device

Is there a way to have PostgreSQL put those large temp files on a
different drive? I only have 4 GB free on my C: drive, and once the
temp files go over that I get an out-of-space error. I have tons of
free space on other drives, including the one where I am asking that
the final backup.zip file go.

As for changing the backup to a custom format, I will pass that on to
the developer of the software.
2009/12/25 Farhan Malik <malikpiano@gmail.com>:
>> Reading between the lines, I suspect you are trying to use 'tar' output
>> format, which does have a need to make temp files that can be large.
>> If I guessed right, I'd suggest using 'custom' format instead. There
>> really is no advantage to tar format, and several disadvantages besides
>> this one.
>>
>> 			regards, tom lane
>
> That sounds right. The error I get from the software is:
> 2009/12/25 10:21:40.812: [00001EA8][ThreadBackupRestore] Restore Error:
> pg_dump: [tar archiver] could not write to output file: No space left
> on device
>
> Is there a way to have PostgreSQL put those large temp files on a
> different drive? I only have 4 GB free on my C: drive, and once the
> temp files go over that I get an out-of-space error. I have tons of
> free space on other drives, including the one where I am asking that
> the final backup.zip file go.
>
> As for changing the backup to a custom format, I will pass that on to
> the developer of the software.

I do backups semi-manually:

1. run select pg_start_backup('some-name') (in psql, logged in as postgres),
2. then start a tar of /var/lib/pgsql/data/ to stdout and pipe it to
   another server over ssh,
3. then finally run select pg_stop_backup().

e.g. my two scripts (backup.sh calls back1.sh):

[root@www pgsql]# cat back1.sh
#!/bin/bash
cd /var/lib/pgsql
ssh lead touch /var/lib/postgresql/backups/start_backup
tar zcf - data | ssh lead "cat > /var/lib/postgresql/backups/20091223.tgz"
echo "DONE"

[root@www pgsql]# cat backup.sh
#!/bin/bash
cd /var/lib/pgsql
./back1.sh > backups/backup.log 2>&1 </dev/null &

--
Brian Modra
Land line: +27 23 5411 462
Mobile: +27 79 69 77 082
5 Jan Louw Str, Prince Albert, 6930
Postal: P.O. Box 2, Prince Albert 6930, South Africa
http://www.zwartberg.com/
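[Note that the scripts as posted never actually issue the
pg_start_backup()/pg_stop_backup() calls step 1 and step 3 describe; the
ssh touch appears to stand in for a marker. A hedged sketch combining all
three steps, assuming local psql access as postgres and Brian's backup
host name "lead":]

```shell
#!/bin/sh
# Put the cluster into backup mode, copy the data directory off-host,
# then end backup mode. Restoring such a copy also requires the WAL
# segments archived between the two calls.
psql -U postgres -c "SELECT pg_start_backup('nightly');"
tar zcf - -C /var/lib/pgsql data |
    ssh lead "cat > /var/lib/postgresql/backups/$(date +%Y%m%d).tgz"
psql -U postgres -c "SELECT pg_stop_backup();"
```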
Farhan Malik wrote:
> That sounds right. The error I get from the software is:
> 2009/12/25 10:21:40.812: [00001EA8][ThreadBackupRestore] Restore Error:
> pg_dump: [tar archiver] could not write to output file: No space left
> on device
>
> Is there a way to have PostgreSQL put those large temp files on a
> different drive? I only have 4 GB free on my C: drive, and once the
> temp files go over that I get an out-of-space error. I have tons of
> free space on other drives, including the one where I am asking that
> the final backup.zip file go.
>
> As for changing the backup to a custom format, I will pass that on to
> the developer of the software.

Wild guess: the value of the TEMP environment variable at the time the
backup software is started determines where that temporary file is
written. On MS Windows this usually defaults to
%USERPROFILE%\Local Settings\Temp\
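[The effect is easy to demonstrate. A POSIX sketch — on Windows you would
instead set TEMP/TMP, e.g. `set TEMP=K:\pgtemp`, in the environment the
backup software is launched from:]

```shell
#!/bin/sh
# Programs that ask the C runtime for a temp directory (as pg_dump's
# tar archiver does) follow the environment: point the variable at a
# volume with free space and the large intermediate files land there.
demo_dir=$(mktemp -d)            # stand-in for a roomy drive
TMPDIR="$demo_dir"; export TMPDIR
f=$(mktemp)                      # now created under $TMPDIR, not /tmp
case "$f" in
  "$demo_dir"/*) echo "temp file redirected" ;;
  *)             echo "temp file NOT redirected" ;;
esac
```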
Thanks. Changing the environment variable has solved that issue. It
ultimately required 13 GB of free space to create a 2 GB backup file.

On Fri, Dec 25, 2009 at 1:19 PM, John R Pierce <pierce@hogranch.com> wrote:
> Farhan Malik wrote:
>> That sounds right. The error I get from the software is:
>> 2009/12/25 10:21:40.812: [00001EA8][ThreadBackupRestore] Restore Error:
>> pg_dump: [tar archiver] could not write to output file: No space left
>> on device
>>
>> Is there a way to have PostgreSQL put those large temp files on a
>> different drive? I only have 4 GB free on my C: drive, and once the
>> temp files go over that I get an out-of-space error. I have tons of
>> free space on other drives, including the one where I am asking that
>> the final backup.zip file go.
>>
>> As for changing the backup to a custom format, I will pass that on to
>> the developer of the software.
>
> Wild guess: the value of the TEMP environment variable at the time the
> backup software is started determines where that temporary file is
> written. On MS Windows this usually defaults to
> %USERPROFILE%\Local Settings\Temp\

--
Please note my new email address: malikpiano@gmail.com
On 26/12/2009 12:44 AM, Brian Modra wrote:
> use select pg_start_backup('some-name') (in psql, logged in as postgres),
> then start a tar of /var/lib/pgsql/data/ to stdout and pipe it to
> another server over ssh

This won't work on a Windows machine. Windows does not permit files
that are open for write by one process to be opened by another, unless
the first process makes special efforts to permit it.

In general, people get around this by using the Volume Shadow Copy
Service (VSS) via dedicated backup software. This takes a consistent
snapshot of the file system and permits the backup software to access
that snapshot.

If you were going to take filesystem-level backups on Windows, that'd
be how you'd want to do it: have a pre-hook in your backup software
that called pg_start_backup() and a post-hook that called
pg_stop_backup(), letting the backup software handle the snapshot and
filesystem copy.

--
Craig Ringer
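[The hook arrangement Craig describes might look like this — the psql
invocations and the 'nightly' label are illustrative, and how hooks are
wired up depends entirely on the backup product:]

```shell
# pre-snapshot hook: put the cluster into backup mode so the data
# directory copied from the VSS snapshot is restorable
psql -U postgres -c "SELECT pg_start_backup('nightly');"

# ... backup software takes the VSS snapshot and copies the
#     PostgreSQL data directory from it ...

# post-snapshot hook: leave backup mode
psql -U postgres -c "SELECT pg_stop_backup();"
```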