Discussion: Table update: restore or replace?
I just updated a column in a table (postgresql-10.5) and accidentally applied
it to all 185 rows rather than just one. There is no active transaction to
roll back.

Each night I do a backup, and I started working on the database only a few
minutes ago. If I restore all contents of /var/lib/pgsql/10/data/, will this
restore the database's state before the botched update and the two previous
updates today?

Rich
On 5/14/19 1:46 PM, Rich Shepard wrote:
> I just updated a column in a table (postgresql-10.5) and accidentally
> applied it to all 185 rows rather than just one. There is no active
> transaction to roll back.
>
> Each night I do a backup and just started working on the database a few

A file level backup or database dump?

> minutes ago. If I restore all contents of
> /var/lib/pgsql/10/data/ will this restore the database's state before the
> botched update and two previous updates today?
>
> Rich

--
Adrian Klaver
adrian.klaver@aklaver.com
On Tue, 14 May 2019, Adrian Klaver wrote:
> A file level backup or database dump?

Adrian,

File level. April 20th is my most recent database dump because I forgot to
run it last Friday afternoon.

Rich
On 5/14/19 1:59 PM, Rich Shepard wrote:
> On Tue, 14 May 2019, Adrian Klaver wrote:
>> A file level backup or database dump?
>
> Adrian,
>
> File level. April 20th is my most recent database dump because I forgot to
> run it last Friday afternoon.

Does the table whose data you overwrote change much? If not, it might be
safer to just fetch it from the April 20th dump and then apply the changes
since then.

If you were to restore at the file level I would:

1) Stop the server.
2) Move the existing content of /var/lib/pgsql/10/data/ somewhere else, just
   to be safe.
3) Copy in the backup.

--
Adrian Klaver
adrian.klaver@aklaver.com
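[Editor's note: the three steps above can be sketched as a small shell script. The service name, PGDATA path, and backup location below are assumptions, and it defaults to a dry run that only prints the commands it would execute.]

```shell
#!/bin/sh
# Dry-run sketch of the file-level restore steps above.
# Service name, PGDATA, and BACKUP locations are assumptions.
set -eu
PGDATA="${PGDATA:-/var/lib/pgsql/10/data}"
BACKUP="${BACKUP:-/backup/latest/var/lib/pgsql/10/data}"
DRY_RUN="${DRY_RUN:-1}"     # leave at 1 to only print the commands
PLAN=""

run() {
    PLAN="$PLAN$*;"
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run systemctl stop postgresql-10                    # 1) stop the server
run mv "$PGDATA" "$PGDATA.saved.$(date +%Y%m%d)"    # 2) move existing data dir aside
run cp -a "$BACKUP" "$PGDATA"                       # 3) copy in the backup
run systemctl start postgresql-10
```

With DRY_RUN=1 (the default) nothing touches the data directory; the printed plan can be reviewed before re-running with DRY_RUN=0.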
Adrian Klaver <adrian.klaver@aklaver.com> writes:
> If you were to restore at the file level I would:
> 1) Stop the server.
> 2) Move the existing content of /var/lib/pgsql/10/data/ somewhere else
>    just to be safe.
> 3) Copy in the backup.

Yeah. You can't just selectively copy files, because the data files are
dependent on the contents of the pg_xact transaction log; it's all or
nothing.

Also, I don't know what method you've been using to make file-level backups,
but they're really pretty worthless unless you (a) stop the server or
(b) use a filesystem snapshot. Otherwise you're very likely to have
inconsistent data.

regards, tom lane
On Tue, 14 May 2019, Adrian Klaver wrote:
> Does the table whose data you overwrote change much?

Adrian,

Yes. It's in my business tracking database, so it's updated almost every day.

> If not, it might be safer to just fetch it from the April 20th dump and
> then apply the changes since then.

The column I FUBAR'd holds e-mail addresses. What I'll do is set them all to
NULL, find the e-mail addresses I have outside the database, and re-enter
them with a large UPDATE statement.

And I think I'll set up a cron job to do a database dump each day, with the
date appended to the file name in the bash shell script.

Thanks,

Rich
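[Editor's note: since the original mistake was an UPDATE with no WHERE clause and no open transaction, the repair Rich describes is worth wrapping in an explicit BEGIN/COMMIT so a second slip can still be rolled back. A minimal sketch; the table and column names (contacts.email), the CSV of recovered addresses, and the database name are all assumptions.]

```shell
# Build the repair SQL; table/column names and the CSV path are assumptions.
SQL=$(cat <<'EOF'
BEGIN;
UPDATE contacts SET email = NULL;     -- clear the clobbered column
CREATE TEMP TABLE email_fix (contact_id integer, email text);
\copy email_fix FROM 'recovered_emails.csv' CSV
UPDATE contacts c SET email = f.email
  FROM email_fix f WHERE c.contact_id = f.contact_id;
-- inspect a few rows here; issue ROLLBACK instead of COMMIT if anything looks wrong
COMMIT;
EOF
)
echo "$SQL"
# To apply it:  echo "$SQL" | psql -d business
```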
On Tue, 14 May 2019, Tom Lane wrote:
> Yeah. You can't just selectively copy files, because the data files are
> dependent on the contents of the pg_xact transaction log; it's all or
> nothing.

Tom,

That's why I thought of copying the entire data/ directory.

> Also, I don't know what method you've been using to make file-level
> backups, but they're really pretty worthless unless you (a) stop the
> server or (b) use a filesystem snapshot. Otherwise you're very likely to
> have inconsistent data.

I run dirvish <http://www.dirvish.org/>, which runs each night starting at
00:30 when there's no activity (by me, at least) on the database.

Regards,

Rich
> From: Rich Shepard <rshepard@appl-ecosys.com>
> To: pgsql-general@lists.postgresql.org
> Date: 05/14/2019 05:19 PM
> Subject: [EXTERNAL] Re: Table update: restore or replace?
> And I think I'll set up a cron job to do a database dump each day with the
> date appended to the file name in the bash shell script.
Might I suggest you set up proper backups with continuous archiving instead? If you had those, you would be able to restore this database back to the point right before you issued the bad update statement.
I'd highly recommend pgBackRest for the task.
Brad.
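[Editor's note: a minimal pgBackRest setup along the lines Brad suggests looks roughly like this; the stanza name, repository path, and retention are assumptions, and the official pgBackRest documentation should be consulted for the details.]

```ini
# /etc/pgbackrest.conf -- minimal single-host sketch (paths are assumptions)
[global]
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2

[main]
pg1-path=/var/lib/pgsql/10/data
```

postgresql.conf then needs `archive_mode = on` and `archive_command = 'pgbackrest --stanza=main archive-push %p'`. After `pgbackrest --stanza=main stanza-create` and regular `pgbackrest --stanza=main backup` runs, a bad UPDATE can be undone with a point-in-time restore such as `pgbackrest --stanza=main --type=time "--target=2019-05-14 13:40:00" restore`.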
On 5/14/19 2:19 PM, Rich Shepard wrote:
> On Tue, 14 May 2019, Adrian Klaver wrote:
>> Does the table whose data you overwrote change much?
>
> Adrian,
>
> Yes. It's in my business tracking database so it's updated almost every
> day.
>
>> If not, it might be safer to just fetch it from the April 20th dump and
>> then apply the changes since then.
>
> The column I FUBAR'd holds e-mail addresses. What I'll do is set them all
> to NULL, find the e-mail addresses I have outside the database, and
> re-enter them with a large UPDATE statement.

Or just fetch them from the table data you have saved in the dump file.

> And I think I'll set up a cron job to do a database dump each day with the
> date appended to the file name in the bash shell script.

Yeah, that will save a lot of heartburn :)

--
Adrian Klaver
adrian.klaver@aklaver.com
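[Editor's note: pulling one table's rows back out of a plain-format dump, as Adrian suggests, can be done with a little awk, since pg_dump writes each table's data between a `COPY ... FROM stdin;` line and a terminating `\.` line. A sketch; the table name and dump filename in the usage line are assumptions.]

```shell
# Extract one table's COPY block from a plain-format pg_dump/pg_dumpall file.
extract_table() {
    # $1 = qualified table name, $2 = dump file
    awk -v tbl="$1" '$0 ~ "^COPY " tbl " " { inblock = 1 }
                     inblock              { print }
                     /^\\\.$/             { inblock = 0 }' "$2"
}
# Usage: extract_table public.contacts pg_dumpall-2019-04-20.sql > contacts.data
```

The extracted block can then be reviewed, trimmed to the needed column, or replayed into a scratch table for the repair UPDATE.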
On 5/14/19 3:59 PM, Rich Shepard wrote:
> On Tue, 14 May 2019, Adrian Klaver wrote:
>> A file level backup or database dump?
>
> Adrian,
>
> File level. April 20th is my most recent database dump because I forgot to
> run it last Friday afternoon.

Note that referring to file-level copies as dumps can be very confusing in
the Postgres world, because the pg_dump command does logical backups, and
those are what get called dump files.

--
Angular momentum makes the world go 'round.
On Tue, 14 May 2019, Brad Nicholson wrote:
> Might I suggest you set up proper backups with continuous archiving
> instead? If you had those, you would be able to restore this database back
> to the point right before you issued the bad update statement.
>
> I'd highly recommend pgBackRest for the task.

Brad,

I'll certainly take a look at pgBackRest.

Thanks,

Rich
On Tue, 14 May 2019, Adrian Klaver wrote:
> Or just fetch them from the table data you have saved in the dump file.

Adrian,

Sigh. I should have thought of that. A great time saver.

Thanks,

Rich
On Tue, 14 May 2019, Adrian Klaver wrote:
> Or just fetch them from the table data you have saved in the dump file.

Adrian,

This did save a lot of time! Thanks for the pointer.

>> And I think I'll set up a cron job to do a database dump each day with the
>> date appended to the file name in the bash shell script.
>
> Yeah, that will save a lot of heartburn :)

Done. Cron will run the shell script at 02:15 each day. This will do until I
build, install, and configure pgBackRest (which consumes less disk space
than the daily pg_dumpall files do).

Best regards,

Rich
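[Editor's note: the nightly job Rich describes can be as small as this; the backup directory, dump user, and 30-day retention window are assumptions, and the dump command itself is left commented since it needs a running cluster.]

```shell
#!/bin/sh
# Nightly pg_dumpall with the date in the file name (paths/user are assumptions).
set -eu
BACKUP_DIR="${BACKUP_DIR:-/var/backups/pgsql}"
OUTFILE="$BACKUP_DIR/pg_dumpall-$(date +%Y-%m-%d).sql.gz"
echo "writing $OUTFILE"
# pg_dumpall -U postgres | gzip > "$OUTFILE"
# find "$BACKUP_DIR" -name 'pg_dumpall-*.sql.gz' -mtime +30 -delete  # optional pruning
```

A crontab entry matching the 02:15 schedule would be `15 2 * * * /usr/local/bin/nightly-dump.sh` (script path is an assumption).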
Greetings,

* Rich Shepard (rshepard@appl-ecosys.com) wrote:
> That's why I thought of copying the entire data/ directory.

That isn't going to work, because things change in the data directory...

>> Also, I don't know what method you've been using to make file-level
>> backups, but they're really pretty worthless unless you (a) stop the
>> server or (b) use a filesystem snapshot. Otherwise you're very likely to
>> have inconsistent data.
>
> I run dirvish <http://www.dirvish.org/> which runs each night starting at
> 00:30 when there's no activity (by me, at least) on the database.

The database system is potentially doing things even then, so this isn't a
reliable backup solution. You really should be using a backup tool that's
been written specifically to work with PostgreSQL, and I wouldn't trust a
restore from a backup taken like this. I'd suggest you restore to a new
server (or another directory, at least...), try starting up PG, dump out the
table, and check that it's valid. Then switch to a backup system that
actually works with PG.

Thanks,

Stephen
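[Editor's note: Stephen's verify-before-trusting approach, sketched as commands; every path, the port, and the database/table names are assumptions, and the commands are left commented since they need a real backup and live PostgreSQL binaries.]

```shell
#!/bin/sh
# Sketch: bring the file-level backup up in a scratch location and verify it
# before trusting it. Paths, port, and names below are assumptions.
SCRATCH="${SCRATCH:-/tmp/pg_verify}"
echo "scratch cluster: $SCRATCH"
# cp -a /backup/latest/var/lib/pgsql/10/data "$SCRATCH"
# chmod 700 "$SCRATCH"                                # postgres refuses looser modes
# /usr/pgsql-10/bin/pg_ctl -D "$SCRATCH" -o '-p 5440' -w start
# pg_dump -p 5440 -d business -t contacts > /tmp/contacts_check.sql
# psql -p 5440 -d business -c 'SELECT count(*) FROM contacts'   # sanity check
# /usr/pgsql-10/bin/pg_ctl -D "$SCRATCH" stop
```

Running the scratch cluster on a non-default port keeps it from colliding with the production instance while the table is inspected.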