Re: block-level incremental backup

From: Stephen Frost
Subject: Re: block-level incremental backup
Msg-id: 20190422170805.GJ6197@tamriel.snowman.net
In response to: Re: block-level incremental backup  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: block-level incremental backup  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
Greetings,

* Robert Haas (robertmhaas@gmail.com) wrote:
> On Sat, Apr 20, 2019 at 4:32 PM Stephen Frost <sfrost@snowman.net> wrote:
> > Having been around for a while working on backup-related things, if I
> > was to implement the protocol for pg_basebackup today, I'd definitely
> > implement "give me a list" and "give me this file" rather than the
> > tar-based approach, because I've learned that people want to be
> > able to do parallel backups and that's a decent way to do that.  I
> > wouldn't set out and implement something new that there's just no hope
> > of making parallel.  Maybe the first write of pg_basebackup would still
> > be simple and serial since it's certainly more work to make a frontend
> > tool like that work in parallel, but at least the protocol would be
> > ready to support a parallel option being added later without being
> > rewritten.
> >
> > And that's really what I was trying to get at here- if we've got the
> > choice now to decide what this is going to look like from a protocol
> > level, it'd be great if we could make it able to support being used in a
> > parallel fashion, even if pg_basebackup is still single-threaded.
>
> I think we're getting closer to a meeting of the minds here, but I
> don't think it's intrinsically necessary to rewrite the whole method
> of operation of pg_basebackup to implement incremental backup in a
> sensible way.

It wasn't my intent to imply that the whole method of operation of
pg_basebackup would have to change for this.

> One could instead just do a straightforward extension
> to the existing BASE_BACKUP command to enable incremental backup.

Ok, how do you envision that?  As I mentioned up-thread, I'm concerned
that we're talking at too high a level here, and that's making the
discussion more difficult than it would be if we put together specific
ideas and then discussed them.

One way I can imagine extending BASE_BACKUP is to add an optional LSN
parameter and have the database server scan the entire cluster, sending
a tarball which contains essentially a 'diff' file of some kind for
each file where a diff can be constructed based on that LSN, plus the
complete contents of every other file that needs to be in the backup.
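
To make that a bit more concrete (this is purely illustrative syntax;
no such option exists today, and the option name and LSN are made up),
the existing replication command might simply grow an extra option:

    BASE_BACKUP LABEL 'nightly' PROGRESS INCREMENTAL_LSN '0/16B3748'

with the server sending back the usual tarball, except that relation
files would be replaced by small 'diff' files containing only those
blocks changed since the given LSN, while everything else is sent in
full.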

So, sure, that would work, but it couldn't be parallelized, and because
of that I don't think it'd end up being very exciting for the external
tools, though it would be fine for pg_basebackup.

On the other hand, if you added new commands for 'list of files changed
since this LSN', 'give me this file', and 'give me this file with the
changes in it since this LSN', then pg_basebackup could work with that
pretty easily in a single-threaded model (maybe with two connections to
the backend, but still in a single process, or maybe just by slurping
up the file list and then asking for each one), and the external tools
could leverage those new capabilities too, for both full and
incremental backups.  This also wouldn't have to change how
pg_basebackup does full backups today one bit, so what we're really
talking about here is the direction to take the new code that's being
written, not about rewriting existing code.  I agree that it'd be a bit
more work...  but hopefully not *that* much more, and it would mean we
could later add parallel backup to pg_basebackup more easily too, if we
wanted to.
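
To sketch what that exchange might look like (all of these command
names and the LSN are made up for illustration; nothing like them
exists in the protocol today):

    START_BACKUP
    SEND_FILE_LIST SINCE_LSN '0/16B3748'   -- paths and sizes of files to fetch
    SEND_FILE 'base/16384/16397'           -- full contents of one file
    SEND_FILE 'base/16384/16398' SINCE_LSN '0/16B3748'   -- only the newer blocks
    STOP_BACKUP

A parallel tool could open several connections and hand out entries
from the file list to each of them, while pg_basebackup could just walk
the list serially on a single connection.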

> Then, to enable parallel full backup and all sorts of out-of-core
> hacking, one could expand the command language to allow tools to
> access individual steps: START_BACKUP, SEND_FILE_LIST,
> SEND_FILE_CONTENTS, STOP_BACKUP, or whatever.  The second thing makes
> for an appealing project, but I do not think there is a technical
> reason why it has to be done first.  Or for that matter why it has to
> be done second.  As I keep saying, incremental backup and full backup
> are separate projects and I believe it's completely reasonable for
> whoever is doing the work to decide on the order in which they would
> like to do the work.

I didn't mean to imply that one had to be done before the other from a
technical standpoint.  I agree that they don't depend on each other.

You're certainly welcome to do what you would like; I simply wanted to
share my experience and try to help move this in a direction that would
involve less code rewriting in the future and produce a feature that
would be more appealing to the external tools.

> Having said that, I'm curious what people other than Stephen (and
> other pgbackrest hackers)

While David and I do talk, we haven't really discussed this proposal all
that much, so please don't assume that he shares my thoughts here.  I'd
also like to hear what others think, particularly those who have been
working in this area.

> think about the relative value of parallel
> backup vs. incremental backup.  Stephen appears quite convinced that
> parallel backup is full of win and incremental backup is a bit of a
> yawn by comparison, and while I certainly would not want to discount
> the value of his experience in this area, it sometimes happens on this
> mailing list that [ drum roll please ] not everybody agrees about
> everything.  So, what do other people think?

I'm afraid this is painting my position here with an extremely broad
brush and so I'd like to clarify a bit: I'm *all* for incremental
backups.  Incremental and differential backups were supported by
pgBackRest very early on and are used extensively.  Today's pgBackRest
does that at a file level, but I would very much like to get to a block
level shortly after we finish rewriting it into C and porting it to
Windows (and probably the other platforms PG runs on today), which isn't
very far off now.  I'd like to make sure that whatever core ends up
with as an incremental backup solution matches very closely what we do
with pgBackRest, but everything that's been discussed here seems pretty
reasonable when it comes to how the changed blocks are detected and how
the files get stitched back together, so I don't expect there to be too
much of an issue there.
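
For what it's worth, the simplest server-side approach I can imagine
for deciding which blocks belong in such a 'diff' is to compare each
page's header LSN against the threshold LSN.  Here's a minimal
standalone sketch of that idea in C (assuming 8kB pages and a relation
file written by a server of the same byte order; it ignores checksums,
torn pages, and new all-zero pages):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLCKSZ 8192

    /*
     * The first 8 bytes of a PostgreSQL page hold the LSN of the last
     * WAL record that touched it, stored as two native-endian 32-bit
     * halves (high word first).
     */
    static uint64_t
    page_lsn(const char *page)
    {
        uint32_t    hi;
        uint32_t    lo;

        memcpy(&hi, page, 4);
        memcpy(&lo, page + 4, 4);
        return ((uint64_t) hi << 32) | lo;
    }

    int
    main(int argc, char **argv)
    {
        char        page[BLCKSZ];
        uint64_t    threshold;
        FILE       *fp;
        long        blkno = 0;

        if (argc != 3)
        {
            fprintf(stderr, "usage: %s relation-file threshold-lsn\n", argv[0]);
            return 1;
        }
        /* threshold given as a single 64-bit number, e.g. 0x16B3748 */
        threshold = strtoull(argv[2], NULL, 0);

        fp = fopen(argv[1], "rb");
        if (fp == NULL)
        {
            perror(argv[1]);
            return 1;
        }
        while (fread(page, 1, BLCKSZ, fp) == BLCKSZ)
        {
            /* pages modified after the threshold go into the diff */
            if (page_lsn(page) > threshold)
                printf("block %ld changed\n", blkno);
            blkno++;
        }
        fclose(fp);
        return 0;
    }

Whether the detection ends up being done that way or by scanning the
WAL doesn't change the protocol question above, of course.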

What I'm afraid will be lackluster is adding block-level incremental
backup support to pg_basebackup without any support for managing
backups or anything else.  I'm also concerned that it means people who
want to use incremental backup with pg_basebackup are going to have to
write a lot of their own management code (probably in shell scripts and
such...) around it, and if they get anything wrong there, they're going
to end up with bad backups that they can't restore from, or with
corrupted clusters if they do manage to get them restored.

It'd also be nice to have as much exposed through the common library as
possible when it comes to, well, everything being discussed, so that the
external tools could leverage that code and avoid having to write their
own.  This would probably apply more to the WAL-scanning discussion, but
I figured I'd mention it here too.

If the protocol were implemented in a way that we could leverage it
from external tools in a parallel fashion, then I'd be more excited
about the overall body of work.  That said, thinking about it a bit
more, I have to admit that I'm not sure pgBackRest would end up using
it in any case, no matter how it's implemented, since it wouldn't
support compression or encryption, both of which we support doing
in-stream before the data leaves the server.  The external tools which
don't support those options would likely find the parallel option more
appealing, though.

Thanks,

Stephen
