Re: [RFC] What should we do for reliable WAL archiving?
From | MauMau
---|---
Subject | Re: [RFC] What should we do for reliable WAL archiving?
Date |
Msg-id | AD05352D260B451FB853441F557C3A24@maumau
In reply to | Re: [RFC] What should we do for reliable WAL archiving? (Jeff Janes <jeff.janes@gmail.com>)
Responses | Re: [RFC] What should we do for reliable WAL archiving?
Re: [RFC] What should we do for reliable WAL archiving?
List | pgsql-hackers
From: "Jeff Janes" <jeff.janes@gmail.com>
> Do people really just copy the files from one directory of local storage
> to another directory of local storage? I don't see the point of that.

It makes sense to archive WAL to a directory of local storage for media
recovery. Here, the local storage is a different disk drive that is either
directly attached to the database server or connected through a SAN.

> The recommendation is to refuse to overwrite an existing file of the same
> name, and exit with failure. Which essentially brings archiving to a halt,
> because it keeps trying but it will keep failing. If we make a custom
> version, one thing it should do is determine if the existing archived file
> is just a truncated version of the attempting-to-be archived file, and if
> so overwrite it. Because if the first archival command fails with a
> network glitch, it can leave behind a partial file.

What I'm trying to address is just an alternative to cp/copy that fsyncs the
file; it simply overwrites any existing file. Yes, you're right: a failed
archive attempt leaves behind a partial file, which causes subsequent
attempts to fail if you follow the PostgreSQL manual. That's another
undesirable point in the current documentation. To overcome this, someone on
this mailing list recommended that I use
"cp %p /archive/dir/%f.tmp && mv /archive/dir/%f.tmp /archive/dir/%f".
Does this solve your problem?

Regards
MauMau
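The copy-to-temp-name-then-rename idea above can be sketched as a small shell script. This is only an illustration, not the exact command discussed on the list: the archive and WAL paths here are throwaway temp directories standing in for %p and %f, and the file-level `sync FILE` invocation assumes GNU coreutils (older versions only support a bare `sync`). Because `mv` within one filesystem is an atomic rename, a reader of the archive directory never observes a partial file even if the copy is interrupted.

```shell
set -e

# Stand-ins for the real archive directory and the WAL segment (%p / %f).
ARCHIVE_DIR=$(mktemp -d)
WAL_FILE=$(mktemp)
printf 'wal-segment-data' > "$WAL_FILE"
F=$(basename "$WAL_FILE")

# 1. Copy under a temporary name so an interrupted copy never occupies
#    the final file name.
cp "$WAL_FILE" "$ARCHIVE_DIR/$F.tmp"

# 2. Flush the data to stable storage (GNU coreutils sync accepts a
#    file argument; plain POSIX sync flushes everything).
sync "$ARCHIVE_DIR/$F.tmp"

# 3. Atomically rename into place; within one filesystem, rename() is
#    atomic, so the archive never contains a truncated segment.
mv "$ARCHIVE_DIR/$F.tmp" "$ARCHIVE_DIR/$F"
```

A failed run leaves at worst a stale `.tmp` file, which a retry simply overwrites, so archiving is not brought to a halt by a partial file.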