Re: How to know if a database has changed

From: marcelo
Subject: Re: How to know if a database has changed
Date:
Msg-id: 5246fbe0-4dc6-8a25-fa44-dcfd1bc87191@gmail.com
In reply to: Re: How to know if a database has changed (Sam Gendler <sgendler@ideasculptor.com>)
Responses: Re: How to know if a database has changed (Adam Tauno Williams <awilliam@whitemice.org>)
           Re: How to know if a database has changed (Edson Carlos Ericksson Richter <richter@simkorp.com.br>)
List: pgsql-general
Hi Sam,

You are right, and here is the reason behind my question: the server where Postgres will be installed is not on 24/7. It is turned on in the morning and shut down at the end of the day. The idea is that, as part of the shutdown process, a local backup is made; the next day, that backup is copied to the cloud.
To avoid lengthening the shutdown process, we are trying to limit pg_dump to the databases that have actually changed, not so much in their schema as in their data.
Of course, adding a trigger for every table and every CUD (create/update/delete) operation on every database is not an option.
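One possible approach (a minimal sketch of my own, not something from this thread: it assumes psycopg2 is installed, a role that can read pg_stat_database, and a hypothetical state-file path) is to compare the cumulative tuple counters in pg_stat_database against a snapshot saved at the previous shutdown, and dump only the databases whose counters moved. Caveats: the counters are reset by a statistics reset or crash recovery, and purely DDL changes may not register.

import json
import os

import psycopg2  # assumed available; any Postgres driver would do

STATE_FILE = "/var/lib/pgbackup/db_activity.json"  # hypothetical path

def snapshot_counters(conninfo="dbname=postgres"):
    """Return {dbname: [tup_inserted, tup_updated, tup_deleted]}."""
    with psycopg2.connect(conninfo) as conn, conn.cursor() as cur:
        cur.execute("""
            SELECT datname, tup_inserted, tup_updated, tup_deleted
            FROM pg_stat_database
            WHERE datname NOT IN ('template0', 'template1')
        """)
        return {row[0]: list(row[1:]) for row in cur.fetchall()}

def databases_to_dump():
    """Compare current counters with the snapshot saved last time."""
    current = snapshot_counters()
    previous = {}
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            previous = json.load(f)
    # A database counts as "changed" if its counters differ from the
    # saved ones, or if it is new. A stats reset also makes them differ,
    # which errs on the side of taking a backup.
    changed = [db for db, counts in current.items()
               if previous.get(db) != counts]
    with open(STATE_FILE, "w") as f:
        json.dump(current, f)
    return changed

if __name__ == "__main__":
    for db in databases_to_dump():
        print(db)  # feed these names to pg_dump in the shutdown script

With that list, the shutdown script only has to pg_dump the databases that were actually written to since the previous day.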


On 11/12/17 23:23, Sam Gendler wrote:
I think there's a more useful question, which is: why do you want to do this? If it is just about conditional backups, surely the cost of backup storage is low enough, even in S3 or the like, that a duplicate backup is an afterthought from a cost perspective. Before you start jumping through hoops to make your backups conditional, I'd first do some analysis and figure out what the real cost of the thing I'm trying to avoid actually is. My guess is that you are deep into a premature optimization here, where either the cost of the duplicate backup isn't consequential or the frequency of duplicate backups is effectively zero.

It would always be possible to run some kind of checksum on the backup and skip storing it if it matches the previous backup's checksum, if you decide that there truly is value in conditionally backing up the db. Sure, that would result in dumping a db that doesn't need to be dumped, but if your write transaction rate is so low that backups end up being duplicates on a regular basis, then surely you can afford the cost of a pg_dump without any significant impact on performance?
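A minimal sketch of that checksum idea (my illustration, not Sam's code: the paths are hypothetical, and it assumes plain-format dumps, since pg_dump's custom format embeds a creation timestamp that would defeat a byte-for-byte comparison):

import hashlib
import os
import subprocess

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 so large dumps don't fill memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def dump_if_changed(dbname, out_dir="/backups"):
    """Run pg_dump; keep the new dump only if it differs from the last one."""
    new_dump = os.path.join(out_dir, dbname + ".sql.new")
    old_dump = os.path.join(out_dir, dbname + ".sql")
    # Plain format (-Fp) carries no embedded creation timestamp, so
    # identical data should produce identical bytes in practice.
    subprocess.run(["pg_dump", "-Fp", "-f", new_dump, dbname], check=True)
    if os.path.exists(old_dump) and sha256_of(new_dump) == sha256_of(old_dump):
        os.remove(new_dump)         # duplicate of the previous backup: skip it
        return False
    os.replace(new_dump, old_dump)  # changed (or first run): keep it
    return True

This still pays the cost of the dump itself, which is exactly Sam's point: it only saves the storage and upload, not the pg_dump run.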

On Mon, Dec 11, 2017 at 10:49 AM, Andreas Kretschmer <andreas@a-kretschmer.de> wrote:


On 11.12.2017 at 18:26, Andreas Kretschmer wrote:
it's just a rough idea...

... and not perfect, because you can't capture DDL in this way.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com



