Discussion: Redundant databases/real-time backup


Redundant databases/real-time backup

From: root
Date: Thu, 16 Nov 2000 11:14:15 -0500
Several people have expressed an interest in having real-time redundancy.  I
am releasing my kludge of a solution to the mailing list in the hope that
others might expand upon it.

First, you should dump your database, drop it, and recreate it on the computer
to be mirrored.  You should also create a fresh copy on the mirroring computer.
Most likely, your OIDs are still not going to be in sync.  For those of you
who use OIDs as a poor man's primary key, it will be necessary to write a
script that syncs up the OIDs on both computers (probably best run as a cron
job too).  I have thought about ways to do this, but I'll leave that to
someone else to complete.  (PLEASE POST YOUR RESULTS, THOUGH!)

It is necessary to create/alter the postgresql startup script.  I have included
a copy of mine.  The database to be mirrored must start up with logging
enabled:

su -l postgres -c '/usr/bin/postmaster -i -D/home/postgres/data >/home/postgres/data/query_log 2>&1 &'

In this case I have specified a file called query_log that will maintain a copy
of all of the queries that have been executed.  I have included a complete
copy of my startup script, called (unimaginatively) postgresql.  For Linux
users, it belongs in /etc/rc.d/init.d

I have also attached my pg_options file.  For me, this resides in
/home/postgres/data.  I have found that this file does not seem to affect my
query_log, but I include it for reference for others to use.

Next, you should create a line in your /etc/crontab or /var/spool/cron/root file
to execute the redundancy script with root level permissions:

0-59/5 * * * * /root/redundancy.pl

You should install the redundancy.pl and rederhandler.pl scripts in the same
directory.

You will, of course, need to modify these scripts to work.  Several of my dead
ends are still in the scripts, commented out.

rederhandler.pl is set up to work with qmail instead of sendmail.  You should
be able to substitute the path to your sendmail program and it should work
fine, i.e. /usr/sbin/sendmail instead of /var/qmail/bin/qmail-inject

Other points:
The query_log can get large rather quickly.  You cannot simply rm the
query_log, then touch and chmod a new one.  Even with the appropriate
permissions, the daemon will not write to the new file; for some reason you
must restart postgres using the startup script.  Perhaps one of the developers
has an answer to this problem.

Also, my script does not check for network problems.

The script incurs a lot of overhead.  For high-volume inserts and deletes,
depending on how often you run redundancy.pl, sometimes it just cannot keep
up.  Beware: it can bog down and crash your server if the backlog becomes too
large.

Also, I chose to allow it to pass SELECT statements and get back the results
from the remote database.  I believe that if you do not use sub-selects, you
may grep -v 'SELECT' and improve your performance significantly.
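The filtering idea above can be sketched as a toy demonstration (the sample
lines below are fabricated for illustration; a real query_log's format will
differ):

```shell
# Fabricated sample log, one statement per line.
printf 'INSERT INTO accounts VALUES (1)\nSELECT * FROM accounts\nDELETE FROM accounts WHERE id = 1\n' > sample_log

# Keep only data-modifying statements; SELECTs change nothing on the mirror,
# so replaying them there is wasted work.
grep -v 'SELECT' sample_log > sample_log.filtered

cat sample_log.filtered
```

Note the caveat from the text: this simple filter would also drop INSERT ... SELECT style statements, which is why it only works if you avoid sub-selects.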

Please email comments and suggestions/modifications to me at
rhampton@veritime.com or root@veritime.com

Attachments

Re: Redundant databases/real-time backup

From: Ragnar Kjørstad
Date: Thursday, November 23, 2000 8:28 AM
On Thu, Nov 16, 2000 at 11:14:15AM -0500, root wrote:
> It is necessary to create/alter the postgresql startup script.  I have included
> a copy of mine.  The database to be mirrored must start up with logging
> enabled:
>
> su -l postgres -c '/usr/bin/postmaster -i -D/home/postgres/data >/home/postgres/data/query_log 2>&1 &'

But why do you use 2>&1 in your startup script? This redirects stderr to
stdout - I don't think that is a good idea.

> Other points:
> The query_log can get large rather quickly.  You cannot simply issue a rm -rf
> query_log, touch query_log and chmod.  Even with the appropriate permissions
> the daemon will not write to a new file, for some reason you must restart
> postgres using the startup script.  Perhaps one of the developers has an answer
> to this problem.....

This is because the postgres daemon keeps the file open, and will keep
on writing to the old file even after you delete it. Most daemons will
reopen their log files if you send them a HUP signal, so a possible
solution is:

mv logfile logfile.1
kill -HUP <pid>
process logfile.1
rm logfile.1

The order is important to make sure you don't have a race-condition.
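The open-file behaviour behind all this is easy to demonstrate in the shell: a
writer that already holds the file open keeps writing to the same inode after
a rename, which is why rm/touch on the name has no effect on the running
daemon:

```shell
# A writer holding an open descriptor follows the inode, not the name.
exec 3>logfile               # open logfile for writing on descriptor 3
echo "before rotate" >&3
mv logfile logfile.1         # rename; fd 3 still points at the same inode
echo "after rotate" >&3      # lands in logfile.1, not in a new logfile
exec 3>&-                    # close the descriptor

cat logfile.1                # both lines are here
```

No new logfile appears unless some process reopens the name, which is exactly what the HUP signal is meant to make the daemon do.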

A different alternative is to use a FIFO instead of a normal file, and
process it continually instead of in batch jobs.




A totally different approach would be to have a "sql-proxy" relay all
requests to two (or more) different servers to always keep them in sync.
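A minimal sketch of that proxy idea follows. The run_on function and relay.log
file are made up for illustration; a real implementation would invoke
something like psql -h <host> against each server instead of appending to a
log:

```shell
# Hypothetical "sql-proxy" sketch: relay every statement to two servers.
# run_on is a stand-in; a real version might run: psql -h "$2" mydb -c "$1"
run_on() { echo "[$2] $1" >> relay.log; }

# Fabricated statement stream for the demonstration.
printf 'INSERT INTO t VALUES (1)\nDELETE FROM t WHERE id = 1\n' > statements.sql

# Send each statement to both the primary and the mirror.
while IFS= read -r stmt; do
    run_on "$stmt" primary
    run_on "$stmt" mirror
done < statements.sql

cat relay.log
```

The design question a real proxy must answer is what to do when one server fails mid-stream; this sketch ignores errors entirely.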


--
Ragnar Kjørstad

Re: Redundant databases/real-time backup

From: Ragnar Kjørstad
Date:
On Thu, Nov 30, 2000 at 12:08:20AM -0800, hamptone wrote:
> > [ about processing the postgresql log ]
>
> Thanks for the tips, Ragnar... although I wish you would explain more about
> FIFO.  The only thing that rings a bell on this is First In, First Out from
> the inventory chapter of my accounting textbook.

I guess you're familiar with pipes?
like:
cat logfile | program

a FIFO is pretty much the same thing, only it has a filename. This means
a process can write to the FIFO thinking it's writing to a regular file,
and another process can read from the FIFO thinking it's reading from a
regular file.

This makes it possible to process the file in realtime instead of in
batch, if you choose to.

If you use a FIFO, your postgresql process will block if nothing is
actually reading from the FIFO. This means two things:
1. If the program processing the logfile dies, postgresql dies too.
2. The postgresql server can't work faster than your logfile processor.

But the good news is your backup server will always be up-to-date.
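A FIFO is created with mkfifo and then used exactly like a file from both
sides. A small self-contained demonstration (the file names are arbitrary):

```shell
# Create a named pipe and start a reader on it in the background.
mkfifo query_fifo
cat query_fifo > processed.log &

# The writer opens the FIFO as if it were a regular file. The open blocks
# until a reader is attached -- which is why postgresql would stall if the
# log processor died.
echo "INSERT INTO t VALUES (1)" > query_fifo

wait                 # the background cat exits once the writer closes
cat processed.log
rm query_fifo
```

Here echo plays the role of the postmaster writing its query log, and cat plays the role of the replication script consuming it in real time.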



--
Ragnar Kjørstad

Re: Redundant databases/real-time backup

From: "hamptone"
Date: Thu, 30 Nov 2000 00:08:20 -0800
Thanks for the tips, Ragnar... although I wish you would explain more about
FIFO.  The only thing that rings a bell on this is First In, First Out from
the inventory chapter of my accounting textbook.

----- Original Message -----
From: "Ragnar Kjørstad" <postgres@ragnark.vestdata.no>
To: "root" <root@dennis.veritime.com>
Cc: <pgsql-admin@postgresql.org>
Sent: Thursday, November 23, 2000 8:28 AM
Subject: Re: [ADMIN] Redundant databases/real-time backup




Re: Redundant databases/real-time backup - Revisited

From: Lockhurst
Date:
Hi,

Tried modifying /etc/rc.d/init.d/postgresql with the following change:

su -l postgres -c '/usr/bin/postmaster -i -D /var/lib/pgsql >/var/lib/pgsql/query_log 2>&1 &'

The file query_log is created, but it remains empty even after a restart and
all sorts of transactions, including insert statements.

Thanks in advance for any help.

Wish you all a very happy new 2001.

Lockhurst


> ----- Original Message -----
> From: "Ragnar Kjørstad" <postgres@ragnark.vestdata.no>
> To: "root" <root@dennis.veritime.com>
> Cc: <pgsql-admin@postgresql.org>
> Sent: Thursday, November 23, 2000 8:28 AM
> Subject: Re: [ADMIN] Redundant databases/real-time backup