Re: Losing data from Postgres

From     Paul Breen
Subject  Re: Losing data from Postgres
Date
Msg-id   Pine.LNX.3.96.1001116173721.6198H-100000@cpark37.computerpark.co.uk
In response to  Losing data from Postgres  (Paul Breen <pbreen@computerpark.co.uk>)
List            pgsql-admin
Bonjour Jean-Marc,

Yeah, we get the feeling that it may be a vacuum+index related problem,
though we're not sure.  As I said, we've gone back to only vacuuming twice
a day and the problem (we hope) has gone away.  It leaves us feeling
uneasy though; when we fix a problem we like to understand why!

Basically we are going to monitor it for the next few weeks and, if the
data loss doesn't recur, we will (grudgingly) consider it no longer a
problem.  I'd still like to know what the Postgres backend messages in the
log mean, especially the one about "xid table corrupted"?

Anyway, thanks to everyone for their help & support; it is greatly
appreciated.  If we have any breakthroughs on this thorny subject we will
mail the list with our findings - cheers.

Paul M. Breen, Software Engineer - Computer Park Ltd.

Tel:   (01536) 417155
Email: pbreen@computerpark.co.uk

On Wed, 15 Nov 2000, Jean-Marc Pigeon wrote:

> Bonjour Paul Breen
> >
> > Hello everyone,
> >
> > Can anyone help us?
> >
> > We are using Postgres in a hotspare configuration, that is, we have 2
> > separate boxes both running identical versions of Postgres, and every
> > time we insert|update|delete from the database we write to both boxes
> > (at the application level).  All communications to the databases are in
> > transaction blocks and if we cannot commit to both databases then we
> > roll back.
> [...]
> > Originally we were vacuuming twice a day but, because some of the
> > reports we produce regularly were taking too long as the database grew,
> > we added multiple indexes onto the key tables and began vacuuming every
> > hour.  It was only after doing this that we noticed the data loss -
> > don't know if this is coincidental or not.  Yesterday we went back to
> > vacuuming only twice a day.
>
>     We found something similar in our application.
>     It seems to be a vacuum+index problem: the index does
>     not refer to ALL the data after the vacuum!
>
>     If I am right, drop the index and create it again,
>     and your data should be found again...
>
>     On our side, before doing a vacuum we now drop the
>     index, do the vacuum, then rebuild the index.  The
>     overall time is the same as doing a 'simple' vacuum.
>
>     Hoping that helps...
>
>
> A bientot
> ==========================================================================
> Jean-Marc Pigeon              Internet:   Jean-Marc.Pigeon@safe.ca
> SAFE Inc.                Phone: (514) 493-4280  Fax: (514) 493-1946
>        REGULUS,  a real time accounting/billing package for ISP
>            REGULUS' Home base <"http://www.regulus.safe.ca">
> ==========================================================================
>
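
P.S. For anyone following along, the application-level dual-write we
describe above boils down to something like the sketch below.  This is
only an illustration: the language and driver (Python + psycopg2), the
connection strings and the helper name are assumptions, not our actual
code.

    # Sketch only: run the same statement on both boxes inside
    # transaction blocks, committing only if both succeed.
    import psycopg2

    PRIMARY_DSN  = "host=db1 dbname=app user=app"    # hypothetical
    HOTSPARE_DSN = "host=db2 dbname=app user=app"    # hypothetical

    def dual_write(sql, params=None):
        conns = [psycopg2.connect(PRIMARY_DSN),
                 psycopg2.connect(HOTSPARE_DSN)]
        try:
            for conn in conns:
                with conn.cursor() as cur:
                    cur.execute(sql, params)
            for conn in conns:   # commit only once both statements succeeded
                conn.commit()
        except Exception:
            for conn in conns:
                conn.rollback()
            raise
        finally:
            for conn in conns:
                conn.close()

(This still isn't bullet-proof, of course: without two-phase commit a
failure between the two COMMITs can leave the boxes out of step.)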
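
P.P.S. The drop/vacuum/rebuild pass Jean-Marc suggests would look roughly
like the following.  Again just a sketch: the table and index names are
made up, and VACUUM has to run outside a transaction block, hence the
autocommit setting.

    # Sketch of the suggested maintenance pass: drop the index, vacuum,
    # then recreate the index.  Table/index names are hypothetical.
    import psycopg2

    def drop_vacuum_rebuild(dsn="host=db1 dbname=app user=app"):
        conn = psycopg2.connect(dsn)
        conn.autocommit = True    # VACUUM cannot run in a transaction block
        try:
            with conn.cursor() as cur:
                cur.execute("DROP INDEX orders_customer_idx")
                cur.execute("VACUUM ANALYZE orders")
                cur.execute("CREATE INDEX orders_customer_idx"
                            " ON orders (customer_id)")
        finally:
            conn.close()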


