Re: VACUUMs take twice as long across all nodes
From | Andreas Kostyrka
---|---
Subject | Re: VACUUMs take twice as long across all nodes
Date |
Msg-id | 1162139073.7606.1.camel@andi-lap
In reply to | Re: VACUUMs take twice as long across all nodes (Andrew Sullivan <ajs@crankycanuck.ca>)
Responses | Re: VACUUMs take twice as long across all nodes
List | pgsql-performance
On Sunday, 29.10.2006, at 10:34 -0500, Andrew Sullivan wrote:
> On Sun, Oct 29, 2006 at 03:08:26PM +0000, Gavin Hamill wrote:
> >
> > This is interesting, but I don't understand... We've done a full restore
> > from one of these pg_dump backups before now and it worked just great.
> >
> > Sure I had to DROP SCHEMA _replication CASCADE to clear out all the
> > slony-specific triggers etc., but the new-master ran fine, as did
> > firing up new replication to the other nodes :)
> >
> > Was I just lucky?
>
> Yes. Slony alters data in the system catalog for a number of
> database objects on the replicas. It does this in order to prevent,
> for example, triggers from firing both on the origin and the replica.
> (That is the one that usually bites people hardest, but IIRC it's not
> the only such hack in there.) This was a bit of a dirty hack that
> was supposed to be cleaned up, but that hasn't been yet. In general,
> you can't rely on a pg_dump of a replica giving you a dump that, when
> restored, actually works.

Actually, you need to get the schema from the master node, and can take
the data from a slave. When mixing dumps like that, you must realize that
there are two separate parts in the schema dump: "table definitions" and
"constraints". To get a restorable backup you need to put the table
definitions before your data, and the constraints after the data copy.

Andreas

>
> A
>
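A minimal sketch of the split Andreas describes, assuming a pg_dump new
enough to support --section (newer than what was current in this thread);
the hostnames, database name, and Slony schema name (_replication, as
mentioned above) are placeholders:

    # 1. Table definitions only (no constraints, indexes, or triggers),
    #    taken from the Slony origin:
    pg_dump -h master.example.com -N _replication --section=pre-data mydb > 01_pre_data.sql

    # 2. The data itself, taken from a replica:
    pg_dump -h slave.example.com -N _replication --section=data mydb > 02_data.sql

    # 3. Constraints, indexes, and triggers, again from the origin:
    pg_dump -h master.example.com -N _replication --section=post-data mydb > 03_post_data.sql

    # Restore in that order into a fresh database:
    createdb mydb_restore
    psql -d mydb_restore -f 01_pre_data.sql
    psql -d mydb_restore -f 02_data.sql
    psql -d mydb_restore -f 03_post_data.sql

Loading the data before the constraints also keeps the restore fast, since
foreign keys and indexes are built once at the end rather than checked
row by row during the copy.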