Re: Duplicating data folder without tablespace, for read access

From: Jerry Sievers
Subject: Re: Duplicating data folder without tablespace, for read access
Date:
Msg-id: 87d0ukde10.fsf@jsievers.enova.com
In reply to: Re: Duplicating data folder without tablespace, for read access  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
Tom Lane <tgl@sss.pgh.pa.us> writes:

> Jack Cushman <jcushman@gmail.com> writes:
>
>> Now here's the situation where I want to do what Christophe said not to do:
>> :)
>
>> I have a large database of text, with a 600GB table and a 100GB table
>> connected by a join table. They both see occasional updates throughout the
>> week. Once a week I want to "cut a release," meaning I will clone just the
>> 100GB table and copy it to a "release" server for read-only web access.
>
>> My procedure would be:
>
>> - keep the 600GB table on a separate tablespace
>> - cleanly stop postgres on both servers
>> - copy the data folder to the release server
>> - delete pg_tblspc/* on the release server
>> - start postgres on both servers
>
>> In local testing this seems to work -- the release server works fine, and I
>> only get an error message if I try to access the missing tables, which is
>> expected. But are there reasons this is going to bite me if I try it in
>> production?
>
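
A minimal sketch of that cutover, assuming pg_ctl-managed clusters and
made-up host and path names (not tested against your exact setup):

    # Sketch only: the "release" host, data directory path, and pg_ctl
    # usage are all assumptions, not taken from the original post.
    PGDATA=/var/lib/postgresql/data

    pg_ctl -D "$PGDATA" stop -m fast                 # clean stop, primary
    ssh release "pg_ctl -D $PGDATA stop -m fast"     # clean stop, release box

    rsync -a --delete "$PGDATA/" "release:$PGDATA/"  # copy the data folder
    ssh release "rm -f $PGDATA/pg_tblspc/*"          # drop tablespace symlinks

    pg_ctl -D "$PGDATA" start
    ssh release "pg_ctl -D $PGDATA start"
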
> Sooner or later, autovacuum is going to try to touch the missing table.
> Maybe you can limp along with autovac failing in the background,
> especially if this is a throwaway copy of the DB with little or no
> write activity.  But I'm not sure how well that will work.
>
> Probably, the worst-case scenario would be for the database to shut
> down because it thinks it's in XID wraparound trouble.  But it's hard
> to see how you get to that without lots of write traffic, so maybe
> you can get away with this.
>
>             regards, tom lane
>

Pg is pretty good about letting you drop objects whose backing storage
is missing; we hit this ourselves when we snapshot large DBs on the
SAN, which does *not* capture our NVMe temp tablespace.

I'd say find the objects that are supposed to live in the missing
tablespace by catalog scraping and drop them.  You'll see warnings in
the logs that file $foo was not found, but the objects are gone
nonetheless.
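
Something like the snippet below is what I mean by catalog scraping;
the tablespace name 'big_tblspc' and database name 'mydb' are
placeholders, and you'd want to eyeball the generated DROPs before
running them:

    # Placeholder names throughout; review drop_missing.sql first.
    psql -Atc "
      SELECT 'DROP TABLE IF EXISTS ' || c.oid::regclass || ' CASCADE;'
      FROM pg_class c
      JOIN pg_tablespace t ON t.oid = c.reltablespace
      WHERE t.spcname = 'big_tblspc'
        AND c.relkind = 'r'
    " mydb > drop_missing.sql
    psql -f drop_missing.sql mydb
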

That should avoid the risk of eventual wraparound conditions caused by
chronic autovacuum failure if the tables were left in the catalogs.
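
If you want to keep an eye on that anyway, a quick check (the 200M
figure is just the stock autovacuum_freeze_max_age default):

    # Age of each database's oldest unfrozen XID; autovacuum_freeze_max_age
    # defaults to 200 million, so watch values that approach it.
    psql -Atc "SELECT datname, age(datfrozenxid)
               FROM pg_database ORDER BY 2 DESC" postgres
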

FWIW


-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consulting@comcast.net
p: 312.241.7800

