Re: Examining very large dumps

From Achilleas Mantzios
Subject Re: Examining very large dumps
Date
Msg-id 200804171715.55961.achill@matrix.gatewaynet.com
In reply to Re: Examining very large dumps  (Achilleas Mantzios <achill@matrix.gatewaynet.com>)
List pgsql-admin
On Thursday 17 April 2008 08:46:24, Achilleas Mantzios wrote:
> On Thursday 17 April 2008 08:25:22, Tom Lane wrote:
> > Achilleas Mantzios <achill@matrix.gatewaynet.com> writes:
> > >> Did you make the dump using 8.3's pg_dump?
> >
> > > Yes, with 8.3.1's pg_dump (data only dump)
> >
> > That would be your problem.  *Don't* use a data-only dump, it
> > lobotomizes all intelligence in the system and leaves it up to you
> > to deal with foreign-key ordering issues.  There are lots of
> > performance arguments against that as well.  See the advice at
> >
>
> This procedure is kind of the official upgrade noted on http://www.sai.msu.su/~megera/wiki/Tsearch2_83_changes
> and described on http://sql-info.de/postgresql/notes/converting-tsearch2-to-8.3.html
>
> > http://www.postgresql.org/docs/8.3/static/populate.html
> I am reading this link right away.
So it seems that whoever decides to take this path (schema-only dump, then data-only dump) will initially be forced to
disable triggers, which means that he/she will have to recreate all FK constraints in order to enforce/verify the
integrity of the DB. Also, all indexes will have to be recreated.
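For example (just a sketch I'm adding for illustration, with hypothetical table and constraint names): after a
data-only restore done with triggers disabled, referential integrity can be re-verified by dropping and re-adding each
FK, since ADD CONSTRAINT re-checks all existing rows:
    -- re-validate referential integrity after a restore with disabled triggers
    ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;
    ALTER TABLE orders
        ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id);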
However, one thing that puzzles me is why this fact about data-only dumps wasn't mentioned (at least from what I've
read) in the numerous conversations regarding the upgrade of tsearch2 to 8.3; moreover, this path is even recommended
(indirectly) via the official tsearch2 site (http://www.sai.msu.su/~megera/wiki/Tsearch2_83_changes), which points to
http://sql-info.de/postgresql/notes/converting-tsearch2-to-8.3.html

Anyway, that's what I did in my case, where I had to clean up a previous 7.4 installation with tsearch2 and intarray
installed.
1) Take a full dump using 8.3.1's pg_dump:
pg_dump -U postgres -h old7.4.19machine mydbname > DUMPFULL.sql
2) createdb mydbname --encoding=original_db_encoding (this is essential in order to avoid encoding conversion problems)
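To find out what original_db_encoding actually is, something like the following can be run against the old 7.4 server
(a sanity-check query I'm adding here; psql -l would also show it):
    SELECT datname, pg_encoding_to_char(encoding) AS encoding
      FROM pg_database
     WHERE datname = 'mydbname';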
3) In the 8.3.1 installation, we make sure tsearch2 and intarray are not installed. This step is needed in order to
force any tsearch2.so- and _int.so-related statements in the dump to fail.
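A quick check (my own addition, not part of the original recipe) that the old shared libraries really are absent from
the 8.3.1 server, so that the CREATE FUNCTION statements referencing them will fail as intended; both files should be
missing, and pg_config must be the 8.3.1 one:
    ls `pg_config --pkglibdir`/tsearch2.so `pg_config --pkglibdir`/_int.so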
4)
    create domain public.tsvector as pg_catalog.tsvector;
    create domain public.gtsvector as pg_catalog.gtsvector;
    create domain public.tsquery as pg_catalog.tsquery;
(Thanks to Tom.) This step is needed in order to make the corresponding CREATE TYPE statements from the dump fail, so
that the CREATE TABLE statements with tsvector-type columns can succeed!
THIS IS IMPORTANT. In my case, moving a large DB dump (35 GB) from 7.4.19 to 8.3.1, and while I still had this problem
of failing CREATE TABLE statements, I for some reason ended up with an incomplete DB. And I mean not only those missing
tables, but also several other tables having a 0 row count. I didn't chase this one through; however, I got the idea
that it is very important to have all tables in place.
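As a rough sanity check that every table made it across with data (again my own addition, not part of the original
procedure), the planner's row estimates can be compared between the old and the new database after running ANALYZE on
both:
    -- run on both the 7.4 and the 8.3.1 database, after ANALYZE
    SELECT c.relname, c.reltuples
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relkind = 'r' AND n.nspname = 'public'
     ORDER BY c.relname;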
5) psql mydbname -f DUMPFULL.sql > 2see 2>&1
Here most things having to do with either intarray or tsearch2 will fail; hopefully, though, we will have all our data
in place!
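Since a lot of errors are expected at this point, a quick way (my own habit, not part of the recipe) to separate the
harmless tsearch2/intarray failures from anything unexpected is to skim the error lines in the log file (2see, from the
command above):
    # count the errors, then eyeball them
    grep -c 'ERROR:' 2see
    grep 'ERROR:' 2see | less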
6) We drop whatever tsearch2/intarray-related objects are left over from the dump:
    DROP TABLE public.pg_ts_cfg;
    DROP TABLE public.pg_ts_cfgmap;
    DROP TABLE public.pg_ts_dict ;
    DROP TABLE public.pg_ts_parser ;
    DROP TYPE public.query_int ;
    DROP TYPE public.statinfo ;
    DROP TYPE public.tokenout ;
    DROP TYPE public.tokentype ;
    DROP FUNCTION public.ts_debug(text) ;
    DROP TYPE public.tsdebug ;
At this point we are hopefully clean of garbage in our new 8.3.1 DB.
7)
    ALTER TABLE vmanews ALTER idxfti TYPE pg_catalog.tsvector;
    ALTER TABLE smanews ALTER idxfti TYPE pg_catalog.tsvector;
    ALTER TABLE vesselhelp ALTER idxfti TYPE pg_catalog.tsvector;
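(Before dropping the domains below, a query like the following, which I add for illustration, can confirm that no
column still uses them, i.e. that no ALTER TABLE above was missed; it should return zero rows.)
    SELECT c.relname, a.attname
      FROM pg_attribute a
      JOIN pg_class c ON c.oid = a.attrelid
     WHERE a.atttypid IN ('public.tsvector'::regtype,
                          'public.gtsvector'::regtype,
                          'public.tsquery'::regtype)
       AND a.attnum > 0 AND NOT a.attisdropped;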
Now we drop the domains used in Tom's initial trick:
    DROP DOMAIN public.gtsvector ;
    DROP DOMAIN public.tsquery ;
    DROP DOMAIN public.tsvector ;
8) Now we create the triggers for the update of tsvector columns:
    CREATE TRIGGER mytable_tsvectorupdate
        BEFORE INSERT OR UPDATE ON mytable
        FOR EACH ROW
        EXECUTE PROCEDURE tsvector_update_trigger('idxfti', 'pg_catalog.english', 'header', 'content');
NOTE here that the previous tsearch2 functionality of passing an extra preprocessing function (e.g. dropatsymbol) to
the trigger is gone; see the sketch below for one way around that.
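If that preprocessing is still needed, one option (mentioned in the 8.3 documentation on text search triggers) is to
write your own PL/pgSQL trigger function instead of using tsvector_update_trigger. A minimal sketch, assuming the same
idxfti/header/content columns as above; the replace() call is just my guess at what dropatsymbol used to do, and
CREATE LANGUAGE plpgsql may be needed first if the language is not yet installed in the database:
    CREATE FUNCTION mytable_fti_update() RETURNS trigger AS $$
    BEGIN
        -- do the old dropatsymbol-style preprocessing ourselves, then build the tsvector
        NEW.idxfti :=
            to_tsvector('pg_catalog.english',
                        replace(coalesce(NEW.header,  ''), '@', ' ') || ' ' ||
                        replace(coalesce(NEW.content, ''), '@', ' '));
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER mytable_tsvectorupdate
        BEFORE INSERT OR UPDATE ON mytable
        FOR EACH ROW
        EXECUTE PROCEDURE mytable_fti_update();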

9) Installation of intarray and creation of all relevant indexes; a sketch of this step follows.
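For completeness, roughly what this looks like (the path, table and column names here are hypothetical and depend on
the installation; the contrib SQL script lives under the directory reported by pg_config --sharedir):
    psql -U postgres -d mydbname -f `pg_config --sharedir`/contrib/_int.sql
and then, in psql, recreate the GiST indexes on the integer-array columns, e.g.:
    CREATE INDEX mytable_tags_gist ON mytable USING gist (tags gist__int_ops);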

And that was it!
Any comments are welcome.
>
> Any thoughts very welcome.
> >
> >             regards, tom lane
> >
>
>
>
> --
> Achilleas Mantzios
>



--
Achilleas Mantzios
