Thread: Re: Blob error after backup and restore [database > 5.5 Gb.]

Re: Blob error after backup and restore [database > 5.5 Gb.]

From: "Amrit Angsusingh"
Date:
The test database is on a Fedora 3 server with PostgreSQL 7.4.7 [installed from RPM]. I back it up with the command

pg_dump -Ft -b test > test.tar
Once the process finishes, I transfer it to another server via NFS and restore it with
pg_restore -d test test.tar
I have used this process for more than a year and a half without any problem, blobs included, but as of a couple of weeks ago the backed-up database has no blobs and I cannot create and save new blob data.
Is that because the database is quite big [5.5 Gb], or not?
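(One quick way to see whether any large objects actually made it into the restored database is to count them in the pg_largeobject system catalog on both servers; this is only a rough check, using the database name from the commands above:)

-- Count the distinct large objects present in the database.
-- Run against both the original "test" and the restored copy, e.g.
--   psql -d test -c "SELECT count(DISTINCT loid) FROM pg_largeobject;"
SELECT count(DISTINCT loid) AS blob_count
FROM pg_largeobject;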
Amrit
Thailand

"Amrit Angsusingh"  writes:
> after I try to restore it in another server which has the same environment [via nfs] and use the command
> pg_restore -d test test.tar
> The database "test" after restoration has the same data but no blob
> found .

Another thought here: what exactly do you mean by "no blob found"?

The current implementation does not try to restore blobs with the same
OIDs they had in the original database --- rather, it loads them up with
whatever OIDs the new backend cares to assign, and then tries to fix up
database columns of OID and LO types to reference the new OIDs instead
of the old ones.
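To make that concrete, the situation looks roughly like this; the table and column names are only illustrative, not anything from the reported database:

-- A column of type OID that stores the identifier of a large object.
CREATE TABLE staff (
    id    SERIAL PRIMARY KEY,
    name  TEXT NOT NULL,
    photo OID              -- points at a large object
);

-- lo_import() creates the blob on the server and returns its OID,
-- which is the value stored in staff.photo:
INSERT INTO staff (name, photo)
VALUES ('example', lo_import('/tmp/photo.jpg'));

-- After dump and restore the blob is recreated under a different OID,
-- so pg_restore has to rewrite the stored staff.photo values to match.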

This is obviously pretty fragile, and we are thinking about changing it
for 8.1, but I wonder if you are getting bit because you are doing
something that pg_restore isn't expecting.

            regards, tom lane

Advice on structure / sequence / trigger

From: David Pratt
Date:
Hi, I have a number of tables. I am interested in having some
internationalization capability so that the application and its data
can be in more than one language. i18n will take care of the interface
but not the data. To internationalize the data, I want to put the
internationalized text strings in a multi_language table like this:

CREATE TABLE multi_language (
    id              SERIAL,
    ml_id           INTEGER NOT NULL,
    language_id     INTEGER NOT NULL,
    language_text   TEXT NOT NULL
);

Other tables would be joined by ml_id (the multi language id).
For example:

CREATE TABLE example_table (
    id              SERIAL,
    name_ml_id      INTEGER NOT NULL
);

So in example_table, name_ml_id would join against ml_id, so you have the
same ml_id in the multi_language table for more than one language. There
would be two records in multi_language for a record in example_table if
you had an English translation and a French translation.
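Roughly, the lookup I have in mind would be something like this (the language_id value is just the one from the sample data further down, where 2 stands for French):

-- Fetch the French text for each example record.
SELECT e.id, ml.language_text AS name
FROM example_table e
JOIN multi_language ml ON ml.ml_id = e.name_ml_id
WHERE ml.language_id = 2;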

I want to add records to multi_language sequentially. So let's say I
add a new example in example_table: I want to see what the last value
added to multi_language was, so that it would use the next one in the
sequence. As you can see from the structure, the id field is serial
and does this, but I am speaking of the ml_id field specifically. Let's
say I have one example record in example_table; multi_language would
look like this:

1, 1, 1, the brown cow   # English translation of name - language 1 (en)
2, 1, 2, la vache brun   # French translation of name - language 2 (fr)

ml_id for both records is 1.

So when I create a second record in example_table, I want to have this:

1, 1, 1, the brown cow    # English translation of name (of example record - language 1 (en))
2, 1, 2, la vache brun    # French translation of name (of example record - language 2 (fr))
3, 2, 1, the blue turkey  # English translation of name (second record - language 1 (en))
4, 2, 2, la dandon bleu   # French translation of name (second record - language 2 (fr))

OK, now the questions. How best to do this? Would I create a separate
sequence for multi_language's ml_id and do a select on it to get the next
value before inserting each multi_language record? Should this be done
using a trigger, and if so, how? Should this be done in my application
code rather than SQL, or would that be dangerous? The multi_language
table would be used a lot, for example: what if a couple of people were
creating new records at the same time? If I were using Python and doing
this in my application code, I wonder if there could be problems. With a
trigger it would be transactional, correct? Can you have a trigger work
from incrementing a sequence instead of updating a table?
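To make the first option concrete, here is roughly what I imagine the separate-sequence approach would look like (the sequence name is just a placeholder, and the values come from the example above):

CREATE SEQUENCE multi_language_ml_id_seq;

-- Allocate one ml_id for the new example record and reuse it for each
-- translation.  nextval() never hands the same value to two sessions,
-- so concurrent inserts cannot collide on ml_id.
BEGIN;

INSERT INTO multi_language (ml_id, language_id, language_text)
VALUES (nextval('multi_language_ml_id_seq'), 1, 'the brown cow');

-- currval() returns the value nextval() just produced in this session,
-- so all three inserts share the same ml_id.
INSERT INTO multi_language (ml_id, language_id, language_text)
VALUES (currval('multi_language_ml_id_seq'), 2, 'la vache brun');

INSERT INTO example_table (name_ml_id)
VALUES (currval('multi_language_ml_id_seq'));

COMMIT;

The trigger variant would presumably be a BEFORE INSERT trigger on example_table that calls nextval() itself when name_ml_id is not supplied, but even without one, wrapping the inserts in a transaction as above keeps them atomic.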

I just want to get this right because it will be an important part of
what I am preparing. Sorry for the really long message, but I don't know
if it would make any sense if I did not fully explain what I am trying
to do. Oh, and I am not French, so excuse my sample translations...

Regards,
David