[Q] pg_dump with large object & backend cache...

From: C.S.Park
Subject: [Q] pg_dump with large object & backend cache...
Date:
Msg-id: 19990818174327.A30724@mhlx01.kek.jp
List: pgsql-hackers
Hello,

We are using v6.3.2 with patches, and many of our tables use 'large
objects'.  The main problems with this are:

(1) Large objects cannot be dumped.  Is this still true in v6.5.1, or is
    there no plan for implementing dumping of blobs in the near future?

(2) If I want to clone databases from a Linux machine to a Solaris machine,
    the large object / pg_dump problem above means a lot of manual work is
    needed to dump the databases and then restore them on the other
    architecture.  Is there any utility to ease duplication (backup) of
    databases?  (A rough sketch of the manual export we do now follows
    below.)

(3) To upgrade v6.3.2 databases to v6.5.1, including large objects, is
    there a way to dump and restore?
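
For reference, the manual export mentioned in (2) looks roughly like the
sketch below.  It uses the client-side libpq call lo_export() to write each
blob referenced by a table out to a file.  The table images(name text,
data oid), the output path, and the build command are only illustrative,
and the sketch is written against a newer libpq header than 6.3.2, so
details may differ:

/*
 * Sketch: export every large object referenced by a (hypothetical)
 * table "images(name text, data oid)" to files under /tmp.
 * Build with something like:  cc -o loexport loexport.c -lpq
 */
#include <stdio.h>
#include <stdlib.h>
#include "libpq-fe.h"
#include "libpq/libpq-fs.h"

int
main(void)
{
    PGconn     *conn;
    PGresult   *res;
    int         i;

    /* 'request' is the database name from our setup */
    conn = PQsetdb(NULL, NULL, NULL, NULL, "request");
    if (PQstatus(conn) == CONNECTION_BAD)
    {
        fprintf(stderr, "connection to database 'request' failed\n");
        exit(1);
    }

    /* large-object operations must run inside a transaction */
    PQclear(PQexec(conn, "BEGIN"));

    res = PQexec(conn, "SELECT name, data FROM images");
    for (i = 0; i < PQntuples(res); i++)
    {
        char    fname[256];
        Oid     lobj = (Oid) atol(PQgetvalue(res, i, 1));

        snprintf(fname, sizeof(fname), "/tmp/%s.blob",
                 PQgetvalue(res, i, 0));
        if (lo_export(conn, lobj, fname) < 0)
            fprintf(stderr, "lo_export failed for %s\n", fname);
    }
    PQclear(res);

    PQclear(PQexec(conn, "END"));
    PQfinish(conn);
    return 0;
}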

Another problem with v6.3.2 is frequent messages (errors?) related to the
backend cache invalidation failure -- probably posted many times...
They look like this:

    NOTICE:  SIAssignBackendId: discarding tag 2147430138
    Connection to database 'request' failed.
    FATAL 1:  Backend cache invalidation initialization failed

(1) Will simply increasing the maximum connection count from 32 to 64 in
    src/include/storage/sinvaladt.h fix the above problem?  (See the sketch
    after this list for the kind of edit I mean.)
(2) If I want to keep v6.3.2, which PATCH will FIX the above problem?
(3) Is this already fixed in v6.5.1?
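
For concreteness, the change I am asking about in (1) is just bumping the
backend-count constant in that header.  This is a sketch only -- I am
assuming the limit is the MaxBackendId define, and the exact symbol name
and default value in the 6.3.2 tree may differ:

/*
 * src/include/storage/sinvaladt.h (sketch)
 *
 * The shared cache invalidation segment is sized for a fixed number
 * of backends; raise the limit from 32 to 64.
 */
/* #define MaxBackendId 32 */        /* old value (assumed) */
#define MaxBackendId 64              /* proposed new value */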

Best Regards,
C.S.Park


