Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1

From: Chris White
Subject: Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1
Date:
Msg-id: 012f01c2feca$1a3ab860$ff926b80@amer.cisco.com
In reply to: Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: [ADMIN] Problems with Large Objects using Postgres 7.2.1  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-jdbc
Looking at our code further, the actual code writes the large object and commits
it, then reopens the large object and updates its header (the first 58 bytes)
with some length info using seeks, writes and commits the object a second time,
and only then updates and commits the associated tables. The data I saw in the
exported file was the header info without the length updates, i.e. the state
after the first commit!!
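
For reference, here is a minimal sketch of that write/commit/reheader sequence
using the pgjdbc LargeObject API. This is only an illustration, not our actual
code: the connection details, payload sizes, and the app_attachments table are
placeholders, and the method names (createLO/open/seek/write) are the current
driver's; the 7.2-era driver differs slightly.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoHeaderUpdateSketch {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "postgres", "postgres");
        conn.setAutoCommit(false);          // large-object calls must run inside a transaction

        LargeObjectManager lom =
                conn.unwrap(PGConnection.class).getLargeObjectAPI();

        byte[] payload = new byte[300 * 1024];  // object body, 58-byte header at the front
        byte[] header  = new byte[58];          // header rewritten later with the length info

        // 1. write the whole object and commit
        long oid = lom.createLO(LargeObjectManager.READWRITE);
        LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);
        lo.write(payload);
        lo.close();
        conn.commit();                      // first commit: header still lacks length info

        // 2. reopen, seek back to byte 0, overwrite the first 58 bytes, commit again
        lo = lom.open(oid, LargeObjectManager.WRITE);
        lo.seek(0);
        lo.write(header);
        lo.close();
        conn.commit();                      // second commit: header now carries the length

        // 3. finally update the associated table(s) and commit (placeholder schema)
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE app_attachments SET lo_oid = ?, lo_len = ? WHERE id = 1")) {
            ps.setLong(1, oid);
            ps.setInt(2, payload.length);
            ps.executeUpdate();
        }
        conn.commit();
        conn.close();
    }
}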

Chris

-----Original Message-----
From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
Sent: Wednesday, April 09, 2003 10:28 AM
To: cjwhite@cisco.com
Cc: pgsql-jdbc@postgresql.org; pgsql-admin@postgresql.org
Subject: Re: [JDBC] [ADMIN] Problems with Large Objects using Postgres 7.2.1


"Chris White" <cjwhite@cisco.com> writes:
> I didn't look at the data in the table. However, when I did a lo_export
> of one of the objects I only got a 2K file output.

IIRC, we store 2K per row in pg_largeobject.  So this is consistent with
the idea that row 0 is present for the LO ID, while row 1 is not.  What
I'm wondering is if the other hundred-odd rows that would be needed to
hold a 300K large object are there or not.  Also, do the rows contain
the appropriate data for their parts of the overall large object?

            regards, tom lane
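
To check Tom's question directly, a quick JDBC sketch that lists the
pg_largeobject pages stored for one object and flags any gaps (the connection
details and the OID value 123456 are placeholders for the object in question):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LoPageCheck {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "postgres", "postgres");
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT pageno, length(data) AS len " +
                "FROM pg_largeobject WHERE loid = ?::oid ORDER BY pageno")) {
            ps.setLong(1, 123456L);             // placeholder large-object OID
            try (ResultSet rs = ps.executeQuery()) {
                int expected = 0;
                while (rs.next()) {
                    int pageno = rs.getInt("pageno");
                    if (pageno != expected) {   // report holes such as a missing row 1
                        System.out.println("missing pages " + expected + ".." + (pageno - 1));
                    }
                    System.out.println("page " + pageno + ": " + rs.getInt("len") + " bytes");
                    expected = pageno + 1;
                }
            }
        }
        conn.close();
    }
}

Each page should be 2K except possibly the last one, and a ~300K object should
show on the order of 150 consecutive pages.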

