"Tuple too big" when the tuple is not that big...

From: Paulo Jan
Subject: "Tuple too big" when the tuple is not that big...
Date:
Msg-id: 3ACB55A5.D22B2D2B@digital.ddnet.es
Replies: Re: "Tuple too big" when the tuple is not that big...  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general
Hi all:

    I have a problem here, using Postgres 6.5.3 on Red Hat Linux 6.0. I
have a table where, each time I do a "vacuum analyze", the database
complains with "ERROR:  Tuple is too big: size 10460"... and the
problem is that, as far as I know, there isn't any record that goes
beyond the 8K limit.
    Some background: the table in question was initially created with a
"text" field, and it gave us endless problems (crashes, coredumps,
etc.). After searching the archives and finding a number of people
warning against using the "text" type (especially in the 6.x series), I
dumped the table contents (with COPY) and recreated it using
"varchar(8088)" instead. When importing the data back, Postgres didn't
complain, and I assume that if any field had been bigger than
8K it would have. BUT... right after importing the data into
the brand new table, I try a "vacuum analyze" again and it does the same
thing.
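For reference, the rebuild described above looked roughly like this. This is only a sketch: the table and column names here are hypothetical, and only the varchar(8088) width comes from the actual setup.

```sql
-- Hypothetical names; the real table/column names differ.
-- Dump the existing data, then recreate the column as varchar
-- instead of text, sized to stay under the 8K page limit.
COPY old_table TO '/tmp/old_table.copy';
DROP TABLE old_table;
CREATE TABLE old_table (
    id    int4,
    body  varchar(8088)   -- was: text
);
COPY old_table FROM '/tmp/old_table.copy';

-- Sanity check: the longest value should be well under 8K.
SELECT max(length(body)) FROM old_table;
```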
    Some other facts:

    -"Vacuum" works fine. It's just "vacuum analyze" that gives problems.
    -The table doesn't have any indices.
    -Every time I try to do a "\d (table)", Postgres dumps core with
"backend closed the channel unexpectedly".

    Any ideas? (Aside from upgrading to 7.x; we can't do that for now.)
Do you need any other information?



                        Paulo Jan.
                        DDnet.
