Re: Trouble Upgrading Postgres

From	Adrian Klaver
Subject	Re: Trouble Upgrading Postgres
Date
Msg-id	35cc6f6d-a16f-5361-19fa-a89999d6c175@aklaver.com
In reply to	Re: Trouble Upgrading Postgres  ("Daniel Verite" <daniel@manitou-mail.org>)
Responses	Re: Trouble Upgrading Postgres  (Tom Lane <tgl@sss.pgh.pa.us>)
List	pgsql-general
On 11/6/18 8:27 AM, Daniel Verite wrote:
>     Adrian Klaver wrote:
> 
>>> So there's no way it can deal with the contents over 500MB, and the
>>> ones just under that limit may also be problematic.
>>
>> To me that looks like a bug, putting data into a record you cannot get out.
> 
> Strictly speaking, it could probably be gotten out with COPY in binary
> format, but pg_dump doesn't use that.
> 
> It's undoubtedly very annoying that a database can end up with
> non-pg_dump'able contents, but it's not an easy problem to solve.
> Some time ago, work was done to extend the 1GB limit,
> but eventually it got scrapped. The thread in [1] discusses
> many details of the problem and why the proposed solutions
> were mostly a band-aid. Basically, the specs of COPY
> and other internal aspects of Postgres are from the 32-bit era, when
> storing the contents of an entire CDROM in a single row/column was not
> anticipated as a valid use case.
> It's still a narrow use case today and applications that need to store
> big pieces of data like that should slice them in chunks, a bit like in
> pg_largeobject, except in much larger chunks, like 1MB.
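
For illustration, a minimal sketch of such a chunked-storage scheme
(table and column names are hypothetical, not anything pg_largeobject
actually uses) could look like this:

    -- Each piece of a large value gets its own row, so every field stays
    -- far below the 1GB limit and well within what COPY/pg_dump can handle.
    CREATE TABLE blob_chunk (
        blob_id   bigint  NOT NULL,
        chunk_no  integer NOT NULL,   -- 0-based position within the blob
        data      bytea   NOT NULL,   -- roughly 1MB per chunk
        PRIMARY KEY (blob_id, chunk_no)
    );

    -- The client reassembles a value by reading its chunks in order,
    -- never materializing the whole thing in a single field.
    SELECT chunk_no, data
      FROM blob_chunk
     WHERE blob_id = 1
     ORDER BY chunk_no;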


Should there not be some indication of this in the docs here?:

https://www.postgresql.org/docs/11/datatype-binary.html

> 
> [1] pg_dump / copy bugs with "big lines" ?
> https://www.postgresql.org/message-id/1836813.YmyOrS99PX%40ronan.dunklau.fr
> 
> Best regards,
> 


-- 
Adrian Klaver
adrian.klaver@aklaver.com

