Re: pg_dump / copy bugs with "big lines" ?

From: Alvaro Herrera
Subject: Re: pg_dump / copy bugs with "big lines" ?
Date:
Msg-id: 20161129171520.6s6xnr47vgaejykl@alvherre.pgsql
In response to: Re: pg_dump / copy bugs with "big lines" ?  ("Daniel Verite" <daniel@manitou-mail.org>)
List: pgsql-hackers
Daniel Verite wrote:

> If we consider what would happen with the latest patch on a platform
> with sizeof(int)=8 and a \copy invocation like this:
> 
> \copy (select big,big,big,big,big,big,big,big,...... FROM
>     (select lpad('', 1024*1024*200) as big) s) TO /dev/null
> 
> if we put enough copies of "big" in the select-list to grow over 2GB,
> and then over 4GB.

Oh, right, I was forgetting that.

> On a platform with sizeof(int)=4 this should normally fail over 2GB with
> "Cannot enlarge string buffer containing $X bytes by $Y more bytes"
> 
> I don't have an ILP64 environment myself to test, but I suspect
> it would malfunction instead of cleanly erroring out on such
> platforms.

I suspect nobody has such platforms, as they are mostly defunct.  But I
see an easy way to fix it, which is to use sizeof(int32).

> Also, without this limit, we can "COPY FROM/TO file" really huge rows, 4GB
> and beyond, like I tried successfully during the tests mentioned upthread
> (again with len as int64 on x86_64).
> So such COPYs would succeed or fail depending on whether they deal with
> a file or a network connection.
> Do we want this difference in behavior?

Such a patch would be for master only.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


