Re: pg_dump / copy bugs with "big lines" ?

From: Michael Paquier
Subject: Re: pg_dump / copy bugs with "big lines" ?
Date:
Msg-id: CAB7nPqSZyHH535q_HvtXzX0cmWregHoJorKHxfLHGRobHCsMrw@mail.gmail.com
In response to: Re: pg_dump / copy bugs with "big lines" ?  (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: pg_dump / copy bugs with "big lines" ?  (Jim Nasby <Jim.Nasby@BlueTreble.com>)
List: pgsql-hackers
On Wed, Apr 8, 2015 at 11:53 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, Apr 6, 2015 at 1:51 PM, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
>> In any case, I don't think it would be terribly difficult to allow a bit
>> more than 1GB in a StringInfo. Might need to tweak palloc too; ISTR there's
>> some 1GB limits there too.
>
> The point is, those limits are there on purpose.  Changing things
> arbitrarily wouldn't be hard, but doing it in a principled way is
> likely to require some thought.  For example, in the COPY OUT case,
> presumably what's happening is that we palloc a chunk for each
> individual datum, and then palloc a buffer for the whole row.  Now, we
> could let the whole-row buffer be bigger, but maybe it would be better
> not to copy all of the (possibly very large) values for the individual
> columns over into a row buffer before sending it.  Some refactoring
> that avoids the need for a potentially massive (1.6TB?) whole-row
> buffer would be better than just deciding to allow it.
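
To illustrate the pattern Robert is describing, here is a rough sketch --
not the actual copy.c code; the helper name and the natts/values/
out_functions arguments are placeholders -- of how a whole-row buffer gets
built from per-column output strings:

#include "postgres.h"
#include "fmgr.h"
#include "lib/stringinfo.h"

/*
 * Hypothetical helper, not the real CopyOneRowTo(): build one COPY OUT
 * line for natts column values into a caller-supplied row buffer.
 */
static void
build_copy_row(StringInfo rowbuf, int natts, Datum *values,
               FmgrInfo *out_functions)
{
    int         i;

    resetStringInfo(rowbuf);
    for (i = 0; i < natts; i++)
    {
        /* One palloc'd text string per column; each one is individually
         * bounded by the ~1GB palloc/varlena limit. */
        char       *string = OutputFunctionCall(&out_functions[i], values[i]);

        if (i > 0)
            appendStringInfoChar(rowbuf, '\t');

        /* This append goes through enlargeStringInfo(), which errors out
         * once the whole-row buffer would exceed MaxAllocSize (~1GB). */
        appendStringInfoString(rowbuf, string);
        pfree(string);
    }
}

Streaming each column string to the frontend as it is produced, instead of
concatenating everything into one row buffer first, is the kind of
refactoring suggested above.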

I think something to be aware of is that this will also require rethinking
the existing libpq functions that fetch a row during COPY, namely
PQgetCopyData, so that they can fetch chunks of data from a single row.
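
For reference, the shape of a client-side COPY OUT loop with the current
API is roughly the following (the connection string and the table name
big_table are placeholders); each successful PQgetCopyData() call hands
back one complete row in a single malloc'd buffer, which is exactly the
per-row granularity that would need to change to allow fetching a row in
chunks:

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("dbname=postgres");  /* placeholder */
    PGresult   *res;
    char       *buf;
    int         len;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    res = PQexec(conn, "COPY big_table TO STDOUT");
    if (PQresultStatus(res) != PGRES_COPY_OUT)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    /* In synchronous mode, each return value > 0 is one whole row; -1
     * means the COPY stream is finished, -2 means an error occurred. */
    while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
    {
        fwrite(buf, 1, len, stdout);
        PQfreemem(buf);
    }
    if (len == -2)
        fprintf(stderr, "COPY OUT failed: %s", PQerrorMessage(conn));

    /* Collect the final command status after the data stream ends. */
    res = PQgetResult(conn);
    PQclear(res);
    PQfinish(conn);
    return 0;
}
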
-- 
Michael


