Discussion: Support allocating memory for large strings


Support allocating memory for large strings

From
Maxim Zibitsker
Date:
PostgreSQL's MaxAllocSize limit prevents storing individual variable-length character strings exceeding ~1GB, causing
"invalid memory alloc request size" errors during INSERT operations on tables with large text columns. Example
reproduction included in artifacts.md.
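
A minimal sketch of the failure mode (the full reproduction is in artifacts.md;
the table and sizes here are illustrative only, with the value built
server-side via repeat() to keep the example self-contained):

CREATE TABLE big_text (payload text);

-- A single value whose in-memory size exceeds MaxAllocSize (1GB - 1 byte)
-- fails before TOAST ever gets a chance to compress or slice it:
INSERT INTO big_text (payload)
SELECT repeat('x', 1073741824);
-- ERROR:  invalid memory alloc request size 1073741828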

This limitation also affects pg_dump when exporting a PostgreSQL database with such data. The attached patches
demonstrate a proof of concept using palloc_extended with MCXT_ALLOC_HUGE in the write path. For the read path, there
are a couple of possible approaches: extending existing functions to handle huge allocations, or implementing a chunked
storage mechanism that avoids single large allocations.

Thoughts?

Maxim



Attachments

Re: Support allocating memory for large strings

From
Tom Lane
Date:
Maxim Zibitsker <max.zibitsker@gmail.com> writes:
> PostgreSQL's MaxAllocSize limit prevents storing individual variable-length character strings exceeding ~1GB, causing
> "invalid memory alloc request size" errors during INSERT operations on tables with large text columns.

This is news to no one.  We are not especially interested in trying to
relax that limit, because doing so would bleed over into approximately
everything in the backend, and create opportunities for
integer-overflow bugs in many places that are perfectly okay today.
The cost-benefit ratio for changing this decision is horrible.

> The attached patches demonstrate a proof of concept using
> palloc_extended with MCXT_ALLOC_HUGE in the write path.

"Proof of concept"?  This can't possibly fix your problem, because it
does nothing for the fact that tuple size fields are still limited
to 1GB, as are varlena headers for individual fields.  A serious
attack on this limitation, at a guess, would require a patch on the
order of 100K lines, and that might be an underestimate.
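
For context, the arithmetic behind that ceiling: a 4-byte varlena header
reserves 2 bits for flags, leaving 30 bits of length, so a single datum tops
out just below 1GB, the MaxAllocSize figure that shows up in these errors:

SELECT (2^30 - 1)::bigint AS max_single_datum_bytes;  -- 1073741823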

            regards, tom lane



Re: Support allocating memory for large strings

From
Jose Luis Tallon
Date:
On 8/11/25 3:15, Maxim Zibitsker wrote:
> PostgreSQL's MaxAllocSize limit prevents storing individual variable-length character strings exceeding ~1GB, causing
> "invalid memory alloc request size" errors during INSERT operations on tables with large text columns. Example
> reproduction included in artifacts.md.

Tom Lane's very appropriate response notwithstanding....

a) Why is this a problem? (Please share a bit more about your intended 
use case)

b) Why would someone need to store >1GB worth of TEXT (in a single 
string, no less!) in a column in an (albeit very flexible) Relational 
Database?

     (I'm assuming no internal structure that would allow such an amount of 
text to be split/spread over multiple records)

c) There exist LOBs (Large Objects) intended for this use, precisely... 
why is this mechanism not a good solution to your need? (Rough sketch 
below, after point d.)

d) Wouldn't a (journalling) File System (with a slim abstraction layer 
on top for directory hashing/indexing) be a better solution for this 
particular application?

     Full Text Search on the stored data doesn't look like it would ever 
be performant... there exist specialized tools for that
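
Re point c, a rough sketch of the large-object route (table and column names
are made up for illustration; for >1GB payloads the write would be streamed
with the client-side lo_* API rather than passed as a single literal):

CREATE TABLE docs (id int PRIMARY KEY, payload_oid oid);

-- Keep only the large object's OID in the row; large objects are written
-- and read in chunks, so nothing has to exist as a single >1GB datum.
INSERT INTO docs VALUES (1, lo_from_bytea(0, 'big payload goes here'::bytea));

-- Read a slice back (offset 0, first 1MB) rather than the whole thing:
SELECT lo_get(payload_oid, 0, 1000000) FROM docs WHERE id = 1;

-- Remove the large object when the row goes away:
SELECT lo_unlink(payload_oid) FROM docs WHERE id = 1;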


And... how did you get "invalid" data into the database, data that pg_dump 
wouldn't process, in the first place? (Maybe I'm just speculating/projecting 
and didn't pick up the nuance properly.)


Mostly curious about the problem / intended use case.... when we 
explored limits and limitations in Postgres almost 15 years ago, we 
never even considered this :o



Thanks,

-- 
Parkinson's Law: Work expands to fill the time allotted to it.




Re: Support allocating memory for large strings

From
Nathan Bossart
Date:
On Fri, Nov 07, 2025 at 09:32:45PM -0500, Tom Lane wrote:
> Maxim Zibitsker <max.zibitsker@gmail.com> writes:
>> PostgreSQL's MaxAllocSize limit prevents storing individual
>> variable-length character strings exceeding ~1GB, causing "invalid
>> memory alloc request size" errors during INSERT operations on tables
>> with large text columns.
> 
> This is news to no one.  We are not especially interested in trying to
> relax that limit, because doing so would bleed over into approximately
> everything in the backend, and create opportunities for
> integer-overflow bugs in many places that are perfectly okay today.
> The cost-benefit ratio for changing this decision is horrible.

FWIW something I am hearing about more often these days, and what I believe
Maxim's patch is actually after, is the 1GB limit on row size.  Even if
each field doesn't exceed 1GB (which is what artifacts.md seems to
demonstrate), heap_form_tuple() and friends can fail to construct the whole
tuple.  This doesn't seem to be covered in the existing documentation about
limits [0].

[0] https://www.postgresql.org/docs/devel/limits.html
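
A quick illustration of the row-size case (sizes are made up; each field is
well under 1GB, but the tuple as a whole is not, so the insert fails while
the tuple is being formed, before TOAST can shrink anything):

CREATE TABLE two_fields (a text, b text);

INSERT INTO two_fields
SELECT repeat('x', 600000000), repeat('x', 600000000);
-- fails with "invalid memory alloc request size" even though each
-- individual value is fine on its own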

-- 
nathan



Re: Support allocating memory for large strings

From
Tom Lane
Date:
Nathan Bossart <nathandbossart@gmail.com> writes:
> FWIW something I am hearing about more often these days, and what I believe
> Maxim's patch is actually after, is the 1GB limit on row size.  Even if
> each field doesn't exceed 1GB (which is what artifacts.md seems to
> demonstrate), heap_form_tuple() and friends can fail to construct the whole
> tuple.  This doesn't seem to be covered in the existing documentation about
> limits [0].

Yeah.  I think our hopes of relaxing the 1GB limit on individual
field values are about zero, but maybe there is some chance of
allowing tuples that are wider than that.  The notion that it's
a one-line fix is still ludicrous though :-(

One big problem with a scheme like that is "what happens when
I try to make a bigger-than-1GB tuple into a composite datum?".
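
Concretely, that question comes up for anything that has to materialize the
whole row as one datum, e.g. (hypothetical table name):

SELECT w FROM wide_table w;               -- whole-row reference
SELECT row_to_json(w) FROM wide_table w;  -- composite passed to a function
SELECT array_agg(w) FROM wide_table w;    -- composites collected into an array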

Another issue is what happens when a wider-than-1GB tuple needs
to be sent to or from clients.  I think there are assumptions
in the wire protocol about message lengths fitting in an int,
for example.  Even if the protocol were okay with it, I wouldn't
count on client libraries not to fall over.

On the whole, it's a nasty can of worms, and I stand by the
opinion that the cost-benefit ratio of removing the limit is
pretty awful.

            regards, tom lane



Re: Support allocating memory for large strings

From
Jim Mlodgenski
Date:
On Mon, Nov 10, 2025 at 4:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> Nathan Bossart <nathandbossart@gmail.com> writes:
> > FWIW something I am hearing about more often these days, and what I believe
> > Maxim's patch is actually after, is the 1GB limit on row size.  Even if
> > each field doesn't exceed 1GB (which is what artifacts.md seems to
> > demonstrate), heap_form_tuple() and friends can fail to construct the whole
> > tuple.  This doesn't seem to be covered in the existing documentation about
> > limits [0].
>
> Yeah.  I think our hopes of relaxing the 1GB limit on individual
> field values are about zero, but maybe there is some chance of
> allowing tuples that are wider than that.  The notion that it's
> a one-line fix is still ludicrous though :-(
>
> One big problem with a scheme like that is "what happens when
> I try to make a bigger-than-1GB tuple into a composite datum?".
>
> Another issue is what happens when a wider-than-1GB tuple needs
> to be sent to or from clients.  I think there are assumptions
> in the wire protocol about message lengths fitting in an int,
> for example.  Even if the protocol were okay with it, I wouldn't
> count on client libraries not to fall over.
>
> On the whole, it's a nasty can of worms, and I stand by the
> opinion that the cost-benefit ratio of removing the limit is
> pretty awful.
>

This is a case that I see with users on a somewhat recurring basis. It is
typically from applications that were migrated from other databases. It's not
super common but when it happens, they bump into a fair amount of pain.

Consider the following contrived example. A user inserts a row and then updates
that row to exceed 1GB. That row effectively becomes unreadable without
breaking it up into smaller pieces, and tools like pg_dump can't export it.

CREATE TABLE wide_row (
  id int, a varchar, b varchar, c varchar, d varchar, e varchar, f varchar,
  g varchar, h varchar, i varchar, j varchar, k varchar, l varchar, m varchar,
  n varchar, o varchar);

INSERT INTO wide_row (id, a, b, c, d, e, f, g, h, i)
VALUES (1, repeat('x', (10^8)::int), repeat('x', (10^8)::int),
        repeat('x', (10^8)::int), repeat('x', (10^8)::int),
        repeat('x', (10^8)::int), repeat('x', (10^8)::int),
        repeat('x', (10^8)::int), repeat('x', (10^8)::int),
        repeat('x', (10^8)::int));

UPDATE wide_row
SET j = repeat('x', (10^8)::int), k = repeat('x', (10^8)::int),
    l = repeat('x', (10^8)::int), m = repeat('x', (10^8)::int),
    n = repeat('x', (10^8)::int), o = repeat('x', (10^8)::int)
WHERE id = 1;

SELECT * FROM wide_row WHERE id = 1;
ERROR:  string buffer exceeds maximum allowed length (1073741823 bytes)
DETAIL:  Cannot enlarge string buffer containing 1000000051 bytes by
100000000 more bytes.

Putting some information in the limits doc page can help some users avoid
the issue, and maybe a better error message could help others fix the issue
faster when it occurs. Ideally, we should fix things far enough that we can
dump out any valid row in the table. Perhaps less common things like composites
could just continue to throw an error? If we don't want to open that can of
worms, we should make it clear that the effective row size limit is 1GB.



Re: Support allocating memory for large strings

From
David Rowley
Date:
On Thu, 8 Jan 2026 at 07:31, Jim Mlodgenski <jimmy76@gmail.com> wrote:
> SELECT * FROM wide_row WHERE id = 1;
> ERROR:  string buffer exceeds maximum allowed length (1073741823 bytes)
> DETAIL:  Cannot enlarge string buffer containing 1000000051 bytes by
> 100000000 more bytes.

At least it's significantly better than it used to be before
1029bdec2, but I agree it doesn't give you much context on what you
might do to solve the issue. One solution is to move away from using
StringInfo for backend protocol stuff. That doesn't seem ideal from a
code reusability point of view. Having the caller specify an error
function callback means more memory in StringInfoData and is complex
because of how many ways there are to initialise a StringInfo.

> we should be making it clear that the effective row size limit is 1GB.

Maybe "Backend protocol message length" -> "1GB" is worth putting in there.

David