On 11/6/18 3:47 AM, Daniel Verite wrote:
> Charles Martin wrote:
>
>> but the second one returned this:
>>
>> 0 "623140"
>> 1 "53"
>> 2 "12"
>> 3 "10"
>> 4 "1"
>> 5 "1"
>> 7 "1"
>> [null] "162"
>>
>> Not quite sure what that means, but if there is just a small number of
>> overly-large records, I might be able to delete them. If I can find them.
>
> The query was:
>
> SELECT octet_length(docfilecontents)/(1024*1024*100),
> count(*)
> FROM docfile
> GROUP BY octet_length(docfilecontents)/(1024*1024*100);
>
> The results above show that there is one document weighing over
> 700 MB (the first column being the multiple of 100 MB), one between
> 500 and 600 MB, one between 400 and 500 MB, 10 between 300 and
> 400 MB, and so on.
>
> The hex expansion performed by COPY must allocate twice that size,
> plus the rest of the row, and if the resulting size is above 1 GB, it
> will error out with the message you mentioned upthread:
> ERROR: invalid memory alloc request size <some value over 1 billion>.
> So there's no way it can deal with contents over 500 MB (whose hex
> expansion alone exceeds 1 GB), and the ones just under that limit may
> also be problematic.
To me that looks like a bug: putting data into a record that you then
cannot get back out.
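
As a rough check, something like the query below should list the rows
whose doubled (hex-expanded) size alone would exceed the 1 GB
allocation limit. This is only a sketch: it assumes docfile has a
single-column primary key, called docfile_id here, so adjust the names
to the actual schema:

SELECT docfile_id,
       octet_length(docfilecontents) / (1024*1024) AS size_mb
FROM docfile
WHERE 2 * octet_length(docfilecontents) > 1024*1024*1024
ORDER BY size_mb DESC;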
>
> A quick and dirty way of getting rid of these contents would be to
> nullify them. For instance, nullify anything over 400MB:
>
> UPDATE docfile SET docfilecontents=NULL
> WHERE octet_length(docfilecontents) > 1024*1024*400;
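
If you go that route, it may be safest to wrap it in a transaction and
check the reported row count before committing; going by the histogram
above, anything over 400 MB should touch exactly three rows (the single
documents in the 400, 500 and 700 MB buckets):

BEGIN;
UPDATE docfile SET docfilecontents = NULL
WHERE octet_length(docfilecontents) > 1024*1024*400;
-- expect "UPDATE 3" here; if the count looks wrong, ROLLBACK instead
COMMIT;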
>
> Or a cleaner solution would be to delete them with the application if
> that's possible. You may turn the above query into a SELECT that
> retrieves only the fields of interest (avoid SELECT * because of the
> huge column).
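
For instance, a minimal version of such a SELECT might look like this
(again assuming a docfile_id primary key; add whatever descriptive
columns the table really has, but leave docfilecontents itself out):

SELECT docfile_id, octet_length(docfilecontents) AS size_bytes
FROM docfile
WHERE octet_length(docfilecontents) > 1024*1024*400
ORDER BY size_bytes DESC;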
>
>
> Best regards,
>
--
Adrian Klaver
adrian.klaver@aklaver.com