Re: Out of memory error handling in frontend code
| From | Frédéric Yhuel |
|---|---|
| Subject | Re: Out of memory error handling in frontend code |
| Date | |
| Msg-id | 5a3058d1-1810-5a72-c50e-18df2de56fdf@dalibo.com |
| In reply to | Re: Out of memory error handling in frontend code (Daniel Gustafsson <daniel@yesql.se>) |
| List | pgsql-hackers |
Hi Daniel,
Thank you for your answer.
On 9/28/23 14:02, Daniel Gustafsson wrote:
>> On 28 Sep 2023, at 10:14, Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:
>
>> After some time, we understood that the 20 million large objects were responsible for the huge memory usage (more than 10 GB) by pg_dump.
>
> This sounds like a known issue [0] which has been reported several times, and
> one we should get around to fixing sometime.
>
Indeed, I saw some of these reports afterwards :)
>> I think a more useful error message would help for such cases.
>
> Knowing that this is a case that pops up, I agree that we could do better around
> the messaging here.
>
>> I haven't tried to get the patch ready for review; I know that the format of the messages isn't right. I'd like to know what you think of the idea first.
>
> I don't think adding more details is a bad idea, but it shouldn't require any
> knowledge about internals so I think messages like the one below needs to be
> reworded to be more helpful.
>
> +    if (loinfo == NULL)
> +    {
> +        pg_fatal("getLOs: out of memory");
> +    }
>
OK, here is a second version of the patch.
I didn't try to fix the path getLOs -> AssignDumpId -> catalogid_insert
-> [...] -> catalogid_allocate, but that's annoying because it amounts
to 11% of the memory allocations from valgrind's output.
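For anyone wanting to reproduce that kind of per-call-path breakdown, a heap-profiling run along these lines should work (a sketch, assuming a database named `mydb` with many large objects; massif attributes heap usage to allocation call stacks, which is where per-path percentages come from):

```shell
# Profile pg_dump's heap allocations with valgrind's massif tool.
valgrind --tool=massif --massif-out-file=massif.out \
    pg_dump --format=custom --file=/dev/null mydb

# Inspect the allocation tree and per-call-path percentages.
ms_print massif.out | less
```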
Best regards,
Frédéric
Attachments