Aw: Re: Fatal Error : Invalid Memory alloc request size 1236252631
From | Karsten Hilbert |
---|---|
Subject | Aw: Re: Fatal Error : Invalid Memory alloc request size 1236252631 |
Date | |
Msg-id | trinity-52d1c07e-0519-414d-817e-70638bf7986d-1692282932894@3c-app-gmx-bs52 |
In reply to | Re: Fatal Error : Invalid Memory alloc request size 1236252631 (Sai Teja <saitejasaichintalapudi@gmail.com>) |
Responses | Re: Re: Fatal Error : Invalid Memory alloc request size 1236252631 |
List | pgsql-general |
> Even I used PostgreSQL large objects, referring to this link, to store and retrieve large files (as bytea is not working):
> https://www.postgresql.org/docs/current/largeobjects.html
>
> But even now I am unable to fetch the data at once from a large object:
>
>     select lo_get(oid);
>
> Here I'm getting the same error message. But if I use
>
>     select data from pg_largeobject where loid = 49374
>
> then I can fetch the data, but only page-wise (the data is split into rows of 2KB each). So how can I fetch the data in a single step, rather than page by page, without any error?
>
> And I'm just wondering: how do applications store huge amounts of data, in the GBs? I know there is a 1GB limit per field set by PostgreSQL. If so, how does one deal with these kinds of situations? I would like to know, in order to handle real-world scenarios.

https://github.com/lzlabs/pg_dumpbinary/blob/master/README.md

might be of help

Karsten
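PS: If the goal is just to avoid one giant allocation, the three-argument form lo_get(loid, offset, length) can read the object in slices much larger than the 2KB pg_largeobject pages. A minimal, untested sketch, using the OID 49374 from your mail and an assumed chunk size of 100MB (the series bound would need to match the object's actual size):

    -- Read large object 49374 in 100MB slices; each lo_get() call
    -- only allocates one slice, so no single call comes near the
    -- 1GB allocation limit.  Offsets past the end return empty.
    SELECT n AS chunk_no,
           lo_get(49374, n * 104857600::bigint, 104857600) AS chunk
    FROM generate_series(0, 9) AS n;

Past 1GB there is no way to get the whole value into a single bytea field anyway, so the client has to reassemble the chunks itself (or stream them with the client-side lo_open/lo_read API).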