Re: [REVIEW] Re: Compression of full-page-writes

From: Rahila Syed
Subject: Re: [REVIEW] Re: Compression of full-page-writes
Date:
Msg-id: CAH2L28t-nn42mOyAoDujVLu3cd38AE56Ue830c-rbFqWd-2E9Q@mail.gmail.com
In reply to: Re: [REVIEW] Re: Compression of full-page-writes  (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
>So, it seems like you're basically using malloc to work around the
>fact that a palloc failure is an error, and we can't throw an error in
>a critical section.  I don't think that's good; we want all of our
>allocations, as far as possible, to be tracked via palloc.  It might
>be a good idea to add a new variant of palloc or MemoryContextAlloc
>that returns NULL on failure instead of throwing an error; I've wanted
>that once or twice.  But in this particular case, I'm not quite seeing
>why it should be necessary

I am using malloc so that it returns NULL on failure, and the code proceeds without compressing the FPW when that happens.
Proceeding without compression seems more appropriate than throwing an error and exiting just because memory for compression could not be allocated.
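
For illustration only, the fallback path could be organized along the lines of the sketch below. This is not the patch itself; all names are illustrative, and zlib's compress2() merely stands in for whichever compressor the patch actually uses.

/* Minimal sketch of "compress if we can, otherwise write the page as-is".
 * zlib is only a stand-in compressor; identifiers are not from the patch. */
#include <stdlib.h>
#include <zlib.h>

#define BLCKSZ 8192                     /* PostgreSQL page size */

static char *compression_scratch;       /* obtained outside the critical section */

void
prepare_compression_buffer(void)
{
    /* If this fails we simply remember the NULL and skip compression later. */
    compression_scratch = malloc(compressBound(BLCKSZ));
}

/* Returns the buffer that should go into the WAL record and sets *len.
 * Falls back to the uncompressed page when no scratch space is available
 * or when compression does not actually shrink the page. */
const char *
block_to_write(const char *page, uLongf *len)
{
    uLongf clen = compressBound(BLCKSZ);

    if (compression_scratch != NULL &&
        compress2((Bytef *) compression_scratch, &clen,
                  (const Bytef *) page, BLCKSZ, Z_DEFAULT_COMPRESSION) == Z_OK &&
        clen < BLCKSZ)
    {
        *len = clen;
        return compression_scratch;
    }

    *len = BLCKSZ;
    return page;                        /* uncompressed fallback */
}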

>the number of backup blocks per record is
>limited to some pretty small number, so it ought to be possible to
>preallocate enough memory to compress them all, perhaps just by
>declaring a global variable like char wal_compression_space[8192]; or
>whatever.

In the updated patch, a static global variable is added, and memory is allocated to it from the heap using malloc outside the critical section. The size of the memory block is 4 * (BkpBlock header size) + 4 * BLCKSZ.
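
Very roughly, and with illustrative names only (the identifiers in the actual patch may differ), the preallocation amounts to something like this:

#include <stdlib.h>

#define BLCKSZ          8192    /* PostgreSQL page size */
#define MAX_BKP_BLOCKS  4       /* backup blocks per WAL record */

/* Stand-in for the real BkpBlock header; only its size matters here. */
typedef struct BkpBlock { char header[32]; } BkpBlock;

static char *compression_scratch = NULL;

/* Run once at backend startup, well outside any critical section,
 * so an allocation failure can be handled gracefully: a NULL result
 * simply means full-page writes stay uncompressed. */
void
init_fpw_compression(void)
{
    compression_scratch = malloc(MAX_BKP_BLOCKS * (sizeof(BkpBlock) + BLCKSZ));
}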


Thank you,



On Mon, Aug 18, 2014 at 10:40 PM, Robert Haas <robertmhaas@gmail.com> wrote:
On Thu, Jul 3, 2014 at 3:58 PM, Rahila Syed <rahilasyed90@gmail.com> wrote:
> Updated version of patches are attached.
> Changes are as follows
> 1. Improved readability of the code as per the review comments.
> 2. Addition of block_compression field in BkpBlock structure to store
> information about compression of block. This provides for switching
> compression on/off and changing compression algorithm as required.
> 3. Handling of OOM in the critical section by checking the return value of malloc
> and proceeding without compression of FPW if the return value is NULL.

So, it seems like you're basically using malloc to work around the
fact that a palloc failure is an error, and we can't throw an error in
a critical section.  I don't think that's good; we want all of our
allocations, as far as possible, to be tracked via palloc.  It might
be a good idea to add a new variant of palloc or MemoryContextAlloc
that returns NULL on failure instead of throwing an error; I've wanted
that once or twice.  But in this particular case, I'm not quite seeing
why it should be necessary - the number of backup blocks per record is
limited to some pretty small number, so it ought to be possible to
preallocate enough memory to compress them all, perhaps just by
declaring a global variable like char wal_compression_space[8192]; or
whatever.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
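
For completeness, the no-allocation alternative sketched in Robert's message above (a scratch buffer in static storage, so nothing can fail inside the critical section) is essentially a one-line declaration; whether 8192 bytes covers the worst case of four backup blocks per record is a separate question:

/* Scratch space in static storage for the life of the process; no
 * malloc/palloc is ever needed, so no failure path exists inside the
 * critical section.  Sized as in the message above; covering four
 * BLCKSZ backup blocks plus headers would need a larger array. */
static char wal_compression_space[8192];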
