Re: Performance Improvement by reducing WAL for Update Operation

From: Amit Kapila
Subject: Re: Performance Improvement by reducing WAL for Update Operation
Date:
Msg-id: 006601ce6373$649f1860$2ddd4920$@kapila@huawei.com
In reply to: Re: Performance Improvement by reducing WAL for Update Operation  (Heikki Linnakangas <hlinnakangas@vmware.com>)
Responses: Re: Performance Improvement by reducing WAL for Update Operation  (Hari Babu <haribabu.kommi@huawei.com>)
List: pgsql-hackers
On Wednesday, March 06, 2013 2:57 AM Heikki Linnakangas wrote:
> On 04.03.2013 06:39, Amit Kapila wrote:
> > On Sunday, March 03, 2013 8:19 PM Craig Ringer wrote:
> >> On 02/05/2013 11:53 PM, Amit Kapila wrote:
> >>>> Performance data for the patch is attached with this mail.
> >>>> Conclusions from the readings (these are same as my previous
> patch):
> >>>>
> The attached patch also just adds overhead in most cases, but the
> overhead is much smaller in the worst case. I think that's the right
> tradeoff here - we want to avoid scenarios where performance falls off
> the cliff. That said, if you usually just get a slowdown, we certainly
> can't make this the default, and if we can't turn it on by default,
> this probably just isn't worth it.
> 
> The attached patch contains the variable-hash-size changes I posted in
> the "Optimizing pglz compressor". But in the delta encoding function,
> it goes further than that, and contains some further micro-
> optimizations:
> the hash is calculated in a rolling fashion, and it uses a specialized
> version of the pglz_hist_add macro that knows that the input can't
> exceed 4096 bytes. Those changes shaved off some cycles, but you could
> probably do more. One idea is to only add every 10 bytes or so to the
> history lookup table; that would sacrifice some compressibility for
> speed.
> 
> If you could squeeze pglz_delta_encode function to be cheap enough that
> we could enable this by default, this would be pretty cool patch. Or at
> least, the overhead in the cases that you get no compression needs to
> be brought down, to about 2-5 % at most I think. If it can't be done
> easily, I feel that this probably needs to be dropped.
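To make the rolling-hash idea described above a bit more concrete, here is a
minimal, self-contained sketch (this is not the pglz source; HIST_SIZE,
hist_start, the shift amount and the sample input are assumptions made only
for illustration). The point is that the hash for the next input position is
derived from the previous hash by folding in one new byte, instead of
re-hashing a whole window at every position:

    #include <stdio.h>
    #include <string.h>

    #define HIST_SIZE 4096                  /* power of two, so masking is cheap */
    #define HIST_MASK (HIST_SIZE - 1)
    /* fold one byte into the running hash; older bytes shift out over time */
    #define ROLL(h, b) ((((h) << 2) ^ (b)) & HIST_MASK)

    int
    main(void)
    {
        const unsigned char *src = (const unsigned char *) "abcdefabcdefabcdef";
        int         len = (int) strlen((const char *) src);
        int         hist_start[HIST_SIZE];  /* latest input offset per hash value */
        unsigned int h = 0;
        int         i;

        memset(hist_start, -1, sizeof(hist_start));    /* -1 = slot never used */

        for (i = 0; i < len; i++)
        {
            h = ROLL(h, src[i]);            /* one shift+xor per byte, no re-hash */

            if (hist_start[h] >= 0)
                printf("byte %d: candidate match near offset %d\n",
                       i, hist_start[h]);
            hist_start[h] = i;              /* remember the newest occurrence */
        }
        return 0;
    }

The "only add every 10th byte" idea mentioned above would simply skip the
hist_start update for most positions, trading some compressibility for speed.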

After trying some more to optimize pglz_delta_encode(), I found that if we
also add the new data to the history, both the compression results and the
CPU utilization are much better.

In addition to the pglz micro-optimization changes, the following changes are
made in the modified patch:

1. The unmatched new data is also added to the history, so that it can be
referenced later.
2. To incorporate this change in the LZ algorithm, one extra control bit is
needed to indicate whether the data comes from the old or the new tuple (a
simplified sketch follows below).
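
To illustrate the two points above, here is a simplified, self-contained
sketch (not the patch itself; the names, the sample tuples, the match
threshold and the printed output format are assumptions for illustration
only). Each copy instruction carries one extra flag saying whether it refers
to the old tuple or to already-emitted bytes of the new tuple, and literal
bytes become part of the referencable new data:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* naive longest-match scan; the real encoder uses a history hash table */
    static int
    find_match(const char *hay, int haylen, const char *needle, int needlen, int *off)
    {
        int     best = 0;
        int     i;

        for (i = 0; i < haylen; i++)
        {
            int     l = 0;

            while (l < needlen && i + l < haylen && hay[i + l] == needle[l])
                l++;
            if (l > best)
            {
                best = l;
                *off = i;
            }
        }
        return best;
    }

    int
    main(void)
    {
        const char *old_tup = "name=alice city=oslo";
        const char *new_tup = "name=alice city=paris, paris";
        int         new_len = (int) strlen(new_tup);
        int         i = 0;

        while (i < new_len)
        {
            int     off_old = 0, off_new = 0;
            int     l_old = find_match(old_tup, (int) strlen(old_tup),
                                       new_tup + i, new_len - i, &off_old);
            /* change #1: already-emitted new data is searchable history too */
            int     l_new = find_match(new_tup, i, new_tup + i, new_len - i, &off_new);

            if (l_old >= 3 || l_new >= 3)   /* a copy only pays off beyond a few bytes */
            {
                /* change #2: one control bit records which source the copy uses */
                bool    from_new = (l_new > l_old);
                int     len = from_new ? l_new : l_old;
                int     off = from_new ? off_new : off_old;

                printf("COPY %s off=%d len=%d\n", from_new ? "NEW" : "OLD", off, len);
                i += len;
            }
            else
            {
                printf("LITERAL '%c'\n", new_tup[i]);   /* joins the new-data history */
                i++;
            }
        }
        return 0;
    }

On this toy input the encoder emits one copy from the old tuple for the
unchanged prefix, a few literals, and then a copy that refers back to the new
tuple itself, which is exactly what the extra control bit enables.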

Performance Data
-----------------

Head code: 
               testname                 | wal_generated |     duration 
-----------------------------------------+---------------+------------------
two short fields, no change             |    1232908016 | 36.3914430141449
two short fields, one changed           |    1232904040 | 36.5231261253357
two short fields, both changed          |    1235215048 | 37.7455959320068
one short and one long field, no change |    1051394568 | 24.418487071991
ten tiny fields, all changed            |    1395189872 | 43.2316210269928
hundred tiny fields, first 10 changed   |     622156848 | 21.9155580997467
hundred tiny fields, all changed        |     625962056 | 22.3296411037445
hundred tiny fields, half changed       |     621901128 | 21.3881061077118
hundred tiny fields, half nulled        |     557708096 | 19.4633228778839

pglz-with-micro-optimization-compress-using-newdata-1: 
               testname                 | wal_generated |     duration 
-----------------------------------------+---------------+------------------
two short fields, no change             |    1235992768 | 37.3365149497986
two short fields, one changed           |    1240979256 | 36.897796869278
two short fields, both changed          |    1236079976 | 38.4273149967194
one short and one long field, no change |     651010944 | 20.9490079879761
ten tiny fields, all changed            |    1315606864 | 42.5771369934082
hundred tiny fields, first 10 changed   |     459134432 | 17.4556930065155
hundred tiny fields, all changed        |     456506680 | 17.8865270614624
hundred tiny fields, half changed       |     454784456 | 18.0130441188812
hundred tiny fields, half nulled        |     486675784 | 18.6600229740143

Observations
---------------
1. It yielded compression in more cases (see all the "hundred tiny fields"
cases).
2. CPU utilization is also better.


Performance data for pgbench-related scenarios is attached in the document
(pgbench_lz_opt_compress_using_newdata.htm):

1. Better reduction in WAL.
2. A TPS increase can be observed once the record size is >= 250.
3. There is a small performance penalty for a single thread (0.04~3.45), but
even where the single-thread penalty is 3.45, the TPS improvement for 8
threads is high.

Do you think this meets the conditions you have in mind for proceeding
further with this patch?


Thanks to Hari Babu for helping with the implementation of this idea and for
taking the performance data.


With Regards,
Amit Kapila.
