On Wed, Aug 23, 2017 at 9:45 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
>>
>> ...
>> if (tbm->nentries <= tbm->maxentries / 2)
>> {
>> /*
>> * We have made enough room.
>> ...
>> I think we could try a higher fill factor, say, 0.9. tbm_lossify basically
>> just continues iterating over the hashtable with little overhead per
>> call, so calling it more frequently should not be a problem. On the other
>> hand, it would have to process fewer pages, and the bitmap would be less
>> lossy.
>>
>> I didn't benchmark the index scan per se with the 0.9 fill factor, but the
>> reduction of lossy pages was significant.
>
> I will try this and produce some performance numbers.
>
I have done the performance testing Alexander suggested (patch attached).

Performance results: I see a significant reduction in the lossy-pages
count in all of the queries, and a noticeable reduction in execution time
in some of them. I tested with two different work_mem settings. The test
results (lossy-pages count and execution time) are below.
TPCH benchmark: scale factor 20
Machine: 4-socket Power
Tested with max_parallel_workers_per_gather = 0

Work_mem: 20 MB
(Lossy pages count)
Query      head     patch
    4    166551     35478
    5    330679     35765
    6   1160339    211357
   14    666897    103275
   15   1160518    211544
   20   1982981    405903
(Time in ms)
Query      head     patch
    4     14849     14093
    5     76790     74486
    6     25816     14327
   14     16011     11093
   15     51381     35326
   20    211115    195501
Work_mem: 40 MB
(Lossy pages count)
Query      head     patch
    6    995223    195681
   14    337894     75744
   15    995417    195873
   20   1654016    199113
(Time in ms)
Query      head     patch
    6     23819     14571
   14     13514     11183
   15     49980     32400
   20    204441    188978
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers