On Wed, Mar 12, 2014 at 5:26 PM, Kouhei Kaigai <kaigai@ak.jp.nec.com> wrote:
> Thanks for your efforts!
>>                   Head       patched     Diff
>> Select - 500K     772ms      2659ms     -200%
>> Insert - 400K     3429ms     1948ms       43%
>>                   (I am not sure how it improved in this case)
>> delete - 200K     2066ms     3978ms      -92%
>> update - 200K     3915ms     5899ms      -50%
>>
>> This patch shows how well the custom scan interface can be used, but
>> the patch itself has some performance problems that need to be
>> investigated.
>>
>> I attached the test script file used for the performance test.
>>
> First of all, it seems to me your test case has too small a data set,
> which allows all the data to be held in memory - roughly, 500K records
> of 200 bytes will consume about 100MB. Your configuration allocates
> 512MB of shared_buffers, and about 3GB of OS-level page cache is
> available. (Note that Linux uses free memory as disk cache adaptively.)
Thanks for the information, and a small correction: the total number of
records is 5 million. The select operation retrieves 500K records, and
the total table size is around 1GB.

Once I get your new patch rebased on the custom scan patch, I will test
the performance again after increasing the database size beyond the RAM
size, and I will also make sure that less memory is available for the
disk cache.
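
A rough sketch of the kind of setup I have in mind is below (the table
name test_tbl, its columns, and the row counts are only placeholders to
illustrate the idea, not the script attached above):

    -- Confirm how large the table is relative to shared_buffers
    -- (pg_total_relation_size includes indexes and TOAST data).
    SHOW shared_buffers;
    SELECT pg_size_pretty(pg_total_relation_size('test_tbl'));

    -- Inflate the table well past RAM so scans have to hit disk;
    -- 50 million rows of ~200 bytes each is roughly 10GB of heap.
    INSERT INTO test_tbl (id, payload)
    SELECT g, repeat('x', 200)
    FROM generate_series(5000001, 50000000) AS g;

    VACUUM ANALYZE test_tbl;

Between runs I would restart the server and, on Linux, drop the OS page
cache (echo 3 > /proc/sys/vm/drop_caches as root) so that the reported
times are not dominated by warm caches.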
Regards,
Hari Babu
Fujitsu Australia