Re: [WIP PATCH] for Performance Improvement in Buffer Management
| From | Amit Kapila |
|---|---|
| Subject | Re: [WIP PATCH] for Performance Improvement in Buffer Management |
| Date | |
| Msg-id | 6C0B27F7206C9E4CA54AE035729E9C38285442C5@szxeml509-mbx |
| In reply to | Re: [WIP PATCH] for Performance Improvement in Buffer Management (Amit kapila <amit.kapila@huawei.com>) |
| List | pgsql-hackers |
On Monday, October 22, 2012 11:21 PM Amit kapila wrote:
On Sunday, October 21, 2012 1:29 PM Amit kapila wrote:
On Saturday, October 20, 2012 11:03 PM Jeff Janes wrote:
On Fri, Sep 7, 2012 at 6:14 AM, Amit kapila <amit.kapila@huawei.com> wrote:

>>>>> The results for the updated code are attached with this mail.
>>>>> The scenario is the same as in the original mail.

>>>> The data for shared_buffers = 7GB is attached with this mail. I have also attached the scripts used to take this data.

>>> Is this result reproducible? Did you monitor IO (with something like
>>> vmstat) to make sure there was no IO going on during the runs?

>> Yes, I have reproduced it 2 times. However, I shall reproduce it once more and use vmstat as well.
>> I have not observed it with vmstat, but it is observable in the data.
>> When I kept shared_buffers = 5G, the tps is higher, and when I increased it to 7G, the tps is reduced, which shows that some I/O has started happening.
>> When I increased it to 10G, the tps reduced drastically, which shows there is a lot of I/O. Tomorrow I will post the 10G shared_buffers data as well.

> Today I have again collected the data for the configuration shared_buffers = 7G, along with vmstat.
> The data and vmstat information (bi) are attached with this mail. It is observed from the vmstat info that I/O is happening in both cases; however, after running for
> a long time, the I/O is comparatively less with the new patch.

Please find the data for shared_buffers = 5G and 10G attached with this mail. Following is the consolidated data for the avg.
of multiple runs:

    Patch                 tps@c8   tps@c16   tps@c32   tps@c64   tps@c100
    head, sb=5G            59731     59185     56282     30068      12608
    head+patch, sb=5G      59177     59957     57831     47986      21325
    head, sb=7G             5866      6319      6604      5841
    head+patch, sb=7G      15939     40501     38199     18025
    head, sb=10G            2079      2824      3217      3206       2657
    head+patch, sb=10G      2044      2706      3012      2967       2515

The scripts for collecting the performance data are also attached with this mail:

    # $1 = Initialize pgbench
    # $2 = Scale factor
    # $3 = Number of clients
    # $4 = Number of pgbench threads
    # $5 = Execution time in seconds
    # $6 = Shared buffers
    # $7 = Number of sample runs
    # $8 = Drop the tables

E.g., taking a 16GB database and 5GB shared buffers:

    ./run_reading.sh 1 1200 8 8 1200 5GB 4 0
    ./run_reading.sh 0 1200 16 16 1200 5GB 4 0
    ./run_reading.sh 0 1200 32 32 1200 5GB 4 0
    ./run_reading.sh 0 1200 64 64 1200 5GB 4 0

Let me know your suggestions on how we can proceed to determine whether such a patch is a win or a loss.

With Regards,
Amit Kapila.
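[Editorial note: for readers without the attachment, the following is a hypothetical dry-run sketch of what a wrapper like run_reading.sh might do with the parameters listed above. The real script is attached to the original mail; the exact pgbench flags, the table names, and the print-instead-of-execute form are assumptions, and the shared_buffers reconfiguration/restart step is omitted.]

```shell
# Hypothetical sketch of a run_reading.sh-style wrapper (assumed, not the
# attached script). It prints the commands rather than executing them.
run_reading() {
    # $1 init flag, $2 scale factor, $3 clients, $4 threads, $5 seconds,
    # $6 shared_buffers, $7 number of sample runs, $8 drop-tables flag
    init=$1; scale=$2; clients=$3; threads=$4; secs=$5; sb=$6; runs=$7; drop=$8

    # On the benchmark machine, $sb would be written into postgresql.conf
    # (shared_buffers) and the server restarted before the runs.
    if [ "$init" = 1 ]; then
        echo "pgbench -i -s $scale postgres"    # initialize pgbench tables
    fi

    i=1
    while [ "$i" -le "$runs" ]; do              # one select-only run per sample
        echo "pgbench -S -c $clients -j $threads -T $secs postgres"
        i=$((i + 1))
    done

    if [ "$drop" = 1 ]; then                    # optionally drop the tables
        echo "psql -c 'DROP TABLE pgbench_accounts, pgbench_branches, pgbench_history, pgbench_tellers' postgres"
    fi
}

# The 5GB / 8-client invocation from the mail:
run_reading 1 1200 8 8 1200 5GB 4 0
```

With scale factor 1200 (roughly a 16GB database), the first invocation prints the initialization command followed by four identical 1200-second select-only runs.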
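[Editorial note: on the vmstat check discussed above, a minimal sketch of how the "bi" (blocks read in) column can be averaged from a vmstat log to compare read I/O between runs. The two-sample log below is synthetic, for illustration only; in the actual runs the log would come from something like `vmstat 1 1200 > vmstat.log` started alongside pgbench.]

```shell
# Synthetic vmstat log (illustrative values); a real one would be captured
# with e.g. `vmstat 1 1200 > vmstat.log` while pgbench is running.
cat > vmstat.log <<'EOF'
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 123456   7890 111213    0    0   100    20  500  600 10  5 80  5  0
 2  0      0 123000   7890 111300    0    0   300    10  520  610 12  6 77  5  0
EOF

# "bi" (blocks read in per second) is field 9; skip the two header lines.
# A consistently nonzero average indicates the working set no longer fits
# in cache and reads are going to disk.
awk 'NR > 2 { sum += $9; n++ } END { if (n) printf "avg bi: %.1f\n", sum/n }' vmstat.log
```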
Attachments