On Thu, Jun 6, 2013 at 3:01 AM, Amit Kapila <amit.kapila@huawei.com> wrote:
> To avoid above 3 factors in test readings, I used below steps:
> 1. Initialize the database with scale factor such that database size +
> shared_buffers = RAM (shared_buffers = 1/4 of RAM).
> For example:
> Example -1
> if RAM = 128G, then initialize db with scale factor = 6700
> and shared_buffers = 32GB.
> Database size (98 GB) + shared_buffers (32 GB) = 130 GB (which
> is approximately equal to total RAM)
> Example -2 (this is based on your test m/c)
> If RAM = 64GB, then initialize db with scale factor = 3400
> and shared_buffers = 16GB.
> 2. reboot m/c
> 3. Load all buffers with data (tables/indexes of pgbench) using pg_prewarm.
> I loaded the data 3 times, so that the usage count of the buffers would be
> approximately 3.
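[The quoted setup (Example 2, the 64GB machine) might be scripted roughly as below. This is a sketch, not the poster's actual commands: "bench" is an arbitrary database name, the relation list assumes the default pgbench schema, and pg_prewarm is assumed to be installed from the patch/extension under discussion.]

```shell
# Step 1: initialize pgbench data, ~50GB at scale factor 3400
# ("bench" is an arbitrary database name).
pgbench -i -s 3400 bench

# Step 2: set shared_buffers = 16GB in postgresql.conf, then reboot.

# Step 3: load tables and indexes three times via pg_prewarm
# (relation list assumes the default pgbench schema).
for rel in pgbench_accounts pgbench_accounts_pkey \
           pgbench_branches pgbench_tellers pgbench_history; do
    for pass in 1 2 3; do
        psql -d bench -c "SELECT pg_prewarm('$rel')"
    done
done
```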
Hmm. I don't think the usage count will actually end up being 3,
though, because the amount of data you're loading is sized to 3/4 of
RAM, and shared_buffers is just 1/4 of RAM, so I think that each run
of pg_prewarm will end up turning over the entire cache and you'll
never get any usage counts more than 1 this way. Am I confused?
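A toy simulation makes the point concrete. The sketch below is not PostgreSQL's actual buffer-manager code, just a minimal clock-sweep model with the scenario's proportions: the data set is 3x the buffer pool, so three sequential prewarm passes never revisit a page before it has been evicted, every access is a miss, and no usage count ever exceeds 1.

```python
# Toy clock-sweep simulation (a sketch, not PostgreSQL's real
# buffer manager): data set is 3x the pool, scanned three times.
POOL = 100              # stand-in for shared_buffers, in buffers
PAGES = 3 * POOL        # data set is 3x the pool, as in the setup

buf_page = [None] * POOL    # page held by each buffer
usage = [0] * POOL          # per-buffer usage count (capped at 5)
page_at = {}                # page -> buffer index
hand = 0                    # clock hand
hits = 0

def access(page):
    global hand, hits
    if page in page_at:                     # hit: bump the usage count
        i = page_at[page]
        usage[i] = min(usage[i] + 1, 5)
        hits += 1
        return
    while True:                             # miss: sweep for a victim
        if usage[hand] == 0:
            if buf_page[hand] is not None:
                del page_at[buf_page[hand]]
            buf_page[hand] = page
            page_at[page] = hand
            usage[hand] = 1
            hand = (hand + 1) % POOL
            return
        usage[hand] -= 1                    # second chance: decrement, move on
        hand = (hand + 1) % POOL

for _ in range(3):                          # three pg_prewarm-style passes
    for p in range(PAGES):
        access(p)

print("hits:", hits, "max usage count:", max(usage))
# -> hits: 0 max usage count: 1
```

Each pass evicts every page loaded by the previous one before it can be touched again, so the three passes buy nothing: zero hits, and every surviving buffer sits at usage count 1.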
I wonder if it would be beneficial to test the case where the database
size is just a little more than shared_buffers. I think that would
lead to a situation where the usage counts are high most of the time,
which - now that you mention it - seems like the sweet spot for this
patch.
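[The same toy model, rerun under the suggested variant, shows why that regime is different. Again a sketch, not PostgreSQL's real algorithm: the data set is only 10% larger than the pool and access is random, pgbench-style, so most accesses hit and usage counts get pushed up instead of being flattened by constant eviction.]

```python
# Same toy clock sweep, but with a database only slightly larger
# than the pool and random (pgbench-like) access.
import random

POOL = 100
PAGES = 110            # database just a little bigger than shared_buffers

buf_page = [None] * POOL
usage = [0] * POOL
page_at = {}
hand = 0
hits = 0

def access(page):
    global hand, hits
    if page in page_at:                     # hit: bump the usage count
        i = page_at[page]
        usage[i] = min(usage[i] + 1, 5)
        hits += 1
        return
    while True:                             # miss: sweep for a victim
        if usage[hand] == 0:
            if buf_page[hand] is not None:
                del page_at[buf_page[hand]]
            buf_page[hand] = page
            page_at[page] = hand
            usage[hand] = 1
            hand = (hand + 1) % POOL
            return
        usage[hand] -= 1
        hand = (hand + 1) % POOL

random.seed(0)
N = 50 * PAGES
for _ in range(N):
    access(random.randrange(PAGES))

print("hit rate:", hits / N, "buffers at max usage:", usage.count(5))
```

With ~90% of pages resident at any moment, the hit rate is high and many buffers accumulate nonzero usage counts, which is exactly the regime where eviction decisions (and hence this patch) should matter most.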
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company