On Fri, Sep 11, 2015 at 9:21 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Fri, Sep 11, 2015 at 10:31 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> > > Could you perhaps try to create a testcase where xids are accessed that
> > > are so far apart on average that they're unlikely to be in memory?  And
> > > then test that across a number of client counts?
> > >
> >
> > Now about the test, create a table with a large number of rows (say 11617457;
> > I tried to create a larger one, but it was taking too much time (more than a day))
> > and have each row with a different transaction id.  Now each transaction should
> > update rows that are at least 1048576 (the number of transactions whose status can
> > be held in 32 CLog buffers) apart; that way, ideally each update will access a
> > Clog page that is not in memory.  However, as the value to update is selected
> > randomly, that leads to only every 100th access being a disk access.
>
> What about just running a regular pgbench test, but hacking the
> XID-assignment code so that we increment the XID counter by 100 each
> time instead of 1?
>
If I am not wrong, we need a difference of 1048576 transactions
between the records accessed to make each CLOG access a disk access,
so if we increment the XID counter by 100, then probably only every
10000th access (or some multiple of that) would be a disk access.
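
For reference, the 1048576 figure comes from the CLOG geometry as I
understand it: each 8K CLOG page stores 2 status bits per transaction
(CLOG_XACTS_PER_PAGE = 32768), so 32 buffers cover 32 * 32768 = 1048576
transaction statuses in memory at once.

For the hack you are suggesting, I assume something along the lines of
the below (untested) change in GetNewTransactionId() in
src/backend/access/transam/varsup.c; the XID_STRIDE_TEST_HACK guard and
the loop count are just illustrative for this sketch:

    /* existing code: advance the counter after handing out one xid */
    TransactionIdAdvance(ShmemVariableCache->nextXid);

#ifdef XID_STRIDE_TEST_HACK
    {
        int     i;

        /*
         * Test-only hack: burn 99 extra xids so that consecutive
         * transactions get xids 100 apart.  Using TransactionIdAdvance()
         * keeps the wraparound/special-xid handling correct.
         */
        for (i = 0; i < 99; i++)
            TransactionIdAdvance(ShmemVariableCache->nextXid);
    }
#endif

One thing to verify with such a hack, if I am reading ExtendCLOG()
correctly, is that it only zeroes a new page when the xid being handed
out is the first xid of that page; with a stride of 100 the first xid
of a page may never be handed out, so the skipped pages would need to
be zeroed some other way for the test to run.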