On Tue, Oct 01, 2013 at 02:50:29PM +0200, Fabien COELHO wrote:
> I do not think that there is a clean and simple way to take the
> start/stop period into account when computing the global performance
> of a run. The TPC-C benchmark specification says to ignore the
> warmup/closure periods, whatever they are, and to take measurements
> only within the steady state. However, the full graph must be
> provided when the benchmark results are reported.
That makes sense to me. "pgbench --progress" and "pgbench --log
--aggregate-interval" are good tools for excluding non-steady-state periods.
> About better measures: if I could rely on having threads, I would
> simply synchronize the threads at the beginning so that they
> actually start working only after they are all created, and one
> thread would decide when to stop and set a shared volatile variable
> to stop all transactions more or less at once. In that case, the
> thread start time would be taken just after the synchronization, and
> taking it only in thread 0 might be enough.
>
> Note that this is pretty standard stuff with threads, ISTM that it
> would solve most of the issues, *but* this is not possible with the
> "thread fork emulation" implemented by pgbench, which really means
> no threads at all.
You could do those same things in the fork emulation mode using anonymous
shared memory, like we do in the server. That would permit removing the
current "#ifdef PTHREAD_FORK_EMULATION" wart, too.
For the time being, I propose the attached comment patch.
--
Noah Misch
EnterpriseDB http://www.enterprisedb.com