pgsql/contrib/pgbench README.pgbench README.pg ...
From: ishii@postgresql.org (Tatsuo Ishii)
Subject: pgsql/contrib/pgbench README.pgbench README.pg ...
Date:
Msg-id: 20020720030201.F31DA475943@postgresql.org
List: pgsql-committers
CVSROOT: /cvsroot
Module name: pgsql
Changes by: ishii@postgresql.org 02/07/19 23:02:01

Modified files:
	contrib/pgbench: README.pgbench README.pgbench_jis pgbench.c

Log message:
Apply patches from Neil Conway.

> Hi Tatsuo,
>
> I've attached a patch for the version of pgbench in CVS. It includes the
> following changes:
>
> - fix some spelling mistakes, indentation stuff, etc.
>
> - minor code cleanup -- (void) args instead of (), etc.
>
> - allocate the state array dynamically, so that it is only as
>   large as needed. This reduces the memory consumption of pgbench
>   slightly, and makes a larger MAXCLIENTS setting possible
>
> - (the only controversial change) add an option "-l" to log
>   transaction latencies to a file. The "transaction latency"
>   is the time between when the BEGIN is issued and when the
>   transaction commits. This is written to a file, along with the
>   client # and the transaction #. The data in the file can then be
>   used for things like:
>
>   - consistency analysis: is the TPS the same through the
>     entire run of pgbench, or does it change?
>
>   - more detailed stats: what are the average, worst-case,
>     and best-case latencies?
>
>   - graphs: feed the data to gnuplot, graph latency versus time
>
>   - etc.
>
> I was going to store this data in memory and write it to disk
> at the end of the pgbench run, but that isn't feasible because
> the data can be very large: for example, ~70MB when benchmarking
> 128 clients doing 100,000 transactions each.
>
> Cheers,
>
> Neil
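The dynamic allocation of the state array that Neil mentions can be illustrated with a small standalone sketch. This is not the actual diff: the struct fields are illustrative, and the names CState and nclients are assumptions about pgbench internals that should be checked against pgbench.c.

	/*
	 * Sketch of the state-array change (illustrative, not the real patch).
	 */
	#include <stdio.h>
	#include <stdlib.h>

	typedef struct CState
	{
		int			id;			/* client number (illustrative field) */
		int			cnt;		/* transactions completed (illustrative) */
	} CState;

	int
	main(void)
	{
		int			nclients = 16;	/* would come from the -c option */
		CState	   *state;

		/*
		 * Before the patch, a fixed array such as "CState state[MAXCLIENTS];"
		 * always consumed MAXCLIENTS slots and capped the client count at
		 * compile time.  Allocating to the actual client count instead:
		 */
		state = (CState *) calloc(nclients, sizeof(CState));
		if (state == NULL)
		{
			fprintf(stderr, "out of memory\n");
			return 1;
		}
		printf("allocated %d client slots (%zu bytes)\n",
			   nclients, nclients * sizeof(CState));
		free(state);
		return 0;
	}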
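To show the kind of "more detailed stats" the -l log enables, here is a minimal standalone sketch that computes average, best-case, and worst-case latency. The log format is an assumption: one whitespace-separated record per transaction consisting of client number, transaction number, and latency. The actual columns and units written by this version of pgbench should be verified against pgbench.c before relying on this.

	/*
	 * latstats.c -- minimal sketch: summarize a pgbench -l latency log.
	 * Assumes each line is "client_id xact_id latency"; adjust the fscanf
	 * format to match the real log layout.
	 */
	#include <stdio.h>

	int
	main(int argc, char *argv[])
	{
		FILE	   *fp;
		int			client,
					xact;
		long		latency,
					min = -1,
					max = 0,
					count = 0;
		double		sum = 0.0;

		if (argc != 2)
		{
			fprintf(stderr, "usage: %s logfile\n", argv[0]);
			return 1;
		}
		if ((fp = fopen(argv[1], "r")) == NULL)
		{
			perror(argv[1]);
			return 1;
		}
		while (fscanf(fp, "%d %d %ld", &client, &xact, &latency) == 3)
		{
			if (min < 0 || latency < min)
				min = latency;
			if (latency > max)
				max = latency;
			sum += latency;
			count++;
		}
		fclose(fp);
		if (count == 0)
		{
			fprintf(stderr, "no records found\n");
			return 1;
		}
		printf("transactions: %ld\n", count);
		printf("avg latency:  %.1f\n", sum / count);
		printf("best case:    %ld\n", min);
		printf("worst case:   %ld\n", max);
		return 0;
	}

Compiled with, e.g., "gcc -o latstats latstats.c", it can be pointed at a log file produced by a -l run (the log filename here is illustrative): "./latstats pgbench.log".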