Re: 8.0beta5 results w/ dbt2

From: Mark Wong
Subject: Re: 8.0beta5 results w/ dbt2
Date:
Msg-id: 20041130104452.A15968@osdl.org
In reply to: Re: 8.0beta5 results w/ dbt2  (Greg Stark <gsstark@mit.edu>)
List: pgsql-hackers
On Tue, Nov 30, 2004 at 02:00:29AM -0500, Greg Stark wrote:
> Mark Wong <markw@osdl.org> writes:
> 
> > I have some initial results using 8.0beta5 with our OLTP workload.
> >     http://www.osdl.org/projects/dbt2dev/results/dev4-010/199/
> >     throughput: 4076.97
> 
> Do people really only look at the "throughput" numbers? Looking at those
> graphs it seems that while most of the OLTP transactions are fulfilled in
> subpar response times, there are still significant numbers that take as much
> as 30s to fulfil.
> 
> Is this just a consequence of the type of queries being tested and the data
> distribution? Or is Postgres handling queries that could run consistently fast
> but for some reason generating large latencies sometimes?
> 
> I'm concerned because in my experience with web sites, once the database
> responds slowly for even a small fraction of the requests, the web server
> falls behind in handling http requests and a catastrophic failure builds.
> 
> It seems to me that reporting maximum, or at least the 95% confidence interval
> (95% of queries executed between 50ms-20s) would be more useful than an
> overall average. 
> 
> Personally I would be happier with an average of 200ms but an interval of
> 100-300ms than an average of 100ms but an interval of 50ms-20s. Consistency
> can be more important than sheer speed.
> 

Looking at just the throughput number is oversimplifying it a bit.  The
scale factor (the size of the database) limits what your maximum
throughput can be, given the constraints on think times (delays between
transaction requests) and the number of terminals simulated, which is
also dictated by the size of the database.  So given the throughput at
a particular scale factor (600 in these tests) you can infer whether or
not the response times are reasonable.  At the 600-warehouse scale
factor, we could theoretically hit about 7200 new-order transactions
per minute.  The math is roughly 12 * warehouses.
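As a sketch, the ceiling works out like this (using only the rough 12-per-warehouse approximation given above; the function name is illustrative, not from dbt2):

```python
# Rough upper bound on new-order transactions per minute (NOTPM) for a
# TPC-C-style workload, per the approximation above: ~12 new-order
# transactions per minute per warehouse.
def max_notpm(warehouses: int, per_warehouse_rate: float = 12.0) -> float:
    """Approximate theoretical NOTPM ceiling at a given scale factor."""
    return per_warehouse_rate * warehouses

print(max_notpm(600))  # the 600-warehouse runs discussed here -> 7200.0
```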

I do agree that reporting max response times and a confidence
interval (I have been meaning to report a 90th-percentile number)
would be informative in addition to a mean.  In the meantime I have
included the distribution charts instead...
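A minimal sketch of the percentile reporting being discussed, using a nearest-rank 90th percentile over per-transaction response times (the sample data below is made up for illustration, not from these runs):

```python
def percentile(samples, p):
    """Nearest-rank percentile: smallest value covering p% of samples."""
    ordered = sorted(samples)
    rank = -(-len(ordered) * p // 100)  # ceil(n * p / 100), 1-indexed
    return ordered[int(rank) - 1]

# Hypothetical response times in seconds, with one long-tail outlier
# like the 20 s latencies Greg is worried about.
times = [0.05, 0.08, 0.10, 0.12, 0.15, 0.30, 0.45, 0.90, 2.0, 20.0]
print(sum(times) / len(times))   # the mean alone hides the tail
print(percentile(times, 90))     # the tail shows up in the percentile
```

The point of the example is that the mean is dominated by the single outlier, while the 90th percentile reveals how large the tail actually is.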

Mark

