Re: CPU costs of random_zipfian in pgbench

From: Peter Geoghegan
Subject: Re: CPU costs of random_zipfian in pgbench
Date:
Msg-id: CAH2-Wzkgs10mQLuWmDvZ+eTZYsK=ZOs7YC_pPFUHoC9N8J+dmg@mail.gmail.com
In reply to: Re: CPU costs of random_zipfian in pgbench  (Fabien COELHO <coelho@cri.ensmp.fr>)
List: pgsql-hackers
On Tue, Feb 19, 2019 at 7:14 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:
> What I like in "pgbench" is that it is both versatile and simple so that
> people can benchmark their own data with their own load and their own
> queries by writing a few lines of trivial SQL and psql-like slash command
> and adjusting a few options, and extract meaningful statistics out of it.

That's also what I like about it. However, I don't think that pgbench
is capable of helping me answer questions that are not relatively
simple. That is going to become less and less interesting over time.

> I have not encountered other tools with this versatility and simplicity.
> The TPC-C implementation you point out and others I have seen are
> structurally targeted at TPC-C and nothing else. I do not care about
> TPC-C per se, I care about people being able to run relevant benchmarks
> with minimal effort.

Lots and lots of people care about TPC-C. Far more than care about
TPC-B, which has been officially obsolete for a long time. I don't
doubt that there are some bad reasons for the interest that you see
from vendors, but the TPC-C stuff has real merit (just read Jim Gray,
who you referenced in relation to the Zipfian generator). Lots of
smart people worked for a couple of years on the original
specification of TPC-C. There are a lot of papers on TPC-C. It *is*
complicated in various ways, which is a good thing, as it approximates
a real-world workload, and exercises a bunch of code paths that TPC-B
does not. TPC-A and TPC-B were early attempts, and managed to be
better than nothing at a time when performance validation was not
nearly as advanced as it is today.

> > I have been using BenchmarkSQL as a fair-use TPC-C implementation for
> > my indexing project, with great results. pgbench just isn't very
> > useful when validating the changes to B-Tree page splits that I
> > propose, because the insertion pattern cannot be modeled
> > probabilistically.
>
> I do not understand the use case, and why pgbench could not be used for
> this purpose.

TPC-C is characterized by *localized* monotonically increasing
insertions in most of its indexes. By far the biggest index is the
order lines table primary key, which is on '(ol_w_id, ol_d_id,
ol_o_id, ol_number)'. You get pathological performance with this
currently, because you should really split at the point where new
items are inserted, but we do a standard 50/50 page split. The left
half of the page isn't inserted into again (except by rare non-HOT
updates), so you end up *reliably* wasting about half of all space in
the index.

IOW, there are cases where we should behave like we're doing a
rightmost page split (kind of), that don't happen to involve the
rightmost page. The problem was described but not diagnosed in this
blog post: https://www.commandprompt.com/blog/postgres_autovacuum_bloat_tpc-c/

If you had random insertions (or insertions that were characterized or
defined in terms of a probability distribution and range), then you
would not see this problem. Instead, you'd get something like 70%
space utilization -- not 50% utilization. I think that it would be
difficult if not impossible to reproduce the pathological performance
with pgbench, even though it's a totally realistic scenario. There
needs to be an explicit overall ordering or phasing across
co-operating backends, or backends that are in some sense localized
(e.g. associated with a particular warehouse inputting a particular
order).
TPC-C offers several variations of this same pathological case.
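To make the space accounting concrete, here is a toy simulation (my
sketch, not pgbench or Postgres code) of a B-tree leaf level that
always splits 50/50. Feeding it per-warehouse ascending keys versus
uniformly random keys reproduces the roughly 50% versus ~70%
utilization contrast described above; the page capacity, warehouse
count, and key encoding are arbitrary assumptions of the sketch:

```python
import random
import bisect

def simulate(keys, cap=100):
    """Toy B-tree leaf level: every overflowing page splits 50/50."""
    pages = [[]]   # each page is a sorted list of keys
    bounds = []    # separator keys: max key of each page except the last
    for k in keys:
        i = bisect.bisect_left(bounds, k)
        bisect.insort(pages[i], k)
        if len(pages[i]) > cap:                  # overflow: 50/50 split
            page = pages[i]
            left, right = page[:len(page) // 2], page[len(page) // 2:]
            pages[i:i + 1] = [left, right]
            bounds.insert(i, left[-1])
    # fraction of total page space actually holding keys
    return sum(len(p) for p in pages) / (len(pages) * cap)

random.seed(1)
# Localized monotonic insertions: 10 "warehouses", each an ascending
# counter (a crude stand-in for the order-lines primary key pattern).
counters = [0] * 10
mono = []
for _ in range(20000):
    w = random.randrange(10)
    mono.append(w + counters[w] * 1e-6)
    counters[w] += 1
# Uniformly random insertions, for comparison.
rand = [random.random() for _ in range(20000)]

print("localized monotonic fill: %.2f" % simulate(mono))   # ~0.50
print("uniform random fill:      %.2f" % simulate(rand))   # ~0.70
```

With ascending keys, the left half of every split page is never
touched again, so steady-state fill converges on 50%; random keys keep
refilling both halves and settle near the classical ~70%.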

This is just an example. The point is that there is a lot to be said
for investing significant effort in coming up with a benchmark that is
a distillation of a real workload, with realistic though still kind of
adversarial bottlenecks. I wouldn't have become aware of the page
split problem without TPC-C, which suggests to me that the TPC people
know what they're doing. Also, there is an advantage to having
something that is a known quantity, that enables comparisons across
systems.

I also think that TPC-E is interesting, since it stresses OLTP systems
in a way that is quite different to TPC-C. It's much more read-heavy,
and has many more secondary indexes.

> Yep. Pgbench only does "simple stats". I script around the per-second
> progress output for graphical display and additional stats (eg 5 number
> summary…).

It's far easier to spot regressions over time and other such surprises
if you have latency graphs that break down latency by transaction.
When you're benchmarking queries with joins, then you need to be
vigilant of planner issues over time. The complexity has its pluses as
well as its minuses.
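For what it's worth, the per-second progress lines are easy enough to
post-process. A sketch of the kind of scripting Fabien describes
(parse the tps figures, compute a five-number summary); the sample
lines below are made up, and the exact progress format can vary
between pgbench versions:

```python
import re
import statistics

# Matches pgbench --progress lines of the form:
#   progress: 5.0 s, 10230.4 tps, lat 0.976 ms stddev 0.35
PROGRESS = re.compile(r"progress: [\d.]+ s, ([\d.]+) tps, lat ([\d.]+) ms")

def five_number(values):
    """Return (min, Q1, median, Q3, max) for a sample."""
    v = sorted(values)
    q1, med, q3 = statistics.quantiles(v, n=4)
    return v[0], q1, med, q3, v[-1]

sample = """\
progress: 1.0 s, 9500.1 tps, lat 1.051 ms stddev 0.40
progress: 2.0 s, 10230.4 tps, lat 0.976 ms stddev 0.35
progress: 3.0 s, 9980.7 tps, lat 1.002 ms stddev 0.33
progress: 4.0 s, 10110.2 tps, lat 0.989 ms stddev 0.31
progress: 5.0 s, 8875.6 tps, lat 1.127 ms stddev 0.52
"""
tps = [float(m.group(1)) for m in PROGRESS.finditer(sample)]
print("tps five-number summary:", five_number(tps))
```

The same regex captures per-interval latency, so a per-transaction
latency graph is one plotting call away once the numbers are parsed
out.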

I'm hardly in a position to tell you what to work on. I think that
there may be another perspective on this that you could take something
away from, though.

--
Peter Geoghegan

