Re: some aspects of our qsort might not be ideal

From: John Naylor
Subject: Re: some aspects of our qsort might not be ideal
Date:
Msg-id: CAFBsxsHeTACMP1JVQ+m35-v2NkmEqsJMHLhEfWk4sTB5aw_jkQ@mail.gmail.com
In reply to: some aspects of our qsort might not be ideal  (John Naylor <john.naylor@enterprisedb.com>)
Responses: Re: some aspects of our qsort might not be ideal  (Peter Geoghegan <pg@bowt.ie>)
List: pgsql-hackers
Hi,

Attached is a draft series that implements some but not all features
of pattern-defeating quicksort, namely the ones I thought were
interesting for us. Recently this quicksort variant was committed for
the next release of the Go language, 1.19 [1] (which in turn was based
on that of Rust [2]), and that implementation was a helpful additional
example. The bottom line is that replacing the partitioning scheme
this way is likely not worth doing because of our particular use
cases, but along the way I found some other things that might be worth
doing, so some good may come out of this.

0001 is a test module for running a microbenchmark, based on earlier
work by Thomas Munro.

0002 is just preliminary refactoring.

(From here on some regression tests change from patch to patch, since
tests that rely on a sort order might have duplicates in the sort
key.)

0003 starts a "partial insertion sort" if partitioning didn't do any
swaps, and bails if it doesn't finish quickly. A footnote to what I
said earlier:

> All we need is a
> lightweight way to detect that the partitioning phase didn't do any
> swaps.

Actually, Postgres did detect this until a3f0b3d68 [3], but it wasn't
very bright about it -- it attempted a complete insertion sort.
Limiting the iterations is more sensible. In addition to the swap
count that I borrowed from ancient PG code, this patch checks the root
level as we do today (but now in a different code path), and there are
a couple of additional heuristics to limit wasted work (a rough sketch
of the bail-out idea follows the list):
- Only try it if the pivot candidates are in order
- If the input is presorted, we always return; if it isn't, we only
proceed with swapping elements if the input was large enough that we
checked the order of nine pivot candidates.
- Only try partial insertion sort if the partitions are not highly
different in size, since bad pivots will lead to fewer swaps in
general, causing the swap count heuristic to be unhelpful. (Future
work: look into randomizing pivot candidates when encountering such
skewed partitions, as some other implementations do)
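
To make the bail-out concrete, here is a minimal sketch on plain int
keys; the function name and budget constant are invented for the
example, and the heuristics above (and operating on the real element
types and comparators) are left out:

#include <stdbool.h>
#include <stddef.h>

#define PARTIAL_SORT_SWAP_BUDGET 64    /* hypothetical limit */

static bool
partial_insertion_sort(int *v, size_t n)
{
    int     budget = PARTIAL_SORT_SWAP_BUDGET;

    for (size_t i = 1; i < n; i++)
    {
        for (size_t j = i; j > 0 && v[j - 1] > v[j]; j--)
        {
            int     tmp = v[j];

            v[j] = v[j - 1];
            v[j - 1] = tmp;

            if (--budget < 0)
                return false;   /* too much disorder; caller falls back to quicksort */
        }
    }
    return true;                /* input was (nearly) sorted, nothing left to do */
}

Either way the array remains a valid permutation of the input, so the
caller can simply continue with quicksort when this returns false.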

0004 builds upon the previous patch and preemptively reverses the input
if there are nine pivot candidates and they are in descending order.
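
The mechanics are trivial; a sketch with invented names, again on int
keys, and without showing how the nine candidates are sampled:

#include <stdbool.h>
#include <stddef.h>

/* true if the sampled pivot candidates are in strictly descending order */
static bool
candidates_descending(const int *cand, size_t ncand)
{
    for (size_t i = 1; i < ncand; i++)
        if (cand[i - 1] <= cand[i])
            return false;
    return true;
}

/* Reverse the whole input up front if the candidates suggest it's descending. */
static void
maybe_reverse_input(int *v, size_t n, const int *cand, size_t ncand)
{
    if (n < 2 || ncand < 9 || !candidates_descending(cand, ncand))
        return;

    for (size_t i = 0, j = n - 1; i < j; i++, j--)
    {
        int     tmp = v[i];

        v[i] = v[j];
        v[j] = tmp;
    }
}

Requiring strictly descending candidates means runs of equal keys don't
trigger the reversal, which seems like the safe choice here.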

0005 implements the PDQuicksort partitioning scheme. In working on
this, I found I need to correct something I said earlier:

> There is one requirement that is a double-edged sword: Recursion to
> the partitions must happen in order. This has an additional advantage
> that in every case but one, insertion sort doesn't need to guard
> against running off the beginning of the array. The downside for us is
> that we currently recurse to a partition based on its size as a
> stack-guarding measure, so that guard would have to be replaced.

This is not true actually (I read too much into an implementation
detail), and with a tiny bit of bookkeeping we can retain our
practice of recursing to the smaller partition.

A very brief description of 0005: There are two partitioning
functions: left and right. Partition-right puts elements >= pivot in
the right partition. This is the default. When many duplicates end up
in the right partition, the pivot chosen for the next iteration there
could be the same as the previous pivot. In that case, we switch to
partition-left: put elements equal to the pivot in the left partition.
Since all those elements are the same, we are done and can iterate on
the right side. With this scheme, only 2-way comparators are needed.
Also, the previous counting of swaps during partitioning is replaced
by a simple pointer check once per iteration.
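
Here is a compressed sketch of the above on int keys, with Lomuto-style
partition loops standing in for the real partitioning code and all
names invented for the example. It also shows the small bit of
bookkeeping ("leftmost") mentioned earlier that lets us keep recursing
to the smaller partition; small-partition insertion sort and the 0003
heuristics are omitted:

#include <stdbool.h>
#include <stddef.h>

static inline void
swap_int(int *a, int *b)
{
    int     tmp = *a;

    *a = *b;
    *b = tmp;
}

/* Elements < pivot go before it, elements >= pivot after it.  The pivot
 * is expected in v[lo]; its final index is returned. */
static size_t
partition_right(int *v, size_t lo, size_t hi)
{
    int     pivot = v[lo];
    size_t  store = lo;

    for (size_t i = lo + 1; i < hi; i++)
        if (v[i] < pivot)
            swap_int(&v[++store], &v[i]);
    swap_int(&v[lo], &v[store]);
    return store;
}

/* Elements <= pivot end up in [lo, returned index].  When the pivot
 * equals the range's predecessor, those are exactly the elements equal
 * to the pivot, so that whole stretch is already in place. */
static size_t
partition_left(int *v, size_t lo, size_t hi)
{
    int     pivot = v[lo];
    size_t  store = lo;

    for (size_t i = lo + 1; i < hi; i++)
        if (v[i] <= pivot)
            swap_int(&v[++store], &v[i]);
    swap_int(&v[lo], &v[store]);
    return store;
}

/* Sort v[lo, hi).  "leftmost" records whether v[lo - 1] exists and is
 * <= everything in the range, which is what makes the predecessor
 * comparison below safe. */
static void
pdq_sketch(int *v, size_t lo, size_t hi, bool leftmost)
{
    while (hi - lo > 1)
    {
        /* crude pivot selection for the sketch: middle element to v[lo] */
        swap_int(&v[lo], &v[lo + (hi - lo) / 2]);

        /*
         * Pivot equal to the predecessor means a run of duplicates got
         * pushed our way: sweep everything equal to the pivot into the
         * left partition and skip over it ("partition-left").
         */
        if (!leftmost && v[lo - 1] == v[lo])
        {
            lo = partition_left(v, lo, hi) + 1;
            continue;
        }

        size_t  p = partition_right(v, lo, hi);

        /* recurse into the smaller partition, iterate on the larger */
        if (p - lo < hi - (p + 1))
        {
            pdq_sketch(v, lo, p, leftmost);
            lo = p + 1;
            leftmost = false;
        }
        else
        {
            pdq_sketch(v, p + 1, hi, false);
            hi = p;
        }
    }
}

(The entry point would be pdq_sketch(v, 0, n, true). The once-per-iteration
pointer check that replaces the swap counting is not modeled here.)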

-- Tests:

I have two flavors of tests:
1) ORDER BY queries on 10 million records, selecting a single value,
for three types:
- bigint (so can use datum sort)
- 16-byte abbreviatable text
- 16-byte non-abbreviatable text
(median of 5 runs, script attached)

2) a microbenchmark sorting 1 million int32 datums with two ways to
vary comparison schemes: direct C function/fmgr function and
runtime/specialized, so four different combinations. The idea here is
to get an easy way to test comparators of different "heaviness", not
that these actual cases are all necessarily relevant.
(minimum of 10 runs)

The input distributions can be put into three categories:
- Varying amounts of randomness and presortedness (random, sort50/90/99, dither)
- Long runs of ascending and descending sequences (ascending,
descending, organ, merge)
- Low cardinality (dup8, dupsq, mod100/8)

To get the microbenchmark output in the same format as the query script I ran

psql -c 'select test_sort_cmp_weight(1 * 1024*1024)' 2> result.txt
perl -n -e 'print "$2\t$3\t$1\t$4\n" if /NOTICE:  \[(.+)\] num=(\d+), order=(\w+), time=(\d+\.\d+)/;' result.txt > result.csv

-- Summary of results:

Partial insertion sort:
- For small types sometimes possibly a bit faster, sometimes
significantly faster.
- With queries, significantly slower for some partially-presorted
distributions, otherwise mostly no change.
- Could be further optimized for types known at compile time by using
a "sift" algorithm rather than swapping at every iteration. We could
also try basing the iteration limit on type size.

Optimistically reversing input:
- 20-25x faster on descending input, which is nice. It's not clear if
it would benefit "mostly descending" input.

PDQuicksort:
- With light or moderate types/comparators, it seems significantly
faster than B&M.
- With heavier comparators (including all queries), it's slightly
faster at best, and regresses noticeably with low cardinality inputs.
This latter finding was somewhat surprising. The O(kn) behavior for
low cardinality input is academically interesting, but doesn't
translate to gains for some of our key use cases.

-- Conclusion: The partial insertion sort seems useful if the
regressions can be worked out. The reverse-input optimization is a
small addition on top of that and seems worth doing. The PDQuicksort
partitioning scheme is a win in some cases I tested but a loss in
others, so at this point I'm inclined not to pursue that part further.

I'd be happy to hear any thoughts anyone might have.

[1] https://github.com/golang/go/blob/72e77a7f41bbf45d466119444307fd3ae996e257/src/sort/zsortfunc.go
[2] https://github.com/rust-lang/rust/blob/415c9f95884caebd0be9b837ef0885d1ffde1b9b/library/core/src/slice/sort.rs
[3] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=a3f0b3d68f9a5357a3f72b40a45bcc714a9e0649;hp=570b726533fb561bc5a35ba50ea944f139a250d5
