Re: Adding skip scan (including MDAM style range skip scan) to nbtree
From: Peter Geoghegan
Subject: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Date:
Msg-id: CAH2-Wzk68UGN_9hyOmeAB6BbEagJhe6kP+PmxNC5orkeDA9hGQ@mail.gmail.com
In reply to: Re: Adding skip scan (including MDAM style range skip scan) to nbtree (Peter Geoghegan <pg@bowt.ie>)
Responses: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
           Re: Adding skip scan (including MDAM style range skip scan) to nbtree
List: pgsql-hackers
On Sat, Mar 22, 2025 at 1:47 PM Peter Geoghegan <pg@bowt.ie> wrote:
> Attached is v30, which fully replaces the pstate.prechecked
> optimization with the new _bt_skip_ikeyprefix optimization (which now
> appears in v30-0002-Lower-nbtree-skip-array-maintenance-overhead.patch,
> and not in 0003-*, due to my committing the primscan scheduling patch
> just now).

Attached is v31, which has a much-improved _bt_skip_ikeyprefix (which I've once again renamed, this time to _bt_set_startikey). There are some bug fixes here (nothing very interesting), and the heuristics have been tuned to take into account the requirements of conventional SAOP scans with dense/contiguous array keys (we should mostly just be preserving existing performance characteristics here).

> The newly expanded _bt_skip_ikeyprefix needs quite a bit more testing
> and polishing to be committable. I didn't even update the relevant
> commit message for v30. Plus I'm not completely sure what to do about
> RowCompare keys just yet, which have some funny rules when dealing
> with NULLs.

v31 fixes all these problems, too.

Most notably, v31 differs from v30 in that it removes *both* of the optimizations added to Postgres 17 by Alexander Korotkov's commit e0b1ee17 -- including the pstate.firstmatch optimization. I didn't expect things to go this way. I recently said that pstate.firstmatch was something that could and should be kept (v30 only removed pstate.prechecked). Obviously, I've since changed my mind.

I changed my mind because lots of things are noticeably faster this way, across most of my microbenchmarks. These performance validation tests/microbenchmarks are really sensitive to the number of branches added to _bt_check_compare; removing anything nonessential from that hot code path really matters here.

Notably, removing the pstate.firstmatch optimization (along with pstate.prechecked) is enough to fully eliminate what I've long considered the most important microbenchmark regressions. I refer to the microbenchmark suite originally written by Masahiro Ikeda, and later enhanced/expanded by me to use a wider variety of data cardinalities and datatypes. For the last several months, I thought we'd need to live with a 5% - 10% regression in such cases (those were the numbers I've thrown around when giving a high-level summary of the extent of the regressions in unfavorable cases). Now these microbenchmarks show that the queries are all about ~2% *faster* instead.

What's more, there may even be a similar small improvement for important standard benchmarks (not microbenchmarks), such as the standard pgbench SELECT benchmark. (I think simple pgbench is that much faster, which is enough to matter, but not enough that it's easy to prove under time pressure.)

There is at least a theoretical downside to replacing pstate.firstmatch with the new _bt_set_startikey path, which I must acknowledge: we only actually call _bt_set_startikey on the second or subsequent leaf page, so that's the earliest possible point at which it can help speed things up (exactly like pstate.prechecked). Whereas pstate.firstmatch is usually effective right away, on the first page (effective at allowing us to avoid evaluating > or >= keys in the common case where we're scanning the index forwards). It's fair to wonder how much we'd be missing out on by giving up that advantage.

It's very difficult to actually see any benefit that can be tied to that theoretical advantage for pstate.firstmatch, though.
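To make the page-at-a-time idea concrete, here is a minimal toy sketch in C. This is emphatically not the actual nbtree code: the SimpleKey type, key_matches(), and page_start_key() are invented for illustration, and the real _bt_set_startikey has to deal with multiple key columns, scan direction, NULLs, row compares, and skip arrays. What it shows is why a decision like this can only be made once a whole leaf page is in hand: a key such as "col >= 100" accepts a contiguous range of values, so proving it against the first and last tuples of an already-sorted page proves it for every tuple in between, which lets per-tuple evaluation start at a later key offset for the rest of that page.

/*
 * Toy model (not PostgreSQL code): a "page" is an array of int values for
 * one indexed column, sorted ascending, and the qual is an ordered array
 * of simple "col <op> constant" keys.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { LT, LE, EQ, GE, GT } Strategy;

typedef struct SimpleKey
{
    Strategy    strat;
    int         arg;
} SimpleKey;

static bool
key_matches(const SimpleKey *key, int value)
{
    switch (key->strat)
    {
        case LT: return value < key->arg;
        case LE: return value <= key->arg;
        case EQ: return value == key->arg;
        case GE: return value >= key->arg;
        case GT: return value > key->arg;
    }
    return false;
}

/*
 * Return the index of the first key that must still be evaluated against
 * each individual tuple on this page.  Keys before that index hold at both
 * "ends" of the sorted page, and each key accepts a contiguous range of
 * values, so they must hold for every tuple in between as well.
 */
static int
page_start_key(const int *page, int ntuples,
               const SimpleKey *keys, int nkeys)
{
    int         startkey = 0;

    while (startkey < nkeys &&
           key_matches(&keys[startkey], page[0]) &&
           key_matches(&keys[startkey], page[ntuples - 1]))
        startkey++;

    return startkey;
}

int
main(void)
{
    int         page[] = {105, 120, 180, 240, 290};    /* one sorted leaf page */
    SimpleKey   keys[] = {{GE, 100}, {LE, 300}, {EQ, 240}};
    int         startkey = page_start_key(page, 5, keys, 3);

    /* only keys[startkey..] still need to be checked per tuple */
    printf("per-tuple evaluation starts at key %d of 3\n", startkey);

    for (int i = 0; i < 5; i++)
    {
        bool        match = true;

        for (int j = startkey; j < 3; j++)
        {
            if (!key_matches(&keys[j], page[i]))
            {
                match = false;
                break;
            }
        }
        printf("tuple %d (%d): %s\n", i, page[i], match ? "match" : "no match");
    }
    return 0;
}

pstate.firstmatch, by contrast, needed no page-level proof: once a required > or >= key matched one tuple during a forward scan, it could be skipped for all later tuples, which is why it could pay off on the very first page. That is the theoretical advantage being given up here.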
What I actually see when I run my range microbenchmark suite with v31 (not the aforementioned main microbenchmark suite) is that simple cases involving no skip arrays (only simple scalar inequalities), with quals like "WHERE col BETWEEN 0 and 1_000_000", are now *also* about 2% faster. Any slowdown is more than made up for. Granted, if I worked hard enough I might find a counter-example where it actually is slower overall, but I just can't see that ever mattering enough to make me reconsider getting rid of pstate.firstmatch.

--
Peter Geoghegan
Attachments