Thread: partial heap only tuples

partial heap only tuples

From: "Bossart, Nathan"
Date: 2021-02-09 18:48:21 +0000
Hello,

I'm hoping to gather some early feedback on a heap optimization I've
been working on.  In short, I'm hoping to add "partial heap only
tuple" (PHOT) support, which would allow you to skip updating indexes
for unchanged columns even when other indexes require updates.  Today,
HOT works wonders when no indexed columns are updated.  However, as
soon as you touch one indexed column, you lose that optimization
entirely, as you must update every index on the table.  The resulting
performance impact is a pain point for many of our (AWS's) enterprise
customers, so we'd like to lend a hand for some improvements in this
area.  For workloads involving a lot of columns and a lot of indexes,
an optimization like PHOT can make a huge difference.  I'm aware that
there was a previous attempt a few years ago to add a similar
optimization called WARM [0] [1].  However, I only noticed this
previous effort after coming up with the design for PHOT, so I ended
up taking a slightly different approach.  I am also aware of a couple
of recent nbtree improvements that may mitigate some of the impact of
non-HOT updates [2] [3], but I am hoping that PHOT serves as a nice
complement to those.  I've attached a very early proof-of-concept
patch with the design described below.

As far as performance is concerned, it is simple enough to show major
benefits from PHOT by tacking on a large number of indexes and columns
to a table.  For a short pgbench run where each table had 5 additional
text columns and indexes on every column, I noticed a ~34% bump in
TPS with PHOT [4].  Theoretically, the TPS bump should grow even
larger as more indexed columns are added.  In addition to showing such
benefits, I have also attempted to show that regular pgbench runs are
not significantly affected.  For a short pgbench run with no table
modifications, I noticed a ~2% bump in TPS with PHOT [5].

Next, I'll go into the design a bit.  I've commandeered the two
remaining bits in t_infomask2 to use as HEAP_PHOT_UPDATED and
HEAP_PHOT_TUPLE.  These are analogous to the HEAP_HOT_UPDATED and
HEAP_ONLY_TUPLE bits.  (If there are concerns about exhausting the
t_infomask2 bits, I think we could use just one of the remaining bits
as a "modifier" bit on the HOT ones.  I opted against that for the
proof-of-concept patch to keep things simple.)  When creating a PHOT
tuple, we only create new index tuples for the updated columns, and
those new index tuples point to the PHOT tuple.
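
To make that concrete, here is roughly what the flag definitions look
like.  (Treat this as a sketch: the exact values are my choice of the
two currently-unused t_infomask2 bits, and the helper macro is simply
modeled on the existing HeapTupleIsHeapOnly.)

    /* hypothetical additions to src/include/access/htup_details.h */
    #define HEAP_PHOT_UPDATED       0x0800  /* tuple was PHOT-updated */
    #define HEAP_PHOT_TUPLE         0x1000  /* partial heap only tuple */

    #define HeapTupleIsPartialHeapOnly(tuple) \
        (((tuple)->t_data->t_infomask2 & HEAP_PHOT_TUPLE) != 0)

Following is a simple demonstration with a table with two integer
columns (a and b), each with its own index.  I inserted a single row
(0, 0), then updated a to 1, and then updated b to 2: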

postgres=# SELECT  heap_tuple_infomask_flags(t_infomask, t_infomask2), t_data
             FROM  heap_page_items(get_raw_page('test', 0))
            WHERE  t_infomask IS NOT NULL
               OR  t_infomask2 IS NOT NULL;
                          heap_tuple_infomask_flags                          |       t_data
-----------------------------------------------------------------------------+--------------------
 ("{HEAP_XMIN_COMMITTED,HEAP_XMAX_COMMITTED,HEAP_PHOT_UPDATED}",{})          | \x0000000000000000
 ("{HEAP_XMIN_COMMITTED,HEAP_UPDATED,HEAP_PHOT_UPDATED,HEAP_PHOT_TUPLE}",{}) | \x0100000000000000
 ("{HEAP_XMAX_INVALID,HEAP_UPDATED,HEAP_PHOT_TUPLE}",{})                     | \x0100000002000000
(3 rows)

postgres=# SELECT  itemoffset, ctid, data
             FROM  bt_page_items(get_raw_page('test_a_idx', 1));
 itemoffset | ctid  |          data
------------+-------+-------------------------
          1 | (0,1) | 00 00 00 00 00 00 00 00
          2 | (0,2) | 01 00 00 00 00 00 00 00
(2 rows)

postgres=# SELECT  itemoffset, ctid, data
             FROM  bt_page_items(get_raw_page('test_b_idx', 1));
 itemoffset | ctid  |          data
------------+-------+-------------------------
          1 | (0,1) | 00 00 00 00 00 00 00 00
          2 | (0,3) | 02 00 00 00 00 00 00 00
(2 rows)

When it is time to scan through a PHOT chain, there are a couple of
things to account for.  Sequential scans work out-of-the-box thanks to
the visibility rules, but other types of scans like index scans
require additional checks.  If you encounter a PHOT chain when
performing an index scan, you should only continue following the chain
as long as none of the columns covered by the index have been
modified.  If the scan does encounter such a modification, we stop
following the chain and continue with the index scan.  Even if there
is a tuple in that PHOT chain that should be returned by our index
scan, we will still find it, as there will be another matching index
tuple that points to a later tuple in the PHOT chain.  My initial idea
for determining which columns were modified was to add a new bitmap
after the "nulls" bitmap in the tuple header.  However, the attached
patch simply uses HeapDetermineModifiedColumns().  I've yet to measure
the overhead of this approach versus the bitmap approach, but I
haven't noticed anything too detrimental in the testing I've done so
far.
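
To make the chain-following rule concrete, here is a standalone toy
model of it (this is not patch code; the tuple representation and the
names are mine, purely for illustration).  Each version carries a
bitmap of the columns it modified, and a scan on a given index stops
walking the chain as soon as the next version modified a column that
the index covers:

    #include <stdint.h>
    #include <stdio.h>

    /* toy PHOT chain entry: which columns this version modified */
    typedef struct ToyTuple
    {
        uint32_t    modified_cols;  /* bit i set = column i changed */
        int         next;           /* next version in chain, or -1 */
    } ToyTuple;

    /*
     * Follow a PHOT chain on behalf of an index covering the columns
     * in index_cols.  We keep walking only while the next version did
     * not modify any column the index covers; if it did, a newer
     * matching index tuple will point into the chain further along,
     * so we can safely stop here.
     */
    static int
    follow_phot_chain(const ToyTuple *chain, int start, uint32_t index_cols)
    {
        int         cur = start;

        while (chain[cur].next != -1 &&
               (chain[chain[cur].next].modified_cols & index_cols) == 0)
            cur = chain[cur].next;

        return cur;
    }

    int
    main(void)
    {
        /*
         * The demonstration above: (0,0) -> (1,0) -> (1,2), i.e., an
         * update of column a followed by an update of column b.
         */
        ToyTuple    chain[] = {
            {0x0, 1},           /* original version */
            {0x1, 2},           /* column a (bit 0) was modified */
            {0x2, -1},          /* column b (bit 1) was modified */
        };

        /* a scan on test_a_idx (column a) stops at version 0 */
        printf("a-index walk stops at version %d\n",
               follow_phot_chain(chain, 0, 0x1));

        /* a scan on test_b_idx (column b) stops at version 1 */
        printf("b-index walk stops at version %d\n",
               follow_phot_chain(chain, 0, 0x2));

        return 0;
    }

(Visibility checks are orthogonal and omitted here.  Also note that
the attached patch derives the modified columns with
HeapDetermineModifiedColumns() rather than a stored bitmap.)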

In my proof-of-concept patch, I've included a temporary hack to get
some basic bitmap scans working as expected.  Since we won't have
followed the PHOT chains in the bitmap index scan, we must know how to
follow them in the bitmap heap scan.  Unfortunately, the bitmap heap
scan has no knowledge of what indexed columns to pay attention to when
following the PHOT chains.  My temporary hack fixes this by having the
bitmap heap scan pull the set of indexed columns it needs to consider
from the outer plan.  I think this is one area of the design that will
require substantially more effort.  Following is a demonstration of a
basic sequential scan and bitmap scan:

postgres=# EXPLAIN (COSTS FALSE) SELECT * FROM test;
    QUERY PLAN
------------------
 Seq Scan on test
(1 row)

postgres=# SELECT * FROM test;
 a | b
---+---
 1 | 2
(1 row)

postgres=# EXPLAIN (COSTS FALSE) SELECT * FROM test WHERE a >= 0;
              QUERY PLAN
---------------------------------------
 Bitmap Heap Scan on test
   Recheck Cond: (a >= 0)
   ->  Bitmap Index Scan on test_a_idx
         Index Cond: (a >= 0)
(4 rows)

postgres=# SELECT * FROM test WHERE a >= 0;
 a | b
---+---
 1 | 2
(1 row)

This design allows for "weaving" between HOT and PHOT in a chain.
However, it is still important to treat each consecutive set of HOT
updates or PHOT updates as its own chain for the purposes of pruning
and cleanup.  Pruning is heavily restricted for PHOT due to the
presence of corresponding index tuples.  I believe we can redirect
line pointers for consecutive sets of PHOT updates that modify the
same set of indexed columns, but this is only possible if no index has
duplicate values in the redirected set.  Also, I do not think it is
possible to prune intermediate line pointers in a PHOT chain.  While
it may be possible to redirect all line pointers to the final tuple in
a series of updates to the same set of indexed columns, my hunch is
that doing so will add significant complexity for tracking
intermediate updates, and any performance gains will be marginal.
I've created some small diagrams to illustrate my proposed cleanup
strategy.

Here is a hypothetical PHOT chain.

        idx1      0       1       2
        idx2      0                       1       2
        idx3      0
        lp        1       2       3       4       5
        heap      (0,0,0) (1,0,0) (2,0,0) (2,1,0) (2,2,0)

Heap tuples may be removed and line pointers may be redirected for
consecutive updates to the same set of indexes (as long as no index
has duplicate values in the redirected set of updates).

        idx1      0       1       2
        idx2      0                       1       2
        idx3      0
        lp        1       2  ->   3       4  ->   5
        heap      (0,0,0)         (2,0,0)         (2,2,0)

When following redirect chains, we must check that the "interesting"
columns for the relevant indexes are not updated whenever a tuple is
found.  In order to be eligible for cleanup, the final tuple in the
corresponding PHOT/HOT chain must also be eligible for cleanup, or all
indexes must have been updated later in the chain before any visible
tuples.  (I suspect that the former condition may cause significant
bloat for some workloads and the latter condition may be prohibitively
complicated.  Perhaps this can be mitigated by limiting how long we
allow PHOT chains to get.)  My proof-of-concept patch does not yet
implement line pointer redirecting and cleanup, so it is possible that
I am missing some additional obstacles and optimizations here.

I think PostgreSQL 15 is realistically the earliest target version for
this change.  Provided that others find this project worthwhile, that's
my goal for this patch.  I've CC'd a number of folks who have been
involved in this project already and who I'm hoping will continue to
help me drive this forward.

Nathan

[0] https://www.postgresql.org/message-id/flat/CABOikdMop5Rb_RnS2xFdAXMZGSqcJ-P-BY2ruMd%2BbuUkJ4iDPw%40mail.gmail.com
[1] https://www.postgresql.org/message-id/flat/CABOikdMNy6yowA%2BwTGK9RVd8iw%2BCzqHeQSGpW7Yka_4RSZ_LOQ%40mail.gmail.com
[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0d861bbb
[3] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=d168b666
[4] non-PHOT:
        transaction type: <builtin: TPC-B (sort of)>
        scaling factor: 1000
        query mode: simple
        number of clients: 256
        number of threads: 256
        duration: 1800 s
        number of transactions actually processed: 29759733
        latency average = 15.484 ms
        latency stddev = 10.102 ms
        tps = 16530.552950 (including connections establishing)
        tps = 16530.730565 (excluding connections establishing)

    PHOT:
        ...
        number of transactions actually processed: 39998968
        latency average = 11.520 ms
        latency stddev = 8.157 ms
        tps = 22220.709117 (including connections establishing)
        tps = 22221.182648 (excluding connections establishing)
[5] non-PHOT:
        ...
        number of transactions actually processed: 151841961
        latency average = 3.034 ms
        latency stddev = 1.854 ms
        tps = 84354.268591 (including connections establishing)
        tps = 84355.061353 (excluding connections establishing)

    PHOT:
        ...
        number of transactions actually processed: 155225857
        latency average = 2.968 ms
        latency stddev = 1.264 ms
        tps = 86234.044783 (including connections establishing)
        tps = 86234.961286 (excluding connections establishing)


Attachments

Re: partial heap only tuples

From: Bruce Momjian
Date: 2021-02-10
On Tue, Feb  9, 2021 at 06:48:21PM +0000, Bossart, Nathan wrote:
> Hello,
> 
> I'm hoping to gather some early feedback on a heap optimization I've
> been working on.  In short, I'm hoping to add "partial heap only
> tuple" (PHOT) support, which would allow you to skip updating indexes
> for unchanged columns even when other indexes require updates.  Today,

I think it is great you are working on this.  I think it is a major way
to improve performance and I have been disappointed it has not moved
forward since 2016.

> HOT works wonders when no indexed columns are updated.  However, as
> soon as you touch one indexed column, you lose that optimization
> entirely, as you must update every index on the table.  The resulting
> performance impact is a pain point for many of our (AWS's) enterprise
> customers, so we'd like to lend a hand for some improvements in this
> area.  For workloads involving a lot of columns and a lot of indexes,
> an optimization like PHOT can make a huge difference.  I'm aware that
> there was a previous attempt a few years ago to add a similar
> optimization called WARM [0] [1].  However, I only noticed this
> previous effort after coming up with the design for PHOT, so I ended
> up taking a slightly different approach.  I am also aware of a couple
> of recent nbtree improvements that may mitigate some of the impact of
> non-HOT updates [2] [3], but I am hoping that PHOT serves as a nice
> complement to those.  I've attached a very early proof-of-concept
> patch with the design described below.

How is your approach different from those of [0] and [1]?  It is
interesting you still see performance benefits even after the btree
duplication improvements.  Did you test with those improvements?

> As far as performance is concerned, it is simple enough to show major
> benefits from PHOT by tacking on a large number of indexes and columns
> to a table.  For a short pgbench run where each table had 5 additional
> text columns and indexes on every column, I noticed a ~34% bump in
> TPS with PHOT [4].  Theoretically, the TPS bump should grow even
> larger as more indexed columns are added.

That's a big improvement.

> Next, I'll go into the design a bit.  I've commandeered the two
> remaining bits in t_infomask2 to use as HEAP_PHOT_UPDATED and
> HEAP_PHOT_TUPLE.  These are analogous to the HEAP_HOT_UPDATED and
> HEAP_ONLY_TUPLE bits.  (If there are concerns about exhausting the
> t_infomask2 bits, I think we could use just one of the remaining bits
> as a "modifier" bit on the HOT ones.  I opted against that for the
> proof-of-concept patch to keep things simple.)  When creating a PHOT
> tuple, we only create new index tuples for the updated columns, and
> those new index tuples point to the PHOT tuple.  Following is a simple
> demonstration with a table with two integer columns (a and b), each
> with its own index:

Whatever solution you have, you have to be able to handle
adding/removing columns, and adding/removing indexes.

> When it is time to scan through a PHOT chain, there are a couple of
> things to account for.  Sequential scans work out-of-the-box thanks to
> the visibility rules, but other types of scans like index scans
> require additional checks.  If you encounter a PHOT chain when
> performing an index scan, you should only continue following the chain
> as long as none of the columns covered by the index have been
> modified.  If the scan does encounter such a modification, we stop
> following the chain and continue with the index scan.  Even if there
> is a tuple in that

I think in patches [0] and [1], if an index column changes, all the
indexes had to be inserted into, while you seem to require inserts only
into the index that needs it.  Is that correct?

> PHOT chain that should be returned by our index scan, we will still
> find it, as there will be another matching index tuple that points to
> a later tuple in the PHOT chain.  My initial idea for determining which
> columns were modified was to add a new bitmap after the "nulls" bitmap
> in the tuple header.  However, the attached patch simply uses
> HeapDetermineModifiedColumns().  I've yet to measure the overhead of
> this approach versus the bitmap approach, but I haven't noticed
> anything too detrimental in the testing I've done so far.

A bitmap is an interesting approach, but you are right it will need
benchmarking.

I wonder if you should create a Postgres wiki page to document all of
this.  I agree PG 15 makes sense.  I would like to help with this if I
can.  I will need to study this email more later.

-- 
  Bruce Momjian  <bruce@momjian.us>        https://momjian.us
  EDB                                      https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee




Re: partial heap only tuples

From: "Bossart, Nathan"
On 2/10/21, 2:43 PM, "Bruce Momjian" <bruce@momjian.us> wrote:
> On Tue, Feb  9, 2021 at 06:48:21PM +0000, Bossart, Nathan wrote:
>> HOT works wonders when no indexed columns are updated.  However, as
>> soon as you touch one indexed column, you lose that optimization
>> entirely, as you must update every index on the table.  The resulting
>> performance impact is a pain point for many of our (AWS's) enterprise
>> customers, so we'd like to lend a hand for some improvements in this
>> area.  For workloads involving a lot of columns and a lot of indexes,
>> an optimization like PHOT can make a huge difference.  I'm aware that
>> there was a previous attempt a few years ago to add a similar
>> optimization called WARM [0] [1].  However, I only noticed this
>> previous effort after coming up with the design for PHOT, so I ended
>> up taking a slightly different approach.  I am also aware of a couple
>> of recent nbtree improvements that may mitigate some of the impact of
>> non-HOT updates [2] [3], but I am hoping that PHOT serves as a nice
>> complement to those.  I've attached a very early proof-of-concept
>> patch with the design described below.
>
> How is your approach different from those of [0] and [1]?  It is
> interesting you still see performance benefits even after the btree
> duplication improvements.  Did you test with those improvements?

I believe one of the main differences is that index tuples will point
to the corresponding PHOT tuple instead of the root of the HOT/PHOT
chain.  I'm sure there are other differences.  I plan on giving those
two long threads another read-through in the near future.

I made sure that the btree duplication improvements were applied for
my benchmarking.  IIUC those don't alleviate the requirement that you
insert all index tuples for non-HOT updates, so PHOT can still provide
some added benefits there.

>> Next, I'll go into the design a bit.  I've commandeered the two
>> remaining bits in t_infomask2 to use as HEAP_PHOT_UPDATED and
>> HEAP_PHOT_TUPLE.  These are analogous to the HEAP_HOT_UPDATED and
>> HEAP_ONLY_TUPLE bits.  (If there are concerns about exhausting the
>> t_infomask2 bits, I think we could use just one of the remaining bits
>> as a "modifier" bit on the HOT ones.  I opted against that for the
>> proof-of-concept patch to keep things simple.)  When creating a PHOT
>> tuple, we only create new index tuples for the updated columns, and
>> those new index tuples point to the PHOT tuple.  Following is a simple
>> demonstration with a table with two integer columns (a and b), each
>> with its own index:
>
> Whatever solution you have, you have to be able to handle
> adding/removing columns, and adding/removing indexes.

I admittedly have not thought too much about the implications of
adding/removing columns and indexes for PHOT yet, but that's
definitely an important part of this project that I need to look into.
I see that HOT has some special handling for commands like CREATE
INDEX that I can reference.

>> When it is time to scan through a PHOT chain, there are a couple of
>> things to account for.  Sequential scans work out-of-the-box thanks to
>> the visibility rules, but other types of scans like index scans
>> require additional checks.  If you encounter a PHOT chain when
>> performing an index scan, you should only continue following the chain
>> as long as none of the columns covered by the index have been
>> modified.  If the scan does encounter such a modification, we stop
>> following the chain and continue with the index scan.  Even if there
>> is a tuple in that
>
> I think in patches [0] and [1], if an index column changes, all the
> indexes had to be inserted into, while you seem to require inserts only
> into the index that needs it.  Is that correct?

Right, PHOT only requires new index tuples for the modified columns.
However, I was under the impression that WARM aimed to do the same
thing.  I might be misunderstanding your question.

> I wonder if you should create a Postgres wiki page to document all of
> this.  I agree PG 15 makes sense.  I would like to help with this if I
> can.  I will need to study this email more later.

Thanks for taking a look.  I think a wiki page is a good idea for
keeping track of the current state of the design.  I'll look into that.

Nathan


Re: partial heap only tuples

From: Andres Freund
Date: 2021-02-13
Hi,

On 2021-02-09 18:48:21 +0000, Bossart, Nathan wrote:
> In order to be eligible for cleanup, the final tuple in the
> corresponding PHOT/HOT chain must also be eligible for cleanup, or all
> indexes must have been updated later in the chain before any visible
> tuples.

This sounds like it might be prohibitively painful. Adding effectively
unremovable bloat to remove other bloat is not an uncomplicated
premise. I think you'd really need a way to fully remove this as part of
vacuum for this to be viable.

Greetings,

Andres Freund



Re: partial heap only tuples

From: "Bossart, Nathan"
On 2/13/21, 8:26 AM, "Andres Freund" <andres@anarazel.de> wrote:
> On 2021-02-09 18:48:21 +0000, Bossart, Nathan wrote:
>> In order to be eligible for cleanup, the final tuple in the
>> corresponding PHOT/HOT chain must also be eligible for cleanup, or all
>> indexes must have been updated later in the chain before any visible
>> tuples.
>
> This sounds like it might be prohibitively painful. Adding effectively
> unremovable bloat to remove other bloat is not an uncomplicated
> premise. I think you'd really need a way to fully remove this as part of
> vacuum for this to be viable.

Yeah, this is something I'm concerned about.  I think adding a bitmap
of modified columns to the header of PHOT-updated tuples improves
matters quite a bit, even for single-page vacuuming.  Following is a
strategy I've been developing (there may still be some gaps).  Here's
a basic PHOT chain where all tuples are visible and the last one has
not been deleted or updated:

idx1    0       1               2       3
idx2    0       1       2
idx3    0               2               3
lp      1       2       3       4       5
tuple   (0,0,0) (0,1,1) (2,2,1) (2,2,2) (3,2,3)
bitmap          -xx     xx-     --x     x-x

For single-page vacuum, we take the following actions:
    1. Starting at the root of the PHOT chain, create an OR'd bitmap
       of the chain.
    2. Walk backwards, OR-ing the bitmaps, and stop once the OR'd
       bitmap matches the one from step 1.  Along the way, identify
       "key" tuples: tuples at which the OR'd bitmap changes.  If the
       final OR'd bitmap does not include all columns for the table,
       also include the root of the PHOT chain as a key tuple.
    3. Redirect each key tuple to the next key tuple.
    4. For all but the first key tuple, OR the bitmaps of all pruned
       tuples from each key tuple (exclusive) to the next key tuple
       (inclusive) and store the result in the bitmap of the next key
       tuple.
    5. Mark all line pointers for all non-key tuples as dead.  Storage
       can be removed for all tuples except the last one, but we must
       leave around the bitmap for all key tuples except for the first
       one.

After this, our basic PHOT chain looks like this:

idx1    0       1               2       3
idx2    0       1       2
idx3    0               2               3
lp      X       X       3->5    X       5
tuple                                   (3,2,3)
bitmap                                  x-x

Without PHOT, this intermediate state would have 15 index tuples, 5
line pointers, and 1 heap tuple.  With PHOT, we have 10 index tuples,
5 line pointers, 1 heap tuple, and 1 bitmap.  When we vacuum the
indexes, we can reclaim the dead line pointers and remove the
associated index tuples:

idx1            3
idx2    2
idx3    2       3
lp      3->5    5
tuple           (3,2,3)
bitmap          x-x

Without PHOT, this final state would have 3 index tuples, 1 line
pointer, and 1 heap tuple.  With PHOT, we have 4 index tuples, 2 line
pointers, 1 heap tuple, and 1 bitmap.  Overall, we still end up
keeping around more line pointers and tuple headers (for the bitmaps),
but maybe that is good enough.  I think the next step here would be to
find a way to remove some of the unnecessary index tuples and adjust
the remaining ones to point to the last line pointer in the PHOT
chain.
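
If it helps, here is a standalone toy of steps 1 and 2 (the bitmap
walk) applied to the example chain above.  The representation is mine
and purely illustrative; bit 0 stands for the column indexed by idx1,
bit 1 for idx2's column, and bit 2 for idx3's:

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
        /*
         * Bitmaps for lp 1..5.  The root (lp 1) carries no bitmap, so
         * it contributes nothing to the ORs.
         *   lp2: -xx = 0x6, lp3: xx- = 0x3, lp4: --x = 0x4, lp5: x-x = 0x5
         */
        uint32_t    bitmaps[] = {0x0, 0x6, 0x3, 0x4, 0x5};
        int         nchain = sizeof(bitmaps) / sizeof(bitmaps[0]);
        uint32_t    full = 0;
        uint32_t    acc;
        int         i;

        /* step 1: starting at the root, OR the bitmaps of the chain */
        for (i = 0; i < nchain; i++)
            full |= bitmaps[i];

        /*
         * Step 2: walk backwards, OR-ing as we go.  A tuple is a "key"
         * tuple when the OR'd bitmap changes at it; stop once the OR'd
         * bitmap matches the full set from step 1.
         */
        acc = bitmaps[nchain - 1];
        printf("key tuple: lp %d\n", nchain);   /* last tuple is kept */
        for (i = nchain - 2; i >= 0 && acc != full; i--)
        {
            if ((acc | bitmaps[i]) != acc)
            {
                acc |= bitmaps[i];
                printf("key tuple: lp %d\n", i + 1);
            }
        }
        if (acc != full)    /* not all columns seen: root is a key tuple */
            printf("key tuple: lp 1\n");

        return 0;
    }

This prints "lp 5" and "lp 3", matching the two key tuples kept in the
redirected chain above.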

Nathan


Re: partial heap only tuples

From: Peter Geoghegan
Date: 2021-04-18 16:27:15 -0700
On Tue, Feb 9, 2021 at 10:48 AM Bossart, Nathan <bossartn@amazon.com> wrote:
> I'm hoping to gather some early feedback on a heap optimization I've
> been working on.  In short, I'm hoping to add "partial heap only
> tuple" (PHOT) support, which would allow you to skip updating indexes
> for unchanged columns even when other indexes require updates.  Today,
> HOT works wonders when no indexed columns are updated.  However, as
> soon as you touch one indexed column, you lose that optimization
> entirely, as you must update every index on the table.  The resulting
> performance impact is a pain point for many of our (AWS's) enterprise
> customers, so we'd like to lend a hand for some improvements in this
> area.  For workloads involving a lot of columns and a lot of indexes,
> an optimization like PHOT can make a huge difference.  I'm aware that
> there was a previous attempt a few years ago to add a similar
> optimization called WARM [0] [1].  However, I only noticed this
> previous effort after coming up with the design for PHOT, so I ended
> up taking a slightly different approach.  I am also aware of a couple
> of recent nbtree improvements that may mitigate some of the impact of
> non-HOT updates [2] [3], but I am hoping that PHOT serves as a nice
> complement to those.  I've attached a very early proof-of-concept
> patch with the design described below.

I would like to share some thoughts that I have about how I think
about optimizations like PHOT, and how they might fit together with my
own work -- particularly the nbtree bottom-up index deletion feature
you referenced. My remarks could equally well apply to WARM.
Ordinarily this is the kind of thing that would be too hand-wavey for
the mailing list, but we don't have the luxury of in-person
communication right now.

Everybody tends to talk about HOT as if it works perfectly once you
make some modest assumptions, such as "there are no long-running
transactions", and "no UPDATEs will logically modify indexed columns".
But I tend to doubt that that's truly the case -- I think that there
are still pathological cases where HOT cannot keep the total table
size stable in the long run due to subtle effects that eventually
aggregate into significant issues, like heap fragmentation. Ask Jan
Wieck about the stability of some of the TPC-C/BenchmarkSQL tables to
get one example of this. There is no reason to believe that PHOT will
help with that. Maybe that's okay, but I would think carefully about
what that means if I were undertaking this work. Ensuring stability in
the on-disk size of tables in cases where the size of the logical
database is stable should be an important goal of a project like PHOT
or HOT.

If you want to get a better sense of how these inefficiencies might
happen, I suggest looking into using recently added autovacuum logging
to analyze how well HOT works today, using the technique that I go
into here:

https://postgr.es/m/CAH2-WzkjU+NiBskZunBDpz6trSe+aQvuPAe+xgM8ZvoB4wQwhA@mail.gmail.com

Small inefficiencies in the on-disk structure have a tendency to
aggregate over time, at least when there is no possible way to reverse
them. The bottom-up index deletion stuff is very effective as a
backstop against index bloat, because things are generally very
non-linear. The cost of an unnecessary page split is very high, and
permanent. But we can make it cheap to *try* to avoid that using
fairly simple heuristics. We can be reasonably confident that we're
about to split the page unnecessarily, and use cues that ramp up the
number of heap page accesses as needed. We ramp up during a bottom-up
index deletion, as we manage to free some index tuples as a result of
previous heap page accesses.

This works very well because we can intervene very selectively. We
aren't interested in deleting index tuples unless and until we really
have to, and in general there tends to be quite a bit of free space to
temporarily store extra version duplicates -- that's what most index
pages look like, even on the busiest of databases. It's possible for
the bottom-up index deletion mechanism to be invoked very
infrequently, and yet make a huge difference. And when it fails to
free anything, it fails permanently for that particular leaf page
(because it splits) -- so now we have lots of space for future index
tuple insertions that cover the original page's key space. We won't
thrash.

My intuition is that similar principles can be applied inside heapam.
Failing to fit related versions on a heap page (having managed to do
so for hours or days before that point) is more or less the heap page
equivalent of a leaf page split from version churn (this is the
pathology that bottom-up index deletion targets). For example, we
could have a fall back mode that compresses old versions that is used
if and only if heap pruning was attempted but then failed. We should
always try to avoid migrating to a new heap page, because that amounts
to a permanent solution to a temporary problem. We should perhaps make
the updater work to prove that that's truly necessary, rather than
giving up immediately (i.e. assuming that it must be necessary at the
first sign of trouble).

We might have successfully fit the successor heap tuple version a
million times before just by HOT pruning, and yet currently we give up
just because it didn't work on the one millionth and first occasion --
don't you think that's kind of silly? We may be able to afford having
a fallback strategy that is relatively expensive, provided it is
rarely used. And it might be very effective in the aggregate, despite
being rarely used -- it might provide us just what we were missing
before. Just try harder when you run into a problem every once in a
blue moon!

A diversity of strategies with fallback behavior is sometimes the best
strategy. Don't underestimate the contribution of rare and seemingly
insignificant adverse events. Consider the lifecycle of the data over
time. If we quit trying to fit new versions on the same heap page at
the first sign of real trouble, then it's only a matter of time until
widespread heap fragmentation results -- each heap page only has to be
unlucky once, and in the long run it's inevitable that they all will.
We could probably do better at nipping it in the bud at the level of
individual heap pages and opportunistic prune operations.

I'm sure that it would be useful to not have to rely on bottom-up
index deletion in more cases -- I think that the idea of "a better
HOT" might still be very helpful. Bottom-up index deletion is only
supposed to be a backstop against pathological behavior (version churn
page splits), which is probably always going to be possible with a
sufficiently extreme workload. I don't believe that the current levels
of version churn/write amplification that we still see with Postgres
must be addressed through totally eliminating multiple versions of the
same logical row that live together in the same heap page. This idea
is a false dichotomy. And it fails to acknowledge how the current
design often works very well. When and how it fails to work well with
a real workload and real tuning (especially heap fill factor tuning)
is probably not well understood. Why not start with that?

Our default heap fill factor is 100. Maybe that's the right decision,
but it significantly impedes the ability of HOT to keep the size of
tables stable over time. Just because heap fill factor 90 also has
issues today doesn't mean that each pathological behavior cannot be
fixed through targeted intervention. Maybe the myth that HOT works
perfectly once you make some modest assumptions could come true.

-- 
Peter Geoghegan



Re: partial heap only tuples

From: Bruce Momjian
Date: 2021-04-19
On Sun, Apr 18, 2021 at 04:27:15PM -0700, Peter Geoghegan wrote:
> Everybody tends to talk about HOT as if it works perfectly once you
> make some modest assumptions, such as "there are no long-running
> transactions", and "no UPDATEs will logically modify indexed columns".
> But I tend to doubt that that's truly the case -- I think that there
> are still pathological cases where HOT cannot keep the total table
> size stable in the long run due to subtle effects that eventually
> aggregate into significant issues, like heap fragmentation. Ask Jan
> Wieck about the stability of some of the TPC-C/BenchmarkSQL tables to

...

> We might have successfully fit the successor heap tuple version a
> million times before just by HOT pruning, and yet currently we give up
> just because it didn't work on the one millionth and first occasion --
> don't you think that's kind of silly? We may be able to afford having
> a fallback strategy that is relatively expensive, provided it is
> rarely used. And it might be very effective in the aggregate, despite
> being rarely used -- it might provide us just what we were missing
> before. Just try harder when you run into a problem every once in a
> blue moon!
> 
> A diversity of strategies with fallback behavior is sometimes the best
> strategy. Don't underestimate the contribution of rare and seemingly
> insignificant adverse events. Consider the lifecycle of the data over

That is an interesting point --- we often focus on optimizing frequent
operations, but preventing rare but expensive-in-aggregate events from
happening is also useful.

-- 
  Bruce Momjian  <bruce@momjian.us>        https://momjian.us
  EDB                                      https://enterprisedb.com

  If only the physical world exists, free will is an illusion.




Re: partial heap only tuples

From: Peter Geoghegan
On Mon, Apr 19, 2021 at 5:09 PM Bruce Momjian <bruce@momjian.us> wrote:
> > A diversity of strategies with fallback behavior is sometimes the best
> > strategy. Don't underestimate the contribution of rare and seemingly
> > insignificant adverse events. Consider the lifecycle of the data over
>
> That is an interesting point --- we often focus on optimizing frequent
> operations, but preventing rare but expensive-in-aggregate events from
> happening is also useful.

Right. Similarly, we sometimes focus on adding an improvement,
overlooking more promising opportunities to subtract a disimprovement.
Apparently this is a well-known tendency:

https://www.scientificamerican.com/article/our-brain-typically-overlooks-this-brilliant-problem-solving-strategy/

I believe that it's particularly important to consider subtractive
approaches with a complex system. This has sometimes worked well for
me as a conscious and deliberate strategy.

--
Peter Geoghegan



Re: partial heap only tuples

From: vignesh C
Date: 2021-07-14 13:34
On Tue, Mar 9, 2021 at 12:09 AM Bossart, Nathan <bossartn@amazon.com> wrote:
>
> On 3/8/21, 10:16 AM, "Ibrar Ahmed" <ibrar.ahmad@gmail.com> wrote:
> > On Wed, Feb 24, 2021 at 3:22 AM Bossart, Nathan <bossartn@amazon.com> wrote:
> >> On 2/10/21, 2:43 PM, "Bruce Momjian" <bruce@momjian.us> wrote:
> >>> I wonder if you should create a Postgres wiki page to document all of
> >>> this.  I agree PG 15 makes sense.  I would like to help with this if I
> >>> can.  I will need to study this email more later.
> >>
> >> I've started the wiki page for this:
> >>
> >>    https://wiki.postgresql.org/wiki/Partial_Heap_Only_Tuples
> >>
> >> Nathan
> >
> > The regression test case (partial-index) is failing
> >
> > https://cirrus-ci.com/task/5310522716323840
>
> This patch is intended as a proof-of-concept of some basic pieces of
> the project.  I'm working on a new patch set that should be more
> suitable for community review.

The patch no longer applies on HEAD; could you rebase and post an
updated patch?  I'm changing the status to "Waiting for Author".

Regards,
Vignesh



Re: partial heap only tuples

From: Daniel Gustafsson
Date: 2021-11-04
> On 14 Jul 2021, at 13:34, vignesh C <vignesh21@gmail.com> wrote:

> The patch no longer applies on HEAD; could you rebase and post an
> updated patch?  I'm changing the status to "Waiting for Author".

As no update has been posted, the patch still doesn't apply.  I'm
marking this patch Returned with Feedback; feel free to open a new
entry for an updated patch.

--
Daniel Gustafsson        https://vmware.com/




Re: partial heap only tuples

From: "Bossart, Nathan"
On 11/4/21, 3:24 AM, "Daniel Gustafsson" <daniel@yesql.se> wrote:
> As no update has been posted, the patch still doesn't apply.  I'm
> marking this patch Returned with Feedback; feel free to open a new
> entry for an updated patch.

Thanks.  I have been working on this intermittently, and I hope to
post a more complete proof-of-concept in the near future.  I'll create
a new commitfest entry once that's done.

Nathan