Discussion: generalized conveyor belt storage


generalized conveyor belt storage

From:
Robert Haas
Date:
Hi!

Back when we were working on zheap, we realized that we needed some
way of storing undo records that would permit us to discard old undo
efficiently when it was no longer needed. Thomas Munro dubbed this
"conveyor belt storage," the idea being that items are added at one
end and removed from the other. In the zheap patches, Thomas took an
approach similar to what we've done elsewhere for CLOG and WAL: keep
creating new files, put a relatively small amount of data in each one,
and remove the old files in their entirety when you can prove that
they are no longer needed. While that did and does seem reasonable, I
came to dislike it, because it meant we needed a separate smgr for
undo as compared with everything else, which was kind of complicated.
Also, that approach was tightly integrated with and thus only useful
for zheap, and as Thomas observed at the time, the problem seems to be
fairly general. I got interested in this problem again because of the
idea discussed in
https://www.postgresql.org/message-id/CA%2BTgmoZgapzekbTqdBrcH8O8Yifi10_nB7uWLB8ajAhGL21M6A%40mail.gmail.com
of having a "dead TID" relation fork in which to accumulate TIDs that
have been marked as dead in the table but not yet removed from the
indexes, so as to permit a looser coupling between table vacuum and
index vacuum. That's yet another case where you accumulate new data
and then at a certain point the oldest data can be thrown away because
its intended purpose has been served.
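In API terms, the core idea can be sketched in a few lines of C (a hypothetical stand-alone illustration, not the patch's actual interface; all names here are invented for the sketch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Conveyor-belt concept in miniature: logical page numbers only ever
 * grow at one end and are discarded from the other.
 */
typedef uint64_t PageNo;

typedef struct
{
    PageNo      oldest;         /* oldest logical page still retained */
    PageNo      next;           /* next logical page number to hand out */
} Belt;

static PageNo
belt_append(Belt *b)
{
    return b->next++;           /* add at one end... */
}

static void
belt_truncate(Belt *b, PageNo keep_from)
{
    assert(keep_from >= b->oldest && keep_from <= b->next);
    b->oldest = keep_from;      /* ...remove from the other */
}
```

The patch's machinery (metapage, index segments, freespace map) then takes care of mapping this ever-growing logical page range onto recyclable physical pages.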

So here's a patch. Basically, it lets you initialize a relation fork
as a "conveyor belt," and then you can add pages of basically
arbitrary data to the conveyor belt and then throw away old ones and,
modulo bugs, it will take care of recycling space for you. There's a
fairly detailed README in the patch if you want a more detailed
description of how the whole thing works. It's missing some features
that I want it to have: for example, I'd like to have on-line
compaction, where whatever logical page numbers of data currently
exist can be relocated to lower physical page numbers thus allowing
you to return space to the operating system, hopefully without
requiring a strong heavyweight lock. But that's not implemented yet,
and it's also missing a few other things, like test cases, performance
results, more thorough debugging, better write-ahead logging
integration, and some code to use it to do something useful. But
there's enough here, I think, for you to form an opinion about whether
you think this is a reasonable direction, and give any design-level
feedback that you'd like to give. My colleagues Dilip Kumar and Mark
Dilger have contributed to this effort with some testing help, but all
the code in this patch is mine.

When I was chatting with Andres about this, he jumped to the question
of whether this could be used to replace SLRUs. To be honest, it's not
really designed for applications that are quite that intense. I think
we would get too much contention on the metapage, which you have to
lock and often modify for just about every conveyor belt operation.
Perhaps that problem can be dodged somehow, and it might even be a
good idea, because (1) then we'd have that data in shared_buffers
instead of a separate tiny buffer space and (2) the SLRU code is
pretty crappy. But I'm more interested in using this for new things
than I am in replacing existing core technology where any new bugs
will break everything for everyone. Still, I'm happy to hear ideas
around this kind of thing, or to hear the results of any
experimentation you may want to do.

Let me know what you think.

Thanks,

-- 
Robert Haas
EDB: http://www.enterprisedb.com

Attachments

Re: generalized conveyor belt storage

From:
Peter Geoghegan
Date:
On Tue, Dec 14, 2021 at 3:00 PM Robert Haas <robertmhaas@gmail.com> wrote:
> I got interested in this problem again because of the
> idea discussed in
> https://www.postgresql.org/message-id/CA%2BTgmoZgapzekbTqdBrcH8O8Yifi10_nB7uWLB8ajAhGL21M6A%40mail.gmail.com
> of having a "dead TID" relation fork in which to accumulate TIDs that
> have been marked as dead in the table but not yet removed from the
> indexes, so as to permit a looser coupling between table vacuum and
> index vacuum. That's yet another case where you accumulate new data
> and then at a certain point the oldest data can be thrown away because
> its intended purpose has been served.

Thanks for working on this! It seems very strategically important to me.

> So here's a patch. Basically, it lets you initialize a relation fork
> as a "conveyor belt," and then you can add pages of basically
> arbitrary data to the conveyor belt and then throw away old ones and,
> modulo bugs, it will take care of recycling space for you. There's a
> fairly detailed README in the patch if you want a more detailed
> description of how the whole thing works.

How did you test this? I ask because it would be nice if there was a
convenient way to try this out, as somebody with a general interest.
Even just a minimal test module, that you used for development work.

> When I was chatting with Andres about this, he jumped to the question
> of whether this could be used to replace SLRUs. To be honest, it's not
> really designed for applications that are quite that intense.

I personally think that SLRUs (and the related problem of hint bits)
are best addressed by tracking which transactions have modified what
heap blocks (perhaps only approximately), and then eagerly cleaning up
aborted transaction IDs, using a specialized version of VACUUM that
does something like heap pruning for aborted xacts. It just seems
weird that we keep around clog in order to not have to run VACUUM too
frequently, which (among other things) already cleans up after aborted
transactions. Having a more efficient data structure for commit status
information doesn't seem all that promising, because the problem is
actually our insistence on remembering which XIDs aborted almost
indefinitely. There is no fundamental reason why it has to work that
way. I don't mean that it could in principle be changed (that's almost
always true); I mean that it seems like an accident of history, that
could have easily gone another way: the ancestral design of clog
existing in a world without MVCC. It seems like a totally vestigial
thing to me, which I wouldn't say about a lot of other things (just
clog and freezing).

Something like the conveyor belt seems like it would help with this
other kind of VACUUM, mind you. We probably don't want to do anything
like index vacuuming for these aborted transactions (actually maybe we
want to do retail deletion, which is much more likely to work out
there). But putting the TIDs into a store used for dead TIDs could
make sense. Especially in extreme cases.

-- 
Peter Geoghegan



Re: generalized conveyor belt storage

From:
Dilip Kumar
Date:
On Wed, Dec 15, 2021 at 6:33 AM Peter Geoghegan <pg@bowt.ie> wrote:
>

> How did you test this? I ask because it would be nice if there was a
> convenient way to try this out, as somebody with a general interest.
> Even just a minimal test module, that you used for development work.
>

I have tested this using a simple extension over the conveyor belt
APIs; the extension provides wrapper APIs over them.  These are basic
APIs which can be extended even further for more detailed testing.
For example, as of now I have provided an API to read a complete page
from the conveyor belt, but that can easily be extended to read from a
particular offset and to control the amount of data to read.

In parallel, I am also testing this by integrating it with vacuum.
That is still a completely WIP patch and needs a lot of design-level
improvement, so I am not sharing it yet; once it is in better shape I
will post it in a separate thread.

Basically, we have decoupled the different vacuum phases (only for
manual vacuum), something like below:
VACUUM (first_pass) t;
VACUUM idx;
VACUUM (second_pass) t;

So in the first pass we do just the first pass of vacuum, and
wherever we would have called lazy_vacuum() we instead store those
dead TIDs in the conveyor belt.  In the index pass, the user can
vacuum an individual index; we fetch the last conveyor belt point up
to which that index has already been vacuumed, load from there as
many dead TIDs as fit in maintenance_work_mem, and call the index
bulk delete (this is done in a loop until the index vacuum is
complete).  In the second pass, we check all the indexes and find the
minimum conveyor belt point up to which all indexes have been
vacuumed.  We also fetch the last point where we left off the second
pass of the heap.  Then we fetch the dead TIDs from the conveyor belt
(as many as fit in maintenance_work_mem) from the heap's last vacuum
point up to the minimum index vacuum point, and perform the second
heap pass.

I have given the highlights of the decoupling work just to show what
sort of testing we are doing for the conveyor belt.  But we can
discuss this on a separate thread when I am ready to post that patch.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachments

Re: generalized conveyor belt storage

From:
Matthias van de Meent
Date:
On Wed, 15 Dec 2021 at 00:01, Robert Haas <robertmhaas@gmail.com> wrote:
>
> Hi!
[...]
> So here's a patch. Basically, it lets you initialize a relation fork
> as a "conveyor belt," and then you can add pages of basically
> arbitrary data to the conveyor belt and then throw away old ones and,
> modulo bugs, it will take care of recycling space for you. There's a
> fairly detailed README in the patch if you want a more detailed
> description of how the whole thing works.

I was reading through this README when I hit the following:

> +Conceptually, a relation fork organized as a conveyor belt has three parts:
> +
> +- Payload. The payload is whatever data the user of this module wishes
> +  to store. The conveyor belt doesn't care what you store in a payload page,
> +  but it does require that you store something: each time a payload page is
> +  initialized, it must end up with either pd_lower > SizeOfPageHeaderData,
> +  or pd_lower < BLCKSZ.

As SizeOfPageHeaderData < BLCKSZ, isn't this condition always true? Or
at least, this currently allows for either any value of pd_lower, or
the (much clearer) 'pd_lower <= SizeOfPageHeaderData or pd_lower >=
BLCKSZ', depending on exclusiveness of the either_or clause.

>  It's missing some features
> that I want it to have: for example, I'd like to have on-line
> compaction, where whatever logical page numbers of data currently
> exist can be relocated to lower physical page numbers thus allowing
> you to return space to the operating system, hopefully without
> requiring a strong heavyweight lock. But that's not implemented yet,
> and it's also missing a few other things, like test cases, performance
> results, more thorough debugging, better write-ahead logging
> integration, and some code to use it to do something useful. But
> there's enough here, I think, for you to form an opinion about whether
> you think this is a reasonable direction, and give any design-level
> feedback that you'd like to give. My colleagues Dilip Kumar and Mark
> Dilger have contributed to this effort with some testing help, but all
> the code in this patch is mine.

You mentioned that this is meant to be used as a "relation fork", but
I couldn't find new code in relpath.h (where ForkNumber etc. are
defined) that allows one more fork per relation. Is that too on the
missing features list, or did I misunderstand what you meant with
"relation fork"?

Kind regards,

Matthias van de Meent



Re: generalized conveyor belt storage

From:
Robert Haas
Date:
On Wed, Dec 15, 2021 at 10:03 AM Matthias van de Meent
<boekewurm+postgres@gmail.com> wrote:
> > +Conceptually, a relation fork organized as a conveyor belt has three parts:
> > +
> > +- Payload. The payload is whatever data the user of this module wishes
> > +  to store. The conveyor belt doesn't care what you store in a payload page,
> > +  but it does require that you store something: each time a payload page is
> > +  initialized, it must end up with either pd_lower > SizeOfPageHeaderData,
> > +  or pd_lower < BLCKSZ.
>
> As SizeOfPageHeaderData < BLCKSZ, isn't this condition always true? Or
> at least, this currently allows for either any value of pd_lower, or
> the (much clearer) 'pd_lower <= SizeOfPageHeaderData or pd_lower >=
> BLCKSZ', depending on exclusiveness of the either_or clause.

The second part of the condition should say pd_upper < BLCKSZ, not
pd_lower. Woops. It's intended to be false for uninitialized pages and
also for pages that are initialized but completely empty with no line
pointers and no special space.
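A stand-alone sketch of the corrected rule, using simplified stand-ins for PostgreSQL's real constants and page header (the real definitions live in bufpage.h; treating a zeroed page as "new" here, per the usual PageIsNew() convention, is my assumption for the sketch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the real page layout constants. */
#define BLCKSZ 8192
#define SizeOfPageHeaderData 24

typedef struct
{
    uint16_t    pd_lower;       /* end of line pointer array */
    uint16_t    pd_upper;       /* start of tuple data / special space */
} MiniPageHeader;

/* A zeroed page is "new" (uninitialized), as with PageIsNew(). */
static bool
page_is_new(const MiniPageHeader *p)
{
    return p->pd_upper == 0;
}

/*
 * The README's requirement with the typo fixed: an initialized payload
 * page must contain something, i.e. either line pointers
 * (pd_lower > SizeOfPageHeaderData) or special space / tuple data
 * (pd_upper < BLCKSZ).
 */
static bool
payload_page_has_content(const MiniPageHeader *p)
{
    if (page_is_new(p))
        return false;
    return p->pd_lower > SizeOfPageHeaderData || p->pd_upper < BLCKSZ;
}
```

With this version, a freshly initialized but empty page (pd_lower == SizeOfPageHeaderData, pd_upper == BLCKSZ) fails the check, exactly as intended.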

> You mentioned that this is meant to be used as a "relation fork", but
> I couldn't find new code in relpath.h (where ForkNumber etc. are
> defined) that allows one more fork per relation. Is that too on the
> missing features list, or did I misunderstand what you meant with
> "relation fork"?

It's up to whoever is using the code to provide the relation fork -
and it could be the main fork of some new kind of relation, or it
could use some other existing fork number, or it could be an entirely
new fork. But it would be up to the calling code to figure that stuff
out.

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: generalized conveyor belt storage

From:
Amul Sul
Date:
On Wed, Dec 15, 2021 at 9:04 PM Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Wed, Dec 15, 2021 at 10:03 AM Matthias van de Meent
> <boekewurm+postgres@gmail.com> wrote:
> [...]

Though the patch is WIP, here are a few comments that I found while
reading it and thought might help:

+   {
+       if (meta->cbm_oldest_index_segment ==
+           meta->cbm_newest_index_segment)
+           elog(ERROR, "must remove last index segment when only one remains");
+       meta->cbm_oldest_index_segment = segno;
+   }

How about adding an assert or elog to ensure that 'segno' is indeed
the successor?
--

+   if (meta->cbm_index[offset] != offset)
+       elog(ERROR,
+            "index entry at offset %u was expected to be %u but found %u",
+            offset, segno, meta->cbm_index[offset]);

IF condition should be : meta->cbm_index[offset] != segno ?
--

+   if (segno >= CB_METAPAGE_FREESPACE_BYTES * BITS_PER_BYTE)
+       elog(ERROR, "segment %u out of range for metapage fsm", segno);

I think CB_FSM_SEGMENTS_FOR_METAPAGE should be used like
cb_metapage_set_fsm_bit()
--

+/*
+ * Increment the count of segments allocated.
+ */
+void
+cb_metapage_increment_next_segment(CBMetapageData *meta, CBSegNo segno)
+{
+   if (segno != meta->cbm_next_segment)
+       elog(ERROR, "extending to create segment %u but next segment is %u",
+            segno, meta->cbm_next_segment);

I didn't understand this error; what does it mean?  It would be
helpful to add a brief note about what it means and why we are
throwing it, and/or to rephrase the error message.
--

+++ b/src/backend/access/conveyor/cbxlog.c
@@ -0,0 +1,442 @@
+/*-------------------------------------------------------------------------
+ *
+ * cbxlog.c
+ *   XLOG support for conveyor belts.
+ *
+ * For each REDO function in this file, see cbmodify.c for the
+ * corresponding function that performs the modification during normal
+ * running and logs the record that we REDO here.
+ *
+ * Copyright (c) 2016-2021, PostgreSQL Global Development Group
+ *
+ * src/backend/access/conveyor/cbmodify.c

Incorrect file name: s/cbmodify.c/cbxlog.c/
--

+           can_allocate_segment =
+               (free_segno_first_blkno < possibly_not_on_disk_blkno)

The condition should be '<=' ?
--

+ * ConveyorBeltPhysicalTruncate. For more aggressive cleanup options, see
+ * ConveyorBeltCompact or ConveyorBeltRewrite.

Didn't find the ConveyorBeltCompact or ConveyorBeltRewrite code;
might they be yet to be implemented?
--

+ * the value computed here here if the entry at that offset is already

"here" twice.
--

And a few typos:

othr
semgents
fucntion
refrenced
initialied
remve
extrordinarily
implemenation
--

Regards,
Amul



Re: generalized conveyor belt storage

From:
Robert Haas
Date:
On Wed, Dec 29, 2021 at 7:08 AM Amul Sul <sulamul@gmail.com> wrote:
> Though the patch is WIP, here are a few comments that I found while
> reading it and thought might help:
>
> +   {
> +       if (meta->cbm_oldest_index_segment ==
> +           meta->cbm_newest_index_segment)
> +           elog(ERROR, "must remove last index segment when only one remains");
> +       meta->cbm_oldest_index_segment = segno;
> +   }
>
> How about adding an assert or elog to ensure that 'segno' is indeed
> the successor?

This code doesn't have any inexpensive way to know whether segno is
the successor. To figure that out you'd need to look at the latest
index page that does exist, but this function's job is just to look at
the metapage. Besides, the eventual caller will have just looked up
that value in order to pass it to this function, so double-checking
what we just computed right before doesn't really make sense.

> +   if (meta->cbm_index[offset] != offset)
> +       elog(ERROR,
> +            "index entry at offset %u was expected to be %u but found %u",
> +            offset, segno, meta->cbm_index[offset]);
>
> IF condition should be : meta->cbm_index[offset] != segno ?

Oops, you're right.

> +   if (segno >= CB_METAPAGE_FREESPACE_BYTES * BITS_PER_BYTE)
> +       elog(ERROR, "segment %u out of range for metapage fsm", segno);
>
> I think CB_FSM_SEGMENTS_FOR_METAPAGE should be used like
> cb_metapage_set_fsm_bit()

Good call.

> +/*
> + * Increment the count of segments allocated.
> + */
> +void
> +cb_metapage_increment_next_segment(CBMetapageData *meta, CBSegNo segno)
> +{
> +   if (segno != meta->cbm_next_segment)
> +       elog(ERROR, "extending to create segment %u but next segment is %u",
> +            segno, meta->cbm_next_segment);
>
> I didn't understand this error; what does it mean?  It would be
> helpful to add a brief note about what it means and why we are
> throwing it, and/or to rephrase the error message.

cbm_next_segment is supposed to be the lowest-numbered segment that
doesn't yet exist. Imagine that it's 4. But, suppose that the free
space map shows segment 4 as in use, even though the metapage's
cbm_next_segment value claims it's not allocated yet. Then maybe we
decide that the lowest-numbered free segment according to the
freespace map is 5, while meanwhile the metapage is pretty sure we've
never created 4. Then I think we'll end up here and trip over this
error check. To get more understanding of this, look at how
ConveyorBeltGetNewPage selects free_segno.

> Incorrect file name: s/cbmodify.c/cbxlog.c/

Right, thanks.

> +           can_allocate_segment =
> +               (free_segno_first_blkno < possibly_not_on_disk_blkno)
>
> The condition should be '<=' ?

It doesn't look that way to me. If they were equal, then that block
doesn't necessarily exist on disk ... in which case we should not
allocate. Am I missing something?

> + * ConveyorBeltPhysicalTruncate. For more aggressive cleanup options, see
> + * ConveyorBeltCompact or ConveyorBeltRewrite.
>
> Didn't find the ConveyorBeltCompact or ConveyorBeltRewrite code;
> might they be yet to be implemented?

Right.

> + * the value computed here here if the entry at that offset is already
>
> "here" twice.

Oops.

> And a few typos:
>
> othr
> semgents
> fucntion
> refrenced
> initialied
> remve
> extrordinarily
> implemenation

Wow, OK, that's a lot of typos.

Updated patch attached, also with a fix for the mistake in the readme
that Matthias found, and a few other bug fixes.

-- 
Robert Haas
EDB: http://www.enterprisedb.com

Attachments

Re: generalized conveyor belt storage

From:
Dilip Kumar
Date:
On Wed, Jan 5, 2022 at 1:13 AM Robert Haas <robertmhaas@gmail.com> wrote:
>
>
> Updated patch attached, also with a fix for the mistake in the readme
> that Matthias found, and a few other bug fixes.

Rebased over the current master.  Basically, I rebased it for the
other patch sets I am planning to post for heap and index vacuum
decoupling.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com

Attachments

Re: generalized conveyor belt storage

From:
Amul Sul
Date:
 On Wed, Jan 5, 2022 at 1:12 AM Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Wed, Dec 29, 2021 at 7:08 AM Amul Sul <sulamul@gmail.com> wrote:
> > Though the patch is WIP, here are a few comments that I found while
> > reading it and thought might help:
> >
> > +   {
> > +       if (meta->cbm_oldest_index_segment ==
> > +           meta->cbm_newest_index_segment)
> > +           elog(ERROR, "must remove last index segment when only one remains");
> > +       meta->cbm_oldest_index_segment = segno;
> > +   }
> >
> > How about adding an assert or elog to ensure that 'segno' is indeed
> > the successor?
>
> This code doesn't have any inexpensive way to know whether segno is
> the successor. To figure that out you'd need to look at the latest
> index page that does exist, but this function's job is just to look at
> the metapage. Besides, the eventual caller will have just looked up
> that value in order to pass it to this function, so double-checking
> what we just computed right before doesn't really make sense.
>

Ok.

Assert(meta->cbm_oldest_index_segment < segno) was what I had in
mind, since the segment to be removed is between
meta->cbm_oldest_index_segment and segno, IIUC.

> [...]
> Updated patch attached, also with a fix for the mistake in the readme
> that Matthias found, and a few other bug fixes.
>

A few more comments for this version:

+void
+cb_cache_invalidate(CBCache *cache, CBPageNo index_start,
+                   uint64 index_segments_moved)
+{
+   if (index_segments_moved != cache->index_segments_moved)
+   {
+       cb_iseg_reset(cache->iseg);
+       cache->index_segments_moved = index_segments_moved;
+   }
+   else if (index_start > cache->oldest_possible_start)
+   {
+       cb_iseg_iterator it;
+       cb_iseg_entry *entry;
+
+       cb_iseg_start_iterate(cache->iseg, &it);
+       while ((entry = cb_iseg_iterate(cache->iseg, &it)) != NULL)
+           if (entry->index_segment_start < index_start)
+               cb_iseg_delete_item(cache->iseg, entry);
+   }
+}

Shouldn't oldest_possible_start be updated once all the entries
preceding index_start are deleted?
--

+CBSegNo
+cbfsmpage_find_free_segment(Page page)
+{
+   CBFSMPageData *fsmp = cb_fsmpage_get_special(page);
+   unsigned    i;
+   unsigned    j;
+
+   StaticAssertStmt(CB_FSMPAGE_FREESPACE_BYTES % sizeof(uint64) == 0,
+                    "CB_FSMPAGE_FREESPACE_BYTES should be a multiple of 8");
+

I am a bit confused about this assertion: why is that so?  Why should
CB_FSMPAGE_FREESPACE_BYTES be a multiple of 8?  Did you mean
CB_FSM_SEGMENTS_PER_FSMPAGE instead of CB_FSMPAGE_FREESPACE_BYTES?
--

+/*
+ * Add index entries for logical pages beginning at 'pageno'.
+ *
+ * It is the caller's responsibility to supply the correct index page, and
+ * to make sure that there is enough room for the entries to be added.
+ */
+void
+cb_indexpage_add_index_entries(Page page,
+                              unsigned pageoffset,
+                              unsigned num_index_entries,
+                              CBSegNo *index_entries)

The first comment line says "...logical pages beginning at 'pageno'",
but there is no 'pageno' in that function; does it mean pageoffset?
--

+ * Sets *pageno to the first logical page covered by this index page.
+ *
+ * Returns the segment number to which the obsolete index entry points.
+ */
+CBSegNo
+cb_indexpage_get_obsolete_entry(Page page, unsigned *pageoffset,
+                               CBPageNo *first_pageno)
+{
+   CBIndexPageData *ipd = cb_indexpage_get_special(page);
+
+   *first_pageno = ipd->cbidx_first_page;
+
+   while (*pageoffset < CB_INDEXPAGE_INDEX_ENTRIES &&
+          ipd->cbidx_entry[*pageoffset] != CB_INVALID_SEGMENT)
+       ++*pageoffset;
+
+   return ipd->cbidx_entry[*pageoffset];
+}

Here I think the comment should say *first_pageno instead of *pageno.
The second line says "Returns the segment number to which the obsolete
index entry points"; I am not sure whether that is correct, or I might
have misunderstood it.  It looks like the function returns either
CB_INVALID_SEGMENT or ipd->cbidx_entry[CB_INDEXPAGE_INDEX_ENTRIES],
the latter being a garbage value due to out-of-bounds memory access.
--

+/*
+ * Copy the indicated number of index entries out of the metapage.
+ */
+void
+cb_metapage_get_index_entries(CBMetapageData *meta, unsigned num_index_entries,
+                             CBSegNo *index_entries)
+{
+   Assert(num_index_entries <= cb_metapage_get_index_entries_used(meta));

IMO, having an elog instead of the assert would be good, as
cb_metapage_remove_index_entries() has an elog for (used < count).
--

+ * Copyright (c) 2016-2021, PostgreSQL Global Development Group

This is rather a question for my own knowledge: why does this
copyright year start from 2016?  I thought we would add only the
current year for a newly added file.

Regards,
Amul



Re: generalized conveyor belt storage

From:
Alvaro Herrera
Date:
What's with the free text in cbstorage.h?  I would guess that this
wouldn't even compile, and nobody has noticed because the file is not
included by anything yet ...

-- 
Álvaro Herrera        Breisgau, Deutschland  —  https://www.EnterpriseDB.com/
#error "Operator lives in the wrong universe"
  ("Use of cookies in real-time system development", M. Gleixner, M. Mc Guire)



Re: generalized conveyor belt storage

From:
Thom Brown
Date:
On Wed, 20 Apr 2022 at 13:02, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
>
> What's with the free text in cbstorage.h?  I would guess that this
> wouldn't even compile, and nobody has noticed because the file is not
> included by anything yet ...

I'm not able to compile:

cbfsmpage.c: In function ‘cb_fsmpage_initialize’:
cbfsmpage.c:34:11: warning: unused variable ‘fsm_block_spacing’
[-Wunused-variable]
  unsigned fsm_block_spacing = cb_fsm_block_spacing(pages_per_segment);
           ^
cbfsmpage.c:33:14: warning: unused variable ‘first_fsm_block’
[-Wunused-variable]
  BlockNumber first_fsm_block = cb_first_fsm_block(pages_per_segment);
...
cbxlog.c: In function ‘cb_xlog_allocate_payload_segment’:
cbxlog.c:70:24: error: void value not ignored as it ought to be
  bool  have_fsm_page = XLogRecGetBlockTag(record, 1, NULL, NULL, NULL);
                        ^
cbxlog.c: In function ‘cb_xlog_allocate_index_segment’:
cbxlog.c:123:17: error: void value not ignored as it ought to be
  have_prev_page = XLogRecGetBlockTag(record, 2, NULL, NULL, NULL);
                 ^
cbxlog.c:124:16: error: void value not ignored as it ought to be
  have_fsm_page = XLogRecGetBlockTag(record, 3, NULL, NULL, NULL);
                ^
cbxlog.c: In function ‘cb_xlog_recycle_payload_segment’:
cbxlog.c:311:16: error: void value not ignored as it ought to be
  have_metapage = XLogRecGetBlockTag(record, 0, NULL, NULL, NULL);
                ^
cbxlog.c:312:18: error: void value not ignored as it ought to be
  have_index_page = XLogRecGetBlockTag(record, 1, NULL, NULL, NULL);
                  ^
cbxlog.c:313:16: error: void value not ignored as it ought to be
  have_fsm_page = XLogRecGetBlockTag(record, 2, NULL, NULL, NULL);
                ^
make[4]: *** [cbxlog.o] Error 1
make[4]: Leaving directory
`/home/thom/Development/postgresql/src/backend/access/conveyor'
make[3]: *** [conveyor-recursive] Error 2
make[3]: Leaving directory
`/home/thom/Development/postgresql/src/backend/access'
make[2]: *** [access-recursive] Error 2


--
Thom



Re: generalized conveyor belt storage

From:
Robert Haas
Date:
On Wed, Apr 20, 2022 at 8:02 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
> What's with the free text in cbstorage.h?  I would guess that this
> wouldn't even compile, and nobody has noticed because the file is not
> included by anything yet ...

I think I was using that file for random notes with the idea of
removing it eventually.

I'll clean that up if I get back around to working on this. Right now
it's not clear to me how to get this integrated with vacuum in a
useful way, so finishing this part of it isn't that exciting, at least
not unless somebody else comes up with a cool use for it.

What motivated you to look at this?

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: generalized conveyor belt storage

From:
Robert Haas
Date:
On Wed, Apr 20, 2022 at 8:32 AM Thom Brown <thom@linux.com> wrote:
> On Wed, 20 Apr 2022 at 13:02, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:
> > What's with the free text in cbstorage.h?  I would guess that this
> > wouldn't even compile, and nobody has noticed because the file is not
> > included by anything yet ...
>
> I'm not able to compile:

Since I haven't been working on this for a while, the patch set's not updated.

> cbfsmpage.c: In function ‘cb_fsmpage_initialize’:
> cbfsmpage.c:34:11: warning: unused variable ‘fsm_block_spacing’
> [-Wunused-variable]
>   unsigned fsm_block_spacing = cb_fsm_block_spacing(pages_per_segment);
>            ^

Hmm, well I guess if that variable is unused we can just delete it.

> cbxlog.c: In function ‘cb_xlog_allocate_payload_segment’:
> cbxlog.c:70:24: error: void value not ignored as it ought to be
>   bool  have_fsm_page = XLogRecGetBlockTag(record, 1, NULL, NULL, NULL);

This is because Tom recently made that function return void instead of
bool. I guess all these will need to be changed to use
XLogRecGetBlockTagExtended and pass an extra NULL for the
*prefetch_buffer argument.
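Concretely, the fix for each of those call sites would presumably look like this (a sketch based on the described signature change; not tested against the tree):

```c
/* before: no longer compiles, since XLogRecGetBlockTag() now returns void */
bool        have_fsm_page = XLogRecGetBlockTag(record, 1, NULL, NULL, NULL);

/* after: the Extended variant still returns bool; pass NULL for the new
 * prefetch_buffer output parameter as well */
bool        have_fsm_page = XLogRecGetBlockTagExtended(record, 1, NULL, NULL,
                                                       NULL, NULL);
```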

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: generalized conveyor belt storage

From:
Alvaro Herrera
Date:
On 2022-Apr-20, Robert Haas wrote:

> I'll clean that up if I get back around to working on this. Right now
> it's not clear to me how to get this integrated with vacuum in a
> useful way, so finishing this part of it isn't that exciting, at least
> not unless somebody else comes up with a cool use for it.

Oh, okay.

> What motivated you to look at this?

Shrug ... it happened to fall in my mutt index view and I wanted to have
a general idea of where it was going.  I have no immediate use for it,
sorry.

-- 
Álvaro Herrera               48°01'N 7°57'E  —  https://www.EnterpriseDB.com/
"I dream about dreams about dreams", sang the nightingale
under the pale moon (Sandman)