Re: [HACKERS] Online enabling of page level checksums

From: David Christensen
Subject: Re: [HACKERS] Online enabling of page level checksums
Msg-id: 1E6E64E9-634B-43F4-8AA2-CD85AD92D2F8@endpoint.com
In response to: [HACKERS] Online enabling of page level checksums (Magnus Hagander <magnus@hagander.net>)
Responses: Re: [HACKERS] Online enabling of page level checksums (Simon Riggs <simon@2ndquadrant.com>)
List: pgsql-hackers
So as mentioned on IRC, I have a patch that I am working to rebase on HEAD with the following design.  It is very similar to what you have proposed, so maybe we can use my development notes as a jumping-off point for discussion/refinement.

* Incremental Checksums

PostgreSQL users should have a way of upgrading their cluster to use data checksums without having to do a costly pg_dump/pg_restore; in particular, checksums should be able to be enabled/disabled at will, with the database enforcing the logic of whether the pages in a given database are valid.

One approach considered was adding flags to pg_upgrade to set up the new cluster to use checksums where it did not use them before (or optionally turning them off).  That is a nice tool to have, but it does not keep the database online while the cluster goes through the initial checksum process, which is what this design aims to support.

In order to support the idea of incremental checksums, this design adds the following things:

** pg_control:

Keep "data_checksum_version", but have it indicate *only* the algorithm version for checksums; i.e., it's no longer used for the data_checksum enabled/disabled state.

Add "data_checksum_state", an enum with multiple states: "disabled", "enabling", "enforcing" (and perhaps "revalidating" too; something to indicate that we are reprocessing a database that purports to have been completely checksummed already).

An explanation of the states, as well as the checksum behavior for each:

- disabled => not in a checksum cycle; no read validation, no checksums written.  This is the current behavior for Postgres *without* checksums.

- enabling => in a checksum cycle; no read validation, checksums written.  Any page that gets written to disk will have a valid checksum.  This state is required when transitioning a cluster which has never had checksums, as page reads would otherwise fail validation, since the stored checksums are uninitialized.

- enforcing => not in a checksum cycle; read validation, checksums written.  This is the current behavior of Postgres *with* checksums.

(Caveat: I'm not certain the following state is needed, and the current version of this patch doesn't have it:)

- revalidating => in a checksum cycle; read validation, checksums written.  The difference between this and "enabling" is that we care if page reads fail validation, since by definition every page should already have a valid checksum, and we should verify this.
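The state/behavior matrix above can be sketched as follows.  This is illustrative only, not PostgreSQL source; the enum and helper names are mine:

```python
# Sketch of the proposed data_checksum_state values and the read/write
# behavior each one implies (names mirror the design notes above).
from enum import Enum

class ChecksumState(Enum):
    DISABLED = "disabled"          # no read validation, no checksums written
    ENABLING = "enabling"          # no read validation, checksums written
    ENFORCING = "enforcing"        # read validation, checksums written
    REVALIDATING = "revalidating"  # read validation, checksums written

def verifies_on_read(state):
    # only states past (or re-running) a completed cycle validate reads
    return state in (ChecksumState.ENFORCING, ChecksumState.REVALIDATING)

def writes_checksums(state):
    # every state except "disabled" stamps checksums on written pages
    return state is not ChecksumState.DISABLED

def in_checksum_cycle(state):
    # states in which the bgworker still has relations to visit
    return state in (ChecksumState.ENABLING, ChecksumState.REVALIDATING)
```

In particular, "enabling" writes checksums but must not validate reads, since pre-existing pages carry uninitialized checksums.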

Add "data_checksum_cycle", a counter that gets incremented with every checksum cycle change.  This is used as a flag to verify when new checksum actions take place, for instance if we wanted to upgrade/change the checksum algorithm, or if we just want to support periodic checksum validation.

This variable will be compared against new values in the system tables to keep track of which relations still need to be checksummed in the cluster.

** pg_database:

Add a field "datlastchecksum" which will be the last checksum cycle which has completed for all relations in that
database.

** pg_class:

Add a field "rellastchecksum" which stores the last successful checksum cycle for each relation.

** The checksum bgworker:

When the enabling event is initiated, we will iterate over all databases, checking for those whose "datlastchecksum" field is < the current checksum cycle.  For each of these, we will spawn a bgworker to connect to the database and iterate over pg_class looking for "rellastchecksum < data_checksum_cycle".  If it finds none (i.e., every record has rellastchecksum == data_checksum_cycle), then it marks the containing database as up-to-date by setting "datlastchecksum = data_checksum_cycle".  We can presumably skip over temporary and unlogged relations here.

For any relation it finds in the database which is not checksummed, it starts an actual worker to handle the checksum process for that table.  Since the state of the cluster is already either "enabling" or "revalidating", any block writes will get checksums added automatically, so the only thing the bgworker needs to do is load each block in the relation and explicitly mark it dirty (unless that's not required for FlushBuffer() to do its thing).  After every block in the relation has been visited this way and checksummed, its pg_class record will have "rellastchecksum" updated.
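The scan described in the last two paragraphs can be modeled as a small sketch.  Plain dicts stand in for pg_database/pg_class rows, and checksum_relation() stands in for the per-block dirty-and-flush pass; all names here are hypothetical, not actual catalog or worker APIs:

```python
# Model of the checksum bgworker scan: for each stale database, visit each
# stale relation, checksum it, then mark the database up to date.
def checksum_relation(rel, cycle):
    # stands in for loading every block, marking it dirty, and flushing it;
    # once all blocks are written, the relation's counter is advanced
    rel["rellastchecksum"] = cycle

def run_checksum_cycle(databases, current_cycle):
    for db in databases:
        if db["datlastchecksum"] >= current_cycle:
            continue  # this database already completed the cycle
        stale = [r for r in db["pg_class"]
                 if r["rellastchecksum"] < current_cycle
                 and not r["is_temp"] and not r["is_unlogged"]]
        for rel in stale:
            checksum_relation(rel, current_cycle)
        # all permanent relations now match the cycle: mark the db done
        db["datlastchecksum"] = current_cycle
```

Note that temporary and unlogged relations are simply skipped and do not block the database from being marked complete.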

(XXX: how to handle databases where connections are disabled, like "template1"?)

When all databases have "datlastchecksum" == data_checksum_cycle, we initiate checksumming of any global cluster heap files.  When the global heap files have been checksummed, we consider the checksum cycle complete, change pg_control's "data_checksum_state" to "enforcing", and consider things fully up-to-date.


** Function API:

Interface to the functionality will be via the following utility functions:
 - pg_enable_checksums(void) => turn checksums on for a cluster.  Will error if the state is anything but "disabled".  If this is the first time this cluster has run this, it will initialize ControlFile->data_checksum_version to the preferred built-in algorithm (since there's only one currently, we just set it to 1).  It increments the ControlFile->data_checksum_cycle variable, then sets the state to "enabling", which means that the next time the bgworker checks whether there is anything to do it will see that state, scan all the databases' "datlastchecksum" fields, and start kicking off the bgworker processes to handle the checksumming of the actual relation files.
 - pg_disable_checksums(void) => turn checksums off for a cluster.  Sets the state to "disabled", which means the bgworker will not do anything.
 - pg_request_checksum_cycle(void) => if checksums are "enabled", increment the data_checksum_cycle counter and set the state to "enabling".  (Alternately, if we use the "revalidating" state here, we could ensure that existing checksums are validated on read to alert us of any blocks with problems.  This could also be made "smart", i.e., interrupt an existing running checksum cycle to kick off another one (not sure of the use case), effectively call pg_enable_checksums() if the cluster has not been explicitly enabled before, etc.; depends on how pedantic we want to be.)
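A rough model of the intended semantics of these functions, operating on a dict standing in for pg_control.  The real implementation would of course lock and WAL-log these updates; everything beyond the proposed function names is an assumption:

```python
# Sketch of the proposed utility-function semantics over pg_control fields.
def pg_enable_checksums(ctl):
    if ctl["data_checksum_state"] != "disabled":
        raise RuntimeError("checksums are already enabled or being enabled")
    if ctl["data_checksum_version"] == 0:
        ctl["data_checksum_version"] = 1  # only one built-in algorithm today
    ctl["data_checksum_cycle"] += 1
    ctl["data_checksum_state"] = "enabling"

def pg_disable_checksums(ctl):
    # any state -> disabled; the bgworker then has nothing to do
    ctl["data_checksum_state"] = "disabled"

def pg_request_checksum_cycle(ctl):
    if ctl["data_checksum_state"] == "enforcing":
        ctl["data_checksum_cycle"] += 1
        ctl["data_checksum_state"] = "enabling"  # or "revalidating"
```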

** Design notes/implications:

When the system is in one of the modes which write checksums (currently everything but "disabled"), any new relations/databases will have their "rellastchecksum"/"datlastchecksum" counters prepopulated with the current value of "data_checksum_cycle", as we know that any space used for these relations will be checksummed, and hence valid.  By pre-setting this, we remove the need for the checksum bgworker to explicitly visit these new relations and force checksums which would already be valid.
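The prepopulation rule could be sketched as a single hypothetical helper, called at relation creation (or full rewrite) time:

```python
# Sketch: pick the initial rellastchecksum for a newly created relation.
# In any checksum-writing state, every block of the new relation will be
# written fresh (and thus checksummed), so it can be stamped immediately.
def initial_rellastchecksum(state, current_cycle):
    if state in ("enabling", "enforcing", "revalidating"):
        return current_cycle  # bgworker never needs to visit this relation
    return 0  # disabled: nothing can be claimed about its checksums
```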

Since with checksums on we know any full-heap-modifying operations will be properly checksummed, we may be able to pre-set rellastchecksum for other operations, such as ALTER TABLEs which trigger a full rewrite, *without* having to explicitly run the checksum bgworker on the relation.  I suspect there are a number of other places which may lend themselves to this kind of optimization to avoid having to process relations explicitly.  (Say, if we somehow were able to force a checksum operation on any full SeqScan and update the state after the fact, we'd avoid paying this penalty another time.)

** pg_upgrade:

With this additional complexity, we need to consider pg_upgrade, both now and in future versions.  For one thing, we need to transfer settings from pg_control, plus make sure that pg_upgrade accepts differences in any of the data_checksum-related settings in pg_control.

Four scenarios to consider when deciding if/what to allow:

*** non-checksummed -> non-checksummed

exactly as it stands now

*** checksummed -> non-checksummed

Pretty trivial; since the system tables will be non-checksummed, this is just equivalent to resetting the checksum_cycle and pg_control fields.  User data files will be copied or linked into place with their checksums, but since checksums are disabled they will be ignored.

*** non-checksummed -> checksummed

For the major version this patch makes it into, this will likely be the primary use case; add an --enable-checksums option to `pg_upgrade` to initially set the new cluster to the "enabling" state and pre-init the system databases with the correct state and checksum cycle flag, or have checksums be the *default* option.  Either way, this needs to be set in the initial system initialization.

*** checksummed -> checksummed

The potentially tricky case (but likely to be more common going forward as incremental checksums are supported).

Since we may have had a checksum cycle in progress in the old cluster, or otherwise had the checksum counter advanced, we need to do the following:

- Propagate data_checksum_state, data_checksum_cycle, and data_checksum_version.  If we wanted to support a different CRC algorithm, we could pre-set data_checksum_version to a different version here, increment data_checksum_cycle, and set data_checksum_state to either "enabling" or "revalidating", depending on the original state from the old cluster (i.e., whether we were in the middle of an initial checksum cycle, state == "enabling").

- The new cluster's system tables may need to have the "rellastchecksum" and "datlastchecksum" settings carried over from the previous system, if that's easy, to avoid a fresh checksum run when there is no need.
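One way the checksummed -> checksummed pg_control carry-over might look, sketched with a hypothetical helper (the algorithm-change branch follows the rules just described; nothing here is actual pg_upgrade code):

```python
# Sketch: seed the new cluster's pg_control from the old one.  With no
# algorithm change, the fields carry over unchanged; with one, we bump the
# cycle and enter "enabling" (mid-cycle) or "revalidating" (was complete).
def upgrade_control(old, new_version=None):
    new = dict(old)
    if new_version is not None and new_version != old["data_checksum_version"]:
        new["data_checksum_version"] = new_version
        new["data_checksum_cycle"] = old["data_checksum_cycle"] + 1
        new["data_checksum_state"] = (
            "enabling" if old["data_checksum_state"] == "enabling"
            else "revalidating")
    return new
```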

** Handling checksums on a standby:

How to handle checksums on a standby is a bit trickier: checksums are inherently local cluster state and are not WAL-logged, but since we are storing state in the system tables for each database, we need to make sure that the replicas reflect a truthful checksum state for the cluster.

In order to manage this discrepancy, we WAL-log a few additional pieces of information; specifically:

- new events to capture/propagate any of the pg_control fields, such as: checksum version data, checksum cycle increases, and enabling/disabling actions

- checksum background worker block ranges.  (XXX: we could decide that we're okay with relations being all-or-nothing here, rendering this point moot.)

Some notes on the block ranges: this would effectively be a series of records containing (datid, relid, start block, end block) for explicit checksum ranges, generated by the checksum bgworker as it checksums individual relations.  Rather than having the granularity be by relation, these records could be generated periodically (say in groups of 10K blocks or whatever, number to be determined) to allow standby checksum recalculation to be incremental, so as not to delay replay unnecessarily while checksums are being created.
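The batching just described might look like the following sketch; the 10K batch size is the placeholder number from the note, and the function name is hypothetical:

```python
# Sketch: split one relation's blocks into fixed-size (start, end) ranges,
# one per WAL record, so standby replay can recompute checksums piecemeal.
def emit_range_records(datid, relid, nblocks, batch=10_000):
    records = []
    for start in range(0, nblocks, batch):
        end = min(start + batch, nblocks) - 1  # inclusive end block
        records.append((datid, relid, start, end))
    return records
```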

Since the block range WAL records will be replayed before any of the pg_class/pg_database catalog records are replayed, we'll be guaranteed to have the checksums calculated on the standby by the time it appears valid due to system state.

We may also be able to use the WAL records to speed up the processing of existing heap files if they are interrupted for some reason; this remains to be seen.

** Testing changes:

We need to add separate initdb checksum regression tests which are outside of the normal pg_regress framework.

--
David Christensen
End Point Corporation
david@endpoint.com
785-727-1171
