Discussion: SQL-level pg_datum_image_equal
Hi,

One of our customers has a workload where every so often they update the whole table to make sure it's up to date. In general, you'd probably want to use MERGE for such a workload and skip all rows whose data already matches, but there's a catch: PostgreSQL doesn't have an efficient way to check whether the provided data is actually equal in all senses of the word, so we can't easily and cheaply determine whether an update is needed; which is one reason why the full table was updated every time.

A naive approach to determining whether each value needs to be updated would use `old IS NOT DISTINCT FROM new`, but a.) this relies on an `=` operator existing for that type, and b.) the `=` operators of some types don't always distinguish between values that are different to human readers; a famous example being '1.0' and '1.00' in numeric: they compare equal as values but are clearly distinct to readers (and to certain functions).

One could get around this in such cases by 'simply' casting to text and comparing the outputs (using the C collation for performance and determinism), or by wrapping the values in a row (which then uses record_image_eq, which does use binary compare functions internally), but both imply additional parsing, wrapping, and overhead compared to a direct datum_image_eq call.

So, attached is a simple and to-the-point patch that adds the function mentioned in $subject, which will tell the user whether two values of the same type have an exactly equal binary representation, using datum_image_eq.

Kind regards,

Matthias van de Meent
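The value-equal-but-visibly-distinct behaviour of numeric can be mimicked outside PostgreSQL with Python's `decimal` module (an analogy only, not the patch's code): value comparison says the two are equal, while their representations still differ, which is exactly the distinction `=` misses and an image comparison would catch.

```python
from decimal import Decimal

a = Decimal("1.0")
b = Decimal("1.00")

# Value equality succeeds, like numeric's `=` operator in SQL...
print(a == b)            # True

# ...but the stored representations differ, like the on-disk datums,
# so a representation-level ("image") comparison reports inequality.
print(str(a), str(b))    # 1.0 1.00
print(str(a) == str(b))  # False
```

The "cast to text and compare" workaround from the paragraph above is the same idea as the last line, at the cost of serializing both values first.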
Attachments
On Thu, Dec 11, 2025 at 1:46 AM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:
>
> Hi,
>
> So, attached is a simple and to-the-point patch that adds the function
> mentioned in $subject, which will tell the user whether two values of
> the same type have an exactly equal binary representation, using
> datum_image_eq.

hi.

Maybe Table 9.76 (https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-CATALOG) is the right place for this function.

A corner case confused me; I think this is related to null handling, maybe not related to this patch:

create type t1 as (a int, b text);
select pg_datum_image_equal('(,)'::t1, $$(,)$$::t1);
select pg_datum_image_equal('(,)'::t1, NULL::t1);
select '(,)'::t1 is null, NULL::t1 is null;

enforce_generic_type_consistency has already resolved the generic type; see:

select pg_datum_image_equal('1','1');
ERROR:  could not determine polymorphic type because input has type unknown

so

+ if (!OidIsValid(typ))
+ {
+     ereport(ERROR,
+             (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+              errmsg("could not determine type")));
+ }

should this part be elog(ERROR, ...)?

--
jian
https://www.enterprisedb.com/
On Wed, Dec 10, 2025 at 12:46 PM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:
> Hi,
>
> One of our customers has this workload where every so often they
> update the whole table to make sure it's up-to-date. In general, you'd
> probably want to use MERGE for such a workload and ignore all rows
> that already have only matching data, but there's a catch: PostgreSQL
> doesn't have an efficient way to check if the provided data is
> actually equal in all senses of the word, so we can't easily and
> cheaply determine whether an update is needed; which is one reason why
> the full table was updated every time.
Have you ruled out the suppress_redundant_updates_trigger?
On Sat, 20 Dec 2025, 17:07 Corey Huinker, <corey.huinker@gmail.com> wrote:
>
> On Wed, Dec 10, 2025 at 12:46 PM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:
>>
>> Hi,
>>
>> One of our customers has this workload where every so often they
>> update the whole table to make sure it's up-to-date. In general, you'd
>> probably want to use MERGE for such a workload and ignore all rows
>> that already have only matching data, but there's a catch: PostgreSQL
>> doesn't have an efficient way to check if the provided data is
>> actually equal in all senses of the word, so we can't easily and
>> cheaply determine whether an update is needed; which is one reason why
>> the full table was updated every time.
>
> Have you ruled out the suppress_redundant_updates_trigger?

Thank you for the reference, I wasn't aware of this trigger.

Sadly, it does not work for our use case, as it only suppresses an update if the heap-formatted rows are binary identical, which is not guaranteed even when all values are equivalent, because it doesn't take detoasting into account. It also doesn't reduce the pressure on the TOAST table, which is something else we're trying to do with the new function.

The issue is that when you SET a column to a user-provided value, then during trigger handling, HOT checking, and TOASTing, the binary representation of that user-provided value is the untoasted version (as it has not yet been inserted into any TOAST table and isn't represented as varatt_external), while the original row's value may be a TOAST pointer (represented as varatt_external). In the checks during trigger handling, TOASTing, and HOT checking, the old tuple's value for that column (in its varatt_external representation) is compared against the new value (as a plain varattrib_4b.va_4byte or varattrib_1b), and those will never be binary equal: their first byte is guaranteed to differ.
Only if the value is pulled directly from the original column will the original column's TOAST pointer be used, and can a new TOAST table insertion be skipped (after which suppress_redundant_updates_trigger, with its in-heap-row comparison, might become useful). But, lacking a system that checks whether toasted values actually changed (and thus whether HOT applies, and whether an update has to happen at all), that trigger isn't up to the task at hand.

Kind regards,

Matthias van de Meent
Databricks (https://www.databricks.com)
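The representation mismatch described above can be sketched loosely in Python. Everything below is invented for illustration (the tag bytes and helper names are not PostgreSQL's): the point is only that a raw byte-wise comparison of in-row images can never match a TOAST pointer against an inline value, while comparing the detoasted images can.

```python
# Hypothetical analogy for why a raw in-row comparison fails across
# TOAST representations. The header bytes are invented; real varlena
# headers differ.
INLINE_TAG = b"\x01"   # stands in for an inline varlena header
POINTER_TAG = b"\x12"  # stands in for a varatt_external pointer header

def stored_image(value: bytes, toasted: bool) -> bytes:
    """The bytes as they'd sit in the heap row."""
    if toasted:
        # A pointer to the value elsewhere, not the value itself.
        return POINTER_TAG + b"oid-and-offset"
    return INLINE_TAG + value

def raw_row_compare(a: bytes, b: bytes) -> bool:
    """What a byte-wise in-row comparison sees."""
    return a == b

old = stored_image(b"long payload", toasted=True)   # on-disk: pointer
new = stored_image(b"long payload", toasted=False)  # user-supplied: inline

# Same logical value, but the first byte already differs, so a raw
# in-row comparison can never report them equal:
print(raw_row_compare(old, new))  # False
print(old[0] == new[0])           # False

# Comparing the detoasted images (as datum_image_eq does) works:
print(raw_row_compare(b"long payload", b"long payload"))  # True
```

This is why a detoasting comparison has to happen before the heap-row bytes are formed, rather than on them.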
On Sat, 20 Dec 2025 at 14:15, jian he <jian.universality@gmail.com> wrote:
>
> On Thu, Dec 11, 2025 at 1:46 AM Matthias van de Meent
> <boekewurm+postgres@gmail.com> wrote:
> >
> > So, attached is a simple and to-the-point patch that adds the function
> > mentioned in $subject, which will tell the user whether two values of
> > the same type have an exactly equal binary representation, using
> > datum_image_eq.
>
> maybe Table 9.76
> (https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-CATALOG)
> is the right place for this function.

I think Table 9.3 (https://www.postgresql.org/docs/18/functions-comparison.html#FUNCTIONS-COMPARISON-FUNC-TABLE) makes more sense, as this is more a comparison function than one that exposes catalog information about the input.

> corner case confused me, I think this is related to null handling,
> maybe not related to this.
> create type t1 as (a int, b text);
> select pg_datum_image_equal('(,)'::t1, $$(,)$$::t1);
> select pg_datum_image_equal('(,)'::t1, NULL::t1);
> select '(,)'::t1 is null, NULL::t1 is null;

Yes, that's row-type NULL handling for you. '(,)' is a composite value with only NULL values in its attributes, and SQL defines that rows with only NULL columns must return true when `IS NULL` evaluates their NULL-ness. On disk, however, it is still stored as "composite type; attributes 'a' and 'b' are NULL", so that a user who casts the value to text will get a different result between (NULL::t1::text) and ('(,)'::t1::text), allowing safe round-trip conversions. Also note that `('(,)'::t1 IS DISTINCT FROM NULL::t1) = TRUE`, another curious consequence of this SQL rule.

So, that output is expected; some methods already expose these differences between the values, so pg_datum_image_equal() *must* also indicate they are different. And now we also have one more reason to have a function that can notice distinctions that go deeper than surface-level SQL.
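The row-type NULL rule above can be mirrored loosely in Python (an analogy, not SQL semantics): model `'(,)'::t1` as a tuple of Nones and `NULL::t1` as None. A hypothetical `is_null` helper following SQL's rule reports both as null, yet a representation-level comparison still tells them apart, just as pg_datum_image_equal must.

```python
# Loose Python analogy (not PostgreSQL code): a row whose fields are
# all NULL, versus no row value at all.
row_of_nulls = (None, None)   # like '(,)'::t1
no_row = None                 # like NULL::t1

def is_null(v):
    # SQL's rule for composites: a value "IS NULL" if it is missing
    # entirely, or if every one of its attributes is NULL.
    return v is None or all(f is None for f in v)

# The "IS NULL"-style test cannot distinguish the two...
print(is_null(row_of_nulls), is_null(no_row))  # True True

# ...but a representation-level check still can, which is why the
# pg_datum_image_equal() results in the quoted queries differ.
print(row_of_nulls == no_row)  # False
```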
An aside: this new function doesn't actually cover the full spectrum of possible inequalities detectable through SQL, as there are some very low-level datum introspection tools, like pg_column_size(), whose output depends on the type of TOASTing applied. My function doesn't cover that, because that data should be completely irrelevant to normal data usage, and the user can combine the two checks manually if they really need it.

> enforce_generic_type_consistency already resolved generic type.

While you are correct to point out that the type system would prevent this from being called from SQL without a proper type, I'd like to keep the check to make sure that callers from outside the type system don't accidentally fail to provide the function with a correct type.

> so
> + if (!OidIsValid(typ))
> + {
> +     ereport(ERROR,
> +             (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
> +              errmsg("could not determine type")));
> + }
> this part should be elog(ERROR.....) ?

Is there a policy on what should _not_ use ereport? I know we don't require ereport for internal errors, but is it considered forbidden?

Kind regards,

Matthias van de Meent
Databricks (https://www.databricks.com)