Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

From: Neha Sharma
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date:
Msg-id: CANiYTQtnL_LzBY0rjLLPD7haaFB4aNNAMFtEhGTxtX9D+uW0Sg@mail.gmail.com
In reply to: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions  (Amit Kapila <amit.kapila16@gmail.com>)
List: pgsql-hackers
Hi Amit/Dilip,

I have tested a few scenarios on top of the v56 patches, where the replication worker still had a few subtransactions in an uncommitted state and we restarted the publisher server.
No crashes or data discrepancies were observed; the verified test scenarios are attached.

Data Setup:
Publication Server postgresql.conf:
wal_level = logical
max_wal_senders = 10
max_replication_slots = 15
wal_log_hints = on
hot_standby_feedback = on
wal_receiver_status_interval = 1
listen_addresses='*'
log_min_messages=debug1
wal_sender_timeout = 0
logical_decoding_work_mem=64kB

Subscription Server postgresql.conf:
wal_level = logical
max_wal_senders = 10
max_replication_slots = 15
wal_log_hints = on
hot_standby_feedback = on
wal_receiver_status_interval = 1
listen_addresses='*'
log_min_messages=debug1
wal_sender_timeout = 0
logical_decoding_work_mem=64kB
port=5433

Initial setup:
Publication Server:
create table t(a int PRIMARY KEY, b text);
CREATE OR REPLACE FUNCTION large_val() RETURNS TEXT LANGUAGE SQL AS 'select array_agg(md5(g::text))::text from generate_series(1, 256) g';
create publication test_pub for table t with (publish = 'insert,delete,update,truncate');
alter table t replica identity FULL;
insert into t values (generate_series(1,20), large_val()) ON CONFLICT (a) DO UPDATE SET a = EXCLUDED.a * 300;

Subscription server:
create table t(a int, b text);
create subscription test_sub CONNECTION 'host=localhost port=5432 dbname=postgres user=edb' PUBLICATION test_pub WITH (slot_name = test_slot_sub1, streaming = on);
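
For reference, one way to exercise a scenario of the kind described above (an illustrative sketch only, not the attached scenarios; the row counts and savepoint structure are arbitrary):

Publication server, in an open session:
BEGIN;
-- large enough to exceed logical_decoding_work_mem (64kB), so streaming to the apply worker starts
INSERT INTO t SELECT g, large_val() FROM generate_series(21, 1000) g;
SAVEPOINT s1;
UPDATE t SET b = large_val() WHERE a % 3 = 0;
SAVEPOINT s2;
DELETE FROM t WHERE a % 5 = 0;
-- with the transaction and its subtransactions still uncommitted, restart the publisher
-- (e.g. pg_ctl -D <publisher_data_dir> restart), then check on both servers that there is
-- no crash and that the committed data still matches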

Thanks.
--
Regards,
Neha Sharma


On Mon, Aug 31, 2020 at 1:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Mon, Aug 31, 2020 at 10:49 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Sun, Aug 30, 2020 at 2:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:
> >
>
> Another comment:
>
> +cleanup_rel_sync_cache(TransactionId xid, bool is_commit)
> +{
> + HASH_SEQ_STATUS hash_seq;
> + RelationSyncEntry *entry;
> +
> + Assert(RelationSyncCache != NULL);
> +
> + hash_seq_init(&hash_seq, RelationSyncCache);
> + while ((entry = hash_seq_search(&hash_seq)) != NULL)
> + {
> + if (is_commit)
> + entry->schema_sent = true;
>
> How is it correct to set 'entry->schema_sent' for all the entries in
> RelationSyncCache? Consider a case where, due to an invalidation in an
> unrelated transaction, we have set the schema_sent flag for a
> particular relation 'r1' to 'false', and that transaction is executed
> before the current streamed transaction for which we are performing
> the commit and called this function. In that case it will set the flag
> for the unrelated entry 'r1', which doesn't seem correct to me. Or, if
> this is correct, it would be a good idea to write some comments about it.
>

Few more comments:
1.
+my $appname = 'tap_sub';
+$node_subscriber->safe_psql('postgres',
+"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr
application_name=$appname' PUBLICATION tap_pub"
+);

In most of the tests, we are using the above statement to create a
subscription. Don't we need the (streaming = 'on') parameter while
creating the subscription? Is there a reason for not doing so in this
patch itself?
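
If streaming were enabled there, the subscription definition would presumably look like the one used in the setup earlier in this thread, e.g. (connection string illustrative; in the TAP test it is built from $publisher_connstr):

CREATE SUBSCRIPTION tap_sub
  CONNECTION 'host=localhost port=5432 dbname=postgres application_name=tap_sub'
  PUBLICATION tap_pub
  WITH (streaming = on);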

2.
009_stream_simple.pl
+# Insert, update and delete enough rows to exceed the 64kB limit.
+$node_publisher->safe_psql('postgres', q{
+BEGIN;
+INSERT INTO test_tab SELECT i, md5(i::text) FROM generate_series(3, 5000) s(i);
+UPDATE test_tab SET b = md5(b) WHERE mod(a,2) = 0;
+DELETE FROM test_tab WHERE mod(a,3) = 0;
+COMMIT;
+});

How far above the 64kB limit is this data? I just want to make sure it
is not borderline, such that due to some alignment issues streaming
doesn't happen on some machines. Also, how does such a test ensure
that streaming has happened? The way we are checking the results,
won't they be the same for the non-streaming case as well?
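
For what it's worth, a rough way to gauge the margin (an illustrative sketch only, using a scratch table with the same shape as the inserted columns; pg_column_size measures raw tuple size, and the reorder buffer's in-memory change representation adds further per-change overhead on top of this):

CREATE TEMP TABLE size_check (a int, b text);
INSERT INTO size_check SELECT i, md5(i::text) FROM generate_series(3, 5000) s(i);
SELECT pg_size_pretty(sum(pg_column_size(s.*))) FROM size_check s;
-- the INSERTs alone amount to a few hundred kB of tuple data, several times the
-- 64kB limit, before the UPDATE and DELETE changes are even counted
DROP TABLE size_check;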

3.
+# Change the local values of the extra columns on the subscriber,
+# update publisher, and check that subscriber retains the expected
+# values
+$node_subscriber->safe_psql('postgres', "UPDATE test_tab SET c =
'epoch'::timestamptz + 987654321 * interval '1s'");
+$node_publisher->safe_psql('postgres', "UPDATE test_tab SET b = md5(a::text)");
+
+wait_for_caught_up($node_publisher, $appname);
+
+$result =
+  $node_subscriber->safe_psql('postgres', "SELECT count(*),
count(extract(epoch from c) = 987654321), count(d = 999) FROM
test_tab");
+is($result, qq(3334|3334|3334), 'check extra columns contain locally
changed data');

Again, how is this test relevant to streaming mode?

4. I have checked that in one of the previous patches we had a test,
v53-0004-Add-TAP-test-for-streaming-vs.-DDL, which contains a test case
quite similar to what we have in
v55-0002-Add-support-for-streaming-to-built-in-logical-re/013_stream_subxact_ddl_abort.
If there is any difference that covers more scenarios, can we consider
merging them into one test?

Apart from the above, I have made a few changes in the attached patch,
mainly to simplify the code in one place, add/edit a few comments, make
some other cosmetic changes, and rename the test case files, as the
initials of their names were matching other tests in the same
directory.

--
With Regards,
Amit Kapila.
Attachments
