Discussion: Initial COPY of Logical Replication is too slow
The subscriber needs to ask the publisher about the tables and fields to COPY, and it uses pg_get_publication_tables for that; this is too slow when the number of tables is high, because it has to run this select for every subscribed table.
We can get the same result with a join on pg_publication_rel.
regards
Marcos
Attachments
With the attached script you can see how much time a subscriber needs to fetch all the tables that were added.
Run it the first time and it will create 10 thousand tables, publish them, and measure how long it takes for all tables to become Ready in pg_subscription_rel.
Run it again to add 10 thousand more tables and see that the time increases: more tables, more time.
This is just to show that if you create a subscription with a high number of tables, it spends more time doing the select on pg_get_publication_tables than actually copying. In my use case I have 50 thousand tables, and it takes 5 seconds every time it needs to get the next table to copy.
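As a side note, on the subscriber you can watch this synchronization progress by polling pg_subscription_rel; this small monitoring query is a sketch and not part of the attached script:

--Count tablesync states on the subscriber: 'i' = initialize,
--'d' = data copy, 's' = synchronized, 'r' = ready.
select srsubstate, count(*)
from pg_subscription_rel
group by srsubstate
order by srsubstate;

When the initial COPY is the bottleneck, most relations will sit in state 'i' waiting for the tablesync workers' catalog queries to finish.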
--Create an empty publication
create publication my_pub;
--Run these 3 following anonymous blocks to create schemas, tables and add them to publication.
--Three blocks are needed because I cannot create a table in a schema that is not yet committed, and the same applies to the publication.
do $$ declare Schemas_Add integer = 100; Actual_Schema text; begin
for Actual_Schema in select 'test_'||(select to_char(coalesce(max(substring(nspname,'test_(\d+)')::integer),0)+g,'FM00000')
from pg_namespace where nspname ~ 'test_\d+') from generate_series(1,Schemas_Add) g loop
execute format('create schema %s',Actual_Schema);
end loop;
end;$$;
do $$ declare Tables_Add integer = 100; Actual_Schema text; begin
for Actual_Schema in select nspname from pg_namespace where nspname ~ 'test_\d+' and
not exists(select from pg_class where relnamespace = pg_namespace.oid) loop
for j in 1..Tables_Add loop
execute format('create table %s.test_%s as select generate_series(1,random(0,10))::integer id;',
Actual_Schema,to_char(j,'FM00000'));
end loop;
end loop;
end;$$;
do $$ declare Schemas_To_Add text = (select string_agg(nspname,',') from pg_namespace n where nspname ~ 'test_\d+' and
not exists(select from pg_publication_namespace where pnnspid = n.oid)); begin
execute format('alter publication my_pub add tables in schema %s;',Schemas_To_Add);
end;$$;
--Then you can see what was generated and go to the subscriber side to refresh the subscription and measure time spent to synchronize.
select * from pg_Namespace where nspname ~ 'test_\d+';
select pnnspid::regnamespace, * from pg_publication_namespace;
select oid::regclass, * from pg_Class where relnamespace::regnamespace::text ~ 'test_\d+' and relkind = 'r';
--Afterwards, clean up what you created.
drop publication my_pub;
do $$ declare Schema_Drop text; begin
for Schema_Drop in select nspname from pg_Namespace where nspname ~ 'test_\d+' loop
execute format ('drop schema %s cascade;',Schema_Drop);
end loop;
end;$$;
regards
Marcos
Hi,

On Sat, Dec 6, 2025 at 4:19 AM Marcos Pegoraro <marcos@f10.com.br> wrote:
>
> Subscriber needs to ask publisher about tables and fields to COPY and
> it uses pg_get_publication_tables for that, and it is too slow when
> the number of tables is high because for every subscribed table it has
> to run this select.

Yeah, if we pass a publication that a lot of tables belong to to
pg_get_publication_tables(), it could take a long time to return as it
needs to construct many entries.

> We can get the same result with a join on pg_publication_rel.

You changed the query not to use pg_get_publication_tables():

-                     "SELECT DISTINCT"
-                     "  (CASE WHEN (array_length(gpt.attrs, 1) = c.relnatts)"
-                     "   THEN NULL ELSE gpt.attrs END)"
-                     "  FROM pg_publication p,"
-                     "  LATERAL pg_get_publication_tables(p.pubname) gpt,"
-                     "  pg_class c"
-                     " WHERE gpt.relid = %u AND c.oid = gpt.relid"
-                     "   AND p.pubname IN ( %s )",
-                     lrel->remoteid,
-                     pub_names->data);
+                     "SELECT CASE WHEN cardinality(r.prattrs) <> relnatts THEN "
+                     "r.prattrs END FROM pg_class c "
+                     "LEFT JOIN LATERAL (SELECT DISTINCT prattrs FROM "
+                     "pg_publication_rel r INNER JOIN pg_publication p "
+                     "ON p.oid = r.prpubid WHERE c.oid = r.prrelid AND "
+                     "pubname in ( %s )) r ON TRUE WHERE c.oid = %u",
+                     pub_names->data,
+                     lrel->remoteid);

Simply replacing pg_get_publication_tables() with a join on
pg_publication_rel doesn't work, since pg_get_publication_tables()
handles several cases, for example where the specified columns are
generated columns or the specified table is a partitioned table.
Therefore the patch doesn't pass the regression tests.

I think it would make more sense to introduce a dedicated SQL function
that takes the reloid as well as the list of publications and returns
the relation's column list and row filter expression, filtering
unnecessary rows inside the function.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com
On Fri, Dec 19, 2025 at 10:59 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
Yeah, if we pass a publication that a lot of tables belong to to
pg_get_publication_tables(), it could take a long time to return as it
needs to construct many entries.
Well, I don't know how to help, but I'm sure it's performing badly.
Today I added some fields on my server, and looking at the logs I could see how slow this process is.
duration: 2213.872 ms statement: SELECT DISTINCT (CASE WHEN (array_length(gpt.attrs, 1) = c.relnatts) THEN NULL ELSE gpt.attrs END) FROM pg_publication p, LATERAL pg_get_publication_tables(p.pubname) gpt, pg_class c WHERE gpt.relid = 274376788 AND c.oid = gpt.relid AND p.pubname IN ( 'mypub' )
2 seconds to get the list of fields of a table is really too slow.
How can we solve this?
regards
Marcos
On Mon, Jan 19, 2026 at 9:44 AM Marcos Pegoraro <marcos@f10.com.br> wrote:
>
> On Fri, Dec 19, 2025 at 10:59 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>>
>> Yeah, if we pass a publication that a lot of tables belong to to
>> pg_get_publication_tables(), it could take a long time to return as it
>> needs to construct many entries.
>
>
> Well, I don't know how to help but I'm sure it's working badly.
> Today I added some fields on my server, then seeing logs I could see how slow this process is.
>
> duration: 2213.872 ms statement: SELECT DISTINCT (CASE WHEN (array_length(gpt.attrs, 1) = c.relnatts) THEN NULL
ELSE gpt.attrs END) FROM pg_publication p, LATERAL pg_get_publication_tables(p.pubname) gpt, pg_class c WHERE
gpt.relid = 274376788 AND c.oid = gpt.relid AND p.pubname IN ( 'mypub' )
>
> 2 seconds to get the list of fields of a table is really too slow.
> How can we solve this ?
After more investigation of slowness, it seems that the
list_concat_unique_oid() called below is quite slow when the database
has a lot of tables to publish:
relids = GetPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,
pub_elem->pubviaroot ?
PUBLICATION_PART_ROOT :
PUBLICATION_PART_LEAF);
pub_elem_tables = list_concat_unique_oid(relids, schemarelids);
This is simply because it's O(n^2), where n is the number of oids in
schemarelids in the test case. A simple change would be to do sort &
dedup instead. With the attached experimental patch, the
pg_get_publication_tables() execution time gets halved in my
environment (796ms -> 430ms with 50k tables). If the number of tables
is not large, this method might be slower than today but it's not a
huge regression.
For the initial tablesync case, it could be optimized further by
introducing a new SQL function that gets the column list and row
filter expression of a specific table. This way, we can filter the
result by relid at an early stage instead of gathering all the
information and then filtering by relid as the tablesync worker does
today, avoiding the overhead of collecting system catalog scan
results.
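To illustrate the idea, the tablesync worker's query could then look something like this; the function name and argument shape here are only a hypothetical sketch of the proposal, not the committed interface:

-- Hypothetical: fetch the column list and row filter for one relation
-- directly, instead of expanding every member of every publication.
SELECT attrs, qual
FROM pg_get_publication_table_info(274376788, ARRAY['mypub']);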
Regards,
--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com
Attachments
On Mon, Jan 26, 2026 at 12:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> After more investigation of slowness, it seems that the
> list_concat_unique_oid() called below is quite slow when the database
> has a lot of tables to publish:
>
>     relids = GetPublicationRelations(pub_elem->oid,
>                                      pub_elem->pubviaroot ?
>                                      PUBLICATION_PART_ROOT :
>                                      PUBLICATION_PART_LEAF);
>     schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,
>                                                     pub_elem->pubviaroot ?
>                                                     PUBLICATION_PART_ROOT :
>                                                     PUBLICATION_PART_LEAF);
>     pub_elem_tables = list_concat_unique_oid(relids, schemarelids);
>
> This is simply because it's O(n^2), where n is the number of oids in
> schemarelids in the test case. A simple change would be to do sort &
> dedup instead. With the attached experimental patch, the
> pg_get_publication_tables() execution time gets halved in my
> environment (796ms -> 430ms with 50k tables). If the number of tables
> is not large, this method might be slower than today, but it's not a
> huge regression.
>
> In the initial tablesync cases, it could be optimized further in a way
> that we introduce a new SQL function that gets the column list and
> expr of the specific table. This way, we can filter the result by
> relid at an early stage instead of getting all information and
> filtering by relid as the tablesync worker does today, avoiding
> overheads of gathering system catalog scan results.

I've drafted this idea and I find it looks like a better approach. The
patch introduces the pg_get_publication_table_info() SQL function that
returns the column list and row filter expression like
pg_get_publication_tables() does, but it checks only the specific
table, unlike pg_get_publication_tables(). On my env, the tablesync
worker's query in question goes from 288 ms to 0.6 ms with 50k tables
in one publication. Feedback is very welcome.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

Attachments
Hi,

On Wed Feb 25, 2026 at 4:03 PM -03, Masahiko Sawada wrote:
>
> I've drafted this idea and I find it looks like a better approach. The
> patch introduces the pg_get_publication_table_info() SQL function that
> returns the column list and row filter expression like
> pg_get_publication_tables() returns but it checks only the specific
> table unlike pg_get_publication_tables(). On my env, the tablesync
> worker's query in question becomes 0.6ms from 288 ms with 50k tables
> in one publication. Feedback is very welcome.
>

Thanks for the patch.

I did a review and here are my comments:

+               values[0] = ObjectIdGetDatum(pub->oid);
+               values[1] = ObjectIdGetDatum(relid);
+
+               values[0] = ObjectIdGetDatum(pub->oid);
+               values[1] = ObjectIdGetDatum(relid);

Duplicated assignments?

--------------

+       /* ALL TALBES publication */

Typo: TALBES should be TABLES.

--------------

+ * Common routine for pg_get_publication_tables() and
+ * pg_get_publication_table_info() to construct the result tuple.
+ */
+static HeapTuple
+construct_published_rel_tuple(published_rel *table_info, TupleDesc tuple_desc)

construct_published_rel_tuple is only being used in
pg_get_publication_table_info(). Perhaps it could also be used in the
"if (funcctx->call_cntr < list_length(table_infos))" block in
pg_get_publication_tables()?

--------------

Are new regression tests needed, or do the current ones already cover
the new function well enough? The code of pg_get_publication_table_info
seems well covered.

--------------

It seems that pgindent was not run on src/backend/catalog/pg_publication.c.

--
Matheus Alcantara
EDB: https://www.enterprisedb.com
> On Feb 26, 2026, at 03:03, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> I've drafted this idea and I find it looks like a better approach. The
> patch introduces the pg_get_publication_table_info() SQL function that
> returns the column list and row filter expression like
> pg_get_publication_tables() returns but it checks only the specific
> table unlike pg_get_publication_tables(). On my env, the tablesync
> worker's query in question becomes 0.6ms from 288 ms with 50k tables
> in one publication. Feedback is very welcome.
>
> <0001-Add-pg_get_publication_table_info-to-optimize-logica.patch>

A few comments:

1. pg_publication.c needs pgindent. When I ran pgindent against it, the
patch code got a lot of format changes.

2.
```
+               values[0] = ObjectIdGetDatum(pub->oid);
+               values[1] = ObjectIdGetDatum(relid);
+
+               values[0] = ObjectIdGetDatum(pub->oid);
+               values[1] = ObjectIdGetDatum(relid);
```
Looks like a copy-pasto.

3. I think we can optimize pg_get_publication_table_info() a little bit
for non-published tables by setting funcctx->max_calls = 0. I tried this
code locally, and the tests still passed:
```
        if (publish)
        {
            pubrel = palloc_object(published_rel);
            pubrel->relid = relid;
            pubrel->pubid = pub->oid;

            funcctx->tuple_desc = BlessTupleDesc(tupdesc);
            funcctx->user_fctx = pubrel;
            funcctx->max_calls = 1;  /* return one row */
        }
        else
            funcctx->max_calls = 0;  /* return no rows */

        MemoryContextSwitchTo(oldcontext);
    }

    /* stuff done on every call of the function */
    funcctx = SRF_PERCALL_SETUP();

    if (funcctx->call_cntr < funcctx->max_calls)
    {
        HeapTuple   rettuple;

        table_info = (published_rel *) funcctx->user_fctx;
        rettuple = construct_published_rel_tuple(table_info,
                                                 funcctx->tuple_desc);

        SRF_RETURN_NEXT(funcctx, HeapTupleGetDatum(rettuple));
    }

    SRF_RETURN_DONE(funcctx);
```

4.
```
+       int         i;
+
+       attnums = palloc_array(int16, desc->natts);
+
+       for (i = 0; i < desc->natts; i++)
```
Nit: Peter (E) once did some cleanup changing separately declared loop
variables into for-scoped declarations, so I think this style is now
preferred:
```
for (int i = 0; i < desc->natts; i++)
```

5.
```
+       attnums = palloc_array(int16, desc->natts);
```
Nit: the attnums array is never freed, is that intentional? Actually,
I'm a bit lost about when memory should be freed and when not.

Best regards,
--
Chao Li (Evan)
HighGo Software Co., Ltd.
https://www.highgo.com/
On Wed, Feb 25, 2026 at 11:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
>
> On Mon, Jan 26, 2026 at 12:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> >
> > On Mon, Jan 19, 2026 at 9:44 AM Marcos Pegoraro <marcos@f10.com.br> wrote:
> > >
> > > On Fri, Dec 19, 2025 at 10:59 PM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:
> > >>
> > >> Yeah, if we pass a publication that a lot of tables belong to to
> > >> pg_get_publication_tables(), it could take a long time to return as it
> > >> needs to construct many entries.
> > >
> > >
> > > Well, I don't know how to help but I'm sure it's working badly.
> > > Today I added some fields on my server, then seeing logs I could see how slow this process is.
> > >
> > > duration: 2213.872 ms statement: SELECT DISTINCT (CASE WHEN (array_length(gpt.attrs, 1) = c.relnatts) THEN
NULL ELSE gpt.attrs END) FROM pg_publication p, LATERAL pg_get_publication_tables(p.pubname) gpt, pg_class c WHERE
gpt.relid = 274376788 AND c.oid = gpt.relid AND p.pubname IN ( 'mypub' )
> > >
> > > 2 seconds to get the list of fields of a table is really too slow.
> > > How can we solve this ?
> >
> > After more investigation of slowness, it seems that the
> > list_concat_unique_oid() called below is quite slow when the database
> > has a lot of tables to publish:
> >
> > relids = GetPublicationRelations(pub_elem->oid,
> > pub_elem->pubviaroot ?
> > PUBLICATION_PART_ROOT :
> > PUBLICATION_PART_LEAF);
> > schemarelids = GetAllSchemaPublicationRelations(pub_elem->oid,
> > pub_elem->pubviaroot ?
> > PUBLICATION_PART_ROOT :
> > PUBLICATION_PART_LEAF);
> > pub_elem_tables = list_concat_unique_oid(relids, schemarelids);
> >
> > This is simply because it's O(n^2), where n is the number of oids in
> > schemarelids in the test case. A simple change would be to do sort &
> > dedup instead. With the attached experimental patch, the
> > pg_get_publication_tables() execution time gets halved in my
> > environment (796ms -> 430ms with 50k tables). If the number of tables
> > is not large, this method might be slower than today but it's not a
> > huge regression.
> >
> > In the initial tablesync cases, it could be optimized further in a way
> > that we introduce a new SQL function that gets the column list and
> > expr of the specific table. This way, we can filter the result by
> > relid at an early stage instead of getting all information and
> > filtering by relid as the tablesync worker does today, avoiding
> > overheads of gathering system catalog scan results.
>
> I've drafted this idea and I find it looks like a better approach. The
> patch introduces the pg_get_publication_table_info() SQL function that
> returns the column list and row filter expression like
> pg_get_publication_tables() returns but it checks only the specific
> table unlike pg_get_publication_tables(). On my env, the tablesync
> worker's query in question becomes 0.6ms from 288 ms with 50k tables
> in one publication. Feedback is very welcome.
Another variant of this approach is to extend
pg_get_publication_tables() so that it can accept a relid to get the
publication information for a specific table. I've attached the patch
for this idea; I'm going to add regression test cases.
pg_get_publication_tables() is a VARIADIC array function, so the patch
changes its signature to {text[] [, oid]}, breaking tool
compatibility. Given this function is mostly an internal-use function
(we don't have documentation for it), that would probably be okay. I
find it clearer than the other approach of introducing
pg_get_publication_table_info(). Feedback is very welcome.
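With the extended signature, the tablesync worker's query could then be reduced to something like the following; the exact argument shape here is a sketch of the proposal, not the final interface:

-- Hypothetical: pass the publication list plus the relid so the result
-- is filtered down to a single table on the publisher side.
SELECT * FROM pg_get_publication_tables(ARRAY['mypub'], 274376788);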
Regards,
--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com