Re: Avoid overhead open-close indexes (catalog updates)

From: Ranier Vilela
Subject: Re: Avoid overhead open-close indexes (catalog updates)
Date:
Msg-id: CAEudQAr1asfmq9SHRXx6G-TexBQkZpYA1ARWskvuv8JdUvANdw@mail.gmail.com
In reply to: Re: Avoid overhead open-close indexes (catalog updates)  (Michael Paquier <michael@paquier.xyz>)
Responses: Re: Avoid overhead open-close indexes (catalog updates)  (Michael Paquier <michael@paquier.xyz>)
List: pgsql-hackers
On Fri, Nov 11, 2022 at 01:54 AM Michael Paquier <michael@paquier.xyz> wrote:
> On Thu, Nov 10, 2022 at 08:56:25AM -0300, Ranier Vilela wrote:
>> For CopyStatistics() have performance checks.

> You are not giving all the details of your tests, though,
Windows 10, 64 bits
SSD 256 GB

pgbench -i
pgbench_accounts;
pgbench_tellers;

A simple test, based on the tables created by pgbench.
> so I had a
> look with some of my stuff using the attached set of SQL functions
> (create_function.sql) to create a bunch of indexes with a maximum
> number of expressions, as of:
>
> select create_table_cols('tab', 32);
> select create_index_multi_exprs('ind', 400, 'tab', 32);
> insert into tab values (1);
> analyze tab; -- ~12.8k pg_statistic records
>
> On HEAD, a REINDEX CONCURRENTLY for the table 'tab' takes 1550ms on my
> laptop, averaged over 10 runs.  The patch impacts the runtime with a
> single session, bringing the execution down to 1480ms, as an effect of
> the maximum number of attributes on an index being 32.  There may
> be some noise, but there is a trend, and some perf profiles confirm
> the same with CopyStatistics().  My case is a bit extreme, of course,
> but still, that's something.

> Anyway, while reviewing this code, it occurred to me that we could do
> even better than this proposal once we switch to
> CatalogTuplesMultiInsertWithInfo() for the data insertion.
>
> This would further reduce the operation overhead by switching to multi
> INSERTs rather than 1 INSERT for each index attribute, with tuples
> stored in a set of TupleTableSlots, meaning 1 WAL record rather than N
> records.  The approach would be similar to what you do for
> dependencies; see for example recordMultipleDependencies() when it
> comes to the number of slots used, etc.
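For reference, the slot-batching pattern being suggested (following the shape of recordMultipleDependencies()) would look roughly like the sketch below. The batch size, the `statrel`/`newtup` names, and the loop itself are illustrative assumptions, not actual patch code:

/* Illustrative sketch only -- not the actual patch. */
#define MAX_STAT_SLOTS  32      /* batch size kept in memory (assumption) */

TupleTableSlot *slot[MAX_STAT_SLOTS] = {0};
CatalogIndexState indstate = CatalogOpenIndexes(statrel);
int         nslots = 0;

/* ... for each pg_statistic tuple "newtup" to copy ... */
{
    /* create slots lazily, as recordMultipleDependencies() does */
    if (slot[nslots] == NULL)
        slot[nslots] = MakeSingleTupleTableSlot(RelationGetDescr(statrel),
                                                &TTSOpsHeapTuple);
    ExecStoreHeapTuple(newtup, slot[nslots], false);
    nslots++;

    /* flush a full batch: one multi-insert, hence one WAL record */
    if (nslots == MAX_STAT_SLOTS)
    {
        CatalogTuplesMultiInsertWithInfo(statrel, slot, nslots, indstate);
        nslots = 0;
    }
}

/* flush the remainder, then close the indexes (slots dropped elsewhere) */
if (nslots > 0)
    CatalogTuplesMultiInsertWithInfo(statrel, slot, nslots, indstate);
CatalogCloseIndexes(indstate);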

I think the added complexity doesn't pay off.
For example, CopyStatistics() does not know in advance how many tuples will be processed.
IMHO, the patch is right as it stands now.
CatalogTupleInsertWithInfo() offers a considerable improvement without introducing bugs or maintenance issues.
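To illustrate the point of the thread: CatalogTupleInsert() opens and closes the catalog's indexes on every call, while CatalogOpenIndexes()/CatalogTupleInsertWithInfo()/CatalogCloseIndexes() open them once and amortize that work across the whole loop. A rough sketch, with the loop body elided:

/* Illustrative sketch only; "statrel" and the loop are placeholders. */
CatalogIndexState indstate = CatalogOpenIndexes(statrel);   /* open once */

/* ... for each copied pg_statistic row, with "tup" the formed tuple ... */
{
    CatalogTupleInsertWithInfo(statrel, tup, indstate);
}

CatalogCloseIndexes(indstate);                              /* close once */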

regards,
Ranier Vilela
