BUG #15188: wrong count "tuples removed" in autovacuum log file

From: PG Bug reporting form
Subject: BUG #15188: wrong count "tuples removed" in autovacuum log file
Date:
Msg-id 152552842704.9462.13671871916654476801@wrigleys.postgresql.org
Lists: pgsql-bugs
The following bug has been logged on the website:

Bug reference:      15188
Logged by:          Anton G.
Email address:      a.glushakov86@gmail.com
PostgreSQL version: 9.6.5
Operating system:   Linux RedHat
Description:

Hi.
I have a table with a high update rate (~2500 updates/sec).
Autovacuum processes this table every ~10 minutes and logs the result.
But in the log I see that only about 40-45k tuples were vacuumed, while
pg_stat_user_tables shows that more than 1.5 million dead tuples were removed
(when autovacuum starts, the table has about 1 million dead tuples).
Here are two snapshots from pg_stat_user_tables - one before autovacuum
started and one after - together with the autovacuum log entry:

# select now(),* from pg_stat_user_tables where relname = 't';
-[ RECORD 1 ]-------+------------------------------
now                 | 2018-04-26 13:36:07.707784+03
relid               | 32170
schemaname          | public
relname             | t
seq_scan            | 151371
seq_tup_read        | 132672944
idx_scan            | 4525649653
idx_tup_fetch       | 9402703531
n_tup_ins           | 33785210
n_tup_upd           | 4502119094
n_tup_del           | 0
n_tup_hot_upd       | 0
n_live_tup          | 33815482
n_dead_tup          | 1926735
n_mod_since_analyze | 492583
last_vacuum         | 2018-04-25 07:51:33.307364+03
last_autovacuum     | 2018-04-26 13:21:53.931788+03
last_analyze        | 2018-04-01 19:40:08.063981+03
last_autoanalyze    | 2018-04-26 13:32:15.845189+03
vacuum_count        | 5
autovacuum_count    | 8628
analyze_count       | 2
autoanalyze_count   | 13407


# select now(),* from pg_stat_user_tables where relname = 't';
-[ RECORD 1 ]-------+------------------------------
now                 | 2018-04-26 13:38:07.320179+03
relid               | 32170
schemaname          | public
relname             | t
seq_scan            | 151371
seq_tup_read        | 132672944
idx_scan            | 4525898773
idx_tup_fetch       | 9403224417
n_tup_ins           | 33789071
n_tup_upd           | 4502366855
n_tup_del           | 0
n_tup_hot_upd       | 0
n_live_tup          | 33830290
n_dead_tup          | 296035
n_mod_since_analyze | 49793
last_vacuum         | 2018-04-25 07:51:33.307364+03
last_autovacuum     | 2018-04-26 13:37:28.072735+03
last_analyze        | 2018-04-01 19:40:08.063981+03
last_autoanalyze    | 2018-04-26 13:37:44.353704+03
vacuum_count        | 5
autovacuum_count    | 8629
analyze_count       | 2
autoanalyze_count   | 13408


2018-04-26 13:37:28 MSK    12076  LOG:  automatic vacuum of table "t": index scans: 1
        pages: 0 removed, 549695 remain, 0 skipped due to pins, 43363 skipped frozen
        tuples: 53866 removed, 33836682 remain, 13483 are dead but not yet removable
        buffer usage: 801664 hits, 1280506 misses, 7528 dirtied
        avg read rate: 96.220 MB/s, avg write rate: 0.566 MB/s
        system usage: CPU 10.54s/16.85u sec elapsed 103.96 sec
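
For reference, per-table autovacuum reports like the one above are written to
the server log only when log_autovacuum_min_duration is set to a low enough
threshold; a minimal sketch of enabling them (assuming superuser access, not
shown in the report) is:

# alter system set log_autovacuum_min_duration = 0;  -- log every autovacuum run
# select pg_reload_conf();                           -- apply without a restart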

The autovacuum log shows that only 53866 dead tuples were removed (plus
13483 that could not be removed yet), but in reality more than 1.5 million
were deleted.
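
As a quick arithmetic check using only the figures shown above, the drop in
n_dead_tup between the two snapshots is far larger than what the log accounts
for:

# select 1926735 - 296035 as n_dead_tup_drop,             -- 1630700
         53866 + 13483    as removed_plus_not_removable;  -- 67349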
I think there is some problem with the logging algorithm.
Thanks.

