Re: [GENERAL] Postgres stats collector showing high disk I/O

From: Alvaro Herrera
Subject: Re: [GENERAL] Postgres stats collector showing high disk I/O
Date:
Msg-id: 1274390635-sup-4816@alvh.no-ip.org
Responses: Re: [GENERAL] Postgres stats collector showing high disk I/O  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Excerpts from Justin Pasher's message of Thu May 20 16:10:53 -0400 2010:

> Whenever I clear out the stats for all of the databases, the file
> shrinks down to <1MB. However, it only takes about a day for it to get
> back up to ~18MB and then the stats collector process starts the heavy
> disk writing again. I do know there are some tables in the database that
> are filled and emptied quite a bit (they are used as temporary "queue"
> tables). The code will VACUUM FULL ANALYZE after the table is emptied to
> get the physical size back down and update the (empty) stats. A plain
> ANALYZE is also run right after the table is filled but before it starts
> processing, so the planner will have good stats on the contents of the
> table. Would this lead to pg_stat file bloat like I'm seeing? Would a
> CLUSTER then ANALYZE instead of a VACUUM FULL ANALYZE make any
> difference? The VACUUM FULL code was set up quite a while back before the
> coders knew about CLUSTER.
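
The switch the poster asks about, CLUSTER followed by ANALYZE instead of VACUUM FULL ANALYZE, can be sketched as a small helper that emits the maintenance statements for an emptied queue table. This is a hedged illustration only; the table and index names are hypothetical, and the `CLUSTER ... USING` form assumes PostgreSQL 8.3 or later:

```python
def queue_maintenance_sql(table: str, index: str) -> list[str]:
    """Return the statements to run after a queue table is emptied.

    CLUSTER rewrites the table like VACUUM FULL does, reclaiming the
    physical space, but it also rebuilds the indexes from scratch
    instead of leaving them bloated.  ANALYZE then refreshes the
    (now empty) planner statistics.
    """
    return [
        f"CLUSTER {table} USING {index}",  # rewrite table, reclaim space
        f"ANALYZE {table}",                # refresh planner stats
    ]

# Example with hypothetical names:
for stmt in queue_maintenance_sql("work_queue", "work_queue_pkey"):
    print(stmt)
```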

I wonder if we should make pgstats write one file per database (plus one
for shared objects), instead of keeping everything in a single file.
That would reduce the amount of data read and rewritten on each update.

--
