Performance problems deleting data

From: Rafael Martinez
Subject: Performance problems deleting data
Date:
Replies: Re: Performance problems deleting data  (Tom Lane)
List: pgsql-performance


Performance problems deleting data  (Rafael Martinez, )
 Re: Performance problems deleting data  (Tom Lane, )
  Re: Performance problems deleting data  (Rafael Martinez, )
   Re: Performance problems deleting data  (Alvaro Herrera, )


PostgreSQL version: 8.1.10
Hardware: 2x HP 72GB 10K SAS disks, RAID1 on a Smart Array controller

I have a colleague who is having some performance problems from time to
time when deleting rows from a table.

We found out that the database having this problem had a severe bloat
problem in many tables and indexes (they were running only autovacuum)
and some misconfiguration in postgresql.conf.
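A quick way to spot this kind of bloat is to look at the largest relations on disk; a sketch of such a query (pg_relation_size is available from 8.1 on; the LIMIT is arbitrary):

```sql
-- List the 20 largest tables and indexes. Sizes that keep growing and
-- never shrink under plain autovacuum are a hint of bloat.
SELECT n.nspname, c.relname, c.relkind,
       pg_relation_size(c.oid) AS bytes
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('r', 'i')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(c.oid) DESC
LIMIT 20;
```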

What we did to fix the situation was:

1) Stop the application accessing the database.
2) Change these parameters in postgresql.conf:
shared_buffers = 108157
work_mem = 16384
maintenance_work_mem = 262144

max_fsm_pages = 800000

wal_buffers = 64
checkpoint_segments = 128

random_page_cost = 2.0
effective_cache_size = 255479

default_statistics_target = 400

3) Update /etc/sysctl.conf with new values for kernel.shmmax and the
related shared memory settings.

4) Run 'REINDEX DATABASE <dbname>'
5) Define a 'VACUUM VERBOSE ANALYZE' job in crontab
6) Start the application.
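The crontab entry for the vacuum job can be as simple as the following sketch (the schedule, the log path, and the database name 'manage' are only assumptions):

```
# m h dom mon dow user     command
0   3 *   *   *   postgres vacuumdb --analyze --verbose manage >> /var/log/pg_vacuum.log 2>&1
```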

These changes helped a lot: the size of the database went down from 7GB
to 1GB and most of the deletes work as they are supposed to. But from
time to time a single deletion takes a long time to finish. The output
from EXPLAIN ANALYZE doesn't show anything wrong, as far as I can see.

The definition of the table 'module' is:
manage=# \d module
                    Table "public.module"
  Column   |            Type             | Modifiers
 moduleid  | integer                     | not null default
 deviceid  | integer                     | not null
 netboxid  | integer                     | not null
 module    | integer                     | not null
 model     | character varying           |
 descr     | character varying           |
 up        | character(1)                | not null default 'y'::bpchar
 downsince | timestamp without time zone |
Indexes:
    "module_pkey" PRIMARY KEY, btree (moduleid)
    "module_deviceid_key" UNIQUE, btree (deviceid)
    "module_netboxid_key" UNIQUE, btree (netboxid, module)
Check constraints:
    "module_up" CHECK (up = 'y'::bpchar OR up = 'n'::bpchar)
Foreign-key constraints:
    "$1" FOREIGN KEY (deviceid) REFERENCES device(deviceid) ON UPDATE
    "$2" FOREIGN KEY (netboxid) REFERENCES netbox(netboxid) ON UPDATE
Rules:
    close_alerthist_modules AS
    ON DELETE TO module DO  UPDATE alerthist SET end_time = now()
  WHERE (alerthist.eventtypeid::text = 'moduleState'::text OR
alerthist.eventtypeid::text = 'linkState'::text) AND alerthist.end_time
= 'infinity'::timestamp without time zone AND alerthist.deviceid =

manage=# EXPLAIN ANALYZE DELETE FROM module WHERE deviceid='7298';
                        QUERY PLAN
 Nested Loop  (cost=0.00..14.63 rows=1 width=67) (actual
time=2.365..2.365 rows=0 loops=1)
   ->  Index Scan using alerthist_end_time_btree on alerthist
(cost=0.00..10.65 rows=1 width=67) (actual time=2.363..2.363 rows=0 loops=1)
         Index Cond: (end_time = 'infinity'::timestamp without time zone)
         Filter: ((((eventtypeid)::text = 'moduleState'::text) OR
((eventtypeid)::text = 'linkState'::text)) AND (7298 = deviceid))
   ->  Index Scan using module_deviceid_key on module  (cost=0.00..3.96
rows=1 width=4) (never executed)
         Index Cond: (deviceid = 7298)
 Total runtime: 2.546 ms

 Index Scan using module_deviceid_key on module  (cost=0.00..3.96 rows=1
width=6) (actual time=0.060..0.061 rows=1 loops=1)
   Index Cond: (deviceid = 7298)
 Trigger for constraint $1: time=3.422 calls=1
 Trigger for constraint $2: time=0.603 calls=1
 Total runtime: 2462558.813 ms
(13 rows)

Any ideas why it takes 2462558.813 ms to finish when the total time
reported for the deletion is 2.546 ms + 3.422 ms + 0.603 ms?

The deletion of a row in the 'module' table involves several
deletions/updates in many other tables in the database related by
foreign keys (with ON DELETE CASCADE) and triggers.
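One thing worth checking in a cascade like this: PostgreSQL does not index the referencing column of a foreign key automatically, so each cascaded delete can turn into a sequential scan of the child table. A sketch of a catalog query to list single-column foreign keys with no index on the referencing column (simplified; multi-column keys need more care):

```sql
-- Foreign keys whose first referencing column does not appear in any
-- index on the referencing table (a frequent cause of slow cascades).
SELECT c.conname  AS fk_name,
       t.relname  AS referencing_table,
       a.attname  AS referencing_column
FROM pg_constraint c
JOIN pg_class t     ON t.oid = c.conrelid
JOIN pg_attribute a ON a.attrelid = t.oid AND a.attnum = c.conkey[1]
WHERE c.contype = 'f'
  AND NOT EXISTS (SELECT 1
                  FROM pg_index i
                  WHERE i.indrelid = t.oid
                    AND a.attnum = ANY (i.indkey::int2[]));
```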

I suppose that an open transaction in one of these tables not directly
related to 'module' could block the deletion without it showing up in
EXPLAIN ANALYZE? The two 'Trigger for constraint' lines in the EXPLAIN
ANALYZE output only show the two tables that have a foreign key
referencing 'module', but if these two tables have to wait on other
tables, that would not show up anywhere? (only in pg_locks)
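To test that theory while a deletion hangs, pg_locks can be joined against pg_stat_activity; a sketch (column names as of the 8.x series, where the backend pid column of pg_stat_activity is called procpid):

```sql
-- Show ungranted lock requests first, together with what each backend
-- holding or waiting on a lock is currently running.
SELECT l.pid, l.relation::regclass AS relation, l.mode, l.granted,
       a.current_query, a.query_start
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.granted, a.query_start;
```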

Thanks in advance
 Rafael Martinez, <>
 Center for Information Technology Services
 University of Oslo, Norway

 PGP Public Key:
