Discussion: Unexpectedly Long DELETE Wait
Hi,
The command below has been running for ~700 minutes on one of our
PostgreSQL servers.
DELETE FROM mugpsreglog
 WHERE NOT EXISTS (SELECT 1
                     FROM mueventlog
                    WHERE mueventlog.eventlogid = mugpsreglog.eventlogid);
Seq Scan on mugpsreglog  (cost=0.00..57184031821394.73 rows=6590986 width=6)
  Filter: (NOT (subplan))
  SubPlan
    ->  Seq Scan on mueventlog  (cost=0.00..4338048.00 rows=1 width=0)
          Filter: (eventlogid = $0)
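For context, a back-of-the-envelope check (my own arithmetic, not part of
the plan output): the correlated subplan is re-run for every outer row, so
the total estimate is roughly the outer row count times the per-probe cost.

-- Rough arithmetic only; the planner's exact formula differs slightly.
SELECT 6590986 * 4338048.0 AS approx_total_cost;
-- ~2.9e13, the same order of magnitude as the 5.7e13 estimate above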
Here is some information about related tables:
# SELECT pg_relation_size('emove.mueventlog') / pow(1024, 2);
?column?
----------
11440
(1 row)
# SELECT pg_relation_size('emove.mugpsreglog') / pow(1024, 2);
?column?
-------------
631.8046875
(1 row)
And there aren't any constraints (FK/PK), triggers, indexes, etc. on
either of the tables. (We're in the middle of a migration; many DELETE
commands similar to the one above are being executed to remove rows
that would violate the constraints that will be introduced later.)
Here are the relevant postgresql.conf lines:
shared_buffers = 512MB
max_prepared_transactions = 0
work_mem = 8MB
maintenance_work_mem = 512MB
max_fsm_pages = 204800
max_fsm_relations = 8192
vacuum_cost_delay = 10
wal_buffers = 2MB
checkpoint_segments = 128
checkpoint_timeout = 1h
checkpoint_completion_target = 0.5
checkpoint_warning = 1min
effective_cache_size = 5GB
autovacuum = off
And the system hardware & software profile is:
OS : Red Hat Enterprise Linux ES release 4 (Nahant Update 5)
PostgreSQL: 8.3.1
Filesystem: GFS (IBM DS4700 SAN)
CPU : 4 x Quad Core Intel(R) Xeon(TM) CPU 3.00GHz
Memory : 8GB
Does anybody have an idea what might be causing the problem? Any
suggestions for improving performance during such bulk DELETEs?
Regards.
Volkan YAZICI wrote:
> Hi,
>
> The command below has been running for ~700 minutes on one of our
> PostgreSQL servers.
>
> DELETE FROM mugpsreglog
>  WHERE NOT EXISTS (SELECT 1
>                      FROM mueventlog
>                     WHERE mueventlog.eventlogid = mugpsreglog.eventlogid);
>
> Seq Scan on mugpsreglog  (cost=0.00..57184031821394.73 rows=6590986 width=6)
>   Filter: (NOT (subplan))
>   SubPlan
>     ->  Seq Scan on mueventlog  (cost=0.00..4338048.00 rows=1 width=0)
>           Filter: (eventlogid = $0)

Ouch - look at the estimated cost on that!

> And there aren't any constraints (FK/PK), triggers, indexes, etc. on
> either of the tables. (We're in the middle of a migration; many DELETE
> commands similar to the one above are being executed to remove rows
> that would violate the constraints that will be introduced later.)

Well there you go. Add an index on eventlogid for mugpsreglog.

Alternatively, if you increased your work_mem that might help. Try
SET work_mem='64MB' (or even higher) before running the explain and see
if it tries a materialize. For situations like this where you're doing
big one-off queries you can afford to increase resource limits.

--
  Richard Huxton
  Archonet Ltd
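For readers following along, a minimal sketch of the work_mem experiment
suggested above, assuming it is run in the same session as the EXPLAIN
(64MB is just the suggested starting point):

-- Session-local setting; it does not touch postgresql.conf.
SET work_mem = '64MB';
EXPLAIN
DELETE FROM mugpsreglog
 WHERE NOT EXISTS (SELECT 1
                     FROM mueventlog
                    WHERE mueventlog.eventlogid = mugpsreglog.eventlogid);
-- RESET work_mem;  -- optional: return to the server default afterwards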
On Thu, 07 Aug 2008, Richard Huxton <dev@archonet.com> writes:
> Volkan YAZICI wrote:
>> DELETE FROM mugpsreglog
>>  WHERE NOT EXISTS (SELECT 1
>>                      FROM mueventlog
>>                     WHERE mueventlog.eventlogid = mugpsreglog.eventlogid);
>>
>> Seq Scan on mugpsreglog  (cost=0.00..57184031821394.73 rows=6590986 width=6)
>>   Filter: (NOT (subplan))
>>   SubPlan
>>     ->  Seq Scan on mueventlog  (cost=0.00..4338048.00 rows=1 width=0)
>>           Filter: (eventlogid = $0)
>
> Ouch - look at the estimated cost on that!
>
>> And there aren't any constraints (FK/PK), triggers, indexes, etc. on
>> either of the tables. (We're in the middle of a migration; many DELETE
>> commands similar to the one above are being executed to remove rows
>> that would violate the constraints that will be introduced later.)
>
> Well there you go. Add an index on eventlogid for mugpsreglog.

Hrm... Adding an INDEX on the "eventlogid" column of the "mueventlog"
table solved the problem. Anyway, thanks for your kind help.

> Alternatively, if you increased your work_mem that might help. Try
> SET work_mem='64MB' (or even higher) before running the explain and see
> if it tries a materialize. For situations like this where you're doing
> big one-off queries you can afford to increase resource limits.

None of the 64MB, 128MB, 256MB and 512MB settings made any difference to
the query plan.

Regards.
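For completeness, here is roughly what the fix that worked looks like; the
index name below is a placeholder (the thread doesn't mention one), and the
ANALYZE is only a precaution given that autovacuum is off:

-- Sketch of the fix described above; the index name is a placeholder.
CREATE INDEX mueventlog_eventlogid_idx ON mueventlog (eventlogid);
-- With autovacuum off, statistics are not refreshed automatically,
-- so an explicit ANALYZE keeps the planner's estimates current.
ANALYZE mueventlog;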