Re: [PERFORM] DELETE taking too much memory

From: French, Martin
Subject: Re: [PERFORM] DELETE taking too much memory
Date: ,
Msg-id: 81976671721DF04B9DCA6ECD87941A402B31E830@roundway.Cromwell-tools.co.uk
(see: discussion, raw text)
In reply to: DELETE taking too much memory  (vincent dephily)
Responses: Re: [PERFORM] DELETE taking too much memory  (Vincent de Phily)
List: pgsql-general

DELETE taking too much memory  (vincent dephily, )
 Re: [PERFORM] DELETE taking too much memory  ("French, Martin", )
  Re: [PERFORM] DELETE taking too much memory  (Vincent de Phily, )
 Re: DELETE taking too much memory  (Guillaume Lelarge, )
  Re: DELETE taking too much memory  (Dean Rasheed, )
   Re: DELETE taking too much memory  (Vincent de Phily, )
    Re: DELETE taking too much memory  (Dean Rasheed, )
     Re: [PERFORM] DELETE taking too much memory  (Claudio Freire, )
   Re: [PERFORM] DELETE taking too much memory  (Jose Ildefonso Camargo Tolosa, )
  Re: DELETE taking too much memory  (Vincent de Phily, )
 Re: [PERFORM] DELETE taking too much memory  (Vincent de Phily, )
 Re: [PERFORM] DELETE taking too much memory  ("French, Martin", )

How up to date are the statistics for the tables in question?

What value do you have for effective cache size?

My guess would be that the planner thinks the method it is using is
right, either for its current row-count estimates or for the amount of
memory it thinks it has to play with.
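Both of those checks can be run directly from psql. A minimal sketch
(table names taken from the thread; the statistics columns exist in
pg_stat_user_tables on 8.3):

```sql
-- When were the planner's statistics last refreshed for these tables?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('t1', 't2');

-- Current values of the planner/memory settings in question.
SHOW effective_cache_size;
SHOW work_mem;
```

If last_analyze/last_autoanalyze are old or NULL, a manual `ANALYZE t1;
ANALYZE t2;` would be the usual first step before re-running EXPLAIN.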

-----Original Message-----
From: [mailto:] On Behalf Of vincent dephily
Sent: 07 July 2011 14:34
To: ; 
Subject: [PERFORM] DELETE taking too much memory

Hi,

I have a delete query taking 7.2GB of RAM (and counting) but I do not
understand why so much memory is necessary. The server has 12GB, and
I'm afraid it'll go into swap. Using postgres 8.3.14.

I'm purging some old data from table t1, which should cascade-delete
referencing rows in t2. Here's an anonymized rundown:


# \d t1
                             Table "public.t1"
  Column   |            Type             |                     Modifiers
-----------+-----------------------------+---------------------------------------------------
 t1id      | integer                     | not null default nextval('t1_t1id_seq'::regclass)
(...snip...)
Indexes:
    "message_pkey" PRIMARY KEY, btree (id)
(...snip...)

# \d t2
                               Table "public.t2"
     Column      |            Type             |                     Modifiers
-----------------+-----------------------------+---------------------------------------------------
 t2id            | integer                     | not null default nextval('t2_t2id_seq'::regclass)
 t1id            | integer                     | not null
 foo             | integer                     | not null
 bar             | timestamp without time zone | not null default now()
Indexes:
    "t2_pkey" PRIMARY KEY, btree (t2id)
    "t2_bar_key" btree (bar)
    "t2_t1id_key" btree (t1id)
Foreign-key constraints:
    "t2_t1id_fkey" FOREIGN KEY (t1id) REFERENCES t1(t1id) ON UPDATE RESTRICT ON DELETE CASCADE

# explain delete from t1 where t1id in (select t1id from t2 where foo=0 and bar < '20101101');
                               QUERY PLAN
-----------------------------------------------------------------------------
 Nested Loop  (cost=5088742.39..6705282.32 rows=30849 width=6)
   ->  HashAggregate  (cost=5088742.39..5089050.88 rows=30849 width=4)
         ->  Index Scan using t2_bar_key on t2  (cost=0.00..5035501.50 rows=21296354 width=4)
               Index Cond: (bar < '2010-11-01 00:00:00'::timestamp without time zone)
               Filter: (foo = 0)
   ->  Index Scan using t1_pkey on t1  (cost=0.00..52.38 rows=1 width=10)
         Index Cond: (t1.t1id = t2.t1id)
(7 rows)


Note that the estimate of 30849 rows is way off: there should be
around 55M rows deleted from t1, and 2-3 times as many from t2.

When looking at the plan, I can easily imagine that data gets
accumulated below the nested loop (thus using all that memory), but why
isn't each entry freed once one row has been deleted from t1? That
entry isn't going to be found again in t1 or in t2, so why keep it
around?

Is there a better way to write this query? Would postgres 8.4/9.0
handle things better?
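One common workaround for this kind of memory growth (a sketch, not
something proposed in the thread itself) is to delete in bounded
batches, so that however the cascade bookkeeping accumulates, it is
capped per statement. The batch size below is purely illustrative:

```sql
-- Run repeatedly until it reports "DELETE 0". Each pass removes at
-- most 10000 t1 rows; the ON DELETE CASCADE removes the matching t2
-- rows, so the subquery naturally advances on the next iteration.
DELETE FROM t1
WHERE t1id IN (
    SELECT t1id FROM t2
    WHERE foo = 0 AND bar < '20101101'
    LIMIT 10000
);
```

Wrapping this in a loop from a client script (with a short pause
between batches) also keeps each transaction, and its locks, small.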



Thanks in advance.


--
Vincent de Phily




