Postgresql out of memory during big transaction

From: Marc Van Olmen
Subject: Postgresql out of memory during big transaction
Date:
Msg-id: C849F1F0-F192-4653-8214-BD928EEA7535@sky4studios.be
In response to: Re: (Solved) Decreasing performance in table partitioning  ("Huang, Suya" <Suya.Huang@au.experian.com>)
Responses: Re: Postgresql out of memory during big transaction  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-general

I'm trying to debug an out-of-memory error with PostgreSQL.

Simple:

* Open Connection
 * begin transaction
   * import about 20 GB of data (40K rows; most of the volume is large image blobs)
 * end transaction
* Close Connection
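
Stripped down, it is essentially this (the real code goes through SQLAlchemy; the table, columns, and rows_to_import here are placeholders):

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")   # placeholder connection string
    cur = conn.cursor()
    # psycopg2 opens a transaction implicitly, so everything below
    # runs inside one big transaction
    for item_id, blob in rows_to_import:     # placeholder: ~40K (id, blob) pairs
        cur.execute(
            "INSERT INTO item (id, image) VALUES (%s, %s)",
            (item_id, psycopg2.Binary(blob)))
    conn.commit()                            # end transaction
    conn.close()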

What I notice is that the Python app stays at around 200 MB of memory usage, but the postgres process on my Mac OS X 10.9.5 machine keeps growing and growing until it runs out of memory (I'm running the 32-bit version).

SQLAlchemy 0.9.5, psycopg2 2.6, Python 2.7.5, PostgreSQL 9.3.2 (default config settings).

Database:

* The database has several triggers written in PL/Python
* some of them are a simple "NOTIFY ..;", others are around 500 lines of Python code
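
For reference, the simple ones have roughly this shape (a sketch with made-up names, not the actual trigger code):

    CREATE OR REPLACE FUNCTION metavalue_notify() RETURNS trigger AS $$
        # just push an async notification to listeners
        plpy.execute("NOTIFY metavalue_changed")
        return None
    $$ LANGUAGE plpythonu;

    CREATE TRIGGER metavalue_notify_trg
        AFTER INSERT OR UPDATE ON metavalue
        FOR EACH ROW EXECUTE PROCEDURE metavalue_notify();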

Things that I already figured out:

* If I remove the PL/Python triggers, there is no memory problem
* If I import in chunks of 1000 rows and commit each time, the postgres process goes back to low memory usage (so some memory does get deallocated), and I'm able to import everything that way
* I added Python's pympler and printed out memory diffs to see whether memory is being leaked inside the triggers, but nothing shows up (sketch below)
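
The pympler check was along these lines, at the top of the big trigger functions (a sketch; SD is PL/Python's per-function static-data dict, used here to keep the previous summary between calls):

    # inside a PL/Python trigger function body
    from pympler import muppy, summary

    curr = summary.summarize(muppy.get_objects())
    if 'mem' in SD:
        # log what changed on the Python heap since the previous call
        for line in summary.format_(summary.get_diff(SD['mem'], curr)):
            plpy.notice(line)
    SD['mem'] = curr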

Question:

* any ideas on how to trace this case without going into full debug mode and running gdb, etc.?
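
For now the best I can do is watch the growth from the outside, something like this (a sketch, assuming psutil is installed; it shows when the memory grows, but not what allocates it):

    import time
    import psutil

    # in the import script, before starting the big transaction:
    cur.execute("SELECT pg_backend_pid()")
    backend_pid = cur.fetchone()[0]

    # then, from a second process, poll that backend's resident set size:
    backend = psutil.Process(backend_pid)
    while True:
        print("backend %d RSS: %.1f MB"
              % (backend_pid, backend.memory_info().rss / 1048576.0))
        time.sleep(5)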

Error I see:

    (45845,0xa0db51a8) malloc: *** mach_vm_map(size=8388608) failed (error code=3)
    *** error: can't allocate region
    *** set a breakpoint in malloc_error_break to debug
    TopMemoryContext: 64727172 total in 7901 blocks; 132784 free (7871 chunks); 64594388 used
      CFuncHash: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
      PL/Python procedures: 8192 total in 1 blocks; 4872 free (0 chunks); 3320 used
      SPI exceptions: 8192 total in 1 blocks; 2328 free (0 chunks); 5864 used
      Rendezvous variable hash: 8192 total in 1 blocks; 3848 free (0 chunks); 4344 used
      TopTransactionContext: 3859664 total in 6 blocks; 98504 free (31 chunks); 3761160 used
        CurTransactionContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used

    .... (thousands of these)
    
        pg_authid_rolname_index: 1024 total in 1 blocks; 552 free (0 chunks); 472 used
      MdSmgr: 8192 total in 1 blocks; 5320 free (0 chunks); 2872 used
      ident parser context: 0 total in 0 blocks; 0 free (0 chunks); 0 used
      hba parser context: 7168 total in 3 blocks; 2680 free (3 chunks); 4488 used
      LOCALLOCK hash: 24576 total in 2 blocks; 13080 free (3 chunks); 11496 used
      Timezones: 78520 total in 2 blocks; 5968 free (0 chunks); 72552 used
      ErrorContext: 8192 total in 1 blocks; 8176 free (3 chunks); 16 used
    ERROR:  spiexceptions.OutOfMemory: out of memory
    DETAIL:  Failed on request of size 2016.
    CONTEXT:  Traceback (most recent call last):
        PL/Python function "metavalue_cleanup", line 25, in <module>
          TD["new"]["id"]))
      PL/Python function "metavalue_cleanup"
    STATEMENT:  INSERT INTO metavalue (id_item, id_metatype, id_employee, date, value) VALUES (5079, 1, 1, now(), 'J107') RETURNING metavalue.id
