Discussion: Decreasing performance in table partitioning
Instead of deleting from the original non-partitioned table, which is inefficient, you can try the approach below.
Put the following logic in a function, as you like:
1. Create a new partitioned table.
2. Insert the data from the original non-partitioned table into the correct partitions of the new table.
3. Build indexes and analyze as needed.
4. Rename the old non-partitioned table to something else.
5. Rename the new partitioned table to the name you want.
6. Drop the old non-partitioned table once you're satisfied with the current table structure.
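A rough sketch of this swap, using hypothetical names (`public.messages` with a `sent_date` column — none of these come from the original post) and the inheritance-based partitioning that PostgreSQL used in the 9.x era:

```sql
-- Hypothetical sketch: build a new inheritance-partitioned table,
-- populate it with plain INSERT ... SELECT (no DELETE), then swap names.
BEGIN;

-- 1. New parent table with the same structure as the original.
CREATE TABLE public.messages_new (LIKE public.messages INCLUDING DEFAULTS);

-- 2. One child per month, with a CHECK constraint for constraint exclusion.
CREATE TABLE partitions.messages_2013_08 (
    CHECK (sent_date >= DATE '2013-08-01' AND sent_date < DATE '2013-09-01')
) INHERITS (public.messages_new);

-- 3. Populate the partition directly from the original table.
INSERT INTO partitions.messages_2013_08
SELECT * FROM public.messages
WHERE sent_date >= DATE '2013-08-01' AND sent_date < DATE '2013-09-01';

-- 4. Index and analyze as needed (repeat steps 2-4 per month).
CREATE INDEX ON partitions.messages_2013_08 (sent_date);
ANALYZE partitions.messages_2013_08;

-- 5. Swap names; drop the old table only once verified.
ALTER TABLE public.messages RENAME TO messages_old;
ALTER TABLE public.messages_new RENAME TO messages;
-- DROP TABLE public.messages_old;

COMMIT;
```

Because the original table is only read, never deleted from, no dead tuples accumulate during the migration.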
Thanks,
Suya
From: pgsql-general-owner@postgresql.org [mailto:pgsql-general-owner@postgresql.org] On Behalf Of Herouth Maoz
Sent: Monday, September 08, 2014 12:00 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Decreasing performance in table partitioning
Hello all.
I have created a function that partitions a large table into monthly partitions. Since the name of the table, target schema for partitions, name of the date field etc. are all passed as strings, the function is heavily based on EXECUTE statements.
My problem is the main loop, in which data for one month is moved from the old table to the partition table.
(1)
EXECUTE FORMAT (
'WITH del AS (
DELETE FROM %1$I.%2$I
WHERE %3$I >= %4$L AND %3$I < %5$L
RETURNING *
)
INSERT INTO %6$I.%7$I
SELECT * FROM del',
p_main_schema,
p_table_name,
p_date_field_name,
v_curr_month_str,
v_curr_month_to_str,
p_partition_schema,
v_partition_name
);
In the first few iterations, this runs in very good times. But as iterations progress, performance drops, despite the size of the data for each month being more or less the same. Eventually I end up with iterations that run for hours, when I started with only a few minutes. The odd thing is that for the last iteration, which is for a month not yet inserted into the table (0 records to move), the above statement took 6 hours to run!
I tried to improve this by first testing whether there are any records for the current month in the table, adding:
(2)
EXECUTE FORMAT (
'SELECT true
FROM %1$I.%2$I
WHERE %3$I >= %4$L AND %3$I < %5$L
LIMIT 1',
p_main_schema,
p_table_name,
p_date_field_name,
v_curr_month_str,
v_curr_month_to_str
) INTO v_exists;
before statement (1), and wrapping statement (1) in an IF on v_exists. Also, after each move, I added:
EXECUTE FORMAT ( 'ANALYZE %I.%I', p_main_schema, p_table_name );
But to no avail. In fact, in each iteration, the execution of statement 2 above takes more and more time.
Here is the number of rows in each month for the table I was trying to partition:
count | the_month
----------+---------------------
10907117 | 2013-08-01 00:00:00
12715234 | 2013-09-01 00:00:00
14902928 | 2013-10-01 00:00:00
10933566 | 2013-11-01 00:00:00
11394906 | 2013-12-01 00:00:00
9181051 | 2014-01-01 00:00:00
8487028 | 2014-02-01 00:00:00
9892981 | 2014-03-01 00:00:00
8830191 | 2014-04-01 00:00:00
8368638 | 2014-05-01 00:00:00
8014685 | 2014-06-01 00:00:00
6780589 | 2014-07-01 00:00:00
And the times for each iteration:
Month Statement 2 Statement 1
2013-08 3 sec 3 min
2013-09 2 min 17 min
2013-10 4 min 21 min
2013-11 8 min 20 min
2013-12 9 min 32 min
2014-01 16 min 21 min
2014-02 19 min 20 min
2014-03 14 min 23 min
For April I had to cancel it in the middle. My problem is that I can't let this run into the evening, when we have backup followed by large data collection. These times are just for the given statements, and additional time is spent creating indexes on the partitions and so on. So this thing ran from 11:24 until I had to cancel it at around 6PM.
Can anybody explain the performance deterioration and/or offer a suggestion for a different design?
TIA,
Herouth
--
Herouth Maoz <herouth@unicell.co.il> writes:
> My problem is the main loop, in which data for one month is moved from the old table to the partition table.
> [EXECUTE FORMAT statement (1), quoted in full above]
> In the first few iterations, this runs in very good times. But as iterations progress, performance drops, despite the size of the data for each month being more or less the same.
How many of these are you doing in a single transaction? Are you doing
them in separate exception blocks? What PG version is this exactly?
My guess is that the cycles are going into finding out that tuples deleted
by a prior command are in fact dead to the current command (though still
live to outside observers, so they can't be hinted as dead). That ought
to be relatively cheap if it's all one subtransaction, but if there were a
large number of separate subtransactions involved, maybe not so much.
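For context on what Tom is asking: in PL/pgSQL, every BEGIN ... EXCEPTION block starts a subtransaction. A migration loop shaped like the following hypothetical sketch would therefore open one subtransaction per month, which is exactly the pattern that makes later commands work harder to prove earlier-deleted tuples dead:

```sql
-- Hypothetical sketch: each EXCEPTION block below is a separate
-- subtransaction (savepoint), one per loop iteration.
DO $$
DECLARE
    v_month date;
BEGIN
    FOR v_month IN
        SELECT d::date
        FROM generate_series('2013-08-01'::date, '2014-07-01'::date,
                             interval '1 month') AS d
    LOOP
        BEGIN  -- opens a subtransaction
            -- ... move one month of data here ...
            RAISE NOTICE 'moved month %', v_month;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'skipping month %: %', v_month, SQLERRM;
        END;   -- subtransaction released (or rolled back) here
    END LOOP;
END;
$$;
```

Doing the whole loop without EXCEPTION blocks keeps everything in a single subtransaction, which is the cheap case Tom describes.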
regards, tom lane
--
The views will go with the table: if you rename a table, the view definitions are automatically updated to follow it. So in your situation, you may need to recreate the views, or any other objects that depend on the old table.
Functions, however, resolve table names when they run, so as long as your new table has been renamed to the same name as the old table, they should be OK.
Note: this was tested in my 9.3 environment; I'm not sure how it behaves in older versions…
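A quick illustration of that rename behavior, with hypothetical names: views are bound to the table's identity (its OID), while function bodies resolve names at call time:

```sql
-- Views follow the table they were defined on, even across renames.
CREATE TABLE t_old (id int);
CREATE VIEW v AS SELECT id FROM t_old;

ALTER TABLE t_old RENAME TO t_renamed;
-- v now reads from t_renamed; its stored definition deparses to the new name.

CREATE TABLE t_old (id int);  -- a fresh table under the original name
-- v STILL reads from t_renamed, not from the new t_old.

-- A SQL function, by contrast, resolves 't_old' when it is called,
-- so it now hits the new table.
CREATE FUNCTION count_rows() RETURNS bigint
    LANGUAGE sql AS 'SELECT count(*) FROM t_old';
```

This is why views may need recreating after a table swap, while functions pick up the replacement table automatically.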
From: Herouth Maoz [mailto:herouth@unicell.co.il]
Sent: Wednesday, September 10, 2014 6:26 PM
To: Huang, Suya
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Decreasing performance in table partitioning
Thank you. Sorry I have been away for a few days and couldn't thank you before.
Wouldn't this have an impact if there are things like views or functions based on the old table?
On 08/09/2014, at 04:57, Huang, Suya wrote:
> [Suya's suggestion and the quoted original message trimmed; quoted in full earlier in the thread]
--
That’s a good idea, thanks for sharing Herouth!
From: Herouth Maoz [mailto:herouth@unicell.co.il]
Sent: Tuesday, September 16, 2014 10:22 PM
To: Huang, Suya
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] (Solved) Decreasing performance in table partitioning
Thank you. My solution is based on your suggestion, but instead of creating a new partition table and inserting into it, I create partitions as I originally planned, under the existing table, and insert to them. But without deleting.
I use INSERT INTO ... SELECT FROM ONLY the main table, without deleting, so the data is temporarily duplicated (each row exists both in the main table and in its partition). At the end of the loop, when all partitions are populated and indexed, I run TRUNCATE ONLY on the main table and drop its indexes. This way, the views are not affected.
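Under the same hypothetical names as before (`public.messages`, date column `sent_date`), the final approach looks roughly like this:

```sql
-- Hypothetical sketch of the final solution: create each partition under
-- the EXISTING parent, copy a month's rows without deleting them...
CREATE TABLE partitions.messages_2013_08 (
    CHECK (sent_date >= DATE '2013-08-01' AND sent_date < DATE '2013-09-01')
) INHERITS (public.messages);

-- ONLY restricts the read to the parent itself, so already-populated
-- partitions are not copied again.
INSERT INTO partitions.messages_2013_08
SELECT * FROM ONLY public.messages
WHERE sent_date >= DATE '2013-08-01' AND sent_date < DATE '2013-09-01';

CREATE INDEX ON partitions.messages_2013_08 (sent_date);
ANALYZE partitions.messages_2013_08;

-- ...repeat per month, then empty just the parent in one cheap operation.
TRUNCATE ONLY public.messages;
```

TRUNCATE is a metadata-level operation, so emptying the parent at the end avoids the row-by-row DELETE whose cost grew with every iteration, and the parent table keeps its name, leaving dependent views intact.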
So thank you for your suggestion and your help. Now for a comparable-size table, run time was less than 2 hours which is quite acceptable.
On 11/09/2014, at 07:26, Huang, Suya wrote:
> [earlier messages in the thread trimmed; quoted in full above]
--
Herouth Maoz
Unicell Advanced Cellular Solutions
☎ 03-5181717 ext. 742
Discussion: Trying to debug an out of memory error with PostgreSQL
Marc Van Olmen <mvo@sky4studios.be> writes:
> [out of memory while] trying to import about 20 GB of data (40K rows, the rest large image blobs).
> What I notice is that the Python app stays around 200 MB of memory usage, but the postgres process on my OS X 10.9.5 machine grows and grows until it runs out of memory (running the 32-bit version).
Hm ... why are you running a 32-bit build on recent OS X? I'm pretty sure that OS X release doesn't support any non-64-bit hardware. Not that the leak isn't a problem, but you'd have a bunch more headroom with a 64-bit executable.
> The database has several triggers written in PL/Python.
> Some of them are a simple "NOTIFY ..;"; others are 500 lines of Python code.
> ...
> Error I see:
> TopMemoryContext: 64727172 total in 7901 blocks; 132784 free (7871 chunks); 64594388 used
> CFuncHash: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
> PL/Python procedures: 8192 total in 1 blocks; 4872 free (0 chunks); 3320 used
> SPI exceptions: 8192 total in 1 blocks; 2328 free (0 chunks); 5864 used
> Rendezvous variable hash: 8192 total in 1 blocks; 3848 free (0 chunks); 4344 used
> TopTransactionContext: 3859664 total in 6 blocks; 98504 free (31 chunks); 3761160 used
> CurTransactionContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
> .... (thousands of these)
Thousands of CurTransactionContext lines? That would suggest that you're creating a subtransaction (savepoint) in each trigger firing. You should try to avoid that if you can. I'm too tired to look up which plpython constructs might cause that exactly.
It may be that there is also some internal leak in plpython; the fact that we only seem to have accounted for ~75MB of memory in the memory context dump suggests that something outside the core pgsql code is eating memory, and python might be the something. It's unlikely we can debug that unless you can provide a self-contained test case, though.
regards, tom lane
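One documented PL/Python construct that explicitly creates a subtransaction is `plpy.subtransaction()`; a trigger body shaped like the following hypothetical sketch would open a savepoint on every firing, which is the pattern Tom suspects:

```sql
-- Hypothetical PL/Python trigger: the explicit subtransaction context
-- manager opens (and on exit releases) a savepoint each time it fires.
CREATE FUNCTION notify_change() RETURNS trigger
LANGUAGE plpythonu AS $$
with plpy.subtransaction():
    plpy.execute("NOTIFY data_changed")
return None
$$;
```

If a trigger fires once per imported row, 40K rows means 40K savepoints in one transaction, consistent with the thousands of CurTransactionContext lines in the dump.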