Re: PostgreSQL partition tables use more private memory

From: Pavel Stehule
Subject: Re: PostgreSQL partition tables use more private memory
Date:
Msg-id: CAFj8pRDL6K_aeqMXMx=UPE=Wp5UT0mF8oa=LNN7D5YT1d-r-uQ@mail.gmail.com
In response to: PostgreSQL partition tables use more private memory (大松 <dasong2410@163.com>)
List: pgsql-hackers
Hi

On Thu, 27 Dec 2018 at 11:48, 大松 <dasong2410@163.com> wrote:
# PostgreSQL partition tables use more private memory

Hi, we are seeing a per-process private memory issue with partition tables in our production environment. We're not sure whether it's a bug or whether Pg simply works this way.

When DML is run on partition tables, the pg process occupies more memory (visible in the top command output as RES-SHR) than with normal tables; it can be 10x more.

PostgreSQL uses process-private memory for catalog caches. Partitions are like tables: if you use a lot of partitions, then you use a lot of tables, and you need a lot of memory for the caches. These caches are dropped only when something in the system catalog changes.
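As a rough illustration of where this memory goes: newer PostgreSQL versions (14 and later, so not the 10.2 server tested below) expose the backend's memory contexts through a system view, which makes the growth of the catalog caches directly visible. This is only a sketch of how one could observe it:

```sql
-- PostgreSQL 14+ only (pg_backend_memory_contexts does not exist in 10.x).
-- Shows the largest memory contexts of the current backend; after DML on a
-- table with many partitions, CacheMemoryContext and its children grow.
SELECT name, parent, total_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;
```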
 

It is related to the number of partitions and columns: the more partitions and columns the partition table has, the more memory the process occupies.

It is also related to the number of tables referenced by the DML statements executed in the process: touching two tables can double the memory. The valgrind logs show this.

The pg process will not release this memory until it disconnects; unfortunately our applications use a connection pool that never releases connections.

This is expected behaviour: a) glibc's allocator keeps allocated memory inside the process until the process ends, and b) when there are no changes in the system catalog, the caches are not cleaned.

When you hit this issue, it is necessary to close backend processes periodically: pooling software can define a maximum connection lifetime and close sessions after that time. Maybe one hour, maybe twenty minutes.
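For example, if the pool were PgBouncer (an assumption: the original poster does not name the pooler), its server_lifetime setting closes server connections older than a given number of seconds, which bounds how long any one backend can accumulate cache memory. A minimal sketch with hypothetical values:

```ini
; pgbouncer.ini fragment (hypothetical values, adjust to your workload)
[pgbouncer]
; close a server connection after it has existed for 20 minutes,
; forcing a fresh backend with empty catalog caches
server_lifetime = 1200
; also recycle connections that sit idle this long
server_idle_timeout = 600
```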

Regards

Pavel


Our PostgreSQL database server, which encounters this problem, has about 48GB of memory; there are more than one hundred pg processes on this server, and each process consumes a couple of hundred MB of private memory. Recently it has frequently been running out of physical memory and swap.

I ran a test with valgrind in a test environment to reproduce this; the steps follow.

## 1. env

RHEL 6.3 X86_64
PostgreSQL 10.2

## 2. non-partition table sql

    drop table tb_part_test cascade;
    
    create table tb_part_test
    (
        STATIS_DATE          int NOT NULL, 
        ORDER_NUM            int DEFAULT NULL,
        CMMDTY_CODE          varchar(40) default '',
        RECEIVE_PLANT        varchar(4) DEFAULT  '',
        RECEIVE_LOCAT        varchar(10) DEFAULT  '',
        SUPPLIER_CODE        varchar(20) DEFAULT  '',
        RECEIVE_PLANT_TYPE   varchar(2) DEFAULT  '',
        
        c1                   varchar(2) DEFAULT  '',
        c2                   varchar(2) DEFAULT  '',
        c3                   varchar(2) DEFAULT  '',
        c4                   varchar(2) DEFAULT  '',
        c5                   varchar(2) DEFAULT  '',
        c6                   varchar(2) DEFAULT  '',
        c7                   varchar(2) DEFAULT  '',
        c8                   varchar(2) DEFAULT  '',
        c9                   varchar(2) DEFAULT  '',
        c10                   varchar(2) DEFAULT  '',
        c11                   varchar(2) DEFAULT  '',
        c12                   varchar(2) DEFAULT  '',
        c13                   varchar(2) DEFAULT  '',
        c14                   varchar(2) DEFAULT  '',
        c15                   varchar(2) DEFAULT  '',
        c16                   varchar(2) DEFAULT  '',
        c17                   varchar(2) DEFAULT  '',
        c18                   varchar(2) DEFAULT  '',
        c19                   varchar(2) DEFAULT  '',
        c20                   varchar(2) DEFAULT  '',
        c21                   varchar(2) DEFAULT  '',
        c22                   varchar(2) DEFAULT  '',
        c23                   varchar(2) DEFAULT  '',
        c24                   varchar(2) DEFAULT  ''
    );

## 3. partition table sql

    drop table tb_part_test cascade;
    
    create table tb_part_test
    (
        STATIS_DATE          int NOT NULL, 
        ORDER_NUM            int DEFAULT NULL,
        CMMDTY_CODE          varchar(40) default '',
        RECEIVE_PLANT        varchar(4) DEFAULT  '',
        RECEIVE_LOCAT        varchar(10) DEFAULT  '',
        SUPPLIER_CODE        varchar(20) DEFAULT  '',
        RECEIVE_PLANT_TYPE   varchar(2) DEFAULT  '',
        
        c1                   varchar(2) DEFAULT  '',
        c2                   varchar(2) DEFAULT  '',
        c3                   varchar(2) DEFAULT  '',
        c4                   varchar(2) DEFAULT  '',
        c5                   varchar(2) DEFAULT  '',
        c6                   varchar(2) DEFAULT  '',
        c7                   varchar(2) DEFAULT  '',
        c8                   varchar(2) DEFAULT  '',
        c9                   varchar(2) DEFAULT  '',
        c10                   varchar(2) DEFAULT  '',
        c11                   varchar(2) DEFAULT  '',
        c12                   varchar(2) DEFAULT  '',
        c13                   varchar(2) DEFAULT  '',
        c14                   varchar(2) DEFAULT  '',
        c15                   varchar(2) DEFAULT  '',
        c16                   varchar(2) DEFAULT  '',
        c17                   varchar(2) DEFAULT  '',
        c18                   varchar(2) DEFAULT  '',
        c19                   varchar(2) DEFAULT  '',
        c20                   varchar(2) DEFAULT  '',
        c21                   varchar(2) DEFAULT  '',
        c22                   varchar(2) DEFAULT  '',
        c23                   varchar(2) DEFAULT  '',
        c24                   varchar(2) DEFAULT  ''
    ) PARTITION BY LIST (STATIS_DATE);
    
    DO $$
    DECLARE r record;
    BEGIN
        FOR r IN SELECT to_char(dd, 'YYYYMMDD') dt FROM generate_series( '2018-01-01'::date, '2018-12-31'::date, '1 day'::interval) dd
        LOOP
            EXECUTE 'CREATE TABLE P_tb_part_test_' || r.dt || ' PARTITION OF tb_part_test FOR VALUES IN (' || r.dt || ')';
        END LOOP;
    END$$;
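As a sanity check, the number of partitions created by the loop above can be verified via pg_inherits (this catalog exists in PostgreSQL 10):

```sql
-- should return 365, one partition per day of 2018
SELECT count(*)
FROM pg_inherits
WHERE inhparent = 'tb_part_test'::regclass;
```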


## 4. test.sql

    copy (select pg_backend_pid()) to '/tmp/test.pid';
    
    update tb_part_test set ORDER_NUM = '6' where CMMDTY_CODE = '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND RECEIVE_PLANT_TYPE = '04';

## 5. test1.sql (tb_part_test1 is a partition table with the same structure as tb_part_test)

    copy (select pg_backend_pid()) to '/tmp/test.pid';

    update tb_part_test set ORDER_NUM = '6' where CMMDTY_CODE = '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND RECEIVE_PLANT_TYPE = '04';

    update tb_part_test1 set ORDER_NUM = '6' where CMMDTY_CODE = '10558278714' AND RECEIVE_PLANT = 'DC44' AND RECEIVE_LOCAT = '974L' AND SUPPLIER_CODE = '10146741' AND STATIS_DATE = '20181219' AND RECEIVE_PLANT_TYPE = '04';

## 6. valgrind command

    valgrind --leak-check=full --gen-suppressions=all --time-stamp=yes --log-file=/tmp/%p.log --trace-children=yes --track-origins=yes --read-var-info=yes --show-leak-kinds=all -v postgres --log_line_prefix="%m %p " --log_statement=all --shared_buffers=4GB

## 7. test steps

1. Start pg under valgrind, create the non-partitioned table, run pgbench for 1000 s, producing 29201\_nonpart\_1000s.log

    pgbench -n -T 1000 -r -f  test.sql

2. Start pg under valgrind, create the partition table, run pgbench for 1000 s, producing 27064\_part\_1000s.log

    pgbench -n -T 1000 -r -f  test.sql

3. Start pg under valgrind, create the partition table, run pgbench for 2000 s, producing 864\_part\_2000s.log

    pgbench -n -T 2000 -r -f  test.sql

4. Start pg under valgrind, create the partition table, run pgbench for 1000 s, producing 16507\_part\_2tb\_1000s.log

    pgbench -n -T 1000 -r -f  test1.sql

The attached files are the valgrind logs. Thanks.

Sincerely,
Marcus Mo


 
