append_array causes rapid growth according to pg_total_relation_size

From: Randall Smith
Subject: append_array causes rapid growth according to pg_total_relation_size
Msg-id: 1282442394.5256.30.camel@randall-laptop
Responses: Re: append_array causes rapid growth according to pg_total_relation_size  (Randall Smith <randall@tnr.cc>)
List: pgsql-general
Hi,

Using Postgres 8.4, I create this table:

rcs_test=> \d array_big1;
    Table "public.array_big1"
  Column   |   Type   | Modifiers
-----------+----------+-----------
 id        | integer  | not null
 chunk_ids | bigint[] |
Indexes:
    "array_big1_pkey" PRIMARY KEY, btree (id)

It's initially populated with a single row whose chunk_ids column holds
an array of 1024^2 big integers.  Immediately after creation, things
look as expected.
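
For reference, the setup was along these lines (a sketch; the exact
statements may have differed a bit, but aggregating generate_series
into an array is the idea):

create table array_big1 (
    id        integer primary key,
    chunk_ids bigint[]
);

insert into array_big1 (id, chunk_ids)
select 0, array_agg(g::bigint)
from generate_series(1, 1024 * 1024) as g;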

rcs_test=> select pg_total_relation_size('array_big1');
 pg_total_relation_size
------------------------
                8757248
(1 row)

rcs_test=> select pg_column_size(chunk_ids) from array_big1;
 pg_column_size
----------------
        8388628
(1 row)



Now, after I append a single big integer to the array, the table size
nearly doubles according to pg_total_relation_size, but pg_column_size
looks OK.


rcs_test=> update array_big1 set chunk_ids = array_append(chunk_ids,
'12345678912345') where id = 0;
UPDATE 1

rcs_test=> select pg_total_relation_size('array_big1');
 pg_total_relation_size
------------------------
               17448960
(1 row)

rcs_test=> select pg_column_size(chunk_ids) from array_big1;
 pg_column_size
----------------
        8388636
(1 row)


And if I append again:

rcs_test=> select pg_total_relation_size('array_big1');
 pg_total_relation_size
------------------------
               26255360
(1 row)


A few more and it gets out of hand:

rcs_test=> select pg_total_relation_size('array_big1');
 pg_total_relation_size
------------------------
              104529920
(1 row)
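
The intermediate appends are just the same UPDATE repeated; spelled
out, the sequence is something like this (the appended values are
arbitrary):

update array_big1 set chunk_ids = array_append(chunk_ids, 1::bigint) where id = 0;
update array_big1 set chunk_ids = array_append(chunk_ids, 2::bigint) where id = 0;
update array_big1 set chunk_ids = array_append(chunk_ids, 3::bigint) where id = 0;

select pg_total_relation_size('array_big1');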


VACUUM ANALYZE doesn't seem to help, but VACUUM FULL does, though
according to the docs, VACUUM FULL should be avoided.
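
Roughly, the vacuum comparison looks like this (a sketch, not my exact
session):

vacuum analyze array_big1;
select pg_total_relation_size('array_big1');  -- still large

vacuum full array_big1;
select pg_total_relation_size('array_big1');  -- back down near the original size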


I'm wondering if this is something that Postgres will take care of on
its own or something that could cause trouble.
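
In case it helps, this is one way to see whether autovacuum has gotten
to the table (just reading pg_stat_user_tables; nothing specific to my
setup):

select relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
from pg_stat_user_tables
where relname = 'array_big1';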


If you're wondering why I'm using this approach instead of a separate
table, that's probably a topic for another thread, but in short, the
overhead of another table plus indexes is enormous compared to the
array, which has almost no overhead.
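
For a rough sense of that comparison, the per-row alternative would be
something like this (hypothetical table name and layout, just to show
where the overhead would come from):

create table array_big1_chunks (
    id       integer not null,
    chunk_id bigint  not null,
    primary key (id, chunk_id)
);
-- one row per chunk: ~1M heap rows plus a ~1M-entry btree,
-- versus a single array value (and its one-row pkey) in array_big1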


-Randall

