Re: Tuple storage overhead

From: Szymon Guz
Subject: Re: Tuple storage overhead
Date:
Msg-id: q2pe4edc9361004160259gb202f2d0wd226357ea8d19d50@mail.gmail.com
In reply to: Tuple storage overhead  (Peter Bex <Peter.Bex@xs4all.nl>)
Responses: Re: Tuple storage overhead  (Peter Bex <Peter.Bex@xs4all.nl>)
List: pgsql-general


2010/4/16 Peter Bex <Peter.Bex@xs4all.nl>
Hi all,

I have a table with three columns: one integer and two doubles.
There are two indexes defined (one on the integer and one on one
of the doubles).  This table stores 700000 records, which take up
30 Mb according to pg_relation_size(), and the total relation size
is 66 Mb.

I expected the disk space usage to be halved by changing the doubles
to floats, but it only dropped by 5 MB!  (I tried various approaches,
including dumping and restoring to make sure there was no uncollected
garbage lying around)
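A back-of-the-envelope check explains why the drop was only about 5 MB. This is a sketch, assuming PostgreSQL's heap layout on a 64-bit build (23-byte tuple header padded to 24 with MAXALIGN = 8, no NULL bitmap, a 4-byte line pointer per tuple); none of these figures come from the thread itself:

```python
# Back-of-the-envelope heap row sizes on a 64-bit build.
# Assumptions (PostgreSQL internals, not stated in the thread):
# 23-byte tuple header padded to 24 (MAXALIGN = 8), no NULL bitmap,
# and a 4-byte line pointer per tuple in the page header.

def align(n, a=8):
    """Round n up to the next multiple of a."""
    return -(-n // a) * a

HEADER = 24        # 23-byte heap tuple header, padded to MAXALIGN
LINE_POINTER = 4   # per-tuple ItemId in the page header

def row_size(*column_sizes):
    off = HEADER
    for size in column_sizes:
        # int4/float4 are 4-byte aligned and float8 is 8-byte aligned,
        # which for these fixed-width types equals the type size
        off = align(off, size) + size
    return align(off) + LINE_POINTER

doubles = row_size(4, 8, 8)   # integer + two double precision columns
floats  = row_size(4, 4, 4)   # integer + two real (float4) columns

saved = (doubles - floats) * 700_000
print(doubles, floats, round(saved / 1024**2, 1))
```

Under these assumptions the float4 version saves 8 bytes per 52-byte row, roughly 5.3 MB over 700,000 rows, which lines up with the ~5 MB drop reported above: header overhead and alignment padding swallow most of the expected saving.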

Someone on IRC told me the per-tuple storage overhead is pretty big,
and asked me to create a similar table containing only integers:

db=# create table testing (  x integer );
db=# INSERT INTO testing (x) VALUES (generate_series(1, 700000));
dacolt_development=# SELECT pg_size_pretty(pg_total_relation_size('testing'));
 pg_size_pretty
 ----------------
 24 MB
 (1 row)
db=# SELECT pg_size_pretty(pg_relation_size('testing'));
 pg_size_pretty
 ----------------
 24 MB
db=# CREATE INDEX testing_1 ON testing (x);
db=# CREATE INDEX testing_2 ON testing (x);
db=# SELECT pg_size_pretty(pg_relation_size('testing'));
 pg_size_pretty
 ----------------
 24 MB
 (1 row)
db=# SELECT pg_size_pretty(pg_total_relation_size('testing'));
 pg_size_pretty
 ----------------
 54 MB
 (1 row)

Is there a way to reduce the per-tuple storage overhead?

The reason I'm asking is that I have tons of tables like this,
and some data sets are much bigger than this.  In a relatively
simple testcase I'm importing data from text files which are
5.7 Gb in total, and this causes the db size to grow to 34Gb.

This size is just one small sample of many such datasets that I
need to import, so disk size is really an important factor.
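The integer-only test case above already pins the overhead down. A quick sanity check (a sketch; the byte-by-byte breakdown reflects PostgreSQL's heap page layout on a 64-bit build and is an assumption, not something stated in the thread):

```python
# Per-row footprint implied by the 24 MB table of 700,000 integers.
TOTAL_BYTES = 24 * 1024**2
ROWS = 700_000
per_row = TOTAL_BYTES / ROWS   # roughly 36 bytes for 4 bytes of data

# A plausible breakdown of those ~36 bytes (assumed heap layout):
#   24  tuple header (23 bytes, padded to MAXALIGN = 8)
# +  8  the 4-byte integer, with the row padded to 32 bytes total
# +  4  line pointer (ItemId) in the page header
# = 36  bytes per row, before per-page overhead
print(round(per_row, 1))
```

So about 36 bytes are spent per row to store 4 bytes of payload, which is why shrinking individual columns barely moves the total.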


Table pages are not always filled completely from the start, because completely full pages can hurt the performance of later queries. If you want those pages packed more tightly, you can set a table storage parameter such as fillfactor. Be aware, though, that many queries can become much slower, and changing these parameters is usually not a good idea. If you will never UPDATE those tables, it could work.
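For reference, the fillfactor storage parameter mentioned above is set per table (and per index) like this. A sketch using the example table from the thread; the index name is hypothetical, and note that a rewrite is needed before existing pages are repacked:

```sql
-- Pack heap pages completely (100 = no space reserved for updates).
ALTER TABLE testing SET (fillfactor = 100);

-- B-tree indexes take their own fillfactor (hypothetical index name).
CREATE INDEX testing_packed ON testing (x) WITH (fillfactor = 100);

-- Rewrite the table so existing pages reflect the new setting.
VACUUM FULL testing;
```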



regards
Szymon Guz
 

Browse pgsql-general by date:

Previous
From: raghavendra t
Date:
Message: tar error, in pg_start_backup()
Next
From: Peter Bex
Date:
Message: Re: Tuple storage overhead