Re: Weight BLOB objects in postgreSQL? How?

From: Jim Nasby
Subject: Re: Weight BLOB objects in postgreSQL? How?
Date:
Msg-id: 54F7FDF9.3000703@BlueTreble.com
In response to: Re: Weight BLOB objects in postgreSQL? How?  (Bill Moran <wmoran@potentialtech.com>)
List: pgsql-general
On 3/4/15 10:12 AM, Bill Moran wrote:
> From there, if the data is large enough to trigger out-of-line
> storage, the data will be broken down into chunks and stored in a
> toast table. This increases the overhead because each row in the
> toast table has its own overhead, and the number of rows required
> in the toast table depends on the size of the data, which in turn
> depends on how well the data compresses ...

If you want a back-of-the-envelope estimate: with 8k blocks you need
a new chunk for every ~2000 bytes of *compressed* data. Each chunk will
take 2048 bytes in the heap and a bit more in the index on the toast
table (probably around 16 bytes). So it's somewhere around 95% efficient
(assuming zero compression).
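
If you'd rather measure than estimate, you can query the toast table
directly. A rough sketch, assuming a hypothetical table named my_blobs
(substitute the pg_toast relation the first query returns):

-- Find the toast table backing the main table
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'my_blobs';

-- Count the chunk rows and see how much space they occupy
-- (pg_toast.pg_toast_12345 is a placeholder for the relation found above)
SELECT count(*) AS chunks,
       pg_size_pretty(pg_total_relation_size('pg_toast.pg_toast_12345')) AS toast_size
FROM pg_toast.pg_toast_12345;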

> So, your answer is:
> Take _your_ data and store a bunch of it, then check the resultant
> size of the tables on disk vs. the actual size of the data. That's
> really the only way to know since the actual efficiency of data
> storage depends a lot on the data itself.
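
To put numbers on that, something like the following can help; a quick
sketch assuming a hypothetical table my_blobs with a bytea column
payload:

SELECT pg_size_pretty(sum(octet_length(payload)))   AS raw_data,    -- uncompressed size of the values
       pg_size_pretty(sum(pg_column_size(payload))) AS stored_data, -- size as stored, after TOAST compression
       pg_size_pretty(pg_total_relation_size('my_blobs')) AS on_disk -- heap + toast table + indexes
FROM my_blobs;

The gap between stored_data and on_disk is roughly the per-row and
per-chunk overhead described above.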

And perhaps a better question is: why do you want to know? In my
experience there are other factors to consider when keeping blobs in the
database that are much more important than toast overhead...
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com

