overhead of "small" large objects

From	Philip Crotwell
Subject	overhead of "small" large objects
Date
Msg-id	Pine.GSO.4.10.10012101404140.4870-100000@tigger.seis.sc.edu
Responses	Re: overhead of "small" large objects  (Tom Lane <tgl@sss.pgh.pa.us>)
List	pgsql-general
Hi

I'm putting lots of small (~10Kb) chunks of binary seismic data into large
objects in postgres 7.0.2. Basically just arrays of 2500 or so ints that
represent about a minute's worth of data. I put in the data at a rate of
about 1.5Mb per hour, but the disk usage of the database is growing at
about 6Mb per hour! A factor of 4 seems a bit excessive.
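[For reference, the kind of chunk described above can be sketched like this. This is a hypothetical illustration, not the poster's actual code; it assumes 4-byte little-endian ints and Python's standard struct module:]

```python
import struct

# ~2500 32-bit samples, roughly one minute of seismic data
samples = [0] * 2500                      # placeholder sample values
chunk = struct.pack(f"<{len(samples)}i", *samples)
print(len(chunk))                         # 10000 bytes of raw payload per chunk

# Observed growth: ~6 MB/hour on disk for ~1.5 MB/hour of payload
overhead_factor = 6.0 / 1.5               # the factor of 4 noted above
print(overhead_factor)
```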

Is there significant overhead involved in using large objects that aren't
very large?

What might I be doing wrong?

Is there a better way to store these chunks?

thanks,
Philip


