Re: Performance impact of record sizes

From: Bruce Momjian
Subject: Re: Performance impact of record sizes
Date:
Msg-id: 200207041947.g64JlSM04801@candle.pha.pa.us
In reply to: Performance impact of record sizes  (John Moore <postgres@tinyvital.com>)
Responses: Re: Performance impact of record sizes
List: pgsql-admin
John Moore wrote:
> We have a need to store text data which typically is just a hundred or so
> bytes, but in some cases may extend to a few thousand. Our current field
> has a varchar(1024), which is not large enough. Key data is fixed-size
> and much smaller in this same record.
>
> Our application is primarily transaction oriented, which means that records
> will normally be fetched via random access, not sequential scans.
>
> The question is: what size thresholds exist? I assume that there is a
> "page" size over which the record will be split into more than one. What is
> that size, and does the spill cost any more or less than if I had split the
> record into two or more individual records to handle the same data?
>
> Obviously, the easiest thing for me to do is just set the varchar to
> something big (say - 10K) but I don't want to do this without understanding
> the OLTP performance impact.
>

If you don't want a limit, use TEXT.  Long values are automatically
stored in TOAST tables to avoid performance problems with sequential
scans over long row values.
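
For illustration, a minimal sketch of a table using TEXT instead of a
length-limited varchar (the table and column names here are hypothetical,
not from the original post):

    -- Hypothetical schema: small fixed-size key columns plus a TEXT
    -- column with no declared length limit.  Values too large to fit
    -- comfortably in a page are compressed and/or moved out of line
    -- into a TOAST table automatically.
    CREATE TABLE notes (
        id      integer PRIMARY KEY,
        body    text
    );

As the reply notes, long values in such a column are stored out of line,
so ordinary scans of the main table do not have to read them.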

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


