Re: Performance impact of record sizes

From: John Moore
Subject: Re: Performance impact of record sizes
Date:
Msg-id: 5.1.1.6.2.20020704131448.02288fa8@pop3.norton.antivirus
In reply to: Re: Performance impact of record sizes  (Bruce Momjian <pgman@candle.pha.pa.us>)
Replies: Re: Performance impact of record sizes
         Re: Performance impact of record sizes
List: pgsql-admin
>If you don't want a limit, use TEXT.  Long values are automatically
>stored in TOAST tables to avoid performance problems with sequential
>scans over long row values.


Thanks...

I wasn't quite clear enough in my question... I am focused on OLTP
performance, and in my case the vast majority of the rows will have only a
few hundred bytes in that column, while a few (1%?) may be larger.

I assume that internally there is a fixed page size (by which I mean cache
buffer size or disk read size) for normal records. In my case, 99% of the
physical rows should be short and would fit easily in whatever that size
is. So I *suspect* I want to keep the data in the physical row, rather than
using TEXT and having it stored separately from the record. The question
is: are there any unexpected consequences? For example, if I have a whole
bunch of rows with, say, a 10K varchar field which is only populated with
one or two hundred bytes each, will it perform just as well as if that
field were defined as a 200-byte field?
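
To make that concrete, here is a rough sketch of the alternatives I am
weighing (the table and column names are made up for illustration):

    -- Option A: generous varchar limit; 99% of rows would hold
    -- only one or two hundred bytes in "notes"
    CREATE TABLE orders_a (
        id    integer PRIMARY KEY,
        notes varchar(10000)
    );

    -- Option B: a limit sized to the common case
    CREATE TABLE orders_b (
        id    integer PRIMARY KEY,
        notes varchar(200)
    );

    -- Option C: the TEXT approach suggested above, where long
    -- values are moved out of line automatically
    CREATE TABLE orders_c (
        id    integer PRIMARY KEY,
        notes text
    );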

A related question: is it more expensive to use varchar than fixed char
fields? I assume there is some additional work in physically unpacking the
record.
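
Again as a sketch (hypothetical names), the two definitions I would be
choosing between:

    -- Fixed-width: blank-padded to the full declared length
    CREATE TABLE t_fixed (
        code char(200)
    );

    -- Variable-width: stores only the bytes actually present
    CREATE TABLE t_var (
        code varchar(200)
    );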

My past experience is with Informix, much of it with very old versions
where views were costly, and so were varchars. Likewise, you didn't want
your physical row to exceed the size of a physical page if you could avoid
it.

John




