Performance impact of record sizes
From | John Moore |
---|---|
Subject | Performance impact of record sizes |
Date | |
Msg-id | 5.1.1.6.2.20020704111915.04499de8@pop3.norton.antivirus |
List | pgsql-admin |
We need to store text data that is typically just a hundred or so bytes, but in some cases may extend to a few thousand. Our current field is a varchar(1024), which is not large enough. The key data in the same record is fixed-size and much smaller.

Our application is primarily transaction oriented, which means that records will normally be fetched via random access, not sequential scans.

The question is: what size thresholds exist? I assume that there is a "page" size beyond which the record will be split into more than one. What is that size, and does the spill cost any more or less than if I had split the record into two or more individual records to handle the same data?

Obviously, the easiest thing for me to do is just set the varchar to something big (say, 10K), but I don't want to do that without understanding the OLTP performance impact.

Thanks in advance

John Moore
http://www.tinyvital.com/personal.html

UNITED WE STAND
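For concreteness, here is a minimal sketch of the two layouts being weighed; all table and column names are hypothetical, and only the sizes come from the question above:

```sql
-- Alternative A (hypothetical names): keep everything in one row,
-- with a single wide column for the variable-size text.
-- A plain "text" column avoids picking an arbitrary cap like varchar(10240).
CREATE TABLE record_a (
    key_col integer PRIMARY KEY,  -- fixed-size, small key data
    data    text                  -- ~100 bytes typical, a few KB worst case
);

-- Alternative B: split the text across two or more individual records
-- so that no single row outgrows a page.
CREATE TABLE record_b (
    key_col integer,
    part_no integer,              -- ordering of the pieces
    data    varchar(1024),        -- the existing field size, per piece
    PRIMARY KEY (key_col, part_no)
);
```

Reassembling the text under Alternative B costs an extra lookup or join on every random-access fetch, which is exactly the trade-off against the page-spill cost that the question asks about.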