I have some questions about the subject.
In the PostgreSQL FAQ:
http://www.postgresql.org/docs/faqs.FAQ.html#4.4
"...
4.4) What is the maximum size for a row, a table, and a database?
These are the limits:
    Maximum size for a database?    unlimited (32 TB databases exist)
    Maximum size for a table?    32 TB
    Maximum size for a row?    1.6 TB
    Maximum size for a field?    1 GB
    Maximum number of rows in a table?    unlimited
    Maximum number of columns in a table?    250-1600 depending on column types
    Maximum number of indexes on a table?    unlimited
..."
1) How is the maximum size of a table computed?
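(My own guess is that the 32 TB figure is simply the default 8 kB block size multiplied by the 32-bit block numbers used to address a table, i.e. 8192 B * 2^32 = 32 TB, but I would like to have that confirmed.)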
In the manual:
http://www.postgresql.org/docs/8.0/interactive/sql-createtable.html
"...
A table cannot have more than 1600 columns. (In practice, the effective
limit is lower because of tuple-length constraints.)
..."
2) What is the tuple-length constraint? (And what is the maximum tuple length?)
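To make the question concrete, here is the kind of test I have in mind (a hedged sketch, assuming a server new enough for DO blocks and a scratch database; the table name wide_demo and the 1100-column figure are only for illustration):

    -- 1100 non-TOASTable float8 columns stay well under the 1600-column
    -- limit, but 1100 * 8 bytes of data cannot fit in a default 8 kB heap
    -- page, so filling every column is expected to fail with
    -- "row is too big", and the whole block then rolls back.
    DO $$
    DECLARE
      cols text;
      vals text;
    BEGIN
      SELECT string_agg('c' || i::text || ' float8', ', ' ORDER BY i),
             string_agg('1', ', ')
        INTO cols, vals
        FROM generate_series(1, 1100) AS i;
      EXECUTE 'CREATE TABLE wide_demo (' || cols || ')';
      EXECUTE 'INSERT INTO wide_demo VALUES (' || vals || ')';
    END
    $$;

Is this the constraint the manual refers to?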
3) These are theoretical limits, but is there any literature (articles,
examples, case studies) that addresses the practical performance
degradation that comes from using large tables with large amounts of data?
(e.g., a suggested maximum table size, maximum number of rows, maximum tuple size...)
Thanks in advance for any answers, and best regards.