Andrew Ayers <aayers@eldocomp.com> writes:
> I had in my Access DB several tables which utilized Memo-type fields to
> store data in a "multi-value" delimited format (will never do that
> again). I tried to first convert them to TEXT type fields on the
> PostgreSQL tables. These didn't work right (more on that later), so I
> converted them to large VARCHAR fields (i.e., VARCHAR(100000), and
> larger in some cases).
> I noticed when doing selects (via the ODBC driver) that any accesses to
> these fields caused MASSIVE slowdowns on the select - whether I was
> selecting for them, or if the field was part of the WHERE clause of the
> SQL statement. I found that if I decreased the size of the field, the
> speed would increase.
I believe what you're reporting here is problems on the Access side, not
problems in the underlying database. (That doesn't make them any less
of a real problem if you're using Access, of course.) Access doesn't
work very well with datatypes that aren't found in MS SQL Server...
I think if you look in the pgsql-odbc list archives you will find some
discussion of workarounds for Access with TEXT fields.
As far as the original question goes: there is no reason within Postgres
to choose among these three types on performance grounds; you should
make the choice based on the semantics you want. Do you really want
every value blank-padded to exactly N characters? Use char(N). If you
don't want padding, but do want a specific upper limit on the field
width, use varchar(N). If you haven't got any specific upper limit in
mind (and if you're putting in numbers like 100000 then you don't ;-))
then use text. What performance differences exist come directly
from the cycles expended to add padding blanks, check that the width
limit is not exceeded, etc.
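To make the distinction concrete, here is a minimal Python sketch (not
anything from Postgres itself; the function names are made up for
illustration) of the three behaviors described above:

```python
def store_char_n(value: str, n: int) -> str:
    """char(N): reject overlength values, blank-pad the rest to exactly N."""
    if len(value) > n:
        raise ValueError(f"value too long for type character({n})")
    return value.ljust(n)          # padded to exactly N characters

def store_varchar_n(value: str, n: int) -> str:
    """varchar(N): reject overlength values, but store without padding."""
    if len(value) > n:
        raise ValueError(f"value too long for type character varying({n})")
    return value

def store_text(value: str) -> str:
    """text: no length check, no padding -- the value is stored verbatim."""
    return value
```

So `store_char_n("hi", 5)` yields `"hi   "` (length 5), while
`store_varchar_n("hi", 5)` yields just `"hi"`, and `store_text`
accepts any length at all. The per-value cost of the first two is
exactly the padding and limit-checking shown here.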
regards, tom lane