Hello,
On Wed, 2017-01-04 at 07:11 -0800, Adrian Klaver wrote:
> On 01/04/2017 06:54 AM, Pavel Stehule wrote:
> > Hi
> >
> > 2017-01-04 14:00 GMT+01:00 vod vos <vodvos@zoho.com>:
> >
> > Now I am confused: I can create 1100 columns in a table in
> > PostgreSQL, but I can't copy 1100 values into the table. And I
> > really don't want to split the CSV file into pieces, to avoid
> > mistakes after doing so.
> >
> >
> > The PostgreSQL limit is "Maximum Columns per Table: 250 - 1600,
> > depending on column types" - this limit comes from having to fit
> > the values, or pointers to the values, onto one page (8 KB).
> >
> > You can hit this limit not at CREATE TABLE time, but at INSERT
> > time.
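
To make that distinction concrete, here is a small, hypothetical sketch
(the table name wide_demo and the 10-character values are made up; exact
sizes will vary): the CREATE TABLE succeeds, but the INSERT fails once
the formed row exceeds roughly 8160 bytes, since values this small
cannot be pushed out of line by TOAST.

    DO $$
    DECLARE
        cols text;
        vals text;
    BEGIN
        -- build "c1 varchar, c2 varchar, ..., c1100 varchar"
        SELECT string_agg('c' || i || ' varchar', ', ') INTO cols
          FROM generate_series(1, 1100) AS i;
        EXECUTE 'CREATE TABLE wide_demo (' || cols || ')';  -- succeeds

        -- 1100 ten-character values: each carries at least a one-byte
        -- varlena header, so the tuple ends up well past 8160 bytes
        SELECT string_agg(quote_literal(repeat('x', 10)), ', ') INTO vals
          FROM generate_series(1, 1100) AS i;
        EXECUTE 'INSERT INTO wide_demo VALUES (' || vals || ')';
        -- ERROR:  row is too big: size ..., maximum size 8160
    END $$;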
> >
> >
> >
> > I created a table with 1100 columns, all of type varchar, hoping
> > the COPY command would automatically load the CSV data, which
> > contains some character and date values but is mostly numeric.
> >
> >
> > Numeric is an expensive type - try to use float instead, or maybe
> > double precision.
>
> If I am following the OP correctly, the table itself has all the
> columns declared as varchar. The data in the CSV file is a mix of
> text, date and numeric, presumably cast to text on entry into the
> table.
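
As a rough illustration of how the column type affects how many bytes
each value contributes to the row (the exact numbers depend on the
value and the build), one can compare:

    SELECT pg_column_size(123456.789::float8)    AS float8_bytes,  -- always 8
           pg_column_size(123456.789::numeric)   AS numeric_bytes, -- grows with the digits
           pg_column_size('123456.789'::varchar) AS varchar_bytes; -- varlena header plus the text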
>
> >
> > Regards
> >
> > Pavel
> >
> >
> > I used the command: COPY rius FROM '/var/www/test/test.csv' WITH
> > DELIMITER ';';
> >
> > Then it shows:
> >
> > ERROR: row is too big: size 11808, maximum size 8160
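
For what it is worth, those numbers are consistent with the page-size
explanation above: 8160 is an 8192-byte page minus its fixed header,
and 11808 is the size of the single heap tuple built from the 1100
values - roughly 10-11 bytes per column on average once each value's
own header and the per-row overhead are counted. Because the
individual values are small, TOAST cannot move enough of them out of
line to shrink the row below the limit.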
> >
> >
Assuming this is a brand-new database instance and not an existing
application, could the OP not compile from source and specify
--with-blocksize=16 (the value is in kilobytes) so as to overcome the
8 kB default page size limit?
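If that route is taken, the build step would look roughly like this
(note that a data directory initialized under the default 8 kB block
size cannot be opened by a server built with a different one):

    ./configure --with-blocksize=16
    make && make install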
My 2 cents.
Rob