Re: [GENERAL] COPY: row is too big

From: vod vos
Subject: Re: [GENERAL] COPY: row is too big
Date:
Msg-id: 15969913dd3.ea2ff58529997.7460368287916683127@zoho.com
In reply to: Re: [GENERAL] COPY: row is too big  (John McKown <john.archie.mckown@gmail.com>)
Responses: Re: [GENERAL] COPY: row is too big
Re: [GENERAL] COPY: row is too big
List: pgsql-general
Now I am confused: I can create a table with 1100 columns in PostgreSQL, but I can't copy 1100 values into it. And I really don't want to split the CSV file into pieces, to avoid mistakes after doing so.

I created a table with 1100 columns, all of type varchar, hoping the COPY command would automatically convert the CSV data, which contains some character and date values but is mostly numeric.

I use the command: COPY rius FROM '/var/www/test/test.csv' WITH DELIMITER ';' ;

Then it shows:

ERROR:  row is too big: size 11808, maximum size 8160
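The error comes from PostgreSQL's storage layout: a heap row (minus any values large enough to be TOASTed out of line) must fit in a single 8 kB page, so 1100 short varchar fields can overflow the limit even though the table itself can be created. One possible workaround, not suggested anywhere in this thread and sketched here only as an assumption (the staging table name rius_raw is hypothetical), is to COPY each CSV line into a single text column and split it server-side:

```sql
-- Hypothetical staging table: one row per raw CSV line.
CREATE TABLE rius_raw (line text);

-- Default (text) format with tab delimiter, so each semicolon-separated
-- line lands intact in the single "line" column.
COPY rius_raw FROM '/var/www/test/test.csv';

-- Split a line into its 1100 fields for further processing.
SELECT string_to_array(line, ';') FROM rius_raw LIMIT 1;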







---- On Tuesday, 03 January 2017 05:24:18 -0800 John McKown <john.archie.mckown@gmail.com> wrote ----

On Mon, Jan 2, 2017 at 2:57 PM, Rob Sargent <robjsargent@gmail.com> wrote:
Perhaps this is your opportunity to correct someone else's mistake. You need to show the table definition to convince us that it cannot be improved. That it may be hard work really doesn't mean it's not the right path.

This may not be possible. The data might be coming in from an external source. I imagine you've run into the old "well, _we_ don't have any problems, so it must be on your end!" scenario.

Example: we receive CSV files from an external source. These files are _supposed_ to be validated. But we have often received files where NOT NULL fields have "nothing" in them. E.g. a customer bill which has _everything_ in it _except_ the customer number (or an invalid one such as "123{"), or which is missing some other vital piece of information.

In this particular case, the OP might want to do what we did in a similar case. We had far too many columns in a table, and performance was horrible. We did an analysis and, as usual, the majority of the SELECTs touched only a subset of the columns, about 15% of the total. We "split" the table into a "high use" columns table and a "low use" columns table, then used triggers to ensure that when a row was added to or deleted from one table, the corresponding row in the other was created or deleted.
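The split described above might look something like the following sketch. All table and column names are hypothetical, and for brevity the delete side is handled here by a foreign key with ON DELETE CASCADE rather than a second trigger:

```sql
-- "Hot" table: the ~15% of columns most SELECTs actually touch.
CREATE TABLE rius_hot (
    id   bigint PRIMARY KEY,
    col1 varchar,
    col2 varchar
);

-- "Cold" table: the rarely queried remainder. The FK keeps deletes in step.
CREATE TABLE rius_cold (
    id   bigint PRIMARY KEY REFERENCES rius_hot (id) ON DELETE CASCADE,
    col3 varchar,
    col4 varchar
);

-- On insert into the hot table, create the matching cold row.
CREATE FUNCTION rius_sync() RETURNS trigger AS $$
BEGIN
    INSERT INTO rius_cold (id) VALUES (NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER rius_hot_sync
    AFTER INSERT ON rius_hot
    FOR EACH ROW EXECUTE PROCEDURE rius_sync();
```

Queries that need both halves can join on id; the common case reads only rius_hot and so touches far fewer pages per row.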

 


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:



--
There’s no obfuscated Perl contest because it’s pointless.

—Jeff Polk

Maranatha! <><
John McKown

In the pgsql-general list, by sending date:

Previous
From: Andreas Kretschmer
Date:
Message: Re: [GENERAL] Performance degradation when using auto_explain
Next
From: Kevin Grittner
Date:
Message: Re: [GENERAL] Row value expression much faster than equivalent OR clauses