Discussion: Overriding default psql behavior | how to ignore missing fields


Overriding default psql behavior | how to ignore missing fields

From:
Håvard Wahl Kongsgård
Date:
Hi,
I am trying to insert new records from multiple SQL dumps into an existing table. My problem is that the database table does not have some of the columns used in the SQL dumps, so when I try to import the dumps psql fails with: "ERROR:  current transaction is aborted, commands ignored until end of transaction block"

As there are thousands of files with multiple missing fields, manually adding every field to the table is not an option.

Is it possible to override the default psql behavior, so that the SQL session simply ignores any missing fields?

--
Håvard Wahl Kongsgård

http://havard.security-review.net/

Re: Overriding default psql behavior | how to ignore missing fields

From:
Vick Khera
Date:
2010/12/31 Håvard Wahl Kongsgård <haavard.kongsgaard@gmail.com>:
> Is it possible to override the default psql behavior, so that the SQL
> session simply ignores any missing fields?
>

Do you still have the original database?  Re-run your table exports
without the unneeded columns using COPY, then import those outputs
instead.

Re: Overriding default psql behavior | how to ignore missing fields

From:
Håvard Wahl Kongsgård
Date:
Well, I created the SQL files from multiple shapefiles, using shp2pgsql (PostGIS 1.5) to generate the SQL dumps.


On Fri, Dec 31, 2010 at 5:00 PM, Vick Khera <vivek@khera.org> wrote:
2010/12/31 Håvard Wahl Kongsgård <haavard.kongsgaard@gmail.com>:
> Is it possible to override the default psql behavior, so that the SQL
> session simply ignores any missing fields?
>

Do you still have the original database?  Re-run your table exports
without the unneeded columns using COPY, then import those outputs
instead.




--
Håvard Wahl Kongsgård
Peace Research Institute Oslo (PRIO)

http://havard.security-review.net/

Re: Overriding default psql behavior | how to ignore missing fields

From:
Jasen Betts
Date:
On 2010-12-31, Håvard Wahl Kongsgård <haavard.kongsgaard@gmail.com> wrote:
>
> Hi,
> I am trying to insert new records from multiple SQL dumps into an existing
> table. My problem is that the database table does not have some of the
> columns used in the sql dumps. So when I try to import the dumps psql fails
> with: "ERROR:  current transaction is aborted, commands ignored until end of
> transaction block"
>
> As there are thousands of files with multiple missing fields, manually
> adding every field to the table is not an option.

In that case you'll need to do that automatically (write some software),
or find some other way to load the data.
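One way to "write some software" for this is to rewrite each INSERT statement before it reaches psql, dropping the columns the target table lacks. The sketch below is illustrative only (the helper name and the sample table are invented); it assumes each shp2pgsql INSERT carries an explicit column list and that no value contains a top-level comma, which holds for shp2pgsql output since each geometry is a single hex literal:

```python
import re

def strip_columns(insert_sql, keep):
    """Drop columns not in `keep` from a single-row INSERT statement.

    Assumes an explicit column list and that no value contains a
    top-level comma (true for shp2pgsql output, where each geometry
    is one hex literal); anything unrecognized passes through untouched.
    """
    m = re.match(r"(INSERT INTO \S+) \(([^)]*)\) VALUES \((.*)\);", insert_sql)
    if not m:
        return insert_sql
    head, cols, vals = m.groups()
    pairs = [(c.strip(), v.strip())
             for c, v in zip(cols.split(","), vals.split(","))]
    kept = [(c, v) for c, v in pairs if c.strip('"') in keep]
    return "%s (%s) VALUES (%s);" % (head,
                                     ", ".join(c for c, _ in kept),
                                     ", ".join(v for _, v in kept))

line = 'INSERT INTO roads ("id", "name", "width") VALUES (1, \'Main\', 7);'
print(strip_columns(line, {"id", "name"}))
# -> INSERT INTO roads ("id", "name") VALUES (1, 'Main');
```

Piping each dump file through a filter like this, then into psql, avoids touching the target table at all; values containing commas inside quoted strings would need a real SQL tokenizer instead.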

> Is it possible to override the default psql behavior, so that the SQL
> session simply ignores any missing fields?

no.


Perhaps load the tables into a temporary database and then copy the
columns you want, or drop the columns you don't and dump the whole
thing...

--
⚂⚃ 100% natural

Re: Overriding default psql behavior | how to ignore missing fields

From:
Adrian Klaver
Date:
On Friday 31 December 2010 9:06:19 am Håvard Wahl Kongsgård wrote:
> Well, I created the SQL files from multiple shapefiles, using shp2pgsql
> (PostGIS 1.5) to generate the SQL dumps.
>

And the shp2pgsql docs say:

"Appends data from the Shape file into the database table. Note that to use this
option to load multiple files, the files must have the same attributes and same
data types. "

The problem is not just adding/removing the field(s) from the target table but
also maintaining field order: the INSERTs assume a set order and number of
fields. My guess is the answer lies in using the -c option to shp2pgsql and
having it create a table for each file in the database. You could then move
the fields you need to the target table. Another route would be to use the -D
option, which creates a PostgreSQL dump format for the data, i.e. COPY; see
below for an example:

COPY table2 (fulldate, id, meanvalue) FROM stdin;
2009-01-01 00:00:00 1 12.3
2009-01-01 01:00:00 1 11.8
2009-01-01 02:00:00 1 82.099998
2009-01-01 03:00:00 1 79.800003
2009-01-01 04:00:00 1 77.199997
2009-01-01 05:00:00 1 13.1

You could probably use that and a script to feed the data to your target.
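A sketch of such a feeder script (the function name is hypothetical): it assumes tab-separated COPY data as produced by shp2pgsql -D, with the column list on the dump's header line, and reorders/drops fields to match the target:

```python
import re

def filter_copy_dump(lines, keep):
    """Reduce a `COPY ... FROM stdin` block to the columns in `keep`.

    Assumes tab-separated data (shp2pgsql -D style) with the column
    list on the header line; `keep` lists the columns in the order
    the target table expects them.
    """
    header = re.match(r"COPY (\S+) \(([^)]*)\) FROM stdin;", lines[0])
    cols = [c.strip() for c in header.group(2).split(",")]
    idx = [cols.index(c) for c in keep]
    out = ["COPY %s (%s) FROM stdin;" % (header.group(1), ", ".join(keep))]
    for line in lines[1:]:
        if line == "\\.":          # end-of-data marker
            out.append(line)
            break
        fields = line.split("\t")
        out.append("\t".join(fields[i] for i in idx))
    return out

dump = ["COPY table2 (fulldate, id, meanvalue) FROM stdin;",
        "2009-01-01 00:00:00\t1\t12.3",
        "\\."]
for row in filter_copy_dump(dump, ["fulldate", "meanvalue"]):
    print(row)
```

Run over each dump file and piped into psql, this would feed only the surviving columns to the target table; COPY data also escapes embedded tabs and newlines, which a production version would have to honor.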



--
Adrian Klaver
adrian.klaver@gmail.com