Thread: 1 million data insertion


1 million data insertion

From:
Elielson Fontanezi
Date:
Hi fellows!

    I have tried to insert 1,000,000 records into the following table:

--------------------------------------------------
zakal=# \d teste;
 codigo | bigint                 | not null
 nome   | character varying(100) |
--------------------------------------------------

    and I got these errors:

--------------------------------------------------
zakal$ psql -c "copy teste from 'teste.dat' using delimeters '|'"
ERROR:  parser: parse error at or near "delimeters"
ERROR:  parser: parse error at or near "delimeters"
zakal$ psql -c "copy teste from 'teste.dat' using delimiters '|'"
ERROR:  COPY command, running in backend with effective uid 504, could not open file 'teste.dat' for reading.  Errno = No such file or directory (2).
ERROR:  COPY command, running in backend with effective uid 504, could not open file 'teste.dat' for reading.  Errno = No such file or directory (2).
zakal$ pwd
/home/zakal/tmp
zakal$ psql -c "copy teste from '`pwd`/teste.dat' using delimiters '|'"
DEBUG:  copy: line 27536, XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  copy: line 93146, XLogWrite: new log file created - consider increasing WAL_FILES
DEBUG:  recycled transaction log file 0000000000000000


ERROR:  copy: line 164723, Bad int8 external representation "16722"
ERROR:  copy: line 164723, Bad int8 external representation "16722"
zakal$
zakal$
zakal$ DEBUG:  recycled transaction log file 0000000000000001
----------------------------------------------------------------------

the log has overflowed.

    Ok, this was a test. I'd like to know what would happen.
    But, from you, great Postgres DBAs, what is the best way to
insert a large amount of data?
    Is there a way to turn off the log?
    Is there a way to commit every 100 records?

regards,

..............................................
A Question...
Since before your sun burned hot in space and before your race was born, I
have awaited a question.

Elielson Fontanezi
DBA Technical Support - PRODAM
+55 11 5080 9493



Re: 1 million data insertion

From:
Andrew Sullivan
Date:
On Fri, Jul 26, 2002 at 03:34:12PM -0300, Elielson Fontanezi wrote:
>
> the log has overflowed.

No, new WAL files were created.  You can increase the number of
preallocated WAL files so that you don't get these messages.  See the
administrator docs.
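
(A minimal postgresql.conf sketch of that tuning, assuming the wal_files setting that the "consider increasing WAL_FILES" hint refers to is available in this release; the values are illustrative assumptions and a server restart is needed for them to take effect.)

--------------------------------------------------
# postgresql.conf -- sketch, values are illustrative assumptions
wal_files = 8              # preallocate more WAL files (the WAL_FILES named in the hint)
checkpoint_segments = 8    # related: allow more WAL segments between checkpoints
--------------------------------------------------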

>     But, from you, great Postgres DBAs, what is the best way to
> insert a large amount of data?

With COPY, just as you did.

>     Is there a way to turn off the log?

No.

>     Is there a way to commit every 100 records?

Sure.  Use Perl or something to copy and commit every so many records.
A

----
Andrew Sullivan                               87 Mowat Avenue
Liberty RMS                           Toronto, Ontario Canada
<andrew@libertyrms.info>                              M6K 3E3
                                         +1 416 646 3304 x110
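
A minimal Perl sketch of the batch-commit approach Andrew describes, assuming DBI with the Pg driver; the table definition and the '|'-delimited teste.dat come from the thread, while the database name, connection details, and batch size of 100 are assumptions:

--------------------------------------------------
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connection details are assumptions; adjust for your environment.
my $dbh = DBI->connect("dbi:Pg:dbname=zakal", "zakal", "",
                       { AutoCommit => 0, RaiseError => 1 });

# Insert row by row and commit every $batch rows instead of one huge transaction.
my $sth   = $dbh->prepare("INSERT INTO teste (codigo, nome) VALUES (?, ?)");
my $batch = 100;
my $count = 0;

open my $fh, '<', 'teste.dat' or die "cannot open teste.dat: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($codigo, $nome) = split /\|/, $line, 2;   # '|'-delimited, as in the COPY
    $sth->execute($codigo, $nome);
    $dbh->commit if ++$count % $batch == 0;
}
close $fh;

$dbh->commit;        # commit any remaining rows
$dbh->disconnect;
--------------------------------------------------

Note that row-by-row INSERTs are much slower than a single COPY, so this is only worth doing if the intermediate commits really matter to you.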


Re: 1 million data insertion

From:
Chris Albertson
Date:
>
>     Ok, this was a test. I'd like to know what would happen.
>     But, from you, great Postgres DBAs, what is the best way to
> insert a large amount of data?
>     Is there a way to turn off the log?
>     Is there a way to commit every 100 records?

Yes, "COPY" actually does an append.  So just do what you
do now 10,000 times for 100 records.  It's a bit safer.
I've done 1M recod COPYsmany times on a low-end PC, no trouble
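
A sketch of that chunked approach in Perl, assuming teste.dat and the '|' delimiter from the thread; the chunk size, the temporary file location, and driving the load through plain psql are assumptions:

--------------------------------------------------
#!/usr/bin/perl
use strict;
use warnings;

my $chunk_size = 100;     # rows per COPY; each psql run commits on its own
my @chunk;
my $n = 0;

open my $in, '<', 'teste.dat' or die "cannot open teste.dat: $!";
while (my $line = <$in>) {
    push @chunk, $line;
    if (@chunk == $chunk_size) {
        load_chunk(\@chunk, ++$n);
        @chunk = ();
    }
}
close $in;
load_chunk(\@chunk, ++$n) if @chunk;    # load the final partial chunk

sub load_chunk {
    my ($lines, $n) = @_;
    my $file = "/tmp/teste_chunk_$n.dat";    # must be readable by the backend
    open my $out, '>', $file or die "cannot write $file: $!";
    print {$out} @$lines;
    close $out;
    system("psql", "-c", "copy teste from '$file' using delimiters '|'") == 0
        or die "psql failed on chunk $n";
    unlink $file;
}
--------------------------------------------------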

Put the log files someplace with more room.  You should be able to
run for a month without worrying about the log files overfilling.

Logging is likely controlled from the startup script, maybe in
/etc/rc.d; the details depend on your OS.


=====
Chris Albertson
  Home:   310-376-1029  chrisalbertson90278@yahoo.com
  Cell:   310-990-7550
  Office: 310-336-5189  Christopher.J.Albertson@aero.org
