Re: Batch Inserts
From | jco@cornelius-olsen.dk
---|---
Subject | Re: Batch Inserts
Date |
Msg-id | OF82825A9E.E13F3CD8-ONC1256C8D.0004C8E7@dk
List | pgsql-general
Hi Doug,
The latter is the case: only one transaction is performed. Transactions cannot be nested, so when you use an explicit BEGIN ... COMMIT, autocommit does not apply to the individual statements inside it.
/Jørn Cornelius Olsen
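The behavior described above can be sketched as plain SQL (the table and column names here are hypothetical, chosen only for illustration):

```sql
-- With autocommit on, each bare INSERT is its own transaction.
-- Wrapped in an explicit BEGIN ... COMMIT, all of the inserts
-- commit (or roll back) together as a single transaction:
BEGIN WORK;
INSERT INTO items (id, name) VALUES (1, 'first');
INSERT INTO items (id, name) VALUES (2, 'second');
INSERT INTO items (id, name) VALUES (3, 'third');
COMMIT;
```

Batching inserts this way avoids paying the per-transaction commit overhead once per row, which is the speedup Ricardo's advice is aiming at.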
Doug Fields <dfields-pg-general@pexicom.com>, sent by pgsql-general-owner@postgresql.org, 12-12-2002 00:03
To: "Ricardo Ryoiti S. Junior" <suga@netbsd.com.br>
cc: pgsql-general@postgresql.org, pgsql-jdbc@postgresql.org
Subject: Re: [GENERAL] Batch Inserts
Hi Ricardo, list,
One quick question:
> - If your "data importing" is done via inserts, make sure that the
>batch uses transactions for each 200 inserts (at least or so). If you
>don't, each insert will be its own transaction, which will slow you down.
I use JDBC with the default "AUTOCOMMIT ON" setting.
Does doing a statement, in one JDBC execution, of the form:
BEGIN WORK; INSERT ... ; INSERT ... ; INSERT ...; COMMIT;
count as N individual inserts (due to the autocommit setting), or does the
surrounding BEGIN WORK; ... COMMIT; override that setting?
Thanks,
Doug
---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/users-lounge/docs/faq.html