Discussion: MS SQL Server <-> PostgreSQL
Apologies for my poor English... and for cross-posting.
I plan to extract/copy about 120,000 rows from an MS SQL 7 server database
(on an NT machine) to a PostgreSQL database on a Linux machine (Debian woody).
Unfortunately, after so many tests and so many builds with so many different
options, I only get a copy rate of about 1,200 rows/minute ;~{
And worse, I plan to do the same thing with a 3,000,000-row database.
Any ideas?
Best regards
On 21 Nov 2002 at 13:35, Francois Sauterey wrote:
> I plan to extract/copy about 120,000 rows from an MS SQL 7 server database
> (on an NT machine) to a PostgreSQL database on a Linux machine (Debian woody).
> Unfortunately, after so many tests and so many builds with so many different
> options, I only get a copy rate of about 1,200 rows/minute ;~{
> And worse, I plan to do the same thing with a 3,000,000-row database.
> Any ideas?
Yes. Do the copying inside a transaction: put something like 10,000 rows in one
transaction and then commit. See what difference that makes to the speed.
If possible, use COPY. That's fastest; it inserts all of the rows in one
transaction.
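(A minimal sketch of the batching idea, assuming a hypothetical target table t1(id int, val text); the rows shown are placeholders, the point is simply to avoid paying one implicit transaction per INSERT:)
BEGIN;
INSERT INTO t1 VALUES (1, 'first row');
INSERT INTO t1 VALUES (2, 'second row');
-- ... roughly 10,000 rows per batch ...
COMMIT;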
HTH
Bye
Shridhar
--
Battle, n.: A method of untying with the teeth a political knot that will not
yield to the tongue. -- Ambrose Bierce
Why is COPY faster? You can do everything in one transaction by turning off
autocommit...
On Thu, 2002-11-21 at 10:46, Shridhar Daithankar wrote:
> On 21 Nov 2002 at 13:35, Francois Sauterey wrote:
>
> > I plan to extract/copy about 120,000 rows from an MS SQL 7 server database
> > (on an NT machine) to a PostgreSQL database on a Linux machine (Debian woody).
> > Unfortunately, after so many tests and so many builds with so many different
> > options, I only get a copy rate of about 1,200 rows/minute ;~{
> > And worse, I plan to do the same thing with a 3,000,000-row database.
> > Any ideas?
>
> Yes. Do the copying inside a transaction: put something like 10,000 rows in one
> transaction and then commit. See what difference that makes to the speed.
>
> If possible, use COPY. That's fastest; it inserts all of the rows in one
> transaction.
>
> HTH
>
> Bye
> Shridhar
>
> --
> Battle, n.: A method of untying with the teeth a political knot that will not
> yield to the tongue. -- Ambrose Bierce
>
>
--
Felipe Schnack
Analista de Sistemas
felipes@ritterdosreis.br
Cel.: (51)91287530
Linux Counter #281893
Faculdade Ritter dos Reis
www.ritterdosreis.br
felipes@ritterdosreis.br
Fone/Fax.: (51)32303328
On 21 Nov 2002 at 10:49, Felipe Schnack wrote:
> Why is COPY faster? You can do everything in one transaction by turning off
> autocommit...
Remind me, when was autocommit introduced? PostgreSQL 7.2.x still does
everything in a separate transaction unless one is opened explicitly..
And AFAIK, 7.3 is not yet out. I won't assume that to be a widespread version
now..
Correct me if I am wrong..
Bye
Shridhar
--
There comes to all races an ultimate crisis which you have yet to face.... One
day our minds became so powerful we dared think of ourselves as gods. --
Sargon, "Return to Tomorrow", stardate 4768.3
Felipe Schnack <felipes@ritterdosreis.br> writes:
> Why is COPY faster? You can do everything in one transaction by turning off
> autocommit...
COPY will still be a little faster because each INSERT statement has to go
through the SQL parser and query planner. The parsing that COPY does is a lot
simpler.
-Doug
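(A minimal illustration of that difference, again assuming a hypothetical table t1(id int, val text): each INSERT below is parsed and planned as its own statement, while the single COPY reads all rows through a much simpler per-line parser:)
INSERT INTO t1 VALUES (1, 'a');
INSERT INTO t1 VALUES (2, 'b');
-- versus one statement; rows are tab-separated and terminated by \.
COPY t1 FROM stdin;
1	a
2	b
\.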
7.3 is in RC. Autocommit is available in 7.2, I use it in JDBC...
On Thu, 2002-11-21 at 10:59, Shridhar Daithankar wrote:
> On 21 Nov 2002 at 10:49, Felipe Schnack wrote:
>
> > Why is COPY faster? You can do everything in one transaction by turning off
> > autocommit...
>
> Remind me, when was autocommit introduced? PostgreSQL 7.2.x still does
> everything in a separate transaction unless one is opened explicitly..
>
> And AFAIK, 7.3 is not yet out. I won't assume that to be a widespread version
> now..
>
> Correct me if I am wrong..
>
> Bye
> Shridhar
>
> --
> There comes to all races an ultimate crisis which you have yet to face.... One
> day our minds became so powerful we dared think of ourselves as gods. --
> Sargon, "Return to Tomorrow", stardate 4768.3
--
Felipe Schnack
Analista de Sistemas
felipes@ritterdosreis.br
Cel.: (51)91287530
Linux Counter #281893
Faculdade Ritter dos Reis
www.ritterdosreis.br
felipes@ritterdosreis.br
Fone/Fax.: (51)32303328
Just an idea: we recently migrated a larger database from BTrieve to Oracle,
through intermediate text files. Can't you do something similar? Exporting on
one side and importing on the other. That worked better for us than copying on
a record-by-record basis.
Regards,
Hegyvari Krisztian
>>> Francois Sauterey <fs@snes.edu> 11/21/02 01:35pm >>>
Apologies for my poor English... and for cross-posting.
I plan to extract/copy about 120,000 rows from an MS SQL 7 server database
(on an NT machine) to a PostgreSQL database on a Linux machine (Debian woody).
Unfortunately, after so many tests and so many builds with so many different
options, I only get a copy rate of about 1,200 rows/minute ;~{
And worse, I plan to do the same thing with a 3,000,000-row database.
Any ideas?
Best regards
I think that's indeed a good solution...
Here are some ideas to help you.
First, use bcp to extract the data from MS SQL Server:
bcp table1 out c:\_tmp\table1.txt -c -k -S. -U sa -P
The -k option is important to be able to get the NULL values back....
Then copy this file to the Linux machine and apply the following Tcl script.
tablelist.txt contains the path to your table1.txt file, plus further files
(one per line) for a multi-table migration....
tclconvert.txt
******************************************************
# read the list of data files to convert, one path per line
set file [open tablelist.txt r]
set flist [read -nonewline $file]
close $file
set flist [split $flist \n]
foreach f $flist {
    set file [open $f r]
    set data [read -nonewline $file]
    close $file
    # bcp -c -k writes NULLs as empty fields; mark them so COPY can recognise them
    regsub -all "\t\t" $data "\t_NULL_\t" data
    regsub -all "\t\n" $data "\t_NULL_\n" data
    # escape backslashes, which are special to COPY
    regsub -all {\\} $data {\\\\} data
    # write the converted data back over the original file
    set file [open $f w]
    puts $file $data
    close $file
}
*******************************************************
Run the command:
tclsh tclconvert.txt
Then import the resulting file into PostgreSQL using the COPY command:
COPY table1 FROM 'table1.txt' WITH NULL AS '_NULL_';
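(Note that COPY ... FROM 'file' reads the file on the server side, so it generally needs an absolute path readable by the backend and superuser rights. If that is inconvenient, psql's client-side \copy should do the same job; a sketch assuming the same table name and NULL marker:)
\copy table1 from 'table1.txt' with null as '_NULL_'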
If you are interested, I will send you my complete migration scripts...
You will have to adapt them, but they will give you an idea of how to do it.
-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org] On Behalf Of Hegyvari
Krisztian
Sent: Thursday, November 21, 2002 2:19 PM
To: fs@snes.edu
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] MS-server <-> Ppostgresql
Just an idea: we recently migrated a larger database from BTrieve to Oracle,
through intermediate text files. Can't you do something similar? Exporting
on one side and importing on the other. That worked better for us than
copying on a record-by-record basis.
Regards,
Hegyvari Krisztian
>>> Francois Sauterey <fs@snes.edu> 11/21/02 01:35pm >>>
Apologies for my poor English... and for cross-posting.
I plan to extract/copy about 120,000 rows from an MS SQL 7 server database
(on an NT machine) to a PostgreSQL database on a Linux machine (Debian woody).
Unfortunately, after so many tests and so many builds with so many different
options, I only get a copy rate of about 1,200 rows/minute ;~{
And worse, I plan to do the same thing with a 3,000,000-row database.
Any ideas?
Best regards
Francois Sauterey wrote:
> Apologies for my poor English... and for cross-posting.
>
> I plan to extract/copy about 120,000 rows from an MS SQL 7 server
> database (on an NT machine) to a PostgreSQL database on a Linux machine
> (Debian woody).
>
> Unfortunately, after so many tests and so many builds with so many
> different options, I only get a copy rate of about 1,200 rows/minute ;~{
>
> And worse, I plan to do the same thing with a 3,000,000-row
> database.
>
> Any ideas?
> Best regards
The fastest way is:
i) write the table data from MS SQL to a file
ii) use COPY to load the data into PostgreSQL from the file
If you do this, then 3x10^6 rows should be a < 7 minute operation!
regards
Mark