Discussion: Is Oracle really so much faster
Hi,

today I did my first test with Oracle instead of PG (I had to, really; the boss
wanted it :). I have some Perl scripts using DBI that insert a large amount of
data, reduce it and then create some report. When I insert I do something like
$dbh->{'AutoCommit'} = 0 and every 100,000 entries or so I do a commit. I use
DBI's prepared statements, although if I remember the manpage correctly that
does not have too much impact on PG?

However, the result was (running on 64-bit Sol 7, E250/1 Gig RAM) that Oracle
was roughly twice as fast. Is this real, or might there be stuff I could do to
improve Postgres performance?

Konstantin

--
Dipl-Inf. Konstantin Agouros aka Elwood Blues. Internet: elwood@agouros.de
Otkerstr. 28, 81547 Muenchen, Germany. Tel +49 89 69370185
----------------------------------------------------------------------------
"Captain, this ship will not sustain the forming of the cosmos." B'Elana Torres
> However the result was (running on 64bit Sol 7, E250/1Gig
> RAM) that Oracle was roughly twice as fast. Is this real
> or might there be stuff I could do, to improve postgres performance?

PostgreSQL version? You didn't use -F, did you?

Vadim
Hi,

Konstantinos Agouros wrote:
>
> When I insert I do something like $dbh->{'AutoCommit'} = 0 and every 100,000
> entries or so I do a commit.

Try making the commit every 500 or 1000 inserts. It seems to me that PG slows
down if there are lots of MBs waiting for commit... And also try to do a lot
of inserts in one statement (50 or 100 is OK); this reduces some overhead.

> I use DBI's prepared statements, although if I remember
> the manpage correctly that does not have too much impact on PG?

At least it reduces the Perl time. I tested it some time ago: with very
simple (!) statements, it was twice as fast as without, as far as I remember.

Ciao
Alvar

--
Alvar C.H. Freude | alvar.freude@merz-akademie.de
Demo: http://www.online-demonstration.org/ | Mach mit!
Blast-DE: http://www.assoziations-blaster.de/ | Blast-Dich-Fit
Blast-EN: http://www.a-blast.org/ | Blast/english
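The batching advice above can be sketched roughly like this. This is a minimal
Python illustration using the stdlib sqlite3 module as a stand-in for a DBI
connection to Postgres; the table name, row count, and batch size are made up
for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real PG connection
conn.execute("CREATE TABLE log (id INTEGER, payload TEXT)")
conn.commit()

BATCH_SIZE = 1000  # commit every 1000 inserts instead of every 100,000

cur = conn.cursor()
for i in range(10000):
    # sqlite3 opens a transaction implicitly here, much like DBI with
    # AutoCommit = 0: nothing is durable until the explicit commit below.
    cur.execute("INSERT INTO log (id, payload) VALUES (?, ?)", (i, "row %d" % i))
    if (i + 1) % BATCH_SIZE == 0:
        conn.commit()  # flush the batch
conn.commit()  # commit any tail rows from an incomplete last batch

print(conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # -> 10000
```

The second suggestion (many rows per statement) would correspond to something
like executemany() here, or a multi-row VALUES list, which saves per-statement
round-trip and parse overhead.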
> Try making the commit every 500 or 1000 inserts. It seems to me that
> PG slows down if there are lots of MBs waiting for commit...

MB == memory buffers? If so, then it shouldn't be the case for 7.0.X, with
Tom's work in the bufmgr area.

Vadim
Hi,

> > Try making the commit every 500 or 1000 inserts. It seems to me that
> > PG slows down if there are lots of MBs waiting for commit...
>
> MB == memory buffers?
> If so, then it shouldn't be the case for 7.0.X, with Tom's work in the
> bufmgr area.

Megabytes :-) Tested in 7.1.

The inserts slow down if there are too many uncommitted inserts pending. But
I'm only sure about the time for the insert statement itself, not the total
time -- perhaps one final commit is faster than the sum of a lot of commits
in between.

In my test I used two inserts per object, and each object needed about 35
milliseconds (on a P2/350 MHz), regardless of table size. If I inserted
~15,000 objects in one transaction, the insertion time per request slowed
down to ~200 milliseconds by the end.

So, I guess this has to be tested in detail again ;)

Ciao
Alvar
On Thu, Feb 01, 2001 at 12:55:02PM -0800, Mikheev, Vadim wrote:
> > However the result was (running on 64bit Sol 7, E250/1Gig
> > RAM) that Oracle was roughly twice as fast. Is this real
> > or might there be stuff I could do, to improve postgres performance?
>
> PostgreSQL version?

7.0.2

> You didn't use -F, did you?

What's -F?

Konstantin
> > > However the result was (running on 64bit Sol 7, E250/1Gig
> > > RAM) that Oracle was roughly twice as fast. Is this real
> > > or might there be stuff I could do, to improve postgres
> > > performance?
> >
> > PostgreSQL version?
>
> 7.0.2

Try 7.1, or use -F.

> > You didn't use -F, did you?
>
> What's -F?

This option disables fsync(). It's not honest to use -F for comparison with
other DBMSes, but before 7.1 it was the only means to be comparable -:)

Vadim
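The effect -F disables can be mimicked outside the database. The sketch below
is plain Python file I/O, not Postgres itself: it contrasts forcing every
write to disk with fsync() (roughly what each durable commit costs) against
leaving the flushing to the OS, which is in spirit what -F does:

```python
import os
import tempfile

def write_rows(path, n, use_fsync):
    """Write n lines; optionally force each one to disk before continuing."""
    with open(path, "w") as f:
        for i in range(n):
            f.write("row %d\n" % i)
            if use_fsync:
                f.flush()
                os.fsync(f.fileno())  # wait for the disk, like a durable commit

with tempfile.TemporaryDirectory() as d:
    safe = os.path.join(d, "with_fsync.txt")
    fast = os.path.join(d, "no_fsync.txt")
    write_rows(safe, 100, use_fsync=True)   # slow but crash-safe
    write_rows(fast, 100, use_fsync=False)  # fast; data may be lost on a crash
    # Both files end up with identical contents; only durability differs.
    with open(safe) as a, open(fast) as b:
        print(a.read() == b.read())  # -> True
```

Timing the two calls on real (non-battery-backed) hardware shows the fsync
variant is dramatically slower, which is why -F flatters benchmark numbers.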
A quick question: has anyone made an effort to profile pgsql during execution
of certain things (inserts, selects, sorting, indices)?

I have a feeling (based on stopping postgres from gdb periodically) that a lot
of time is spent in strcoll() (if the table and index have string columns).
The column in question is declared char(3). So why is Postgres collating
anything at all?

I'll get back to you with more details and tracebacks, but I just wanted to
check if anyone has done any profiling...

-alex
Alex Pilosov <alex@pilosoft.com> writes:
> Have someone made effort to do profiling of pgsql during execution of
> certain things (inserts, selects, sorting, indices)?

Yes ...

> I have a feeling (based on stopping postgres from gdb periodically), that
> a lot of time is used in strcoll() (if table and index has string
> columns).
> Column in question is declared char(3).
> So, why's postgres collating anything at all?

Because textual comparisons are defined in terms of strcoll() if you've
enabled locale support. There is no way around this; either don't use
locales or write a faster version of strcoll().

			regards, tom lane
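Tom's point can be seen from Python's locale module, which wraps the same C
strcoll()/setlocale machinery. A small sketch (the de_DE.UTF-8 locale name is
an assumption and may not be installed on a given box, hence the try/except):

```python
import locale

# In the C locale, collation is effectively plain byte comparison -- cheap.
locale.setlocale(locale.LC_COLLATE, "C")
print(locale.strcoll("abc", "abd"))  # negative: "abc" sorts first

# With a real locale enabled, every string comparison consults the locale's
# collation tables -- the extra cost even a char(3) column pays when the
# server is built with locale support.
try:
    locale.setlocale(locale.LC_COLLATE, "de_DE.UTF-8")
    print(locale.strcoll("Straße", "Strasse"))  # locale-aware ordering
except locale.Error:
    print("de_DE.UTF-8 not installed; skipping the locale-aware comparison")
```

This is also why comparing benchmark numbers is only fair when both runs use
the same collation setting.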
Thanks, Tom. I think that in the future, the locale setting should be a
disclosed part of the results whenever PostgreSQL is benchmarked.

-alex

On Sat, 3 Feb 2001, Tom Lane wrote:

> Alex Pilosov <alex@pilosoft.com> writes:
> > Have someone made effort to do profiling of pgsql during execution of
> > certain things (inserts, selects, sorting, indices)?
>
> Yes ...
>
> > I have a feeling (based on stopping postgres from gdb periodically), that
> > a lot of time is used in strcoll() (if table and index has string
> > columns).
> > Column in question is declared char(3).
> > So, why's postgres collating anything at all?
>
> Because textual comparisons are defined in terms of strcoll() if you've
> enabled locale support. There is no way around this; either don't use
> locales or write a faster version of strcoll().
>
> 			regards, tom lane