Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not 2005)

From: ismo.tuononen@solenovo.fi
Subject: Re: Evaluation of PG performance vs MSDE/MSSQL 2000 (not 2005)
Msg-id: Pine.LNX.4.64.0712051024350.9934@ismoli.solenovo.jns
List: pgsql-performance
I don't know if this is true in this case, but the transaction isolation
level can be different: in MSSQL it is often something like
TRANSACTION_READ_UNCOMMITTED
while in PostgreSQL it is
TRANSACTION_READ_COMMITTED
and that can make a huge difference in performance.
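As a sketch of how to compare the two (note that PostgreSQL accepts the
READ UNCOMMITTED syntax but silently treats it as READ COMMITTED, so the
MSSQL-style speedup is not available there):

```sql
-- PostgreSQL: inspect the default isolation level
SHOW default_transaction_isolation;   -- 'read committed' by default

-- Set it for a single transaction. PostgreSQL parses READ UNCOMMITTED
-- but behaves as READ COMMITTED, so this will not change performance.
BEGIN TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
COMMIT;

-- MS SQL Server / MSDE (T-SQL): set it for the session
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
```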

Another thing can be the queries inside the procedures: if you use the
same queries unchanged, performance can be very bad. Databases handle
queries differently, and rewriting a query can easily drop execution
time to 1/100th.
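For example, EXPLAIN ANALYZE makes the difference visible. The table
names below come from the ANALYZE output in the quoted message; the
column names are invented for illustration only:

```sql
-- Hypothetical example: the same logical query, two formulations.
EXPLAIN ANALYZE
SELECT * FROM "Offtake"
WHERE sku_id IN (SELECT sku_id FROM "SKUs" WHERE brand_id = 3);

-- Rewritten as a join; the planner may choose a very different plan:
EXPLAIN ANALYZE
SELECT o.* FROM "Offtake" o
JOIN "SKUs" s ON s.sku_id = o.sku_id
WHERE s.brand_id = 3;
```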

Ismo

On Wed, 5 Dec 2007, Robert Bernabe wrote:

> Hi All,
>     I've been tasked to evaluate PG as a possible replacement of our MS SQL 2000 solution. Our solution is 100%
storedprocedure/function centric. It's a report generation system whose sole task is to produce text files filled with
processeddata that is post-processed by a secondary system. Basically options are selected via a web interface and all
theseparameters are passed unto the stored procedure and then the stored procedure would run and in the process call
otherstored procedures until eventually a single formatted text file is produced.  
>     I decided on Fedora Core 7 and the 8.3 beta release of EnterpriseDB PostgreSQL. I decided to port one stored
> procedure plus its several supporting stored procedures from T-SQL into pl/pgsql and compare performance by measuring
> how long each system takes to produce the text file. For this test, the output to the text file was discarded, and the
> stored procedure/function ends once the final temporary table is filled with the information that would eventually be
> dumped into the text file.
>
> Windows 2000 Professional + MSDE (/MS SQL) Box    vs.   FC7 + EnterpriseDB PG Box
>
> Note that both boxes have EXACTLY the same hardware (not VMWARE or anything)
> AMD X2 3800
> 2 G RAM DDR 400
> 80 G Barracuda Sata
>
> The data was copied to the Linux box and checked lightly for consistency against the Windows box (number of tables /
> columns and records), and they all match. After the data transfer to the Linux box, I ran REINDEX and ANALYZE.
>
> For the actual run, the following tables were used; I'm displaying the results of ANALYZE.
>
> INFO:  analyzing "public.AreaDefn"
> INFO:  "AreaDefn": scanned 15 of 15 pages, containing 2293 live rows and 0 dead rows; 2293 rows in sample, 2293 estimated total rows
> INFO:  analyzing "public.AreaDefn2"
> INFO:  "AreaDefn2": scanned 30 of 30 pages, containing 3439 live rows and 0 dead rows; 3000 rows in sample, 3439 estimated total rows
> INFO:  analyzing "public.Areas"
> INFO:  "Areas": scanned 2 of 2 pages, containing 164 live rows and 0 dead rows; 164 rows in sample, 164 estimated total rows
> INFO:  analyzing "public.Brands"
> INFO:  "Brands": scanned 1 of 1 pages, containing 11 live rows and 0 dead rows; 11 rows in sample, 11 estimated total rows
> INFO:  analyzing "public.Categories"
> INFO:  "Categories": scanned 1 of 1 pages, containing 26 live rows and 0 dead rows; 26 rows in sample, 26 estimated total rows
> INFO:  analyzing "public.CategoryDefn"
> INFO:  "CategoryDefn": scanned 1 of 1 pages, containing 133 live rows and 0 dead rows; 133 rows in sample, 133 estimated total rows
> INFO:  analyzing "public.CategoryDefn2"
> INFO:  "CategoryDefn2": scanned 2 of 2 pages, containing 211 live rows and 0 dead rows; 211 rows in sample, 211 estimated total rows
> INFO:  analyzing "public.CategorySets"
> INFO:  "CategorySets": scanned 1 of 1 pages, containing 3 live rows and 0 dead rows; 3 rows in sample, 3 estimated total rows
> INFO:  analyzing "public.CATemplateGroup"
> INFO:  analyzing "public.Channels"
> INFO:  "Channels": scanned 1 of 1 pages, containing 7 live rows and 0 dead rows; 7 rows in sample, 7 estimated total rows
> INFO:  analyzing "public.ClientCodes"
> INFO:  analyzing "public.Clients"
> INFO:  "Clients": scanned 7 of 7 pages, containing 366 live rows and 0 dead rows; 366 rows in sample, 366 estimated total rows
> INFO:  analyzing "public.Customers"
> INFO:  "Customers": scanned 2 of 2 pages, containing 129 live rows and 0 dead rows; 129 rows in sample, 129 estimated total rows
> INFO:  analyzing "public.Databases"
> INFO:  "Databases": scanned 1 of 1 pages, containing 1 live rows and 0 dead rows; 1 rows in sample, 1 estimated total rows
> INFO:  analyzing "public.DataSources"
> INFO:  "DataSources": scanned 1 of 1 pages, containing 8 live rows and 0 dead rows; 8 rows in sample, 8 estimated total rows
> INFO:  analyzing "public.DateToWeekConversion"
> INFO:  "DateToWeekConversion": scanned 4 of 4 pages, containing 371 live rows and 0 dead rows; 371 rows in sample, 371 estimated total rows
> INFO:  analyzing "public.Divisions"
> INFO:  "Divisions": scanned 1 of 1 pages, containing 1 live rows and 0 dead rows; 1 rows in sample, 1 estimated total rows
> INFO:  analyzing "public.MetricTable"
> INFO:  "MetricTable": scanned 1 of 1 pages, containing 48 live rows and 0 dead rows; 48 rows in sample, 48 estimated total rows
> INFO:  analyzing "public.Offtake"
> INFO:  "Offtake": scanned 3000 of 13824 pages, containing 141000 live rows and 0 dead rows; 3000 rows in sample, 649728 estimated total rows
> INFO:  analyzing "public.SKUs"
> INFO:  "SKUs": scanned 3 of 3 pages, containing 73 live rows and 0 dead rows; 73 rows in sample, 73 estimated total rows
> INFO:  analyzing "public.SMSDefaults"
> INFO:  "SMSDefaults": scanned 1 of 1 pages, containing 43 live rows and 0 dead rows; 43 rows in sample, 43 estimated total rows
> INFO:  analyzing "public.StandardPeriods"
> INFO:  "StandardPeriods": scanned 1 of 1 pages, containing 8 live rows and 0 dead rows; 8 rows in sample, 8 estimated total rows
> INFO:  analyzing "public.StandardUnits"
> INFO:  "StandardUnits": scanned 1 of 1 pages, containing 9 live rows and 0 dead rows; 9 rows in sample, 9 estimated total rows
> INFO:  analyzing "public.SubDataSources"
> INFO:  "SubDataSources": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
> INFO:  analyzing "public.VolumeUnitDefn"
> INFO:  "VolumeUnitDefn": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
> INFO:  analyzing "public.VolumeUnits"
> INFO:  "VolumeUnits": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
>
> As you can see, the biggest table has only about 650k records.
>
> Here are the settings used for postgresql.conf (I will list only those that were modified):
> #---------------------------------------------------------------------------
> # RESOURCE USAGE (except WAL)
> #---------------------------------------------------------------------------
>
> # - Memory -
> #shared_buffers = 32MB
> shared_buffers = 128MB            # min 128kB or max_connections*16kB
>                     # (change requires restart)
> #temp_buffers = 8MB            # min 800kB
> temp_buffers = 32MB
> #max_prepared_transactions = 5        # can be 0 or more
>                     # (change requires restart)
> #max_prepared_transactions = 20
> # Note: increasing max_prepared_transactions costs ~600 bytes of shared memory
> # per transaction slot, plus lock space (see max_locks_per_transaction).
> #work_mem = 1MB                # min 64kB
> work_mem = 2MB
> #maintenance_work_mem = 16MB        # min 1MB
> maintenance_work_mem = 32MB
> #max_stack_depth = 2MB            # min 100kB
>
>
> Everything else was kept as is (given by the installer).
>
> /etc/sysctl.conf  settings below :
> kernel.shmmax = 1536000000
> kernel.msgmni = 1024
> fs.file-max = 8192
> kernel.sem = 250 32000 32 1024
>
> The main stored function has approximately 1200 lines of pl/pgsql code. While it is running, it calls 7 other
> supporting stored functions plus a small view.
>
> The test was basically run in two ways:
>
> a) On the Linux box we used pgAdmin III to run the core stored function, and on the Windows box we used Query
> Analyzer.
>
> b) We created a small VB program that just calls the stored function on both boxes.
>
>     Since I'm a total newbie in PG, I was expecting dismal results in the initial run, since our port of the code
> would not be optimized for PG, and sure enough I got them.
>
> Windows 2k Pro + MSDE - 4 seconds
> FC7 + postgresql-8.3-beta3 - 77 seconds
>
>     Thinking that maybe the GUI of FC7 was hampering the load, I decided to run it again at runlevel 3. The result
> was approximately a 1-2% gain on FC7, but that is insignificant compared to the discrepancy I'm measuring, so I
> decided to just use the with-GUI results. We noticed that the CPU on the Linux box was tapped out (at least one of the
> cores was near 100%) for the duration of the process. There was plenty of RAM available, and there was little to no
> disk access during the entire run.
>
> I decided to further normalize the test and make the OS constant.
>
> Windows 2k Pro + MSDE - 4 seconds
> Windows 2k Pro +  postgresql-8.3-beta3 - 54 seconds
>
> It turns out that, for our code, running PG on Windows is significantly faster. This was a surprise given all the
> research on the internet, but it does jive with the results of this guy:
>
> http://channel9.msdn.com/ShowPost.aspx?PostID=179064
>
> Note that this guy used MS SQL 2005, which we have found to be 2-3 times slower than MS SQL 2000, hence our
> research into other options. :)
>
> Anyway, I'd like to break this request for help into two parts.
>
> 1) First would be the settings in postgresql.conf. Did I do it correctly? The sample data is so small... I based my
> settings on recommendations researched for data centers.
>
> 2) Code optimization, which I plan to start in another email thread, since the discussions there would be more
> detailed.
>
> Would it also make sense to optimize (as far as possible) everything (including the code) for Windows first? The
> target here would be a Linux OS, but since the discrepancy is so big, the unified Windows OS might be a good place
> to start for now.
>
> Many Thanks in advance.
>
> Regards
>
>
>
>

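One more quick sanity check on the configuration quoted above: it is worth confirming the values the server is actually running with (shared_buffers only changes after a restart), and psql's \timing gives a cheap way to time the core function itself. The function name and arguments below are placeholders, not the real ones:

```sql
-- Confirm the settings the server is actually using
SHOW shared_buffers;          -- expect 128MB
SHOW work_mem;                -- expect 2MB
SHOW temp_buffers;            -- expect 32MB
SHOW maintenance_work_mem;    -- expect 32MB

-- In psql, time the core function directly (name/args are placeholders):
-- \timing on
-- SELECT report_main('param1', 'param2');
```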
