Re: Performance Problems

From: nikolaus@dilger.cc
Subject: Re: Performance Problems
Date:
Msg-id: 20020828161434.17439.h010.c001.wm@mail.dilger.cc.criticalpath.net
In reply to: Performance Problems (Alex Paulusberger <alexp@meta-bit.com>)
List: pgsql-admin
Alex,

databases are designed for set operations, i.e. processing multiple
rows at a time.  The process you describe is inherently inefficient:
looping through the whole table, one row at a time, with a separate
DBI call for each.  This results in many unnecessary table scans and
a lot of overhead for your several thousand DBI calls.
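
Roughly the pattern you describe, sketched here with hypothetical
table and column names (driver, detail, driver_id):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical schema: driver(id), detail(driver_id, value), results(...)
    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 0 });

    # Row-at-a-time: one statement per driver row -- thousands of round
    # trips, each one hitting the 400,000-row detail table again.
    my $drivers = $dbh->selectcol_arrayref('SELECT id FROM driver');
    for my $id (@$drivers) {
        my $details = $dbh->selectall_arrayref(
            'SELECT * FROM detail WHERE driver_id = ?', undef, $id);
        # ... per-row calculations in Perl, temp table insert,
        #     pick one row, insert into results ...
    }
    $dbh->commit;
    $dbh->disconnect;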

So you would get the greatest performance improvement if you could
rewrite your logic in a way that completely eliminates the loop.
Write one query that creates the whole temp table in one shot, then
write a second query to insert into the results table.  Depending on
the complexity of your calculations you may need to create your own
SQL functions and add them to your database.
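
A rough sketch of that two-statement approach, again with hypothetical
names (temp_candidates, results) and a placeholder user-defined
function my_score() standing in for your per-record calculation; your
real sorting logic would go into the ORDER BY:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 0 });

    # One statement builds all candidate rows at once instead of looping
    # 4,500 times.  my_score() is a placeholder for the calculation,
    # moved into a user-defined SQL function.
    $dbh->do(q{
        INSERT INTO temp_candidates (driver_id, detail_id, score)
        SELECT d.id, t.id, my_score(d.id, t.value)
        FROM   driver d
        JOIN   detail t ON t.driver_id = d.id
    });

    # A second statement keeps one row per driver: DISTINCT ON keeps the
    # first row of each driver_id group in the ORDER BY, i.e. the row
    # with the highest score.
    $dbh->do(q{
        INSERT INTO results (driver_id, detail_id, score)
        SELECT DISTINCT ON (driver_id) driver_id, detail_id, score
        FROM   temp_candidates
        ORDER  BY driver_id, score DESC
    });

    $dbh->commit;
    $dbh->disconnect;

That way the database sees a handful of statements instead of several
hundred thousand, and the planner can choose a join strategy for the
whole data set at once.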

Regards,
Nikolaus Dilger

On Fri, 23 August 2002, Alex Paulusberger wrote:

>
> Hi,
> I came across a problem for which I don't really have an explanation.
> If anyone has some ideas, help would be greatly appreciated.
>
> Here we go...
>
> I run postgres 7.2.1 on a SunFire, 1 GB memory.
> I run a Perl batch job (DBI, same machine) that does the following.
>
> 1. Iterates through all the records in a table (total 4,500 records)
>    by selecting one record at a time.
> 2. The record then is used to select a couple of records from another
>    table (total 400,000 records in the table), selects a couple of
>    records (average 10-15), complements the records with additional
>    information, and stores the information in a temp table.
> 3. The temp table is then consulted, one record selected and then
>    inserted into a results table.  The results table is growing by
>    4,500 records a day; the temp table is primarily used to apply a
>    logic by sorting records.
>
> The process:
> The whole process loops 4,500 times.
> For every loop:
> - the temp table is cleared
> - 10-15 records are inserted into the temp table
> - one record is selected from the temp table and inserted into the
>   results table.
> In between, some calculations are done in Perl.
>
> The total number of selects is ca. 400,000,
> the total number of table deletes is 4,500,
> the total number of inserts is ca. 400,000.
>
> The problem:
> Initially I am able to evaluate 5-6 records a second and insert them
> into the results table; however, towards the end of the evaluation
> the figure drops to under 2.  I optimized the code, use transaction
> blocks, and tuned the database (it has 700 MB of memory allocated to
> it).  What I don't quite understand is why the system slows down,
> since the process steps are the same.  Could Perl be the reason?
>
> Any ideas?
>
> Thanks
> Alex
>
