Discussion: Performance decrease


Performance decrease

From: "Radovan Antloga"
Date:
I'm new to PG and I'm testing default PG settings
for now.

I have PG 8.1.3 installed with autovacuum=on.
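
Just to be sure it is really active, the setting can be checked from psql:

show autovacuum;  -- should report "on" with this configuration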

My test table has 15830 records with 190 fields.
The fields are of different types (date, numeric, varchar,
integer, smallint, ...).

I decided to evaluate PG because I need to use schemas.

The first test I did is not very promising.

I tried to update one field in the test table several times
to see how PG reacts to this.

I do it like this:

update table
set field = null

After the first execution I get a time of 3 seconds. Then I repeat
this update. After each update the time increases: I get
4 sec, 7 sec, 10 sec, 12 sec, 15 sec, 18 sec, 21 sec.
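
Written out as a psql session, the whole test looks like this
(testtab and somefield stand in for my real table and field names):

\timing
update testtab set somefield = null;  -- 1st run: ~3 seconds
update testtab set somefield = null;  -- 2nd run: ~4 seconds
update testtab set somefield = null;  -- 3rd run: ~7 seconds, climbing to 21 by the 8th run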

Is this normal (default) behaviour, or must I do something
to prevent this?

Regards,
Radovan Antloga


Re: Performance decrease

From: Tom Lane
Date:
"Radovan Antloga" <radovan.antloga@siol.net> writes:
> My test table has 15830 records with 190 fields.

190 fields in a table seems like rather a lot ... is that actually
representative of your intended applications?

> I do it like this:

> update table
> set field = null

Again, is that representative of something you'll be doing a lot in
practice?  Most apps don't often update every row of a table, in my
experience.

> After the first execution I get a time of 3 seconds. Then I repeat
> this update. After each update the time increases: I get
> 4 sec, 7 sec, 10 sec, 12 sec, 15 sec, 18 sec, 21 sec.

There should be some increase because of the addition of dead rows,
but both the original 3 seconds and the rate of increase seem awfully
high for such a small table.  What are you running this on?
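
BTW, you can watch the dead-row buildup directly with VACUUM VERBOSE,
which reports how many dead row versions it removes. Roughly like this
(the counts here are made up for illustration, not from a real run):

vacuum verbose t1;
INFO:  vacuuming "public.t1"
INFO:  "t1": removed 20000 row versions in 834 pages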

For comparison purposes, here's what I see on a full-table UPDATE
of a 10000-row table on a rather slow HP box:

regression=# \timing
Timing is on.
regression=# create table t1 as select * from tenk1;
SELECT
Time: 1274.213 ms
regression=# update t1 set unique2 = null;
UPDATE 10000
Time: 565.664 ms
regression=# update t1 set unique2 = null;
UPDATE 10000
Time: 589.839 ms
regression=# update t1 set unique2 = null;
UPDATE 10000
Time: 593.735 ms
regression=# update t1 set unique2 = null;
UPDATE 10000
Time: 615.575 ms
regression=# update t1 set unique2 = null;
UPDATE 10000
Time: 755.456 ms
regression=#

Vacuuming brings the time back down:

regression=# vacuum t1;
VACUUM
Time: 242.406 ms
regression=# update t1 set unique2 = null;
UPDATE 10000
Time: 458.028 ms
regression=#

            regards, tom lane

Re: Performance decrease

From: "Radovan Antloga"
Date:
>190 fields in a table seems like rather a lot ... is that actually
>representative of your intended applications?

The test table is like a table I use in production
with Firebird and Oracle databases. The table has a lot of smallint
and integer fields. As you can see, I use Firebird for
low-cost projects (small companies) and Oracle for medium
or large projects.

>Again, is that representative of something you'll be doing a lot in
>practice?  Most apps don't often update every row of a table, in my
>experience.

I agree with you!
Once or twice a month I have an update on many records (~6000),
but not more often than that. I did not expect PG to have problems
updating 15800 records.

My test was on Windows XP SP2.
I have an AMD64 2.1 GHz CPU with
1 GB of RAM.

Regards,
Radovan Antloga


Re: Performance decrease

From: "Jim C. Nasby"
Date:
On Thu, Apr 20, 2006 at 06:10:21PM +0200, Radovan Antloga wrote:
> Once or twice a month I have an update on many records (~6000),
> but not more often than that. I did not expect PG to have problems
> updating 15800 records.

And generally speaking, it doesn't. But you do need to ensure that
you're vacuuming the database frequently enough. Autovacuum is a good
way to do that.
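
Since your big update happens on a known schedule, an explicit vacuum
right after it is a simple complement to autovacuum. A minimal sketch
(the table name "accounts" is just a stand-in):

update accounts set note = null;  -- full-table update leaves one dead version per row
vacuum analyze accounts;          -- reclaims the dead space and refreshes planner stats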

> My test was on Windows XP SP2.
> I have an AMD64 2.1 GHz CPU with
> 1 GB of RAM.

One thing to keep in mind is that the Windows code is rather new, so it
is possible you will find some performance issues there.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: Performance decrease

From: Guido Neitzer
Date:
On 20.04.2006, at 18:10, Radovan Antloga wrote:

> Once or twice a month I have an update on many records (~6000),
> but not more often than that. I did not expect PG to have problems
> updating 15800 records.

It has no problems with that. We have a database where we often
update/insert rows with about one hundred columns. No problem so far.
Performance is in the sub-10 ms range. The whole table has about
100000 records.

Do you wrap every update in a separate transaction? I do commits
every 200 updates for bulk updates.
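
For illustration, batching the commits might look like this (reusing
Tom's t1 table from earlier in the thread; the 200-row batch
boundaries are arbitrary):

begin;
update t1 set unique2 = null where unique1 between 0 and 199;    -- first batch of 200 rows
commit;
begin;
update t1 set unique2 = null where unique1 between 200 and 399;  -- next batch, and so on
commit;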

cug

--
PharmaLine, Essen, GERMANY
Software and Database Development


