Thread: Performance issue


Performance issue

From: peter
Date:
Hello,

I have been trying to get my Postgres database to do faster inserts.

The environment is basically a single user situation.

The part that I would like to speed up is when a user copies a Project.
A Project consists of a number of Rooms (say 60), and each Room contains
a number of items. A Project will contain, say, 20,000 records.

Anyway, the copying process gets slower and slower as more Projects are
added to the database.

My statistics (Athlon 1.8 GHz)
------------------------------
    20,000 items    takes on average 0.078 seconds/room
   385,000 items    takes on average 0.110 seconds/room
   690,000 items    takes on average 0.270 seconds/room
 1,028,000 items    takes on average 0.475 seconds/room

As can be seen, the time taken to process each room increases. A commit
occurs after each room has been copied.
The hard drive is not being driven very hard; its light only flashes
about twice a second when there are a million records in the database.

I thought that the problem could have been my PL/pgSQL procedure, since
I assume that code is interpreted. However, I have just rewritten the
code in straight SQL (with some temporary fields), and the times turn
out to be almost exactly the same as for the PL/pgSQL version.
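
(Roughly, each room copy boils down to one INSERT ... SELECT per table;
the table and column names below are simplified stand-ins for my real
schema:)

    -- copy the items of room 45 into the newly created room 123
    INSERT INTO item (room_id, description, quantity)
    SELECT 123, description, quantity
      FROM item
     WHERE room_id = 45;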

The read speed for the application is fine, and the SQL planner seems to
be doing a good job. The only problem I have found was with one huge
select, which was fixed by a cross join.

I am running Red Hat 8. Some of the conf entries that I have changed
follow:
shared_buffers = 3700
effective_cache_size = 4000
sort_mem = 32168

Are the increasing times reasonable?
The times themselves might look slow, but that's because there are a
number of tables involved in a copy.

I can increase the shared buffer sizes above 32M, but would this really
help?

TIA

peter Mcgregor


Re: Performance issue

From: Joseph Bove
Date:
Peter,

One possibility is to drop all the indexes, do the inserts, and re-add
the indexes.

The more indexes and the more rows that exist, the more costly each
insert becomes.
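
For example (the index, table and column names are only placeholders for
whatever your schema actually uses):

    DROP INDEX item_room_id_idx;
    -- ... run the bulk of the inserts here ...
    CREATE INDEX item_room_id_idx ON item (room_id);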

Regards,

Joseph

Re: Performance issue

From: Richard Jones
Date:
Get rid of any unnecessary indexes? I've found that dropping indexes and
re-creating them isn't usually worth the effort.

Mount the disk with the noatime option, which saves you the time involved
in updating the last access time on files.
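
e.g. in /etc/fstab (the device, mount point and filesystem here are only
examples; use whatever partition actually holds your data directory):

    /dev/hda3  /var/lib/pgsql  ext3  defaults,noatime  1 2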

Make sure you're doing all the inserts in one transaction: wrapping a
bunch of INSERTs in BEGIN and COMMIT speeds them up loads.
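
i.e. something like this (the table is made up; the point is the single
BEGIN/COMMIT around the whole batch):

    BEGIN;
    INSERT INTO item (room_id, description) VALUES (1, 'chair');
    INSERT INTO item (room_id, description) VALUES (1, 'table');
    -- ... the rest of the batch ...
    COMMIT;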




Re: Performance issue

From: Sean Chittenden
Date:
> My statistics(Athlon 1.8Ghz)
> ----------------
> 20,000 items            Takes on average 0.078seconds/room
> 385,000 items          Takes on average .11seconds/room
> 690,000 items          takes on average .270seconds/room
> 1,028,000 items       Takes on average .475seconds/room
[snip]
>  I am running Red hat 8. Some of my conf entries that I have changed
> follow
> shared_buffers = 3700
> effective_cache_size = 4000
> sort_mem = 32168

Have you twiddled with your wal_buffers or checkpoint_segments?  Might
be something to look at.
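
For example, in postgresql.conf (these numbers are only a starting point
to experiment with, not tuned recommendations):

    wal_buffers = 16            # default is 8
    checkpoint_segments = 10    # default is 3; checkpoints become less frequent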

-sc

--
Sean Chittenden

Re: Performance issue

From: Rod Taylor
Date:
> 20,000 items            Takes on average 0.078seconds/room
> 385,000 items          Takes on average .11seconds/room
> 690,000 items          takes on average .270seconds/room
> 1,028,000 items       Takes on average .475seconds/room
>
> As can be seen the time taken to process each room increases. A commit
> occurs when a room has been copied.

It probably isn't the insert that is getting slower, but a select.
Foreign keys to growing tables will exhibit this behaviour.

Since the time is doubling with the number of items, you might want to
check for a SELECT working with a sequential scan rather than an index
scan.
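
One quick way to check (the table, column and index names are only guesses
based on your description): EXPLAIN the lookup the copy performs and, if
the plan shows a sequential scan, index the column being filtered on:

    EXPLAIN SELECT * FROM item WHERE room_id = 45;
    -- if the plan says "Seq Scan on item", then:
    CREATE INDEX item_room_id_idx ON item (room_id);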
