Jean-Paul ARGUDO wrote:
> This is the Linux Red Hat 7.2 / PostgreSQL 7.2 port of the Pro*C program
> producing the output
>
> As you'll understand, it is not the COMPLETE batch, we had to stop it..:
>
> Time : 00:16:26
>
> Transaction : 750
> Item : 7391
> Transaction (ms) : 1314
> Item (ms) : 133
>
> Errors : 1
> Warnings : 0
> PLU not found : 0
> NOM not found : 0
> Alloc NOM : 739
> Free NOM : 0
> Error 1555 : 0
>
> Read : 45127.000
> Write : 37849.000
> Read/Write : 82976.000
>
> PLU SELECT : 7391
> NOM SELECT : 29564
> T04 SELECT : 31
> T01 INSERT : 378
> T01 UPDATE : 29186
Are you updating 29186 records in a table here? If so, is this table used in
the following queries?
> T02 INSERT : 3385
> T02 UPDATE : 4006
Ditto here, is T02 updated and then used in subsequent queries?
> T03 INSERT : 613
> T13 INSERT : 281
> RJT INSERT : 0
> RJT SELECT : 0
Are these queries run in this order, or are the inserts/updates/selects
intermingled?
A judicious VACUUM on a couple of the tables may help.
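For instance, assuming the heavily-updated table from the counters above is named t01 (a stand-in name; substitute your actual table), a periodic vacuum from psql might look like:

```sql
-- VACUUM reclaims the dead row versions left behind by UPDATEs;
-- on PostgreSQL 7.2 there is no autovacuum, so it must be run
-- explicitly, e.g. between batches.
-- "t01" is a stand-in for whichever table takes the bulk of the updates.
VACUUM t01;
```

With ~29000 updates hitting one table, the dead tuples pile up quickly, which is why vacuuming mid-batch (or between runs) can matter.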
Also, I noticed you had 19000 buffers. I did some experimentation with buffers
and found that more is not always better. Depending on the nature of your
database, 2048~4096 seems to be a sweet spot for some of the stuff that I do.
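To put those numbers in context, shared_buffers is set in postgresql.conf and is measured in 8 kB pages, so the values above work out roughly as follows (a sketch, not a recommendation for your workload):

```
# postgresql.conf fragment (PostgreSQL 7.2)
# shared_buffers is counted in 8 kB pages:
#   2048  buffers ~= 16 MB
#   4096  buffers ~= 32 MB
#   19000 buffers ~= 148 MB
shared_buffers = 4096
```

A restart of the postmaster is needed for the change to take effect, and the kernel's shared memory limits (SHMMAX) must allow the segment size.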
Again, have you "analyzed" the database? PostgreSQL will perform badly if you
have not. (Oracle also benefits from analyzing, depending on the nature of the
data.)
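Analyzing is a one-liner from psql; the table name below is a stand-in:

```sql
-- ANALYZE gathers the column statistics the planner needs; without
-- them PostgreSQL falls back on default guesses and may pick poor
-- plans (e.g. sequential scans where an index scan would win).
ANALYZE t01;    -- one table
ANALYZE;        -- or the whole database
```

VACUUM ANALYZE combines both steps, which is convenient after a bulk load.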
Have you run an "explain" on the queries used in your batch? You may be able
to see what's going on.
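As a sketch, run EXPLAIN on the batch's hot statements from psql (the query below is purely illustrative; the table and column names are stand-ins):

```sql
-- EXPLAIN prints the planner's chosen plan with cost estimates,
-- showing whether the repeated UPDATEs/SELECTs use an index scan
-- or fall back to a sequential scan over the whole table.
EXPLAIN SELECT * FROM t01 WHERE item_code = 42;
```

If the plan shows a sequential scan on a large table for a query that runs thousands of times, a missing index or missing statistics is the usual suspect.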