Discussion: Update in trigger


Update in trigger

From
Iklódi Lajos
Date:
Hi,
I have a problem with a trigger in PostgreSQL 7.1.3.
I have a table with an AFTER INSERT trigger that makes updates in
another table.
I noticed that the more records I have in the first table, the slower
my insert gets.
If my trigger does something else instead, for example inserts into another
table, it becomes *20 times* quicker, and the speed of my insert
doesn't depend on the table size.
Should I avoid doing UPDATEs in a trigger?

Thanks in advance for any help.
Lajos




Re: Update in trigger

From
Jan Wieck
Date:
Iklódi Lajos wrote:
> Hi,
> I have a problem with a trigger in PostgreSQL 7.1.3.
> I have a table with an AFTER INSERT trigger that makes updates in
> another table.
> I noticed that the more records I have in the first table, the slower
> my insert gets.
> If my trigger does something else instead, for example inserts into another
> table, it becomes *20 times* quicker, and the speed of my insert
> doesn't depend on the table size.
> Should I avoid doing UPDATEs in a trigger?
   Are there appropriate indexes on the table updated in the
   trigger, and is the database "VACUUM ANALYZE"ed so the indexes
   get used?

   Even if it is an AFTER INSERT trigger, your INSERT has to
   wait until the trigger finishes, because it still has a right
   to abort the transaction.
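Jan's point, that the INSERT must wait because even an AFTER trigger can still abort the transaction, can be illustrated with a small sketch in the same 7.1-era PL/pgSQL style used in this thread (the function and trigger names here are hypothetical, not from the thread):

```sql
-- Hypothetical trigger that always aborts. Even though it is an
-- AFTER INSERT trigger, raising the exception rolls the INSERT back,
-- which is why the INSERT cannot return before the trigger finishes.
create function abort_fnc() returns opaque as '
begin
    raise exception ''rejected by AFTER trigger'';
    return new;
end;
' language 'plpgsql';

create trigger abort_trig after insert on nyadat
    for each row execute procedure abort_fnc();

-- Any subsequent INSERT into nyadat now fails and is rolled back.
```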
 


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #




Re: Update in trigger

From
Iklódi Lajos
Date:
Hi,

Jan Wieck wrote:

>     Are  there  appropriate  indexes  on the table updated in the
>     trigger and is the database "VACUUM ANALYZE"ed so the indexes
>     get used?

There is a primary key.
VACUUM ANALYZE is really helpful, but only for a few minutes (3000 inserted
records).


William Meloney wrote:

> Please forward a copy of the trigger.  This is exactly the same thing I
> am trying to do but I have not had any success.

Here is the whole structure:

CREATE TABLE nyadat (
  nykod       serial primary key,
  nymekod     int4 not null references mekonf,
  nyatip      int4 not null references nyatip,
  nymeido     timestamp,
  nytarido    timestamp default now(),
  nyhossz     int4,
  nyadat      bytea
) ;
 

CREATE TABLE szm (
  kod      serial primary key,
  konfx    int4,
  csatx    int4,
  dparx    int4,
  nparx    int4,
  nyugtax  int4,
  parancsx int4,
  esemenyx int4,
  kalibrx  int4,
  nyx      int4,
  feldx    int4,
  kparx    int4
) ;
 

create function nyx_fnc() returns opaque as '
begin
  update szm set nyx = new.nykod where kod = 1 ;
  return new ;
end ;
' language 'plpgsql' ;

create trigger ny_x_trig after insert on nyadat
  for each row execute procedure nyx_fnc() ;
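Assuming a row with kod = 1 already exists in szm, and valid foreign-key values for nyadat, the effect of the trigger can be exercised like this (the literal values below are hypothetical):

```sql
-- Seed the single bookkeeping row the trigger updates.
insert into szm (kod) values (1);

-- Each insert into nyadat fires ny_x_trig, which records the new
-- nykod in szm.nyx:
insert into nyadat (nymekod, nyatip) values (1, 1);

-- szm.nyx now holds the nykod of the row just inserted:
select nyx from szm where kod = 1;
```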


Thanks for any help.
Lajos




Re: Update in trigger

From
Jan Wieck
Date:
Iklódi Lajos wrote:
> Hi,
>
> Jan Wieck wrote:
>
> >     Are  there  appropriate  indexes  on the table updated in the
> >     trigger and is the database "VACUUM ANALYZE"ed so the indexes
> >     get used?
>
> There is a primary key.
> VACUUM ANALYZE is really helpful, but only for a few minutes (3000 inserted
> records).
>
>
> William Meloney wrote:
>
> > Please forward a copy of the trigger.  This is exactly the same thing I
> > am trying to do but I have not had any success.
>
> Here is the whole structure:
>
> [...]
   That's a well-known problem with PostgreSQL's non-overwriting
   storage management system. After your 3000 INSERTs, you have
   also UPDATEd szm 3000 times, leaving 3000 dead rows and 3000
   dead index entries in it. Thus, another UPDATE looking for
   kod=1 will find 3000 index entries and has to check 3000 data
   rows only to find that they have to be ignored.
 
   The only way I see at the time being is to vacuum tables
   that have very few rows with a high update frequency "very
   often". No need to vacuum the rest of the database at the
   same rate, just these special tables.
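The workaround Jan describes can be scripted as a targeted maintenance job; a minimal sketch, assuming the database name mydb and the five-minute interval (both are assumptions, not from the thread):

```sql
-- Vacuum only the small, update-heavy table, not the whole database:
VACUUM ANALYZE szm;

-- e.g. run from cron every five minutes:
--   */5 * * * *  psql -d mydb -c 'VACUUM ANALYZE szm;'
```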
 


Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #