Discussion: Does trigger only accept functions?
Hi, this is Postgres version 15.4. We have a requirement to enable auditing of delete queries on our base tables. For that we are planning to create one audit table per base table, and to put a trigger on each base table that fires on delete and inserts a record into the audit table.
But I see that the trigger does not accept the insert query directly; instead it requires calling a function and putting the business logic inside that function, something like below. Does that mean that, to enable auditing on the ~50 base tables, we will have to create ~50 functions and call them from ~50 triggers? Or does a better approach exist to handle this?
CREATE OR REPLACE FUNCTION log_deletes_source_table()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO source_table_delete_history (record_id, delete_timestamp, col1, col2, col3)
    VALUES (OLD.id, current_timestamp, OLD.col1, OLD.col2, OLD.col3);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
-- Trigger for deletes
CREATE TRIGGER before_delete
BEFORE DELETE ON source_table
FOR EACH ROW EXECUTE FUNCTION log_deletes_source_table ();
Regards
Veem
On 6/10/24 12:17, veem v wrote:
> [original question about per-table delete-audit triggers quoted in full; snipped]

The below tells you what you need:

https://www.postgresql.org/docs/15/sql-createtrigger.html

That is either a function or a procedure. You could create one function with dynamic SQL and call that from each trigger. Yes, there would need to be a trigger on each table in that case.

As to alternatives:

https://www.pgaudit.org/

--
Adrian Klaver
adrian.klaver@aklaver.com
On Mon, 2024-06-10 at 13:20 -0700, Adrian Klaver wrote:
> https://www.postgresql.org/docs/15/sql-createtrigger.html
>
> That is either a function or a procedure.

The trigger function must be a function; it cannot be a procedure. The syntax EXECUTE PROCEDURE is just for backward compatibility with the time before PostgreSQL had procedures.

Yours,
Laurenz Albe
On Tue, Jun 11, 2024 at 12:47:14AM +0530, veem v wrote:
> to be called from ~50 triggers? or any other better approach exists to
> handle this?

pgaudit extension?

Or just write all the changes to a single table?

Or use dynamic queries that will build the insert based on the name of the table the event happened on?

Or pass arguments?

Best regards,
depesz
On Tue, 11 Jun 2024 at 17:03, hubert depesz lubaczewski <depesz@depesz.com> wrote:
On Tue, Jun 11, 2024 at 12:47:14AM +0530, veem v wrote:
> to be called from ~50 triggers? or any other better approach exists to
> handle this?
pgaudit extension?
Or just write all the changes to single table?
Or use dynamic queries that will build the insert based on the name of
table the event happened on?
Or pass arguments?
Best regards,
depesz
Thank you so much. I hope you mean something like below when you say making it dynamic, because our audit tables have more columns than the source tables, and for a few of them the column names differ slightly.
-- Trigger for deletes
CREATE TRIGGER before_delete
BEFORE DELETE ON source_table
FOR EACH ROW EXECUTE FUNCTION log_deletes();
-- Trigger for source_table1
CREATE TRIGGER before_delete_source_table1
BEFORE DELETE ON source_table1
FOR EACH ROW EXECUTE FUNCTION log_deletes();
-- Trigger for source_table2
CREATE TRIGGER before_delete_source_table2
BEFORE DELETE ON source_table2
FOR EACH ROW EXECUTE FUNCTION log_deletes();
CREATE OR REPLACE FUNCTION log_deletes()
RETURNS TRIGGER AS $$
BEGIN
    IF TG_TABLE_NAME = 'source_table1' THEN
        INSERT INTO delete_audit1 (col1, col2, col3)
        VALUES (OLD.col1, OLD.col2, OLD.col3);
    ELSIF TG_TABLE_NAME = 'source_table2' THEN
        INSERT INTO delete_audit2 (col4, col5, col6)
        VALUES (OLD.col4, OLD.col5, OLD.col6);
    -- Add more conditions for other tables
    ELSE
        RAISE EXCEPTION 'Audit table not defined for %', TG_TABLE_NAME;
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
On Wed, Jun 12, 2024 at 12:19:55AM +0530, veem v wrote:
> [IF TG_TABLE_NAME ... trigger function quoted in full; snipped]

No, I meant building dynamic queries and then EXECUTE-ing, like the docs show:

https://www.postgresql.org/docs/current/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN

Best regards,
depesz
On Wed, 12 Jun 2024 at 00:26, hubert depesz lubaczewski <depesz@depesz.com> wrote:
No, I meant building dynamic queries and then EXECUTE-ing, like docs
show:
https://www.postgresql.org/docs/current/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
Best regards,
depesz
My apologies if I am interpreting it the wrong way. It doesn't make much difference though, but do you mean something like below?
CREATE OR REPLACE FUNCTION log_deletes()
RETURNS TRIGGER AS $$
DECLARE
    audit_table_name TEXT;
    audit_query TEXT;
BEGIN
    IF TG_TABLE_NAME = 'source_table1' THEN
        audit_table_name := 'delete_audit1';
        audit_query := 'INSERT INTO ' || audit_table_name ||
            ' (record_id, delete_timestamp, col1, col2, col3) VALUES ($1, current_timestamp, $2, $3, $4)';
        EXECUTE audit_query USING OLD.id, OLD.col1, OLD.col2, OLD.col3;
    ELSIF TG_TABLE_NAME = 'source_table2' THEN
        audit_table_name := 'delete_audit2';
        audit_query := 'INSERT INTO ' || audit_table_name ||
            ' (col4, col5, col6) VALUES ($1, $2, $3)';
        EXECUTE audit_query USING OLD.col4, OLD.col5, OLD.col6;
    ELSE
        RAISE EXCEPTION 'Audit table not defined for %', TG_TABLE_NAME;
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
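[Editor's note: a safer way to build such dynamic statements, not shown in the thread, is format() with %I, which quotes the identifier instead of relying on plain string concatenation. A minimal sketch for the source_table2 branch only; log_deletes_fmt is an illustrative name:]

```sql
-- Sketch: same idea as above, but format() with %I safely quotes
-- the target table identifier.
CREATE OR REPLACE FUNCTION log_deletes_fmt()
RETURNS TRIGGER AS $$
BEGIN
    EXECUTE format(
        'INSERT INTO %I (col4, col5, col6) VALUES ($1, $2, $3)',
        'delete_audit2')
    USING OLD.col4, OLD.col5, OLD.col6;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
```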
On 6/11/24 12:20, veem v wrote:
> [dynamic audit_query trigger function quoted in full; snipped]

I'm guessing depesz meant using TG_TABLE_NAME to pull column information from:

https://www.postgresql.org/docs/current/catalog-pg-attribute.html

and use that to build the INSERT query.

The issue with dynamic or fixed SQL is going to be with audit_query, in particular audit_table_name := 'delete_audit2'. If your source tables change, add or delete columns, or column types change, your audit table will need to change to match.
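[Editor's note: a catalog query along the lines Adrian mentions, listing a table's columns in order; a sketch only, with 'source_table1' as an illustrative name:]

```sql
-- Sketch: list column names and types of a table, in column order.
SELECT a.attname,
       format_type(a.atttypid, a.atttypmod) AS coltype
FROM pg_attribute a
WHERE a.attrelid = 'source_table1'::regclass
  AND a.attnum > 0          -- skip system columns
  AND NOT a.attisdropped    -- skip dropped columns
ORDER BY a.attnum;
```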
One possible solution is something I outlined here:

https://aklaver.org/wordpress/2021/12/07/postgres-and-json/

Other folks have done similar things; you can search on "postgresql audit tables using json" for alternatives.

--
Adrian Klaver
adrian.klaver@aklaver.com
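[Editor's note: the JSON approach can be sketched as a single generic trigger function writing to_jsonb(OLD) into one log table. This is an illustrative sketch, not taken from the linked post; all names are invented:]

```sql
-- Sketch: one generic JSON audit table plus one trigger function
-- shared by every audited table.
CREATE TABLE deleted_rows_json (
    id           bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    source_table text,
    deleted_at   timestamptz DEFAULT now(),
    deleted_row  jsonb
);

CREATE OR REPLACE FUNCTION log_deletes_json()
RETURNS TRIGGER AS $$
BEGIN
    -- to_jsonb(OLD) captures the whole deleted row, whatever its shape
    INSERT INTO deleted_rows_json (source_table, deleted_row)
    VALUES (TG_TABLE_NAME, to_jsonb(OLD));
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
```

Because the row is stored as jsonb, source-table schema changes do not require any change to the audit table.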
On Wed, Jun 12, 2024 at 12:50:27AM +0530, veem v wrote:
> My apology, if interpreting it wrong way. It doesn't make much difference
> though, but do you mean something like below?

If you really have totally different structures across all tables, and you don't want to use pgaudit (which is the best solution), and you don't want to have a custom function per table, then I'd use the hstore datatype, and store all deleted rows, regardless of where they came from, in a single log table (potentially partitioned). Something like:

create table deleted_rows (
    id int8 generated always as identity primary key,
    source_schema text,
    source_table text,
    deleting_user text,
    deleted_at timestamptz,
    deleted_row hstore
);

create function log_deletes() returns trigger as $$
BEGIN
    INSERT INTO deleted_rows (source_schema, source_table, deleting_user, deleted_at, deleted_row)
    VALUES (TG_TABLE_SCHEMA, TG_TABLE_NAME, CURRENT_USER, now(), hstore(OLD));
    return OLD;
END;
$$ language plpgsql;

and then just:

create trigger x after delete on tablex for each row execute function log_deletes();

or something like this, if I made any typos.

Best regards,
depesz
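[Editor's note: rows captured in hstore form can be turned back into typed records with the hstore extension's populate_record(). A sketch assuming the extension is installed and a source table tablex exists:]

```sql
-- Sketch: requires CREATE EXTENSION hstore; 'tablex' is illustrative.
-- Rebuild typed rows from the hstore snapshots of deleted tablex rows.
SELECT (populate_record(NULL::tablex, d.deleted_row)).*
FROM deleted_rows d
WHERE d.source_table = 'tablex';
```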
On Tue, Jun 11, 2024 at 2:53 PM veem v <veema0000@gmail.com> wrote:
On Tue, 11 Jun 2024 at 17:03, hubert depesz lubaczewski <depesz@depesz.com> wrote:
On Tue, Jun 11, 2024 at 12:47:14AM +0530, veem v wrote:
> to be called from ~50 triggers? or any other better approach exists to
> handle this?
pgaudit extension?
Or just write all the changes to single table?
Or use dynamic queries that will build the insert based on the name of
table the event happened on?
Or pass arguments?
Best regards,
depesz

Thank you so much. I hope you mean something as below when you say making it dynamic. Because we have the audit tables having more number of columns as compared to the source table and for a few the column name is a bit different.

-- Trigger for deletes
CREATE TRIGGER before_delete
BEFORE DELETE ON source_table
FOR EACH ROW EXECUTE FUNCTION log_deletes();
-- Trigger for source_table1
CREATE TRIGGER before_delete_source_table1
BEFORE DELETE ON source_table1
FOR EACH ROW EXECUTE FUNCTION log_deletes();
-- Trigger for source_table2
CREATE TRIGGER before_delete_source_table2
BEFORE DELETE ON source_table2
FOR EACH ROW EXECUTE FUNCTION log_deletes();
CREATE OR REPLACE FUNCTION log_deletes()
RETURNS TRIGGER AS $$
BEGIN
IF TG_TABLE_NAME = 'source_table1' THEN
INSERT INTO delete_audit1 ( col1, col2, col3)
VALUES (OLD.col1, OLD.col2, OLD.col3);
ELSIF TG_TABLE_NAME = 'source_table2' THEN
INSERT INTO delete_audit2 ( col4, col5, col6)
VALUES (OLD.col4, OLD.col5, OLD.col6);
-- Add more conditions for other tables
Dear god, no.
Since all the functions are going to be similar, I'd write a shell script to generate all the triggers, one per relevant table. If you're going to record every field, then save effort and don't bother enumerating them. You'll need to dig into the PG catalog's guts to list columns in the correct order, but Google and Stack Exchange make that easy enough.
(And, of course, that single trigger would be SLOW.)
This is essentially what we did 25 years ago to "logically replicate" data from our OLTP system to the OLAP system. There were two log tables for every table to be replicated: foo_LOG1 and foo_LOG2. The trigger wrote to foo_LOG1 on even days and foo_LOG2 on odd days. It even added a current_timestamp column and an action_code column ("I" for insert, "D" for delete, and "U" for update).
At around 01:00, a batch job copied out all of "yesterday's" log data (there were 80-90 tables), and then truncated the table.
On Tue, 11 Jun 2024 at 18:25, Ron Johnson <ronljohnsonjr@gmail.com> wrote:
Since all the functions are going to be similar, I'd write a shell script to generate all the triggers, one per relevant. If you're going to record every field, then save effort, and don't bother enumerating them. You'll need to dig into the PG catalog's guts to list columns in the correct order, but Google and Stack Exchange makes that easy enough.
I'd use a DO block and write a loop in PL/PGSQL. Then everything stays in Postgres and you have all the support of Postgres when writing your SQL-writing code (quote_ident, the reg* types, etc.).
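[Editor's note: such a generator could look like the sketch below. It assumes a generic trigger function log_deletes() already exists and, for illustration, targets every ordinary table in the public schema:]

```sql
-- Sketch: create a delete-audit trigger on every ordinary table in
-- the public schema. Assumes log_deletes() already exists.
DO $$
DECLARE
    t regclass;
BEGIN
    FOR t IN
        SELECT c.oid::regclass
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE n.nspname = 'public'
          AND c.relkind = 'r'   -- ordinary tables only
    LOOP
        EXECUTE format(
            'CREATE TRIGGER before_delete_audit
                 BEFORE DELETE ON %s
                 FOR EACH ROW EXECUTE FUNCTION log_deletes()', t);
    END LOOP;
END;
$$;
```

Trigger names are scoped per table, so reusing before_delete_audit for each table is fine.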