Discussion: Can LISTEN/NOTIFY deal with more than 100 events per second?
Hi,

I am prototyping a system which sends all INSERT/UPDATE/DELETE events to a third-party application. I do:

CREATE TABLE data (id SERIAL PRIMARY KEY, data VARCHAR(255));
CREATE TABLE log (op CHAR(6), id INTEGER, data VARCHAR(255));

CREATE OR REPLACE RULE send_notify AS ON INSERT TO log DO ALSO NOTIFY logevent;

CREATE OR REPLACE FUNCTION log_event() RETURNS TRIGGER AS $$
BEGIN
    IF (TG_OP = 'DELETE') THEN
        INSERT INTO log VALUES ('DELETE', OLD.id, OLD.data);
    ELSIF (TG_OP = 'UPDATE') THEN
        INSERT INTO log VALUES ('UPDATE', NEW.id, NEW.data);
    ELSIF (TG_OP = 'INSERT') THEN
        INSERT INTO log VALUES ('INSERT', NEW.id, NEW.data);
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_event_trigger AFTER INSERT OR UPDATE OR DELETE ON data
    FOR EACH ROW EXECUTE PROCEDURE log_event();

A simple client program waits for the NOTIFY logevent, queries the log table to send the changes, and then deletes what it has sent.

When I inserted rows into TABLE data at a rate of about 25 per second, the client received the notifications without any problem. But when I used 3 similar programs to feed data, which means about 75 events per second, I found that Postgres did not deliver the NOTIFY promptly: the client only got to run its SELECT query every several hundred seconds, which is too long to be acceptable.

So what I want to know is: is there anything wrong with my approach? And what frequency can LISTEN/NOTIFY support? Thanks.

Regards,
Gavin Mu
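For context, a minimal sketch of the SQL such a client would issue; the actual client program is not shown in the thread, so the exact statements below are an assumption for illustration:

-- Consumer session: register interest in the channel once.
LISTEN logevent;

-- Then, on each notification received by the client:
SELECT op, id, data FROM log;   -- forward these rows to the third party
DELETE FROM log;                -- then clear what was just read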
Gavin Mu wrote:
> CREATE OR REPLACE RULE send_notify AS ON INSERT TO log DO ALSO NOTIFY logevent;
> ...
> when I used 3 similar programs to feed data, which means about 75
> events per second, I found that Postgres did not deliver the NOTIFY
> promptly: the client only got to run its SELECT query every several
> hundred seconds, which is too long to be acceptable.

Hello Gavin,

The following from the NOTIFY docs might help:

"NOTIFY behaves like Unix signals in one important respect: if the same notification name is signaled multiple times in quick succession, recipients might get only one notification event for several executions of NOTIFY."

So if your notify could, for instance, also add a unique number to the notification name, then it will probably work as expected.

Regards,
Yeb Havinga
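To make that suggestion concrete: NOTIFY does not accept an expression as the channel name, so the rule would have to become a trigger that builds the name dynamically with EXECUTE. This is only a sketch - the logevent_<id> naming scheme and the function/trigger names are assumptions, not something posted in the thread:

-- Hypothetical per-row notification channel, so rapid notifications
-- are not collapsed into one event for the listener.
CREATE OR REPLACE FUNCTION notify_log_row() RETURNS trigger AS $$
BEGIN
    EXECUTE 'NOTIFY ' || quote_ident('logevent_' || NEW.id);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER notify_log_row_trigger AFTER INSERT ON log
    FOR EACH ROW EXECUTE PROCEDURE notify_log_row();

The catch is that the listener must already be listening on every such name, so in practice many applications accept the coalescing instead and simply drain the whole log table on each wakeup.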
> I am prototyping a system which sends all INSERT/UPDATE/DELETE events
> to a third-party application. I do:
...
> CREATE OR REPLACE RULE send_notify AS ON INSERT TO log DO
> ALSO NOTIFY logevent;

This is better off as a statement-level trigger, so you won't have to issue one notify per insert, but one per group of inserts.

It also looks like you might be reinventing a wheel - maybe you can do what you want with Slony's log shipping?

> When I inserted rows into TABLE data at a rate of about 25 per second,
> the client received the notifications without any problem. But when I
> used 3 similar programs to feed data, which means about 75 events per
> second, I found that Postgres did not deliver the NOTIFY promptly: the
> client only got to run its SELECT query every several hundred seconds,
> which is too long to be acceptable.
>
> So what I want to know is: is there anything wrong with my approach?
> And what frequency can LISTEN/NOTIFY support? Thanks.

The question is, how often does the other side need to get notified? If things are coming in that fast, there is no need for the client to check for notifies - just have it continuously poll your log table.

We can probably answer your question better if it were clearer what your program is doing and where it is failing, particularly this bit: "the client only got to run its SELECT query every several hundred seconds".

--
Greg Sabino Mullane  greg@turnstep.com
End Point Corporation  http://www.endpoint.com/
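A sketch of the change being suggested here, reusing the table and channel names from the original post (the function and trigger names below are placeholders): drop the per-row rule and send one NOTIFY per statement instead, while the existing row-level log_event trigger keeps writing the detail rows.

DROP RULE IF EXISTS send_notify ON log;

-- Fires once per INSERT statement on log, however many rows it inserted,
-- so a bulk insert produces a single notification instead of one per row.
CREATE OR REPLACE FUNCTION notify_logevent() RETURNS trigger AS $$
BEGIN
    NOTIFY logevent;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_notify_trigger AFTER INSERT ON log
    FOR EACH STATEMENT EXECUTE PROCEDURE notify_logevent();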
With your reminder I had a look at the code of the LISTEN/NOTIFY implementation. NOTIFY <name> sends a SIGUSR2 signal to each listening backend, unless that backend is the notifier itself. I guess signals arriving this frequently can't be handled in time.

2010/2/1 Yeb Havinga <yhavinga@gmail.com>:
> Gavin Mu wrote:
>> CREATE OR REPLACE RULE send_notify AS ON INSERT TO log DO ALSO NOTIFY
>> logevent;
>> ...
>> when I used 3 similar programs to feed data, which means about 75
>> events per second, I found that Postgres did not deliver the NOTIFY
>> promptly: the client only got to run its SELECT query every several
>> hundred seconds, which is too long to be acceptable.
>
> Hello Gavin,
>
> The following from the NOTIFY docs might help:
>
> "NOTIFY behaves like Unix signals in one important respect: if the same
> notification name is signaled multiple times in quick succession,
> recipients might get only one notification event for several executions
> of NOTIFY."
>
> So if your notify could, for instance, also add a unique number to the
> notification name, then it will probably work as expected.
>
> Regards,
> Yeb Havinga
Hi, Greg,

Thanks for your reply; I have described my case in more detail inline.

Regards,
Gavin Mu

2010/2/1 Greg Sabino Mullane <greg@turnstep.com>:
>> I am prototyping a system which sends all INSERT/UPDATE/DELETE events
>> to a third-party application. I do:
> ...
>> CREATE OR REPLACE RULE send_notify AS ON INSERT TO log DO
>> ALSO NOTIFY logevent;
>
> This is better off as a statement-level trigger, so you won't have
> to issue one notify per insert, but one per group of inserts.

I need all the details of a row to be logged, though I defined only one data column in this case.

> It also looks like you might be reinventing a wheel - maybe you can do
> what you want with Slony's log shipping?

Thanks for the pointer. I quickly went through the Slony-I documentation today; although I can't fully understand how it works, it seems that slon does this work by polling once a threshold value is exceeded. Maybe that is a good solution if LISTEN/NOTIFY signalling can't keep up.

>> When I inserted rows into TABLE data at a rate of about 25 per second,
>> the client received the notifications without any problem. But when I
>> used 3 similar programs to feed data, which means about 75 events per
>> second, I found that Postgres did not deliver the NOTIFY promptly: the
>> client only got to run its SELECT query every several hundred seconds,
>> which is too long to be acceptable.
>>
>> So what I want to know is: is there anything wrong with my approach?
>> And what frequency can LISTEN/NOTIFY support? Thanks.
>
> The question is, how often does the other side need to get notified? If
> things are coming in that fast, there is no need for the client to
> check for notifies - just have it continuously poll your log table.

I would prefer a 'real-time' solution, since every INSERT event is urgent - think of it as an alert that needs to be reported immediately. That is also why I don't like polling.

> We can probably answer your question better if it were clearer what your
> program is doing and where it is failing, particularly this bit:
>
> "the client only got to run its SELECT query every several hundred seconds"

Sorry that I didn't explain this clearly. My demo client waits in select(3C) for the notifications; once one is received, it runs a SELECT on the log table, prints the result on the screen, and then deletes those rows from the log table. When the INSERT rate is about 75 per second, the client produces no output for several hundred seconds, and meanwhile I can see from 'SELECT COUNT(*) FROM log' in psql that the rows in the log table grow steadily to about 30K+ before the client deletes them. I guess the backend can't deal with the signals on time.
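One side note on that read-then-delete sequence: any rows inserted between the SELECT and the DELETE would be removed without ever being printed. On 8.2 or later, a simple alternative (sketch only) is to drain and read in a single statement:

-- Returns the removed rows, so nothing can slip in between a separate
-- SELECT and DELETE (DELETE ... RETURNING is available since PostgreSQL 8.2).
DELETE FROM log RETURNING op, id, data;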
>>> CREATE OR REPLACE RULE send_notify AS ON INSERT TO log DO
>>> ALSO NOTIFY logevent;
>
>> This is better off as a statement-level trigger, so you won't have
>> to issue one notify per insert, but one per group of inserts.
>
> I need all the details of a row to be logged, though I defined only
> one data column in this case.

You misunderstood - replace the RULE that does the NOTIFY with a statement-level trigger, but keep the log insertion as a row-level trigger.

>> It also looks like you might be reinventing a wheel - maybe you can do
>> what you want with Slony's log shipping?
>
> Thanks for the pointer. I quickly went through the Slony-I documentation
> today; although I can't fully understand how it works, it seems that
> slon does this work by polling once a threshold value is exceeded.
> Maybe that is a good solution if LISTEN/NOTIFY signalling can't keep up.

Probably not - Slony uses LISTEN and NOTIFY as well. I meant more the wheel reinvention of logging changes and shipping them elsewhere. But if it's just one table, you are probably better off rolling your own.

> Sorry that I didn't explain this clearly. My demo client waits in
> select(3C) for the notifications; once one is received, it runs a SELECT
> on the log table, prints the result on the screen, and then deletes
> those rows from the log table. When the INSERT rate is about 75 per
> second, the client produces no output for several hundred seconds, and
> meanwhile I can see from 'SELECT COUNT(*) FROM log' in psql that the
> rows in the log table grow steadily to about 30K+ before the client
> deletes them. I guess the backend can't deal with the signals on time.

It's still not entirely clear what's going on, but we have a better idea now. Why would the table ever have 30,000 rows? At 75 per second, that means about a seven-minute gap - are you saying that's how long it takes before the client notices the NOTIFY? If so, that's very wrong - the time lag should be measured in sub-second intervals, so perhaps your client is doing something wrong.

--
Greg Sabino Mullane  greg@turnstep.com
End Point Corporation  http://www.endpoint.com/