Discussion: How to silence constraint violation logging for an INSERT
Hi all,

I have a question regarding unique constraints, which I've tried to describe in general terms to keep things simple.

I'm working on an application that, amongst other things, may add a row to a table. This table has a primary key defined over two (of the three) fields, which forces the combined value to be unique.

An end-user of this system can cause the application to add a row to a table, based on data supplied by that end-user (a create-like command). It can happen that end-users repeat themselves: rerunning the same activity with the same data. Logically, repeating the activity doesn't make sense; the application should fail the second (and all subsequent) attempts with a meaningful error message.

When writing the application, there was a deliberate design decision: rather than a read-modify-write cycle, with the corresponding overhead of locking and the resulting serialisation, the software simply lets the INSERT fail (due to the primary-key uniqueness constraint). If this happens, the transaction is rolled back and an error message returned.

The software can identify whether the problem is due to an end-user repeating an earlier action by looking at the class code of the SQL error ("23" == integrity constraint violation). This allows us to return the correct error.

This works fine: the correct error message is reported and the system behaves as it should. There's one problem: if a user repeats their activity then PostgreSQL logs the corresponding constraint violation:

    ERROR:  duplicate key value violates unique constraint [..]

The log files may contain many such messages, depending on the usage pattern of the end-user. Including all these messages in the log file is distracting.

The question is: can we suppress the logging of these messages, but allow other error messages to be logged normally?

Cheers,

Paul.
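For concreteness, the setup Paul describes might look like the following sketch. The table and column names here are hypothetical (they are not given in the thread); the point is the two-column primary key and the SQLSTATE class the application inspects.

```sql
-- Hypothetical schema: the primary key spans two of the three columns,
-- so the combined (user_id, action_key) value must be unique.
CREATE TABLE user_actions (
    user_id    integer,
    action_key text,
    payload    text,
    PRIMARY KEY (user_id, action_key)
);

INSERT INTO user_actions VALUES (1, 'create-foo', 'first attempt');

-- A repeated attempt fails with SQLSTATE 23505 (class "23", integrity
-- constraint violation), which the application checks for:
INSERT INTO user_actions VALUES (1, 'create-foo', 'second attempt');
-- ERROR:  duplicate key value violates unique constraint "user_actions_pkey"
```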
On Fri, Apr 15, 2011 at 05:55:07PM +0200, Paul Millar wrote:

> ERROR:  duplicate key value violates unique constraint [..]
>
> The log files may contain many such messages, depending on the usage-pattern
> of the end-user. Including all these messages in the log file is distracting.
>
> The question is: can we suppress the logging of these messages, but allow
> other error messages to be logged normally?

No, but I should think grep -v is your friend here.

A

--
Andrew Sullivan
ajs@crankycanuck.ca
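The grep -v suggestion amounts to post-filtering the server log. A minimal sketch (the sample log file here is fabricated for illustration; in practice you would point grep at the file under your log_directory setting):

```shell
# Create a small sample log standing in for the real PostgreSQL log file.
cat > sample.log <<'EOF'
ERROR:  duplicate key value violates unique constraint "foo_pkey"
ERROR:  out of memory
ERROR:  duplicate key value violates unique constraint "foo_pkey"
EOF

# grep -v drops only the duplicate-key noise; all other errors pass through.
grep -v 'duplicate key value violates unique constraint' sample.log > filtered.log
cat filtered.log
```

Note this does not stop the messages being written in the first place; it only hides them at review time.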
On Fri, Apr 15, 2011 at 10:55 AM, Paul Millar <paul.millar@desy.de> wrote:
> [... full problem description snipped ...]
>
> The question is: can we suppress the logging of these messages, but allow
> other error messages to be logged normally?
Consider using INSERT ... SELECT ... WHERE NOT EXISTS ... from the app, so as to not insert the row if the key is already in the table, and checking the number of rows affected (or use RETURNING) to see if the insert hit. You can still see duplicate key errors if you do this (race condition), but they should be much rarer.

You could also write a function to wrap the insert and trap the error, but that's pretty hacky imo.

merlin
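Both of Merlin's suggestions can be sketched as follows, assuming a hypothetical table user_actions(user_id, action_key, payload) with a primary key on (user_id, action_key); the names and values are illustrative, not from the thread:

```sql
-- Variant 1: only insert when the key is absent. RETURNING (or the
-- reported row count) tells the application whether the insert happened;
-- zero rows means the end-user repeated an earlier action.
INSERT INTO user_actions (user_id, action_key, payload)
SELECT 1, 'create-foo', 'some payload'
WHERE NOT EXISTS (
    SELECT 1 FROM user_actions
    WHERE user_id = 1 AND action_key = 'create-foo'
)
RETURNING user_id, action_key;

-- Variant 2: a wrapper function that traps the unique violation from the
-- race-condition window and converts it into a boolean result.
CREATE FUNCTION insert_action(p_user integer, p_key text, p_payload text)
RETURNS boolean AS $$
BEGIN
    INSERT INTO user_actions VALUES (p_user, p_key, p_payload);
    RETURN true;
EXCEPTION WHEN unique_violation THEN
    RETURN false;  -- duplicate: report failure without re-raising
END;
$$ LANGUAGE plpgsql;
```

A side effect of variant 2 is relevant to the original question: an error trapped by a plpgsql EXCEPTION block never propagates out of the function, so the server does not log it as an ERROR.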
Paul Millar <paul.millar@desy.de> writes:
> [... full problem description snipped ...]
>
> The question is: can we suppress the logging of these messages, but allow
> other error messages to be logged normally?

Sure. Modify the application as such...
begin;
-- do whatever
set log_min_messages to fatal;
insert ...;  -- could fail
reset log_min_messages;
commit;

--
Jerry Sievers
Postgres DBA/Development Consulting
e: gsievers19@comcast.net
p: 305.321.1144
On Fri, Apr 15, 2011 at 02:04:49PM -0400, Jerry Sievers wrote:

> set log_min_messages to fatal;

I thought changing log_min_messages required superuser access?
(That's what the docs say, and what I'd expect too.)

A

--
Andrew Sullivan
ajs@crankycanuck.ca
Andrew Sullivan <ajs@crankycanuck.ca> writes:

> I thought changing log_min_messages required superuser access?
> (That's what the docs say, and what I'd expect too.)

Oops!... Yup. Forgot about that. I'll go eat some worms now :-)

--
Jerry Sievers
Postgres DBA/Development Consulting
e: gsievers19@comcast.net
p: 305.321.1144