Discussion: lock - feedback
hi,

I have a short question and couldn't find an answer in the archive. I'm using the ODBC interface to the database. If I lock rows with SELECT FOR UPDATE and someone else tries the same, or tries to read the rows, I want to give him feedback that he must wait or abort. The problem is that the SELECT statement waits without feedback, until the end of the world, if the row is locked. So, is there an easy way to do this, or must I implement it myself with threads or something?

thanks,
Thomas Rokohl
Thomas Rokohl wrote:
> The problem is that the SELECT statement waits without feedback,
> until the end of the world, if the row is locked.

Actually, it will time out, and you can set that with PostgreSQL's configuration setting "statement_timeout". See the "Run-time Configuration" section of the manual for details.

-- 
Richard Huxton
Archonet Ltd
Richard Huxton wrote:
> Actually, it will time out, and you can set that with PostgreSQL's
> configuration setting "statement_timeout". See the "Run-time
> Configuration" section of the manual for details.

First: thanks for your fast answer.

I had found it in the manual, but there is only a short clause on this topic:

"Abort any statement that takes over the specified number of milliseconds. A value of zero (the default) turns off the limitation."

(By the way, in my config file it is set to zero by default, which means it will wait without a timeout:
#statement_timeout = 0   # 0 is disabled, in milliseconds
)

But I'm not sure: doesn't that mean that all statements that take longer than this time will be aborted? If so, it isn't a solution for this problem, because then I can't make a "big" request any more.
Thomas Rokohl wrote:
> But I'm not sure: doesn't that mean that all statements that take
> longer than this time will be aborted? If so, it isn't a solution for
> this problem, because then I can't make a "big" request any more.

You can change it during a session: SET statement_timeout = 2000

-- 
Richard Huxton
Archonet Ltd
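[Editor's note: Richard's suggestion can be sketched in SQL as follows. The table and column names are illustrative, not from the thread; SET LOCAL confines the change to the current transaction.]

```sql
BEGIN;
-- fail after 2 seconds instead of waiting forever on a locked row
SET LOCAL statement_timeout = 2000;
SELECT * FROM my_table WHERE id = 42 FOR UPDATE;
-- if another session holds the row lock, the statement is cancelled
-- with a "canceling statement due to statement timeout" error and the
-- client can show its "please wait" message; otherwise we now hold the
-- row lock
SET LOCAL statement_timeout = 0;   -- restore "no limit" for the big queries
UPDATE my_table SET value = 'new' WHERE id = 42;
COMMIT;
```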
Richard Huxton wrote:
> You can change it during a session: SET statement_timeout = 2000

Yes, but a timeout isn't a definite signal that the data is locked! The network, for example, can cause different delays, and I don't know the working time of each request in advance. That isn't a solution for me. I want to open a "please wait, the data is locked" window when the data is locked.

It seems as if this is a problem with Postgres :-(
> Yes, but a timeout isn't a definite signal that the data is locked!
> The network, for example, can cause different delays, and I don't know
> the working time of each request in advance. That isn't a solution for
> me. I want to open a "please wait, the data is locked" window when the
> data is locked.

I am not a big fan of twiddling statement_timeout; it creates more problems than it solves.

Thomas, SELECT ... FOR UPDATE is not really designed to hold pessimistic locks, only in-transaction locks. For example, it is considered (very) bad practice to hold a lock while waiting for input from the user.

Have you looked at contrib/userlock? With it, you can simulate pessimistic locks with a non-blocking result code.

Merlin
Thomas Rokohl wrote:
> Yes, but a timeout isn't a definite signal that the data is locked!
> The network, for example, can cause different delays.

Yes, but network timeouts will be of the order of several minutes to hours. You're not holding your locks for that long, are you? Are you really saying your SELECT FOR UPDATE queries can take more than a couple of seconds to complete?

> And I don't know the working time of each request in advance. That
> isn't a solution for me. I want to open a "please wait, the data is
> locked" window when the data is locked.

So, you want to write code something like:

  WHILE (still_trying)
    SELECT ... FOR UPDATE
    IF (failed) THEN
      IF (errcode = lock) THEN
        display_please_wait_message()
      ELSE
        display_error_message_and_abort()
      END IF
    ELSE
      got_lock = true
    END IF
  END WHILE
  IF got_lock THEN
    do_my_updates()
  END IF

All I'm saying is that around the "SELECT ... FOR UPDATE" you put two "SET statement_timeout" statements, to set a short timeout and then revert it.

> It seems as if this is a problem with Postgres :-(

Perhaps - you can apply NOWAIT to "LOCK TABLE", and I think in 8.1 to "SELECT FOR UPDATE" also. That's not much use if you're stuck with 8.0 for the moment, though. It also doesn't actually change the logic of the code above.

Or am I missing something here?

-- 
Richard Huxton
Archonet Ltd
> > Have you looked at contrib/userlock? With it, you can simulate
> > pessimistic locks with a non-blocking result code.
>
> contrib/userlock? Where can I find something about this? In the manual
> I can't find anything?!?!

http://developer.postgresql.org/cvsweb.cgi/~checkout~/pgsql/contrib/userlock/README.user_locks?rev=1.3;content-type=text%2Fplain

> mhm, what about the "lost update" problem, can I solve it with this?!?!

Probably. However, in many cases proper use of transactions is more appropriate. This will be even easier to do when we get proper assertions.

> I can't visualize how it works?

select user_write_lock(oid) from my_table where value = x;

returns 1 on success, 0 on failure.

Just be careful:

select user_write_lock(oid) from my_table where value = x order by y limit 1;

can acquire more locks than you might think, since the table has to be materialized to do the ORDER BY. Better to write:

select user_write_lock(oid) from
(
  select oid from my_table where value = x order by y limit 1
) as sub;

Also, don't use oid :). In my applications, I make a domain type called 'cuid' which pulls nextval() from a public sequence. Put that into your tables and lock on it. Just watch for dump/restore.

Merlin
Richard Huxton wrote:
> Yes, but network timeouts will be of the order of several minutes to
> hours. You're not holding your locks for that long, are you? Are you
> really saying your SELECT FOR UPDATE queries can take more than a
> couple of seconds to complete?

Normally not, but it can happen. Say something goes wrong: the network is busy, or the DB server is busy, or something like that. If the number of requests to a server is high enough, it will be slow. Whatever happens, a timeout isn't a definite signal of a lock.

> All I'm saying is that around the "SELECT ... FOR UPDATE" you put two
> "SET statement_timeout" statements, to set a short timeout and then
> revert it.

Yes, it is clear what you mean, but it can also cause trouble, and a deadlock, if the timeout is too short for the request.

> Perhaps - you can apply NOWAIT to "LOCK TABLE", and I think in 8.1 to
> "SELECT FOR UPDATE" also. That's not much use if you're stuck with 8.0
> for the moment, though.

I had read about it, but I had trouble with the beta, so I use the stable version. But NOWAIT is a definite signal of a lock; the timeout isn't one.
Merlin Moncure wrote:
> select user_write_lock(oid) from my_table where value = x;
>
> returns 1 on success, 0 on failure.

OK, I understand, thanks. That works.

But I'm a little bit confused. This problem is, from my point of view, a very frequently appearing one. Everywhere two users can edit the same record, the "lost update" problem, or the "waiting" problem ;-), can appear, and that is not rare, I think. Your solution is good and surely works fine (I will test it), but it is also unhandy. My opinion is that this problem should be solved by the database and not by the user, so I think it is a good point for a wish list ;-).
Thomas Rokohl wrote:
> Yes, it is clear what you mean, but it can also cause trouble, and a
> deadlock, if the timeout is too short for the request.

No, you can't get a deadlock. You could get an update failing when it could theoretically have worked if you had a longer timeout.

> I had read about it, but I had trouble with the beta, so I use the
> stable version. But NOWAIT is a definite signal of a lock; the timeout
> isn't one.

Actually, in either case you'll need to check the return code. Just because you added NOWAIT doesn't mean there can't be some other reason the query failed. As you have pointed out, though, if your query needs 3 seconds and you set the statement timeout to 2 seconds, then it'll never work. In practice, it's less of a problem than you might think.

-- 
Richard Huxton
Archonet Ltd
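[Editor's note: on PostgreSQL 8.1 and later, the NOWAIT form of SELECT FOR UPDATE gives exactly the immediate failure Thomas is asking for. A minimal sketch, with an illustrative table and id:]

```sql
BEGIN;
-- instead of blocking, fail immediately if another session holds the row lock
SELECT * FROM my_table WHERE id = 42 FOR UPDATE NOWAIT;
-- on conflict this raises an error like
--   ERROR: could not obtain lock on row in relation "my_table"
-- which the client can catch and turn into a "please wait, the record
-- is locked" dialog; on success the row lock is held until COMMIT
COMMIT;
```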
Merlin Moncure wrote:
> I suggest you read up on locking strategies. Pessimistic is the
> easiest way to solve this problem but rarely the best way. The worst
> problem here is that you are blocking on reads, which (drastically)
> lowers concurrency and forces your app to be prepared to handle read
> failures... this can be a huge headache.

Sure, it is a big problem, but I think you can solve it with different locking types. If you have a write lock and no read locks, it should work. I mean, if you never ban reading of the data, but guarantee with write locks that only one user can have a record in write mode, everything is fine and nothing is slow. And this is the point where I want to pop up a dialog with the message that someone else is editing the record.
> Sure, it is a big problem, but I think you can solve it with different
> locking types. If you have a write lock and no read locks, it should
> work.

With user locks I do exactly this. I develop an ISAM (COBOL) driver for PostgreSQL with pessimistic locking, including ISAM-style table locks. Play with the userlock module and you will find there are various lock types meeting different criteria.

Merlin
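[Editor's note: the application-level flow Merlin describes might look like the sketch below. Only user_write_lock(oid) returning 1/0 is confirmed in this thread; the matching user_write_unlock function and the table/column names are assumptions about contrib/userlock, so check README.user_locks for the exact names.]

```sql
-- user A: try to take the application-level write lock (non-blocking)
SELECT user_write_lock(id) FROM my_table WHERE id = 42;  -- 1 = got the lock
-- the user now edits the record in the GUI; no database transaction or
-- row lock is held open while waiting for input
UPDATE my_table SET value = 'new' WHERE id = 42;
SELECT user_write_unlock(id) FROM my_table WHERE id = 42;

-- user B, meanwhile:
SELECT user_write_lock(id) FROM my_table WHERE id = 42;  -- 0 = already locked,
-- so the client shows its "someone else is editing this record" dialog
```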
> OK, I understand, thanks. That works.
>
> My opinion is that this problem should be solved by the database and
> not by the user, so I think it is a good point for a wish list ;-).

There is no 100% correct answer to this problem. However, you are thinking in terms of pessimistic locking (like most people do), so by all means use it.

By the 'lost update' problem, I assume you mean:

  user A reads
  user B reads
  user A writes
  user B writes  <-- overwrites user A's changes

I suggest you read up on locking strategies. Pessimistic is the easiest way to solve this problem but rarely the best way. The worst problem here is that you are blocking on reads, which (drastically) lowers concurrency and forces your app to be prepared to handle read failures... this can be a huge headache.

Pure optimistic locking (in the example above) is obviously not suitable in multi-user applications if you care about your data. However, you could use timestamps + assertions to throw an error during an update if you are out of sync with the server. The major difference is that you are making the user aware of the problem when it happens, not when your app thinks it might happen. This type of lock resolution can be done via rules, triggers, or a specialized dynamic SQL procedure to resolve the record.

Also, this is off-topic for the odbc list :)

Merlin
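[Editor's note: Merlin's timestamp-based optimistic scheme can be sketched as below. The modified_at column and table are illustrative; the client remembers the timestamp it read and checks the UPDATE's row count.]

```sql
-- read, remembering the version the user saw
SELECT id, value, modified_at FROM my_table WHERE id = 42;

-- later, write back only if nobody changed the row in between
UPDATE my_table
   SET value = 'new', modified_at = now()
 WHERE id = 42
   AND modified_at = '2005-10-04 12:00:00';  -- the timestamp read above

-- if the UPDATE reports 0 rows affected, someone else saved first:
-- tell the user about the conflict instead of silently overwriting
-- their change (no locks are held between the read and the write)
```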