Re: Really unique session ID - PID + connection timestamp?

From David G. Johnston
Subject Re: Really unique session ID - PID + connection timestamp?
Date
Msg-id CAKFQuwZx7dikECsMtG8eF5vwOhuq-w7DR7jDxZ2ppWCqKeGJyQ@mail.gmail.com
In response to Re: Really unique session ID - PID + connection timestamp?  ("durumdara@gmail.com" <durumdara@gmail.com>)
List pgsql-general
On Sun, Apr 10, 2016 at 6:29 AM, durumdara@gmail.com <durumdara@gmail.com> wrote:

Dear Alban!


On 2016.04.10. 13:05, Alban Hertroys wrote:
On 10 Apr 2016, at 9:07, Durumdara <durumdara@gmail.com> wrote:

Dear Adrian!

Again: as far as I can see, the leading blocks in the code are removed by the mailing system.

We have an "ourlocks" table which holds records (TableName, RecordID, SessionInnerID, TimeStamp, etc., with TableName/RecordID as the primary key).

If anybody wants to lock a record "for a long time", "across transactions", they try to insert a new record here.
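The scheme described here can be sketched roughly as follows; the exact DDL, column types, and the example table name `products` are assumptions based on the description, not the poster's actual schema:

```sql
-- Hypothetical DDL for the "ourlocks" logical-lock table described above.
CREATE TABLE ourlocks (
    tablename      text        NOT NULL,
    recordid       bigint      NOT NULL,
    sessioninnerid bigint      NOT NULL,
    locked_at      timestamptz NOT NULL DEFAULT now(),
    PRIMARY KEY (tablename, recordid)
);

-- Acquiring a logical lock: the INSERT fails with a unique violation
-- if another session already holds a lock on that record.
INSERT INTO ourlocks (tablename, recordid, sessioninnerid)
VALUES ('products', 42, 1001);
```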
Why are those records being locked? Reading on, it seems like you're trying to solve a fairly standard concurrency problem. Any RDBMS worth its salt can handle that for you; you don't need to do any of that manually.

These are not real locks; they are logical locks.
Products and offers are edited over a long period.
But we must save the subdata. This is not a Word-like document that can be saved at once, in a single transaction.
While a product is being edited, we must protect it from other users' edits.
But its subdata must be posted/committed to the DB, for example shipping, article quantity changes, vouchers, etc.




This sounds much more like a use-case for sub-transactions and select for update (which puts a temporary RDBMS-controlled lock on the relevant records) than for manual locking.
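A minimal sketch of the `SELECT ... FOR UPDATE` approach Alban mentions, using a hypothetical `products` table (not from the original thread):

```sql
BEGIN;
-- Takes a row-level lock managed by PostgreSQL: other transactions that
-- try to UPDATE or SELECT ... FOR UPDATE this row will block until this
-- transaction commits or rolls back.
SELECT * FROM products WHERE id = 42 FOR UPDATE;

-- ... edit the row and its subdata here ...

COMMIT;  -- the row lock is released automatically
```

Note that this lock only lasts for the duration of the transaction, which is exactly the limitation being discussed in this thread.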

Yes, this is like a sub-transaction.
But only for the sub-data. The main data must be edited only by the first user who started the edit.
This is a long-lived "lock"-like thing, and that is what we simulate here.

Thanks for your suggestions. I will check this in our client library.

I get what you are trying to do, and was trying to figure out a way to make "FOR UPDATE" and "SKIP LOCKED" or "NOWAIT" work in concert to solve the problem.  I'm not familiar enough with the problem to have come up with a viable solution.  But I kept coming back to the theory that maybe making the database solve the "long running transaction" problem isn't the best idea.  I'd tend to gravitate toward having a dedicated "librarian" application that is responsible for accepting checkout (select) requests and processing returns (updates) over the relevant data.
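For reference, the two locking-clause modifiers mentioned behave as follows (the `products` table and its columns are hypothetical; `SKIP LOCKED` requires PostgreSQL 9.5 or later):

```sql
-- NOWAIT: raise an error immediately instead of waiting
-- if the row is already locked by another transaction.
SELECT * FROM products WHERE id = 42 FOR UPDATE NOWAIT;

-- SKIP LOCKED: silently skip rows that are locked by others,
-- useful for queue-like "take the next available item" patterns.
SELECT * FROM products
WHERE status = 'pending'
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;
```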

PostgreSQL doesn't make pessimistic concurrency via long-running transactions that fun to implement - namely because it is not very scalable and can wreak havoc on the system if done improperly, as many of the maintenance routines can be prevented from doing their work.  That said, it's likely there is some way to accomplish your goal.

In fact, I just remembered that we implemented "advisory locks" for just that reason.
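Advisory locks fit this case because a session-level advisory lock survives transaction boundaries and is held until it is explicitly released or the session ends. A minimal sketch, where the key `42` stands in for an application-chosen bigint (e.g. derived from table name and record id):

```sql
-- Try to take a session-level advisory lock; returns true if acquired,
-- false (without blocking) if another session already holds it.
SELECT pg_try_advisory_lock(42);

-- ... long-running edit spanning multiple transactions ...

-- Release the lock; returns true if it was held by this session.
SELECT pg_advisory_unlock(42);
```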


David J.



In the pgsql-general list, by send date:

Previous
From: "Bannert Matthias"
Date:
Message: Re: max_stack_depth problem though query is substantially smaller
Next
From: Michael Nolan
Date:
Message: Re: Bypassing NULL elements in row_to_json function