Discussion: Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x?

Hi List.

Our application does something like this, through psqlodbc:
------------------------------------------------------------------------------
Open transaction (« BEGIN »)
FOR x TO y STEP 1
   Do Stuff
   « SELECT col1 FROM table1 WHERE condition1 FOR UPDATE ; »
  Do Stuff
  « UPDATE table1 SET col1=z WHERE condition1 ; »
  Do Stuff
NEXT x
End transaction (« COMMIT »)
------------------------------------------------------------------------------

Against PostgreSQL 8.4: no problem.
Against PostgreSQL 9.x: starting at about a few hundred loops (locks), the process slows down, and continues to slow down exponentially, until the COMMIT happens.

We tried with different languages (C#, Omnis Studio, plain plpgsql ...), different versions of psqlodbc (8.4.2 to 9.3.x), different versions of PostgreSQL (8.4, 9.3, 9.4 RC1), without psqlODBC ... and it all comes down to this:
The slow-down only happens with psqlodbc and PostgreSQL 9.x

So we guess it's a bug?

Regards,
Nathanael TERRIEN
Must  Informatique



On 12/05/2014 11:14 AM, Nathanael Terrien wrote:
> Hi List.
>
> Our application does something like this, through psqlodbc :
> ------------------------------------------------------------------------------
> Open transaction (« BEGIN »)
> FOR x TO y STEP 1
>     Do Stuff
>     « SELECT col1 FROM table1 WHERE condition1 FOR UPDATE ; »
>    Do Stuff
>    « UPDATE table1 SET col1=z WHERE condition1 ; »
>    Do Stuff
> NEXT x
> End transaction (« COMMIT »)
> ------------------------------------------------------------------------------
>
> Against PostgreSQL 8.4: no problem.
> Against PostgreSQL 9.x: starting at about a few hundred loops (locks), the process slows down, and continues to slow down exponentially, until the COMMIT happens.
>
> We tried with different languages (C#, Omnis Studio, plain plpgsql ...), different versions of psqlodbc (8.4.2 to 9.3.x), different versions of PostgreSQL (8.4, 9.3, 9.4 RC1), without psqlODBC ... and it all comes down to this:
> The slow-down only happens with psqlodbc and PostgreSQL 9.x
>
> So we guess it's a bug?

Maybe. Or just changed behavior between 8.4 and 9.0.

Could you write a self-contained test case to demonstrate this?
Something with the CREATE TABLE and other statements required to create
the database, and a small test program. It's pretty hard to debug otherwise.

- Heikki



> Maybe. Or just changed behavior between 8.4 and 9.0.

The tests we performed with other connection methods don't slow down (Omnis Studio + PGDAM, C# + npgsql, or just a function called multiple times in the same query window from pgAdmin).

> Could you write a self-contained test case to demonstrate this?

Yes.
Do you mind if it's a DotNet EXE?

-----Original Message-----
From: Heikki Linnakangas [mailto:hlinnakangas@vmware.com]
Sent: Friday, December 5, 2014 11:48
To: Nathanael Terrien; pgsql-odbc@postgresql.org
Subject: Re: [ODBC] Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x?

On 12/05/2014 11:14 AM, Nathanael Terrien wrote:
> Hi List.
>
> Our application does something like this, through psqlodbc :
> ----------------------------------------------------------------------
> --------
> Open transaction (« BEGIN »)
> FOR x TO y STEP 1
>     Do Stuff
>     « SELECT col1 FROM table1 WHERE condition1 FOR UPDATE ; »
>    Do Stuff
>    « UPDATE table1 SET col1=z WHERE condition1 ; »
>    Do Stuff
> NEXT x
> End transaction (« COMMIT »)
> ----------------------------------------------------------------------
> --------
>
> Against PostgreSQL 8.4: no problem.
> Against PostgreSQL 9.x: starting at about a few hundred loops (locks), the process slows down, and continues to slow down exponentially, until the COMMIT happens.
>
> We tried with different languages (C#, Omnis Studio, plain plpgsql ...), different versions of psqlodbc (8.4.2 to 9.3.x), different versions of PostgreSQL (8.4, 9.3, 9.4 RC1), without psqlODBC ... and it all comes down to this:
> The slow-down only happens with psqlodbc and PostgreSQL 9.x
>
> So we guess it's a bug?

Maybe. Or just changed behavior between 8.4 and 9.0.

Could you write a self-contained test case to demonstrate this?
Something with the CREATE TABLE and other statements required to create the database, and a small test program. It's pretty hard to debug otherwise.

- Heikki



On 12/05/2014 01:03 PM, Nathanael Terrien wrote:
>> Maybe. Or just changed behavior between 8.4 and 9.0.
>
> The tests we performed with other connection methods don't slow down (Omnis Studio + PGDAM, C# + npgsql, or just a function called multiple times in the same query window from pgAdmin).
>
>> Could you write a self-contained test case to demonstrate this?
>
> Yes.

Thanks!

> Do you mind if it's a DotNet EXE?

I'd prefer plain C; I don't have a .net development environment
installed. But if you write a short .net program to reproduce it, I
should be able to rewrite it in C pretty quickly, as long as you provide
the sources.

- Heikki



Nathanael Terrien wrote:
> Hi List.
>
> Our application does something like this, through psqlodbc :
> ------------------------------------------------------------------------------
> Open transaction (« BEGIN »)
> FOR x TO y STEP 1
>    Do Stuff
>    « SELECT col1 FROM table1 WHERE condition1 FOR UPDATE ; »
>   Do Stuff
>   « UPDATE table1 SET col1=z WHERE condition1 ; »
>   Do Stuff
> NEXT x
> End transaction (« COMMIT »)
> ------------------------------------------------------------------------------
>
> Against PostgreSQL 8.4: no problem.
> Against PostgreSQL 9.x: starting at about a few hundred loops (locks), the process slows down, and continues to slow down exponentially, until the COMMIT happens.

Exactly what version is 9.x?  We solved a number of issues in FOR UPDATE
locking in early 9.3 minor releases; these should all be fixed in 9.3.5.
You might be running into the problem supposedly fixed by the commit
below, but that would imply you're on 9.3.2 or earlier, which is inadvisable
because of other data-eating bugs:

commit 0bc00363b9b1d5ee44a0b25ed2dfc83f81e68258
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Fri Dec 13 17:16:25 2013 -0300

    Rework MultiXactId cache code

    The original performs too poorly; in some scenarios it shows way too
    high while profiling.  Try to make it a bit smarter to avoid excessive
    costs.  In particular, make it have a maximum size, and have entries be
    sorted in LRU order; once the max size is reached, evict the oldest
    entry to avoid it from growing too large.

    Per complaint from Andres Freund in connection with new tuple freezing
    code.


Now that I think about this, maybe the cache in your case is not being
useful for some reason or other, and it's causing more of a slowdown.
Is this plpgsql?  If so, do you have EXCEPTION blocks in plpgsql code?
Maybe SAVEPOINTs somewhere?  (Does the ODBC driver create SAVEPOINTs
automatically?)

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


On 12/05/2014 01:59 PM, Alvaro Herrera wrote:
> Now that I think about this, maybe the cache in your case is not being
> useful for some reason or other, and it's causing more of a slowdown.
> Is this plpgsql?  If so, do you have EXCEPTION blocks in plpgsql code?
> Maybe SAVEPOINTs somewhere?  (Does the ODBC driver create SAVEPOINTs
> automatically?)

Ah, good point. psqlodbc does create SAVEPOINTs automatically if you
run in the statement-level "rollback on error" mode. That is, Protocol=*-2
mode, which is the default.
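Concretely, in that mode the driver brackets every statement with its own savepoint, so the loop from the original mail turns into something like this on the wire (the savepoint name here is illustrative, not the driver's actual identifier):

```sql
BEGIN;
SAVEPOINT sp;                                         -- added by the driver
SELECT col1 FROM table1 WHERE condition1 FOR UPDATE;
RELEASE SAVEPOINT sp;
SAVEPOINT sp;                                         -- added by the driver
UPDATE table1 SET col1 = z WHERE condition1;
RELEASE SAVEPOINT sp;
-- ... repeated for every statement in the loop ...
COMMIT;
```

Each savepoint opens a subtransaction, so the same row ends up locked by a long series of subtransactions of a single backend, which is exactly the kind of workload the MultiXact machinery has to track.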

You could try setting Protocol=7.4-0 in the config file (or set the
corresponding option in the ODBC configuration GUI) and see if it helps.
That changes the behaviour on errors, so that when an error happens the
whole transaction is rolled back, rather than just the statement. I
don't know if that's acceptable for your application, but if it is, you
should change that setting, because that gets rid of the extra SAVEPOINT
commands, and improves performance. Even if it turns out not to be the
cause of the exponential slowdown.

- Heikki



> Exactly what version is 9.x?

9.3.3 and 9.4 RC1

-----Original Message-----
From: Alvaro Herrera [mailto:alvherre@2ndquadrant.com]
Sent: Friday, December 5, 2014 12:59
To: Nathanael Terrien
Cc: pgsql-odbc@postgresql.org
Subject: Re: [ODBC] Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x?

Nathanael Terrien wrote:
> Hi List.
>
> Our application does something like this, through psqlodbc :
> ----------------------------------------------------------------------
> --------
> Open transaction (« BEGIN »)
> FOR x TO y STEP 1
>    Do Stuff
>    « SELECT col1 FROM table1 WHERE condition1 FOR UPDATE ; »
>   Do Stuff
>   « UPDATE table1 SET col1=z WHERE condition1 ; »
>   Do Stuff
> NEXT x
> End transaction (« COMMIT »)
> ----------------------------------------------------------------------
> --------
>
> Against PostgreSQL 8.4: no problem.
> Against PostgreSQL 9.x: starting at about a few hundred loops (locks), the process slows down, and continues to slow down exponentially, until the COMMIT happens.

Exactly what version is 9.x?  We solved a number of issues in FOR UPDATE locking in early 9.3 minor releases; these should all be fixed in 9.3.5.
You might be running into the problem supposedly fixed by the below commit, but it'd imply you're on 9.3.2 or earlier, which is inadvisable because of other data-eating bugs:

commit 0bc00363b9b1d5ee44a0b25ed2dfc83f81e68258
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Fri Dec 13 17:16:25 2013 -0300

    Rework MultiXactId cache code

    The original performs too poorly; in some scenarios it shows way too
    high while profiling.  Try to make it a bit smarter to avoid excessive
    costs.  In particular, make it have a maximum size, and have entries be
    sorted in LRU order; once the max size is reached, evict the oldest
    entry to avoid it from growing too large.

    Per complaint from Andres Freund in connection with new tuple freezing
    code.


Now that I think about this, maybe the cache in your case is not being useful for some reason or other, and it's causing more of a slowdown.
Is this plpgsql?  If so, do you have EXCEPTION blocks in plpgsql code?
Maybe SAVEPOINTs somewhere?  (Does the ODBC driver create SAVEPOINTs
automatically?)

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


I reconfigured the ODBC connection setting (through the driver's Windows interface) and set the protocol to "7.4-1", and it solved the slowdown problem.
(w00t!)
Which is fine by us, as it was the default behavior before (I guess), and that's how our application is designed to handle transactions.

Thank you, guys :)

The test case is pretty simple:
--------------------------------------------------------
-- create and populate table
CREATE TABLE _slowlocks
(
  lib character varying(50),
  numbers integer
);
INSERT INTO _slowlocks VALUES ('one',0),('two',0),('three',0);
----------------------------------------------------------
--- In any language/program, run something like this :
---------------
ExecuteSQL("BEGIN;")
FOR(i=0; i<1000; i++)
    ExecuteSQL("SELECT numbers FROM _slowlocks WHERE lib='two' FOR UPDATE;")
    ExecuteSQL("UPDATE _slowlocks SET numbers= numbers +1 WHERE lib='two';")
NEXT i
ExecuteSQL("COMMIT;")
----------------------------------------------------------

With protocol set to "7.4-2", it took 1 minute and 11 seconds.
With protocol set to "7.4-1", it took 1.2 seconds ...
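The gap lines up with the extra per-statement subtransaction traffic in *-2 mode. A small Python sketch of the command stream each setting generates for the loop above (the command shapes and savepoint name are illustrative, not the driver's exact output):

```python
# Simulate the commands psqlodbc sends for the test loop, with and
# without per-statement savepoints (Protocol=*-2 vs *-1 style).

def command_stream(iterations, per_statement_savepoints):
    """Return the list of SQL commands sent for the whole transaction."""
    cmds = ["BEGIN;"]
    statements = [
        "SELECT numbers FROM _slowlocks WHERE lib='two' FOR UPDATE;",
        "UPDATE _slowlocks SET numbers = numbers + 1 WHERE lib='two';",
    ]
    for _ in range(iterations):
        for stmt in statements:
            if per_statement_savepoints:
                cmds.append("SAVEPOINT sp;")  # opens a new subtransaction
            cmds.append(stmt)
            if per_statement_savepoints:
                cmds.append("RELEASE SAVEPOINT sp;")
    cmds.append("COMMIT;")
    return cmds

plain = command_stream(1000, per_statement_savepoints=False)
wrapped = command_stream(1000, per_statement_savepoints=True)
print(len(plain), len(wrapped))  # 2002 vs 6002 commands
```

In *-2 mode the 1000-iteration loop issues about 4000 extra SAVEPOINT/RELEASE commands, i.e. 2000 subtransactions, on top of the 2000 statements themselves, and every re-lock of the same row happens under a fresh subtransaction.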


-----Original Message-----
From: Heikki Linnakangas [mailto:hlinnakangas@vmware.com]
Sent: Friday, December 5, 2014 13:06
To: Alvaro Herrera; Nathanael Terrien
Cc: pgsql-odbc@postgresql.org
Subject: Re: [ODBC] Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x?

On 12/05/2014 01:59 PM, Alvaro Herrera wrote:
> Now that I think about this, maybe the cache in your case is not being
> useful for some reason or other, and it's causing more of a slowdown.
> Is this plpgsql?  If so, do you have EXCEPTION blocks in plpgsql code?
> Maybe SAVEPOINTs somewhere?  (Does the ODBC driver create SAVEPOINTs
> automatically?)

Ah, good point. psqlodbc does create SAVEPOINTs automatically if you run in the statement-level "rollback on error" mode. That is, Protocol=*-2 mode, which is the default.

You could try setting Protocol=7.4-0 in the config file (or set the corresponding option in the ODBC configuration GUI) and see if it helps.
That changes the behaviour on errors, so that when an error happens the whole transaction is rolled back, rather than just the statement. I don't know if that's acceptable for your application, but if it is, you should change that setting, because that gets rid of the extra SAVEPOINT commands, and improves performance. Even if it turns out not to be the cause of the exponential slowdown.

- Heikki