Re: Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x ?

From: Nathanael Terrien
Subject: Re: Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x ?
Date:
Msg-id: 15fb6aee107e469393d79dc11556f2eb@EXCH2013.mustinformatique.fr
In response to: Re: Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x ?  (Alvaro Herrera <alvherre@2ndquadrant.com>)
List: pgsql-odbc
>Exactly what version is 9.x?

9.3.3 and 9.4 RC1

-----Original Message-----
From: Alvaro Herrera [mailto:alvherre@2ndquadrant.com]
Sent: Friday, December 5, 2014 12:59
To: Nathanael Terrien
Cc: pgsql-odbc@postgresql.org
Subject: Re: [ODBC] Exponential processing time for multiple SELECT FOR UPDATE / UPDATE in a single transaction with PostgreSQL 9.x ?

Nathanael Terrien wrote:
> Hi List.
>
> Our application does something like this, through psqlodbc :
> ------------------------------------------------------------------------------
> Open transaction (« BEGIN »)
> FOR x TO y STEP 1
>    Do Stuff
>    « SELECT col1 FROM table1 WHERE condition1 FOR UPDATE ; »
>   Do Stuff
>   « UPDATE table1 SET col1=z WHERE condition1 ; »
>   Do Stuff
> NEXT x
> End transaction (« COMMIT »)
> ------------------------------------------------------------------------------
>
> Against PostgreSQL 8.4 : no problem.
> Against PostgreSQL 9.x : starting at about a few hundred loops (locks), the process slows down, and continues to slow down exponentially, until the COMMIT happens.
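
Spelled out as plain SQL (table1, col1, condition1 and z being the placeholders from the pseudocode), the sequence the application sends is essentially:

BEGIN;
SELECT col1 FROM table1 WHERE condition1 FOR UPDATE;   -- lock the row
UPDATE table1 SET col1 = z WHERE condition1;           -- rewrite it
-- ... the SELECT FOR UPDATE / UPDATE pair repeats a few hundred times ...
COMMIT;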

Exactly what version is 9.x?  We solved a number of issues in FOR UPDATE locking in early 9.3 minor releases; these should all be fixed in 9.3.5.
You might be running into the problem supposedly fixed by the below commit, but it'd imply you're on 9.3.2 or earlier, which is unadvisable because of other data-eating bugs:

commit 0bc00363b9b1d5ee44a0b25ed2dfc83f81e68258
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Fri Dec 13 17:16:25 2013 -0300

    Rework MultiXactId cache code

    The original performs too poorly; in some scenarios it shows way too
    high while profiling.  Try to make it a bit smarter to avoid excessive
    cost.  In particular, make it have a maximum size, and have entries be
    sorted in LRU order; once the max size is reached, evict the oldest
    entry to avoid it from growing too large.

    Per complaint from Andres Freund in connection with new tuple freezing
    code.


Now that I think about this, maybe the cache in your case is not being useful for some reason or other, and it's causing more of a slowdown.
Is this plpgsql?  If so, do you have EXCEPTION blocks in plpgsql code?
Maybe SAVEPOINTs somewhere?  (Does the ODBC driver create SAVEPOINTs
automatically?)
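
To make that question concrete: each SAVEPOINT opens a subtransaction (a plpgsql EXCEPTION block opens one implicitly as well), and when a later subtransaction re-locks a row that an earlier one already locked inside the same transaction, PostgreSQL may have to record several lockers for the row through a MultiXactId, so the per-row bookkeeping can keep growing for the life of the transaction. A sketch of that pattern, with hypothetical savepoint names and the table1/col1/condition1 placeholders from the pseudocode above:

BEGIN;

SAVEPOINT sp_1;   -- as a driver might issue before each statement
SELECT col1 FROM table1 WHERE condition1 FOR UPDATE;
UPDATE table1 SET col1 = 1 WHERE condition1;
RELEASE SAVEPOINT sp_1;

SAVEPOINT sp_2;   -- next loop iteration: a new subtransaction touches the same row
SELECT col1 FROM table1 WHERE condition1 FOR UPDATE;
UPDATE table1 SET col1 = 2 WHERE condition1;
RELEASE SAVEPOINT sp_2;

-- ... a few hundred more iterations, all inside the same transaction ...

COMMIT;

If the driver issues those SAVEPOINTs on its own, the application loop from the first message would hit this pattern without ever writing a SAVEPOINT itself.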

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

