Locking issues under transaction

From: Stefano Mioli
Subject: Locking issues under transaction
Msg-id: CALY=uwSDfVXBqBaKSdDrbSnHbCz4Wb1pEteVtaEunev=eBsVTg@mail.gmail.com
List: pgsql-novice

Hello everyone,
I recently got an "out of shared memory - You might need to
increase max_locks_per_transaction" error.

This is a long-running process in which I import a bunch of CSV files
after the appropriate processing.
I loop through them (they're not big, around 12K lines tops), open
a transaction for each one, process it, close the transaction, and
then move on to the next file.

The processing I do, as far as PG is concerned, amounts to a few
SELECTs (a couple per line, no joins or anything) and then an UPDATE
of one existing row (at most one per line).
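
To make it concrete, the statement sequence PG sees for each file boils
down to something like this (table and column names are just
placeholders, not my real schema):

    BEGIN;

    -- repeated for every line of the CSV file:
    SELECT id  FROM t       WHERE bar = 'hey';   -- a couple of simple, join-free lookups
    SELECT baz FROM other_t WHERE id = 42;
    UPDATE t SET foo = 'x' WHERE bar = 'hey' AND baz = 'ho';   -- touches at most one row

    COMMIT;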

The problem is, pg_locks fills up with entries that have no
relation id, are of locktype "transactionid" and have a mode of
"ExclusiveLock".
I've seen as many as 20K rows like that in pg_locks and, aside from the
ones that, as expected, are locking the two tables I'm using, the
overwhelming majority look like this (sorry if the formatting gets
mangled, I'm posting a link to pastebin too):

"locktype" "database" "relation" "page" "tuple" "virtualxid"
"transactionid" "classid" "objid" "objsubid" "virtualtransaction"
"pid" "mode" "granted" "fastpath"
"virtualxid" "" "" "" "" "11/18291" "" "" "" "" "11/18291" "6308"
"ExclusiveLock" "t" "t"
"transactionid" "" "" "" "" "" "61840165" "" "" "" "11/18291" "6308"
"ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61843843" "" "" "" "11/18291" "6308"
"ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61833173" "" "" "" "11/18291" "6308"
"ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61835511" "" "" "" "11/18291" "6308"
"ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61846000" "" "" "" "11/18291" "6308"
"ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61838308" "" "" "" "11/18291" "6308"
"ExclusiveLock" "t" "f"
"transactionid" "" "" "" "" "" "61832936" "" "" "" "11/18291" "6308"
"ExclusiveLock" "t" "f"

http://pastebin.com/V4K3AvHD
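
For anyone who wants to reproduce that summary, a query along these
lines against pg_locks should give the same picture (just a sketch,
filtering on the importing backend's pid):

    SELECT locktype, mode, granted, count(*)
    FROM pg_locks
    WHERE pid = 6308
    GROUP BY locktype, mode, granted
    ORDER BY count(*) DESC;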

Basically, none of those records have either a database or a relation,
only a transaction id.

The only different row is the first one, which I take to be the
transaction acquiring a lock on itself.

All the locks are being acquired by the same connection, and
apparently only when executing the UPDATE (which really is nothing more
complicated than UPDATE t SET foo = 'x' WHERE bar = 'hey' AND baz = 'ho').

When I'm done processing the file and commit the transaction, all
those thousands of rows just disappear, and pg_locks goes back to a
more normal count of between 15 and 30.
It's as if each UPDATE somehow acquired a new lock, making the row
count of pg_locks grow by roughly one every second or two.
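
To confirm that, I can run something like this from the importing
session itself, right before and right after one of the UPDATEs, and
watch whether the transactionid count grows (same placeholder names as
above):

    SELECT count(*) AS txid_locks
    FROM pg_locks
    WHERE pid = pg_backend_pid()
      AND locktype = 'transactionid';

    UPDATE t SET foo = 'x' WHERE bar = 'hey' AND baz = 'ho';

    SELECT count(*) AS txid_locks
    FROM pg_locks
    WHERE pid = pg_backend_pid()
      AND locktype = 'transactionid';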


I assume I must be doing something wrong, but I can't figure out what.

As a temporary workaround I did increase max_locks_per_transaction,
but I would really like to solve the actual problem.
I don't think increasing max_locks_per_transaction is warranted for a
use case as simple as mine.
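
For reference, the workaround is just a matter of raising that setting
in postgresql.conf and restarting, since it can't be changed at runtime
(the value below is only an example; the default is 64, and the shared
lock table holds about max_locks_per_transaction * (max_connections +
max_prepared_transactions) locks in total):

    SHOW max_locks_per_transaction;

    -- in postgresql.conf, followed by a server restart:
    -- max_locks_per_transaction = 256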

I'm on Windows Web Server 2008 R2.

PostgreSQL 9.3.5, compiled by Visual C++ build 1600, 64-bit


Can anyone shed some light?


Thanks in advance.

--
Stefano Mioli

