Re: BUG #6393: cluster sometime fail under heavy concurrent write load

From	Alvaro Herrera
Subject	Re: BUG #6393: cluster sometime fail under heavy concurrent write load
Date
Msg-id	1326295419-sup-3436@alvh.no-ip.org
In reply to	BUG #6393: cluster sometime fail under heavy concurrent write load  (maxim.boguk@gmail.com)
Responses	Re: BUG #6393: cluster sometime fail under heavy concurrent write load  (Maxim Boguk <maxim.boguk@gmail.com>)
List	pgsql-bugs
Excerpts from maxim.boguk's message of Tue Jan 10 23:00:59 -0300 2012:
> The following bug has been logged on the website:
>
> Bug reference:      6393
> Logged by:          Maxim Boguk
> Email address:      maxim.boguk@gmail.com
> PostgreSQL version: 9.0.6
> Operating system:   Linux Ubuntu
> Description:
>
> I have a table under heavy write load on PostgreSQL 9.0.6, and sometimes
> (not always, but with a more than 50% chance) I get the following error
> during CLUSTER:
>
> db=# cluster public.enqueued_mail;
> ERROR:  duplicate key value violates unique constraint
> "pg_toast_119685646_index"
> DETAIL:  Key (chunk_id, chunk_seq)=(119685590, 0) already exists.
>
> The chunk_id is different each time.
>
> No uncommon datatypes exist in the table.
>
> I am currently working on creating a reproducible test case (it seems to
> require 2-3 open write transactions on the table).

I don't see how this can happen at all, given that CLUSTER grabs an
exclusive lock on the table in question.  A better example illustrating
what you're really doing would be useful.
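For reference, a sketch of the expected behavior (the column names here are hypothetical, not taken from the report): because CLUSTER acquires an ACCESS EXCLUSIVE lock, an in-progress write transaction should make CLUSTER wait, not race with it.

```sql
-- Session 1: open a write transaction and leave it uncommitted.
BEGIN;
INSERT INTO public.enqueued_mail (id, body)      -- hypothetical columns
VALUES (1, 'test message');

-- Session 2: CLUSTER requests an ACCESS EXCLUSIVE lock, which conflicts
-- with session 1's ROW EXCLUSIVE lock, so this statement blocks here.
CLUSTER public.enqueued_mail;

-- Session 1: only after this COMMIT can session 2's CLUSTER proceed,
-- at which point no write transaction is still open on the table.
COMMIT;
```

Under that locking model, "2-3 open write transactions" during CLUSTER should not be possible, which is why a concrete reproduction script would help.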

--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

In pgsql-bugs by date:

Previous
From: Tom Lane
Date:
Message: Re: BUG #6391: insert does not insert correct value
Next
From: "Kevin Grittner"
Date:
Message: Re: FreeBSD 9.0/amd64, PostgreSQL 9.1.2, pgbouncer 1.4.2: segmentation fault