ERROR: Cannot insert a duplicate key into unique index pg_class_relname_nsp_index

From: Kragen Sitaker
Subject: ERROR: Cannot insert a duplicate key into unique index pg_class_relname_nsp_index
Date:
Msg-id: 20040109215656.2CBCCC23A0@mail.airwave.com
Responses: Re: ERROR: Cannot insert a duplicate key into unique index pg_class_relname_nsp_index  (Tom Lane <tgl@sss.pgh.pa.us>)
           Re: ERROR: Cannot insert a duplicate key into unique index pg_class_relname_nsp_index  (Martijn van Oosterhout <kleptog@svana.org>)
List: pgsql-general
ERROR:  Cannot insert a duplicate key into unique index pg_class_relname_nsp_index

We've been getting this error in our application every once in a while
--- typically once an hour to once a day, although it varies over time.
The daemon that gets the error exits and restarts a few seconds later.
Usually it's fine then, but sometimes the error happens three or more
times in a row.

Occasionally, instead, we get "ERROR:  Cannot insert a duplicate key
into unique index pg_type_typname_nsp_index".

We started seeing this error on November 22, three days after we migrated
from Postgres 7.2.3 and 7.3.2 to 7.3.4.  We still see the error with
7.3.5, but not with 7.4.0.  We're not sure we're quite ready to migrate
all of our customers to 7.4.0 yet, though.

The daemon that gets this error does the following every 15 seconds
(sketched below):
- start a transaction
- execute a complex and sometimes slow SELECT INTO query, creating a
  temporary table
- lock another table in exclusive mode
- lock the temporary table too, for no good reason; this is
  embarrassing but could conceivably relate to the bug
- delete the contents of the other table
- copy the contents of the temporary table into the other table
- drop the temporary table (again, embarrassing, sorry)
- commit
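
Schematically, one cycle looks roughly like this.  The table and column
names are just placeholders, not our real ones, and our real SELECT is
far more complex, but the sequence of statements is the same:

    BEGIN;

    -- build the fresh result set in a temporary table
    SELECT id, sum(amount) AS total
      INTO TEMP tmp_summary
      FROM detail
     GROUP BY id;

    -- lock the table our readers actually use
    LOCK TABLE summary IN EXCLUSIVE MODE;

    -- the pointless lock on the temporary table
    LOCK TABLE tmp_summary IN EXCLUSIVE MODE;

    -- replace the contents wholesale
    DELETE FROM summary;
    INSERT INTO summary SELECT * FROM tmp_summary;

    -- the equally pointless explicit drop
    DROP TABLE tmp_summary;

    COMMIT;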

Reducing the interval from 15 seconds to 0.2 seconds makes the error
happen several times an hour.  We hoped it might make the error happen
several times a minute.

We're doing this temp-table dance to minimize the time the other table
is locked, and to keep anyone who's reading the other table from seeing
an incomplete list of records.

It does all of this in four queries.  The second query, the one that
merely creates and populates the temporary table, is the one that gets
the error --- understandably, since I wouldn't expect the other queries
to insert into pg_class or pg_type.

Creating and destroying a temporary table with a lot of attributes every
second is causing other problems, too; the indices on the pg_attribute
table grow to hundreds of megabytes in size, and for some reason,
reindexing our system tables corrupts our database.  These huge indices
slow down access to attributes of tables, and therefore our whole
application.
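
One rough way to watch that bloat is the page counts the planner keeps
in pg_class; relpages is measured in disk blocks (normally 8 kB) and is
only refreshed by VACUUM or ANALYZE, so the numbers are approximate:

    SELECT relname, relpages, relpages * 8192 AS approx_bytes
      FROM pg_class
     WHERE relname IN ('pg_attribute_relid_attnam_index',
                       'pg_attribute_relid_attnum_index',
                       'pg_class_relname_nsp_index',
                       'pg_type_typname_nsp_index')
     ORDER BY relpages DESC;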

We originally ran the slow SELECT INTO query in an Apache handler, because
we thought it would be fast.  Usually it took tens of milliseconds.
But every once in a while, on a loaded system, while other processes were
updating the tables it was selecting from, the query would take 2, 5,
10, 20, 30, even 60 seconds.  That's a long time for a web page to load.
We never did figure out why it was slow.  Now, the slow query happens
in a daemon, and the Apache handler just queries the table populated by
the daemon, which reliably runs fast.  Do you have any suggestions for
how to diagnose this unpredictable performance?
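
When we do manage to catch a slow run, the obvious thing to capture
would be EXPLAIN ANALYZE output for the underlying SELECT (placeholder
names again; note that EXPLAIN ANALYZE actually executes the query, so
it adds a full run of its own):

    EXPLAIN ANALYZE
    SELECT id, sum(amount) AS total
      FROM detail
     GROUP BY id;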

Many thanks.

-Kragen
