Re: Use of inefficient index in the presence of dead tuples

From: Laurenz Albe
Subject: Re: Use of inefficient index in the presence of dead tuples
Date:
Msg-id: 7bbc12fe52d8907bd6c8a1421e30c6d3154ac42f.camel@cybertec.at
In response to: Re: Use of inefficient index in the presence of dead tuples  (Alexander Staubo <alex@purefiction.net>)
List: pgsql-general
On Wed, 2024-05-29 at 14:36 +0200, Alexander Staubo wrote:
> > On 29 May 2024, at 02:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > I'm unpersuaded by the idea that ANALYZE should count dead tuples.
> > Since those are going to go away pretty soon, we would risk
> > estimating on the basis of no-longer-relevant stats and thus
> > creating problems worse than the one we solve.
>
> Mind you, “pretty soon” could actually be “hours” if a pg_dump is running,
> or some other long-running transaction is holding back the xmin. Granted,
> long-running transactions should be avoided, but they happen, and the
> result is operationally surprising.

Don't do these things on a busy transactional database.
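
If you want to see what is holding back the xmin horizon at any given
moment, something along these lines should do (untested sketch; note that
replication slots can hold it back too, this only covers backends):

  -- backends with an open snapshot, oldest xmin first
  SELECT pid, datname, usename, state,
         xact_start,
         age(backend_xmin) AS xmin_age,
         left(query, 60) AS query
  FROM pg_stat_activity
  WHERE backend_xmin IS NOT NULL
  ORDER BY age(backend_xmin) DESC;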

> I have another use case where I used a transaction to lock a resource
> to prevent concurrent access. I.e. the logic did
> “SELECT … FROM … WHERE id = $1 FOR UPDATE” and held that transaction open
> for hours while doing maintenance.

That's a dreadful idea.
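
If you need to hold a lock on a resource for hours, a session-level
advisory lock is the better tool, because it does not require keeping a
transaction open (sketch; the key 42 is just a stand-in for your resource id):

  -- held until pg_advisory_unlock() or the session ends,
  -- no open transaction required
  SELECT pg_advisory_lock(42);

  -- ... do the maintenance in short transactions ...

  SELECT pg_advisory_unlock(42);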

>
> Just to clarify, this is a real use case, though the repro is of course
> artificial since the real production case is inserting and deleting rows
> very quickly.

No doubt.
Still, I think that your main trouble is long-running transactions.
They will always give you trouble on a busy PostgreSQL database.
You should avoid them.
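
To get an idea how much dead-tuple bloat such transactions leave behind,
the statistics views are your friend (sketch):

  -- tables with the most dead tuples and when they were last vacuumed
  SELECT relname, n_live_tup, n_dead_tup,
         last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC
  LIMIT 10;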

Yours,
Laurenz Albe


