One tuple per transaction

From: Tambet Matiisen
Subject: One tuple per transaction
Date:
Msg-id: A66A11DBF5525341AEF6B8DE39CDE77008804D@black.aprote.com
Responses: Re: One tuple per transaction  (Josh Berkus <josh@agliodbs.com>)
List: pgsql-performance
Hi!

In one of our applications we have a database function which recalculates
COGS (cost of goods sold) for a certain period. This involves deleting a
bunch of rows from one table, inserting them again in the correct order
and updating them one by one (sometimes one row twice) to reflect the
current state. The problem is that this generates an enormous number of
dead tuples in that table.
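
To make it concrete, here is a throwaway sketch with a made-up table
(not our real schema, and the numbers are invented), just to show the
shape of what the function does:

  -- made-up table standing in for the real one
  CREATE TEMP TABLE cogs (period text, item_id int, cost numeric);
  INSERT INTO cogs SELECT '2005-03', g, 1.0 FROM generate_series(1, 10000) g;

  BEGIN;
  -- throw away the rows for the period being recalculated
  DELETE FROM cogs WHERE period = '2005-03';
  -- re-insert them in the correct order
  INSERT INTO cogs
  SELECT '2005-03', g, 1.0 FROM generate_series(1, 10000) g ORDER BY g;
  -- then adjust the rows (sometimes the same row twice); every UPDATE
  -- leaves yet another dead row version behind
  UPDATE cogs SET cost = cost * 1.05 WHERE period = '2005-03';
  UPDATE cogs SET cost = cost + 0.10 WHERE period = '2005-03';
  COMMIT;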

If I'm correct, dead tuples must also be scanned during table and index
scans, so a lot of dead tuples slows down queries considerably,
especially when the table no longer fits into shared buffers. And since
I'm inside a transaction, I can't VACUUM to get rid of those tuples. On
one occasion the page count for a table went from 400 to 22000 by the end.
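
(For reference, the page count can be read from pg_class; 'cogs' here
stands in for the real table name, and relpages is only refreshed by
VACUUM/ANALYZE:)

  -- relpages is the number of 8 kB heap pages
  SELECT relname, relpages, reltuples
  FROM pg_class
  WHERE relname = 'cogs';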

All this made me wonder why a new tuple is created after every update.
One tuple per transaction should be enough, because you always commit or
roll back the transaction as a whole. And my observations seem to
indicate that a new index tuple is created after a column update even if
that column is not indexed.
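
The heap side of that is easy to see from the ctid system column, which
gives the physical location of the current row version (again using the
made-up cogs table from above, which has no index on cost):

  SELECT ctid, item_id, cost FROM cogs WHERE item_id = 42;
  -- updating a non-indexed column still produces a new row version,
  -- so ctid points somewhere else afterwards
  UPDATE cogs SET cost = cost + 1 WHERE item_id = 42;
  SELECT ctid, item_id, cost FROM cogs WHERE item_id = 42;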

One tuple per transaction would save a lot of I/O bandwidth, so I believe
there must be a reason why it isn't implemented that way. Or was my
assumption wrong that dead tuples must be read from disk?

  Tambet
