Updating large tables without dead tuples

From: ldh@laurent-hasson.com
Subject: Updating large tables without dead tuples
Date:
Msg-id: BY2PR15MB0872700CE29FE00DC9E9647885CC0@BY2PR15MB0872.namprd15.prod.outlook.com
Responses: Re: Updating large tables without dead tuples
           Re: Updating large tables without dead tuples
List: pgsql-performance

Hello

 

I work with a large, wide table (about 300 million rows, about 50 columns), and from time to time we get business requirements to make some modifications. Sometimes it's just a plain mistake. This happened to us a few weeks ago: someone made a mistake upstream, the source data mangled a zip code, and we had to patch a single column of this large, wide table on our end.

 

Anyway… the query we ran was:

    update T set source_id = substr(sourceId, 2, 10);

It took about 10 hours and created hundreds of millions of dead tuples, causing another couple of hours of vacuuming.

 

This was done during a maintenance window, and that table is read-only except when we ETL data into it on a weekly basis, so I was just wondering why I should pay the "bloat" penalty for this type of transaction. Is there a trick that could be used here?
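One trick that is sometimes suggested for exactly this situation (read-only table, exclusive maintenance window) is to rewrite the table once and swap it in, rather than updating in place: freshly inserted rows are never "updated", so no dead tuples are produced. The sketch below uses a hypothetical minimal table `t(id, source_id)`; the trade-offs are roughly double the disk space during the copy and an exclusive lock for the whole operation.

```sql
BEGIN;
LOCK TABLE t IN ACCESS EXCLUSIVE MODE;

-- Build a clean copy with the corrected column; LIKE ... INCLUDING ALL
-- carries over defaults, constraints, and index definitions.
CREATE TABLE t_new (LIKE t INCLUDING ALL);
INSERT INTO t_new (id, source_id)
SELECT id, substr(source_id, 2, 10)
FROM t;

-- Swap the new heap in under the old name.
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;
COMMIT;
```

For a very wide 300M-row table it may be faster to create `t_new` without indexes, load it, and build the indexes afterwards; grants, foreign keys, and anything else referencing the old table (views, sequences owned by it) would also need to be recreated by hand. This is a sketch of the general technique, not a drop-in script for the table described above.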

 

More generally, I suspect the MVCC architecture runs so deep that something like LOCK TABLE, which would guarantee that there is no contention, couldn't be used as a hint to avoid creating dead tuples? That would be quite a performance improvement for this type of work, though.

Thank you,

Laurent.
