Re: Massive table (500M rows) update nightmare

From: Carlo Stonebanks
Subject: Re: Massive table (500M rows) update nightmare
Date:
Msg-id: hi56qq$1igu$1@news.hub.org
In reply to: Re: Massive table (500M rows) update nightmare (Scott Marlowe <scott.marlowe@gmail.com>)
List: pgsql-performance
> Got an explain analyze of the delete query?

UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.'||impt_name
WHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010
AND coalesce(source_table, '') = ''

Index Scan using audit_impt_pkey on audit_impt  (cost=0.00..92.63 rows=1 width=608) (actual time=0.081..0.244 rows=10 loops=1)
  Index Cond: ((audit_impt_id >= 319400001) AND (audit_impt_id <= 319400010))
  Filter: ((COALESCE(source_table, ''::character varying))::text = ''::text)
Total runtime: 372.141 ms

It's hard to tell how reliable these numbers are, because the caches were likely already warm for this WHERE clause: in particular, SELECT queries had already been run to test whether the rows actually qualify for the update.
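One way to make cache warmth visible is EXPLAIN's BUFFERS option (available in PostgreSQL 9.0 and later), which reports how many pages were found in shared_buffers versus read in from the OS. A minimal sketch against the same statement, assuming a version that supports it:

EXPLAIN (ANALYZE, BUFFERS)
UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.'||impt_name
WHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010
AND coalesce(source_table, '') = '';

In the output, "shared hit" counts pages already cached and "read" counts pages fetched on this run, so a cold-cache timing is easy to distinguish from a warm one.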

The coalesce may be slowing things down slightly, but it is a necessary evil.
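If the coalesce filter has to be evaluated batch after batch across the 500M-row table, one option worth testing is a partial index covering only the rows still awaiting an update (the index name here is made up for illustration and untested against this schema):

-- Hypothetical partial index; only not-yet-updated rows are indexed.
CREATE INDEX audit_impt_pending_idx
ON mdx_core.audit_impt (audit_impt_id)
WHERE coalesce(source_table, '') = '';

Because the UPDATE's WHERE clause matches the index predicate exactly, the planner can use the partial index, and rows drop out of it as they are updated, so later batches should have progressively less to scan.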

