Re: Massive table (500M rows) update nightmare
| From | Pierre Frédéric Caillaud |
|---|---|
| Subject | Re: Massive table (500M rows) update nightmare |
| Date | |
| Msg-id | op.u59ajlu4cke6l8@soyouz |
| In response to | Re: Massive table (500M rows) update nightmare ("Carlo Stonebanks" <stonec.register@sympatico.ca>) |
| List | pgsql-performance |
>> crank it up more and delay the checkpoints as much as possible during
>> these updates. 64 segments is already 1024M.

> We have 425M rows, total table size is 78GB, so we can imagine a worst
> case UPDATE write is less than 200 bytes * number of rows specified in
> the update (is that logic correct?).

There is also the WAL: all these updates need to be logged, which roughly doubles the UPDATE write volume. Perhaps you're WAL-bound (every 16MB segment needs fsyncing), and tuning fsync and wal_buffers, or a faster WAL disk, could help? (I don't remember your config.)

> Interestingly, the total index size is 148GB, twice that of the table,
> which may be an indication of where the performance bottleneck is.

Index updates can create random I/O (suppose you have a btree on a rather random column)...
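For reference, a minimal postgresql.conf sketch of the WAL-related knobs discussed above; the values are illustrative assumptions for this kind of bulk-UPDATE workload, not tested recommendations for your hardware:

```
# WAL / checkpoint tuning sketch (values are assumptions, adjust to taste)
checkpoint_segments = 64     # ~1024MB of WAL between checkpoints, as mentioned above
wal_buffers = 16MB           # larger WAL buffer to absorb bursts from big UPDATEs
fsync = on                   # fsync = off avoids the per-segment fsync cost,
                             # but risks data loss on a crash -- usually keep it on
                             # and put pg_xlog on a separate, fast disk instead
```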