Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating per-tuple freeze plans
| From | Nathan Bossart |
| --- | --- |
| Subject | Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating per-tuple freeze plans |
| Date | |
| Msg-id | 20220921201358.GA456274@nathanxps13 |
| In reply to | Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating per-tuple freeze plans (Peter Geoghegan <pg@bowt.ie>) |
| Responses | Re: Reducing the WAL overhead of freezing in VACUUM by deduplicating per-tuple freeze plans |
| List | pgsql-hackers |
On Tue, Sep 20, 2022 at 03:12:00PM -0700, Peter Geoghegan wrote:
> On Mon, Sep 12, 2022 at 2:01 PM Peter Geoghegan <pg@bowt.ie> wrote:
>> I'd like to talk about one such technique on this thread. The attached
>> WIP patch reduces the size of xl_heap_freeze_page records by applying
>> a simple deduplication process.
>
> Attached is v2, which I'm just posting to keep CFTester happy. No real
> changes here.

This idea seems promising. I see that you called this patch a
work-in-progress, so I'm curious what else you are planning to do with it.

As I'm reading this thread and the patch, I'm finding myself wondering if
it's worth exploring using wal_compression for these records instead. I
think you've essentially created an efficient compression mechanism for
this one type of record, but I'm assuming that lz4/zstd would also yield
some rather substantial improvements for this kind of data. Presumably a
generic WAL record compression mechanism could be reused for other large
records, too. That could be much easier than devising a deduplication
strategy for every record type.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
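For readers skimming the archive, here is a minimal sketch of the deduplication idea quoted above. The struct and field names are illustrative assumptions, not taken from the actual WIP patch: tuples whose freeze plans compare equal are grouped, so each distinct plan is written to the record once, followed by the offset numbers of all tuples that share it.

```c
#include "postgres.h"			/* in-tree sketch: TransactionId, uint16, uint8 */

/*
 * Illustrative layout, not the WIP patch's actual structs: one entry per
 * distinct freeze plan, followed in the WAL record by the OffsetNumbers
 * of every tuple on the page that shares that plan.
 */
typedef struct FreezePlanSketch
{
	TransactionId xmax;			/* xmax each matching tuple will get */
	uint16		t_infomask2;	/* infomask bits to set */
	uint16		t_infomask;
	uint8		frz_flags;
	uint16		ntuples;		/* how many offset numbers follow */
} FreezePlanSketch;

/*
 * qsort() comparator: sorting the per-tuple plans makes identical ones
 * adjacent, so a single pass can emit each distinct plan just once.
 */
static int
freeze_plan_cmp(const void *a, const void *b)
{
	const FreezePlanSketch *pa = (const FreezePlanSketch *) a;
	const FreezePlanSketch *pb = (const FreezePlanSketch *) b;

	if (pa->xmax != pb->xmax)
		return pa->xmax < pb->xmax ? -1 : 1;
	if (pa->t_infomask2 != pb->t_infomask2)
		return pa->t_infomask2 < pb->t_infomask2 ? -1 : 1;
	if (pa->t_infomask != pb->t_infomask)
		return pa->t_infomask < pb->t_infomask ? -1 : 1;
	if (pa->frz_flags != pb->frz_flags)
		return pa->frz_flags < pb->frz_flags ? -1 : 1;
	return 0;
}
```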
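And a rough illustration of the generic compression alternative Nathan floats: compress an already-assembled record payload with LZ4 and keep the result only if it actually shrank. compress_record_payload is a hypothetical helper, not a PostgreSQL API; LZ4_compress_default is the real liblz4 entry point. A real implementation would also need to flag the record as compressed and decompress it during redo.

```c
#include <stdbool.h>
#include <lz4.h>				/* liblz4; build with -llz4 */

/*
 * Hypothetical helper, not a PostgreSQL function: try to LZ4-compress a
 * record payload.  Returns true and sets *dstlen only when compression
 * succeeded and saved space; otherwise the caller stores the raw payload.
 */
static bool
compress_record_payload(const char *src, int srclen,
						char *dst, int dstcap, int *dstlen)
{
	int			clen = LZ4_compress_default(src, dst, srclen, dstcap);

	if (clen <= 0 || clen >= srclen)
		return false;			/* error or incompressible: store raw */
	*dstlen = clen;
	return true;
}
```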