Re: Index corruption issue after migration from RHEL 7 to RHEL 9 (PostgreSQL 11 streaming replication)
| From | Bala M |
|---|---|
| Subject | Re: Index corruption issue after migration from RHEL 7 to RHEL 9 (PostgreSQL 11 streaming replication) |
| Date | |
| Msg-id | CAJ4rSwstZoVgVjbHeDNVq+7eBWCVZSXjNMRpzB4QFjArZT0Hcg@mail.gmail.com |
| In reply to | Re: Index corruption issue after migration from RHEL 7 to RHEL 9 (PostgreSQL 11 streaming replication) (Francisco Olarte <folarte@peoplecall.com>) |
| List | pgsql-general |
Thank you all for your suggestions and for the quick, detailed responses.
After reviewing the options, the logical replication approach seems to be the most feasible one with minimal downtime.
However, we currently have 7 streaming replication setups running from production, with a total database size of around 15 TB. This includes about 10 large tables ranging from 50 GB to 1 TB each, along with more than 150 sequences.
Could you please confirm whether there are any successful case studies or benchmarks available for a similar setup?
Additionally, please share any recommended parameter tuning or best practices for handling logical replication at this scale.
Current server configuration:
CPU: 144 cores
RAM: 512 GB
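
For reference, here is a rough sketch of the logical replication setup we are planning to test (PostgreSQL 11 on both sides; the publication, subscription, host, database, and user names below are placeholders, and the worker counts are only our initial assumptions):

```sql
-- On the publisher (current RHEL 7 primary). Placeholder names throughout.
-- wal_level = 'logical' requires a restart if it is not already set.
ALTER SYSTEM SET wal_level = 'logical';
ALTER SYSTEM SET max_wal_senders = 20;        -- headroom for 7 physical replicas + logical slots
ALTER SYSTEM SET max_replication_slots = 20;

CREATE PUBLICATION mig_pub FOR ALL TABLES;

-- On the subscriber (new RHEL 9 server), after loading the schema with
-- pg_dump --schema-only. Worker counts are illustrative starting points.
ALTER SYSTEM SET max_logical_replication_workers = 8;
ALTER SYSTEM SET max_sync_workers_per_subscription = 4;  -- parallel initial table sync
ALTER SYSTEM SET max_worker_processes = 16;

CREATE SUBSCRIPTION mig_sub
    CONNECTION 'host=old-primary dbname=appdb user=repl password=...'  -- placeholder
    PUBLICATION mig_pub;
```

We also understand that logical replication does not copy sequences, so the 150+ sequences would have to be resynchronized manually at cutover.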
Thanks & Regards
Krishna.
On Fri, 24 Oct 2025 at 21:55, Francisco Olarte <folarte@peoplecall.com> wrote:
On Thu, 23 Oct 2025 at 17:21, Greg Sabino Mullane <htamfids@gmail.com> wrote:

> pg_dump is the most reliable, and the slowest. Keep in mind that only the actual data needs to move over (not the indexes, which get rebuilt after the data is loaded). You could also mix-n-match pg_logical and pg_dump if you have a few tables that are super large. Whether either approach fits in your 24 hour window is hard to say without you running some tests.

Long time ago I had a similar problem and did a "running with scissors" restore. This means:

1.- Prepare the normal configuration, test, etc. for the new version.

2.- Prepare a restore configuration, with fsync=off, wal_level=minimal, whatever option gives you any speed advantage. As the target was empty, if the restore failed we could just clean up and restart.

3.- Dump, boot with the restore configuration, restore, shut down cleanly, switch to the production configuration, boot again and follow on.

Time has passed and I lost my notes, but I remember the restore was much faster than doing it with the normal production configuration. Given current machine speeds, it may be doable.

Francisco Olarte.
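
For our own testing, a minimal sketch of the kind of restore-only settings Francisco describes might look like the following on PostgreSQL 11 (the specific values are our assumptions for illustration, not figures from his mail):

```sql
-- Temporary restore-only settings; never run production traffic like this.
-- Applied before the bulk load and reverted afterwards.
ALTER SYSTEM SET fsync = off;                   -- no crash safety; the target is empty anyway
ALTER SYSTEM SET full_page_writes = off;
ALTER SYSTEM SET synchronous_commit = off;
ALTER SYSTEM SET wal_level = 'minimal';         -- requires a restart; max_wal_senders must be 0
ALTER SYSTEM SET max_wal_senders = 0;
ALTER SYSTEM SET autovacuum = off;
ALTER SYSTEM SET maintenance_work_mem = '2GB';  -- illustrative; speeds up index rebuilds after the load
ALTER SYSTEM SET max_wal_size = '50GB';         -- illustrative; fewer checkpoints during the load
-- After the load: ALTER SYSTEM RESET ALL, restore the production configuration,
-- and restart cleanly before opening the database to traffic.
```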