Re: filesystem full during vacuum - space recovery issues
From: Scott Ribe
Subject: Re: filesystem full during vacuum - space recovery issues
Date:
Msg-id: AF87CB39-4713-424A-9B61-4CB3B8E590FC@elevated-dev.com
In reply to: Re: filesystem full during vacuum - space recovery issues (Ron Johnson <ronljohnsonjr@gmail.com>)
List: pgsql-admin
1) Add new disk, use a new tablespace to move some big tables to it, to get back up and running
2) Replica server provisioned sufficiently for the db, pg_basebackup to it
3) Get streaming replication working
4) Switch over to the new server

In other words, if you don't want terrible downtime, you need yet another server fully provisioned to be able to run your db. (A rough sketch of these steps follows after the quoted thread below.)

--
Scott Ribe
scott_ribe@elevated-dev.com
https://www.linkedin.com/in/scottribe/

> On Jul 18, 2024, at 4:41 PM, Ron Johnson <ronljohnsonjr@gmail.com> wrote:
>
> Multi-threaded writing to the same giant text file won't work too well, when all the data for one table needs to be together.
>
> Just temporarily add another disk for backups.
>
> On Thu, Jul 18, 2024 at 4:55 PM Thomas Simpson <ts@talentstack.to> wrote:
>
> On 18-Jul-2024 16:32, Ron Johnson wrote:
>> On Thu, Jul 18, 2024 at 3:01 PM Thomas Simpson <ts@talentstack.to> wrote:
>> [snip]
>> [BTW, v9.6, which I know is old, but this server is stuck there]
>> [snip]
>> I know I'm stuck with the slow rebuild at this point. However, I doubt I am the only person in the world who needs to dump and reload a large database. My thought is that this is a weak point for PostgreSQL, so it makes sense to consider ways to improve the dump/reload process, especially as it's the last-resort upgrade path recommended in the upgrade guide and the general fail-safe route to get out of trouble.
>> No database does fast single-threaded backups.
> Agreed. My thought is that it should be possible for a 'new dumpall' to be multi-threaded.
> Something like:
> * Set the number of threads on the 'source' (perhaps by querying a listening destination for how many threads it is prepared to accept via a control port)
> * Select each database in turn
> * Organize the tables which do not themselves have references
> * Send each table separately in its own thread (or queue tables until a thread is available) ('Stage 1')
> * Rendezvous on stage 1 completion (pause sending, wait for feedback from the destination confirming all completed) so we have a known consistent state from which it is safe to proceed to subsequent tables
> * Work through tables that refer to the previously sent ones in the same way (since the tables they reference now exist and have their data) ('Stage 2')
> * Repeat progressively until all tables are done ('Stage 3', 4, etc. as necessary)
> The current dumpall essentially does this table organization already [minus the stage checkpoints and multi-threading], otherwise the dump/load would not work. It may even do a lot of this for 'directory' mode? The change here is organizing n threads to process tables concurrently where possible, and coordinating the pipes so they only send data which can be accepted.
> The destination would need a multi-threaded listener that coordinates with the sender over some control channel to feed back completion of each stage.
> Something like a destination host and control-channel port to establish the pipes, plus additional netcat pipes on incremental ports above the control port for each thread used.
> Dumpall seems like a reasonable starting point since it already does the complicated bit: serializing the dump data so it can be consistently loaded.
> Probably not really an admin question at this point; more a feature enhancement.
> Is there anything fundamentally wrong here that someone with more intimate knowledge of dumpall could point out?
> Thanks
> Tom
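
A minimal sketch of steps 1-4 above, using 9.6-era tool invocations. The mount point /mnt/newdisk, tablespace name bigspace, table big_table, database yourdb, replication role replicator, and hostnames are all hypothetical placeholders, not names from the thread:

    # Step 1: mount a new disk and move the biggest tables onto it to free space.
    sudo mkdir -p /mnt/newdisk/pg && sudo chown postgres:postgres /mnt/newdisk/pg
    psql -U postgres -d yourdb -c "CREATE TABLESPACE bigspace LOCATION '/mnt/newdisk/pg'"
    # SET TABLESPACE rewrites the table onto the new disk, then frees the old files.
    psql -U postgres -d yourdb -c "ALTER TABLE big_table SET TABLESPACE bigspace"

    # Step 2: seed the new server from the primary (run on the replica host).
    pg_basebackup -h oldhost -U replicator -D /var/lib/postgresql/9.6/main -X stream -R -P

    # Step 3: start the replica; -R above wrote a recovery.conf with
    # primary_conninfo, so it comes up as a streaming standby.
    pg_ctl -D /var/lib/postgresql/9.6/main start

    # Step 4: once the standby has caught up, stop writes on the old server and promote.
    pg_ctl -D /var/lib/postgresql/9.6/main promote

Note that ALTER TABLE ... SET TABLESPACE takes an exclusive lock on the table for the duration of the copy, so the "get back up and running" in step 1 still means some per-table downtime while the big tables move.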
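
On the multi-threaded dump idea in the quoted thread: for a single database (not the whole cluster), parallel dump and reload already exist, since pg_dump's directory format accepts -j (9.3 and later) and pg_restore can replay it with parallel jobs. A rough per-database loop as a sketch, with oldhost, newhost, the /backups staging directory, and the job count of 8 all hypothetical. Unlike the proposed netcat pipes, this stages the dump on disk, which is where Ron's "temporarily add another disk for backups" comes in:

    # Dump each non-template database with 8 worker processes, then
    # reload it with 8 parallel jobs on the new server.
    for db in $(psql -h oldhost -U postgres -At \
          -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
        pg_dump    -h oldhost -U postgres -Fd -j 8 -f "/backups/$db.dir" "$db"
        createdb   -h newhost -U postgres "$db"
        pg_restore -h newhost -U postgres -d "$db" -j 8 "/backups/$db.dir"
    done
    # Roles and tablespaces are not included in per-database dumps:
    pg_dumpall -h oldhost -U postgres --globals-only | psql -h newhost -U postgres

This does not provide the cross-table stage rendezvous Thomas describes; pg_restore handles dependency ordering within each database itself, which covers the common case the proposal is aimed at.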