Thread: Impact of multixact "members" limit exceeded

Impact of multixact "members" limit exceeded

From: Daniel Lagerman
Date:
Hello!

I have a question where I'm trying to determine the seriousness of the problem.

I repacked a table to give space back to the OS, and that went fine. However, since then I've noticed a multixact "members" limit exceeded error every time there is an insert on that table. A vacuum has yet to complete on this table.

However, looking into the table, we seem to have correct data in it, and the table age is not that high. So it does not appear that any data is corrupted or getting overwritten, and new data is written to the table fine.

Vacuum is behind and I know that is an issue; this is a very transaction-heavy database with long-running transactions that do not release, and I'm working on getting the vacuum through.

Looking through the pg lists, everyone seems to agree that this is not a good thing.

Álvaro Herrera: https://www.postgresql.org/message-id/20160517174055.GA158915%40alvherre.pgsql

"In 9.4, not really. In 9.5 there's a function mxid_age() that gives you
the age of a multixact, so you'd grab the oldest from
pg_database.datminmxid and compute the age of that one. Going from the
oldest multi to the oldest offset cannot be done without an additional
function, however. It's much easier to keep track of the oldest file in
$PGDATA/pg_multixact/members/; what you really need to care about is the
size of the "hole" between the newest and the oldest files there. Once
newest starts to stomp on oldest, you're screwed."
Now, I'm not sure what this "hole" is, as new files have stopped being written altogether.
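For reference, here is a rough way to eyeball those segment files from SQL rather than the shell (a sketch only; pg_ls_dir() needs superuser, and the hexadecimal file names must be sorted by length before name to approximate numeric order):

select f as segment_file
from pg_ls_dir('pg_multixact/members') as f
order by length(f), f;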

I also read this answer by Andres Freund:
https://www.postgresql.org/message-id/20170809175728.opnxie26gtpsrjhk%40alap3.anarazel.de

"> We started feeding it several weeks ago and everything went smoothly until
> we hit this issue:
>
> 2017-08-09 05:21:50.946 WIB >DETAIL: This command would create a multixact
> with 2 members, but the remaining space is only enough for 0 members.
> 2017-08-09 05:21:50.946 WIB >HINT: Execute a database-wide VACUUM in
> database with OID 20101 with reduced vacuum_multixact_freeze_min_age and
> vacuum_multixact_freeze_table_age settings.

Ugh, that's not good."
But I still can't figure out what the actual impact would be.
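(As an aside, the database named by OID in a HINT like that can be looked up with something like the following, 20101 being the OID from the quoted log line:

select datname from pg_database where oid = 20101;)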

As for vacuum settings, they are as follows:

vacuum_freeze_min_age = 10000000
autovacuum_freeze_max_age = 600000000
vacuum_freeze_table_age = 400000000
vacuum_cost_delay = 10
vacuum_cost_limit = 2000
autovacuum_vacuum_cost_limit = 200
autovacuum_vacuum_cost_delay = 20

vacuum_multixact_freeze_min_age and vacuum_multixact_freeze_table_age are at the defaults of 5 million and 150 million respectively.
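In case it is useful, this is the sort of query I use to find the tables holding the oldest relminmxid (a sketch for 9.4, where there is no mxid_age(); the cast goes through text because xid has no ordering operators, and it ignores wraparound):

select oid::regclass as table_name, relminmxid
from pg_class
where relkind in ('r', 't')  -- ordinary and TOAST tables
order by relminmxid::text::bigint
limit 10;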

Best regards

Daniel

Re: Impact of multixact "members" limit exceeded

From: Alvaro Herrera
Date:
On 2018-Jun-13, Daniel Lagerman wrote:

> Hello!
> 
> I have a question where I'm trying to determine the seriousness of the
> problem.

What's your server version?

The way to reclaim members space is to move forwards the
pg_database.datminmxid value from the database where it is oldest; and
the way to move *that* forward is to advance the pg_class.relminmxid
value from the tables in which it is oldest.  So you need to set a low
value for vacuum_multixact_freeze_min_age and _table_age, and vacuum those
tables.
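A sketch of finding where it is oldest (the cast through text is needed because xid has no ordering operators, and this ignores wraparound):

SELECT datname, datminmxid
FROM pg_database
ORDER BY datminmxid::text::bigint
LIMIT 3;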

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: Impact of multixact "members" limit exceeded

From: Daniel Lagerman
Date:
Hello Álvaro,

I'm running 9.4.3; I know it's an older version, but upgrading is outside the scope right now for various reasons.
Based on the settings I posted, what would be your recommended settings right now to resolve this situation?

Maintenance is limited to weekends, as this database needs to be online 24/5, consumes a lot of data, and responds to many queries.

I very much appreciate the recommendation, but I would also like to know the impact of this warning. Data right now does not seem to be corrupted or overwritten, and the table still accepts new data and reads without issues. It just seems that it has stopped creating "members", and I can't work out what problem that would cause. The table was around 1.6 TB; it was cleaned up and then re-packed using pg_repack to get down to around 400 GB in size. However, a freeze vacuum or autovacuum has not completed since.

Then, when it comes to vacuum, do you think it's worth increasing work_mem to, say, 200 GB for the maintenance period only, and letting autovacuum go to town on the tables with the highest age and lowest relminmxid? Or should one instead run, for example, flexible freeze with zero pauses, as nothing is happening on the weekends, meaning no inbound transactions? Also worth mentioning is that I restart the server on the weekend after I have backed up the data.

I have previously tried running autovacuum for 40 hours on the weekend, with 5 GB work_mem, across 6 tables at the same time, and it never completed on the bigger tables, which are around 2 TB.

Best regards

Daniel

Re: Impact of multixact "members" limit exceeded

From: Lawrence Jones
Date:
Hey Daniel,

This may be entirely unrelated to your query, but we’ve previously experienced issues with 9.4 and crazy multixact members growth.

After digging into the issue, we found the culprit was code that would perform the following actions:

begin;
for query in many_queries:
  savepoint <x>;
  select * from table where id = 'myid' for update;
  update table set counter = counter + 1 where id = 'myid';
  release savepoint <x>;
commit;

While I found it very difficult to find documentation on the purpose of multixacts, and I'm certain to have critical misunderstandings about their workings, it seemed that Postgres was treating each new savepoint as a new potential transaction that would lock the given row, and each of those transactions needed to be added to the ongoing multixact members list.

The members list (I believe) is immutable, and adding a new member requires constructing a new members list, so your multixact members offset will grow quadratically with the number of times you lock the row. Because we were locking repeatedly in a large loop, we were seeing huge multixact members growth that would trigger the override for multixact vacuums, regardless of how autovacuum was disabled or configured.
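(If that reading is right, the arithmetic is brutal: locking the same row k times this way builds member lists of sizes 1, 2, ..., k, consuming roughly k(k+1)/2 member slots in total, so a loop taking 10,000 savepoint-protected locks on one row would burn on the order of 50 million member slots by itself. That is my back-of-envelope estimate, not something I've verified against the source.)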

If you're continually seeing multixact members growth that outpaces your vacuums, then I suggest scanning for the above usage pattern, or alternatively upgrading. We couldn't replicate the growth in 9.5 and above; presumably newer Postgres versions correctly no-op when a transaction tries locking a row whose parent is already present in the multixact.

This comes with the disclaimer of a Postgres amateur; it simply seemed relevant to your situation.

Lawrence


Re: Impact of multixact "members" limit exceeded

From: Alvaro Herrera
Date:
Hello Daniel

The advice from Lawrence downthread is very much on point.  If you can
upgrade to 9.5, do it.  He mentions savepoints, but one more case is
plpgsql blocks with EXCEPTION clauses.

On 2018-Jun-13, Daniel Lagerman wrote:

> Hello Álvaro,
> 
> I'm running at 9.4.3, I know its an older version but upgrading it outside
> the scope right now for various reasons.

Okay.  There's at least one related bug fix in 9.4.4 (b6a3444fa635) but
also 8507a5b37bd9 and ec1408155d35 are related, so you'd be better
served by an upgrade anyway.

> Based on the settings I posted what would be your recommended settings
> right now to resolve this situation?

Set vacuum_multixact_freeze_table_age to 0 (causing all vacuums to scan
tables fully) and vacuum_multixact_freeze_min_age to some value like one
million (so that they remove most of the oldest multixacts, leaving
just the frontmost one million).  Then vacuum the tables with the
highest multixact ages.  Make sure the datminmxid advances in all
databases (incl. postgres template0 template1); once it has advanced,
the oldest member files are removed.  Don't waste time processing tables
with datminmxid higher than the minimum, as that won't free up any
member space.
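As a sketch, per table (the table name is a placeholder, and the SETs only affect manual VACUUMs run in that session):

SET vacuum_multixact_freeze_table_age = 0;
SET vacuum_multixact_freeze_min_age = 1000000;
VACUUM some_old_table;  -- repeat for each table with an old relminmxid
-- afterwards, check that the minimum moved in every database:
SELECT datname, datminmxid FROM pg_database;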

> I very much appreciate the recommendation but I would also like to now the
> impact of this warning, data right now does not seem to be corrupted, get
> overwritten, for that matter not insert new data or cause issues when read.

Corruption should not occur because 9.4.3 already contains the member
wraparound protections (commit 3ecab37d97ed).

> It just seems that is has stopped creating "members" and I can't read into
> what problem that would cause, table was around 1.6 TB, it was cleaned up
> and the re-packed using pg-repack to go down to around 400 GB in size,
> however a freeze vacuum or autovacuum did not complete.

Uhh .. pg_repack has been known to cause catalog corruption.  I don't
know if those bugs have been fixed.  At 2ndQuadrant we recommend *not*
running pg_repack.

> Then when it comes to vacuum, do you think its worth increasing work_mem to
> say 200 GB,

work_mem does not affect vacuum.  maintenance_work_mem is the one to
change; autovacuum_work_mem can be used for autovacuum.

> for the maintenance period only, and let autovacuum  go to town
> on the tables with the highest age and lowest minmixid? Or should one run
> for example flexible freeze instead with zero pauses as nothing is
> happening on the weekends, which means no inbound transactions.

"Zero pauses" (vacuum_cost_delay=0) is best, because it's obviously much
faster.
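For an idle maintenance window that means something like this in the session running the vacuums (values illustrative; note that vacuum's dead-tuple array is capped at 1 GB in these releases, so enormous maintenance_work_mem settings buy nothing beyond that):

SET vacuum_cost_delay = 0;          -- no throttling pauses
SET maintenance_work_mem = '1GB';
VACUUM some_old_table;              -- placeholder table name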

> I have tried running autovacuum for 40 hours before with 5 GB work_mem, on
> the weekend, across 6 tables at the same time and it never completed on the
> bigger tables which are around 2 TB.

So what did you do, cancel it?

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: Impact of multixact "members" limit exceeded

From: Daniel Lagerman
Date:
Hey Lawrence,

Thanks for the input. I will check whether this is the case; I do not believe it is, but it's worth checking out. To me it looks like normal inserts.
I think we were just insanely behind on the vacuum. I have another server, running the same application code on top of the DB, which does not have this issue.

For sure, an upgrade to PG 9.5 or 9.6 will solve a lot of issues.
Again, thanks for the input, always appreciated!

Best regards

Daniel


Re: Impact of multixact "members" limit exceeded

From: Daniel Lagerman
Date:
> Hello Daniel
>
> The advice from Lawrence downthread is very much on point.  If you can
> upgrade to 9.5, do it.  He mentions savepoints, but one more case is
> plpgsql blocks with EXCEPTION clauses.

Hello Álvaro,

Thanks, I'll check for both, even though I don't believe either is the case here.

> Okay.  There's at least one related bug fix in 9.4.4 (b6a3444fa635) but
> also 8507a5b37bd9 and ec1408155d35 are related, so you'd be better
> served by an upgrade anyway.

Not sure how to look up these commits, but for sure there are a lot of bug fixes, and I'm pushing for us to upgrade, as 9.5 and 9.6 have optimizations and bug fixes that would make my life much easier. It looks to me like we might have hit the bug where members runs ahead of offsets, which I think has been fixed in later versions.

> Set vacuum_multixact_freeze_table_age to 0 (causing all vacuums to scan
> tables fully) and vacuum_multixact_freeze_min_age to some value like one
> million (so that they remove most of the oldest multixacts, leaving
> just the frontmost one million).  Then vacuum the tables with the
> highest multixact ages.  Make sure the datminmxid advances in all
> databases (incl. postgres template0 template1); once it has advanced,
> the oldest member files are removed.  Don't waste time processing tables
> with datminmxid higher than the minimum, as that won't free up any
> member space.

Thanks, we opted to go with 1 million min and 100 million table age; the tables in question are all way over that anyway. We completed vacuum on all but two, which are larger tables, and I'm running vacuum there right now after also bumping maintenance_work_mem a bit. As this is 9.4 I can't see the progress, except that I noted that initially it uses less CPU, and once it seems to finally start working it uses more CPU, 10% -> 50%, but that is fine. So my hope is that the vacuum completes overnight.

> Corruption should not occur because 9.4.3 already contains the member
> wraparound protections (commit 3ecab37d97ed).

Glad to hear that, and I have not seen any problems, to be honest, except for the pg_srv log filling up.

> Uhh .. pg_repack has been known to cause catalog corruption.  I don't
> know if those bugs have been fixed.  At 2ndQuadrant we recommend *not*
> running pg_repack.

I know versions before 1.2 had some corruption issues; I have never seen it myself in the roughly 20 times I have used it. But you should never say never, right? I did the same operation on a duplicate server and there are no issues there, so I believe we were just way behind on vacuum.

> work_mem does not affect vacuum.  maintenance_work_mem is the one to
> change; autovacuum_work_mem can be used for autovacuum.

Yep, that was a typo on my end; I was referring to maintenance_work_mem of course.

> "Zero pauses" (vacuum_cost_delay=0) is best, because it's obviously much
> faster.

Yes, I will look into that during the weekend, combined with more maintenance_work_mem, if the vacuum has not completed. I can't really do zero pauses during normal operations as I need to write to the DB.

> So what did you do, cancel it?

Yes; I actually used flexible freeze with a timeout, so it terminated just before the maintenance window ended. However, now we run straight VACUUM on these tables due to this problem. I might even opt not to restart the server this weekend; I don't really need a weekly restart to keep it going since it seems to have a good pace now.

Many thanks for your insight and guidance!


Best regards

Daniel

Re: Impact of multixact "members" limit exceeded

From: Alvaro Herrera
Date:
On 2018-Jun-14, Daniel Lagerman wrote:

> > Set vacuum_multixact_freeze_table_age to 0 (causing all vacuums to scan
> > tables fully) and vacuum_multixact_freeze_min_age to some value like one
> > million (so that they remove most of the oldest multixacts, leaving
> > just the frontmost one million).  Then vacuum the tables with the
> > highest multixact ages.  Make sure the datminmxid advances in all
> > databases (incl. postgres template0 template1); once it has advanced,
> > the oldest member files are removed.  Don't waste time processing tables
> > with datminmxid higher than the minimum, as that won't free up any
> > member space.
> 
> Thanks, we opted to go with 1 million min and 100 million table age, the
> tables in question are all way over that anyway. We completed vacuum on all
> but two which are larger tables and I'm running Vacuum there right now
> after I also bumped the maintenance work mem a bit. As this is 9.4 I can't
> see the progress except that I noted that initialy it uses less CPU and
> once it seems to finnaly start working it uses more CPU 10%->50% but that
> is fine. So my hope is that the Vacuum completes over night.

Did you observe whether the vacuumed tables' relminmxid advanced?  If it
didn't, then those vacuums are a waste of time.

Note that freeze_table_age affects when vacuum does a full table scan
vs. when it skips pages due to visibility map.  A vacuum that does a
partial scan does *not* advance the relfrozenxid / relminmxid; only
full-table scan vacuums can do that.  (Except in 9.6.)  If the
relminmxid ages are younger than your 100 million table_age, vacuum
won't do a full-table scan.
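To check, compare before and after the vacuum (placeholder table name):

SELECT oid::regclass AS table_name, relfrozenxid, relminmxid
FROM pg_class
WHERE oid = 'some_old_table'::regclass;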

Cheers

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: Impact of multixact "members" limit exceeded

From: Daniel Lagerman
Date:

> Did you observe whether the vacuumed tables' relminmxid advanced?  If it
> didn't, then those vacuums are a waste of time.
>
> Note that freeze_table_age affects when vacuum does a full table scan
> vs. when it skips pages due to visibility map.  A vacuum that does a
> partial scan does *not* advance the relfrozenxid / relminmxid; only
> full-table scan vacuums can do that.  (Except in 9.6.)  If the
> relminmxid ages are younger than your 100 million table_age, vacuum
> won't do a full-table scan.

Yes, they did; they all advanced, and no, they are not younger than 100 million. They were all over the default value of 150 million, as are the remaining two. relminmxid/relfrozenxid advanced, relminmxid to match NextMultiXactId, and the ages went down. But a good point to keep in mind.

Best regards

Daniel