The objective is to create a backup from which I can restore any or all tables in the event of a crash. In my case, I use Postgres for my own scholarly purposes; nothing is published directly from the database. I am my only customer, and a service interruption, while a nuisance to me, does not create a crisis for others. I don't want to lose my work, but an outage of a day or a week is no big deal.
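For this kind of per-table restore requirement, pg_dump's custom format is a natural fit, since pg_restore can then pull out a single table. A minimal sketch follows; the database name `scholarly_db`, the table name `citations`, and the file path are hypothetical placeholders.

```shell
# Dump in custom format so individual tables can be restored later.
pg_dump --format=custom --file=scholarly.dump scholarly_db

# Restore just one table from that dump:
pg_restore --dbname=scholarly_db --table=citations scholarly.dump

# Or restore the whole database:
pg_restore --dbname=scholarly_db scholarly.dump
```

The custom format is compressed by default and, unlike plain SQL output, lets pg_restore select objects and reorder or parallelize the restore.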
If you reach the point where you have to consider splitting up the dumps for performance reasons, then you have really reached the point where pg_dump just isn't good enough for backups anymore. There are good uses for pg_dump even on such large databases, but backups aren't one of them.
In that case you should instead use pg_basebackup, or if you need even more functionality and performance than it provides, look at external tools such as pgbackrest or pgbarman. These tools don't need to be any more complicated to use than pg_dump, but they will give you a much better backup.
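To illustrate the difference in approach, a physical base backup can be taken with a single command. This is a sketch only: the target directory is hypothetical, and it assumes a local server with replication connections permitted in pg_hba.conf.

```shell
# Take a full physical backup of the cluster as tar archives,
# streaming the WAL needed to make the backup consistent.
pg_basebackup --pgdata=/var/backups/pg \
              --format=tar \
              --wal-method=stream \
              --progress
```

Unlike pg_dump, this copies the cluster at the file level, so it backs up all databases at once and, combined with WAL archiving, supports point-in-time recovery.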