AW: PostgreSQL high availability solutions for high rates.

From: Stephan Schmidt
Subject: AW: PostgreSQL high availability solutions for high rates.
Date:
Msg-id: BN7PR08MB4307DB59BDF1D47BB3F66CD7E8410@BN7PR08MB4307.namprd08.prod.outlook.com
In reply to: Re: PostgreSQL high availability solutions for high rates.  (Thomas Kellerer <spam_eater@gmx.net>)
List: pgsql-admin


> > we are evaluating PostgreSQL for our productions,

> > for a system with 1000 updates per second,

> > 1000 inserts per sec,

> > ~1000 deletes per second,

> That adds up to 3000 transactions per second, but this:

> > 250,000 transactions per day

> only results in about 3 transactions per second.

> But even 3000 transactions per second can be sustained with good hardware (especially fast SSDs) without problems

> as long as there are at least some "quiet" times where autovacuum can catch up with the updates and deletes.
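The arithmetic behind those two figures can be sanity-checked quickly (a throwaway calculation, not part of the original thread):

```python
# Combined write rate quoted in the thread.
updates, inserts, deletes = 1000, 1000, 1000  # operations per second
combined_tps = updates + inserts + deletes
print(combined_tps)  # 3000 transactions per second

# 250,000 transactions per day works out to roughly 3 per second.
per_day = 250_000
seconds_per_day = 24 * 60 * 60  # 86,400
print(round(per_day / seconds_per_day, 1))  # ~2.9
```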

> > we are looking for "active(r/w)-active(r/w)", "active(r/w)-active(read)"

> > and "active(r/w)-standby" high availability solutions,

> Postgres only supports read-only replicas where you can distribute the load of SELECT queries.

> It does not offer a solution to distribute write load over multiple nodes.


I can recommend the BDR extension, which comes at a price, if you need a proper master-master cluster setup. Alternatively, you could set up a couple of standalone PostgreSQL

instances that replicate the written data via logical replication to a central database. This way you can scale the I/O load and have many master nodes for redundancy.

Whether this approach works for you depends on your use cases.
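A minimal sketch of that fan-in setup, assuming PostgreSQL 10+ with `wal_level = logical` on the source instances (host, database, and table names here are illustrative, not from the thread):

```sql
-- On each standalone "master" instance:
CREATE PUBLICATION writes_pub FOR TABLE orders, events;

-- On the central database, one subscription per source node:
CREATE SUBSCRIPTION node1_sub
    CONNECTION 'host=node1.example.com dbname=app user=replicator'
    PUBLICATION writes_pub;

CREATE SUBSCRIPTION node2_sub
    CONNECTION 'host=node2.example.com dbname=app user=replicator'
    PUBLICATION writes_pub;
```

Note that when several nodes replicate into the same central tables, the rows they write must not collide (e.g. use per-node key ranges or a node-id column), since logical replication does not resolve conflicts for you.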


> > what high availability solutions (PG internal or 3rd party) are used successfully, keeping up with similar rates? Double those rates?

>

> A master/slave system (including multiple slaves) can be set up without the need of 3rd party tools.

>

> If you want to distribute read requests across slaves you will need a 3rd party tool, e.g. pgPool or pgBouncer.

>

> To sustain a high write throughput on a single node you need a fast I/O system.

> Typically this is done with several really fast SSDs or NVMe disks nowadays.

>

> The more CPUs you have, the better.
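For reference, the built-in master/slave (streaming replication) setup mentioned above comes down to a few settings; the values are illustrative, and the standby-side syntax shown assumes PostgreSQL 12+, where `primary_conninfo` lives in postgresql.conf:

```
# postgresql.conf on the primary
wal_level = replica
max_wal_senders = 10
# (also add a "replication" entry for the standby in pg_hba.conf
#  and a role created with the REPLICATION privilege)

# postgresql.conf on each standby
hot_standby = on          # allow read-only queries on the standby
primary_conninfo = 'host=primary.example.com user=replicator'
# plus an empty standby.signal file in the standby's data directory
```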


Cheers, Stephan
