Discussion: sizing / capacity planning tips related to expected requests or transactions per second


sizing / capacity planning tips related to expected requests or transactions per second

From
Dirk Krautschick
Date:
Hi,

are there any nice rules of thumb for capacity planning in relation to the expected
number of transactions or requests per second?

For example, say I have around 100,000 transactions per second on a 5 TB database.
With what amount of memory and CPUs/cores, and with which settings, would you
start to evaluate the performance?

Or are there any other recommendations or experiences here?

Thanks and best regards

Dirk

Re: sizing / capacity planning tips related to expected requests or transactions per second

From
MichaelDBA
Date:
Hi Dirk,

There are a bunch of other things to consider besides just TPS and database
size. Since PG is process-based (one backend process per connection), I would
consider connection activity: how many active connections are there at any one
time? This greatly affects your CPUs. SQL workload is another big factor: a lot
of complex queries may consume (or try to consume) large amounts of work_mem,
which greatly affects your memory capacity.

Bunch of other stuff, but these are my top 2.
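The interaction of connections and work_mem can be sketched with a quick
worst-case estimate. This is a minimal illustration with hypothetical numbers,
not a definitive formula: it assumes each active connection runs one query with
a few sort/hash nodes, each of which may allocate up to work_mem.

```python
# Rough worst-case memory estimate for a query workload (hypothetical numbers).
# In PostgreSQL, work_mem is a per-sort/hash-operation limit, so a single
# complex query can claim several multiples of it; concurrent active
# connections multiply that again.

def estimate_workload_memory_mb(active_connections: int,
                                work_mem_mb: int,
                                sort_nodes_per_query: int) -> int:
    """Upper-bound estimate of memory concurrent queries may claim, in MB."""
    return active_connections * work_mem_mb * sort_nodes_per_query

# Example: 200 active connections, work_mem = 64 MB, ~3 sort/hash nodes each
print(estimate_workload_memory_mb(200, 64, 3))  # -> 38400 MB, i.e. ~37.5 GB
```

The point of the exercise is that a seemingly modest work_mem setting can add
up to a large fraction of total RAM under concurrency.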


Regards,
Michael Vitale

Dirk Krautschick wrote on 8/24/2020 12:39 PM:
> Hi,
>
> are there any nice rules of thumb for capacity planning in relation to the expected
> number of transactions or requests per second?
>
> For example, say I have around 100,000 transactions per second on a 5 TB database.
> With what amount of memory and CPUs/cores, and with which settings, would you
> start to evaluate the performance?
>
> Or are there any other recommendations or experiences here?
>
> Thanks and best regards
>
> Dirk




Re: sizing / capacity planning tips related to expected requests or transactions per second

From
Pavel Stehule
Date:
Hi

On Mon, Aug 24, 2020 at 18:40, Dirk Krautschick <Dirk.Krautschick@trivadis.com> wrote:
Hi,

are there any nice rules of thumb for capacity planning in relation to the expected
number of transactions or requests per second?

For example, say I have around 100,000 transactions per second on a 5 TB database.
With what amount of memory and CPUs/cores, and with which settings, would you
start to evaluate the performance?

You have to know the duration of a typical query: if it is 1 ms, then one CPU can do 1,000 tps and you need 100 CPUs. If the duration is 10 ms, then you need 1,000 CPUs.

As a minimum, RAM for OLTP should be about 10% of the database size; in your case, 500 GB RAM.

Any time I see a requirement higher than 20-30K tps, it is good to think about horizontal scaling or sharding.

Regards

Pavel



Or are there any other recommendations or experiences here?

Thanks and best regards

Dirk