Discussion: Calculating vm.nr_hugepages
Hello,

I'm writing an Ansible play which is to set the correct value for vm.nr_hugepages on Linux servers where I hope to make Postgres use huge pages. However, I'm struggling to find the right formula.

I assume I need to find the same value as I get from running "postgres -C shared_memory_size_in_huge_pages". I call that my target value. Note: I cannot simply run "postgres -C ...", because I need my Ansible play to work against a server where Postgres is running.

I've tried using the formula described at https://www.cybertec-postgresql.com/en/huge-pages-postgresql/, but it produces a different value than my target. Using a shared_buffers value of 21965570048, as in Cybertec PostgreSQL's example:

"postgres ... -C 21965570048B" yields: 10719
The formula from Cybertec PostgreSQL says: 10475

I've also tried doing what ChatGPT suggested:

Number of huge pages when shared_buffers is set to 1 GiB
  = shared_buffers / huge_page_size
  = 1073741824 bytes / 2097152 bytes
  = 512

But that's also wrong compared to "postgres -C ..." (which said 542).

Which formula can I use? It's OK if it's slightly off compared to "postgres -C", but then it needs to be slightly higher than what "postgres -C" outputs, so that I'm sure there are enough huge pages for Postgres to be able to use them properly.

--
Kind regards,
Troels Arvin
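A side note on units: the 2097152-byte figure above assumes 2 MiB huge pages. As a minimal Ruby sketch (mine, not from the thread), the actual huge page size can be read from /proc/meminfo rather than hardcoded:

# Read the system huge page size from /proc/meminfo instead of assuming 2 MiB.
line = File.readlines("/proc/meminfo").find { |l| l.start_with?("Hugepagesize:") }
huge_page_kib  = line.split[1].to_i      # e.g. 2048 (the value is reported in kB)
huge_page_size = huge_page_kib * 1024    # in bytes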
On Wed, Aug 30, 2023 at 8:12 AM Troels Arvin <troels@arvin.dk> wrote:
> Hello,
>
> I'm writing an Ansible play which is to set the correct value for
> vm.nr_hugepages on Linux servers where I hope to make Postgres use
> huge pages. However, I'm struggling to find the right formula.
>
> I assume I need to find the same value as I get from running "postgres
> -C shared_memory_size_in_huge_pages". I call that my target value.
> Note: I cannot simply run "postgres -C ...", because I need my Ansible
> play to work against a server where Postgres is running.
>
> I've tried using the formula described at
> https://www.cybertec-postgresql.com/en/huge-pages-postgresql/, but it
> produces a different value than my target:
>
> Using a shared_buffers value of 21965570048, as in Cybertec
> PostgreSQL's example:
> "postgres ... -C 21965570048B" yields: 10719
> The formula from Cybertec PostgreSQL says: 10475
>
> I've also tried doing what ChatGPT suggested:
>
> Number of huge pages when shared_buffers is set to 1 GiB
>   = shared_buffers / huge_page_size
>   = 1073741824 bytes / 2097152 bytes
>   = 512
>
> But that's also wrong compared to "postgres -C ..." (which said 542).
>
> Which formula can I use? It's OK if it's slightly off compared to
> "postgres -C", but then it needs to be slightly higher than what
> "postgres -C" outputs, so that I'm sure there are enough huge pages
> for Postgres to be able to use them properly.
Good morning Troels,
I had a similar thread a couple of years ago; you may want to read it:
In it, Justin Pryzby provides the detailed code for exactly what factors into huge pages, if you require that level of precision.
I hadn't seen that Cybertec blog post before. I ended up using my own equation that I derived in that thread after Justin shared his info. The Chef/Ruby code involved is:
# Sizes below are in MB; the result assumes 2 MB huge pages.
padding = 0
padding = 500 if shared_buffers_size > 40_000

# shared_buffers plus overhead: 200 MB fixed, plus 25 MB per GB of shared_buffers.
shared_buffers_usage  = shared_buffers_size + 200 + (25 * shared_buffers_size / 1024)
max_connections_usage = (max_connections - 100) / 20
wal_buffers_usage     = (wal_buffers_size - 16) / 2

# Divide by 2 MB per page and round up; this is the value for vm.nr_hugepages.
nr_hugepages = ((shared_buffers_usage + max_connections_usage + wal_buffers_usage + padding) / 2.0).ceil
wal_buffers_size is usually 16 MB, so wal_buffers_usage ends up being zeroed out. This has worked out for our various Postgres VM sizes. Obviously a few extra huge pages will go unused, but these VMs are dedicated to PostgreSQL usage, and shared_buffers_size defaults to 25% of VM memory, so there's still plenty to spare. We use this so we can configure vm.nr_hugepages at deployment time via Chef.
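As an illustrative check (my arithmetic, not part of Don's message), here is the 1 GiB case from the original post run through the formula above, assuming the stock max_connections = 100 and the usual wal_buffers = 16 MB:

shared_buffers_size = 1024   # MB (1 GiB)
max_connections     = 100    # PostgreSQL default, so its term is 0
wal_buffers_size    = 16     # MB, the usual value, so its term is 0

padding              = 0                                  # 1024 MB <= 40000 MB
shared_buffers_usage = 1024 + 200 + (25 * 1024 / 1024)    # => 1249
nr_hugepages         = ((1249 + 0 + 0 + 0) / 2.0).ceil    # => 625

That lands comfortably above the 542 pages reported by "postgres -C", which fits Troels' requirement that an imprecise value err on the high side.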
Don.
Don Seiler
www.seiler.us
> On 30/08/2023 15:12 CEST Troels Arvin <troels@arvin.dk> wrote:
>
> I assume I need to find the same value as I get from running "postgres
> -C shared_memory_size_in_huge_pages". I call that my target value.
> Note: I cannot simply run "postgres -C ...", because I need my Ansible
> play to work against a server where Postgres is running.
>
> I've tried using the formula described at
> https://www.cybertec-postgresql.com/en/huge-pages-postgresql/, but it
> produces a different value than my target:
>
> Using a shared_buffers value of 21965570048, as in Cybertec
> PostgreSQL's example:
> "postgres ... -C 21965570048B" yields: 10719
> The formula from Cybertec PostgreSQL says: 10475

Probably because 21965570048B > 20GB. What does your command look like exactly? Why do you use shared_buffers=21965570048B and not 20GB? The larger value is quoted in the last section of the linked blog post for pre-15 Postgres: it is the shared memory size that Postgres wants to allocate but cannot when shared_buffers=20GB. That section also provides the formula for manually calculating vm.nr_hugepages.

> I've also tried doing what ChatGPT suggested:
> Number of huge pages when shared_buffers is set to 1 GiB
>   = shared_buffers / huge_page_size
>   = 1073741824 bytes / 2097152 bytes
>   = 512
> But that's also wrong compared to "postgres -C ..." (which said 542).

The formula from the blog post gives me 513, but it also includes some additional shared memory for internal stuff. So 512 is only correct when the shared memory size already includes the overhead for that internal stuff.

--
Erik
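To put a rough number on that overhead (my arithmetic, not Erik's): the gap between the naive estimate and what "postgres -C" reported for the 1 GiB case implies about 60 MiB of shared memory beyond shared_buffers:

shared_buffers = 1_073_741_824                  # 1 GiB in bytes
huge_page_size = 2_097_152                      # 2 MiB in bytes

naive_pages  = shared_buffers / huge_page_size  # => 512
reported     = 542                              # from "postgres -C shared_memory_size_in_huge_pages"
overhead_mib = (reported - naive_pages) * 2     # => 60 MiB of internal shared memory

So whichever formula the Ansible play ends up using, it has to budget for that extra shared memory, not just shared_buffers.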