Re: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?

From stevenchang1213
Subject Re: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?
Date
Msg-id 5925ab37.c766620a.6a61d.365c@mx.google.com
List pgsql-general


1. Cache misses occur on large-volume data I/O because PostgreSQL routes bulk reads through a 256 KB ring buffer instead of the main shared buffers; that's why warming tools were developed to cover this issue.
2. pg_prewarm has been included in the contrib modules since 9.4 (a patch exists for 9.2); it loads relation data into shared buffers or the OS cache.
3. pgfincore is an external module (packaged like an RPM for Red Hat) installed as a Postgres extension; it reports relation cache statistics in the OS and offers more control, such as evicting a relation from the OS cache.
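As a quick illustration of point 2, warming a relation looks roughly like this (a sketch, not from the original mail; 'my_table' is a placeholder name, and the 'buffer' mode argument follows the contrib documentation):

```sql
-- Load a relation into PostgreSQL's shared buffers with pg_prewarm
-- (contrib module since 9.4; 'my_table' is a placeholder name)
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- 'buffer' reads the relation into shared_buffers; 'os' would only
-- populate the kernel page cache. Returns the number of blocks loaded.
SELECT pg_prewarm('my_table', 'buffer');
```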

I ran into the same situation as yours, also on 9.2, at least two years ago; that's why I studied shared_buffers in depth to figure it out. I suggest you search the web for PostgreSQL shared buffer issues; you will find what you want.

steven

Sent from my Samsung Galaxy smartphone.

-------- Original message --------
From: Bill Moran <wmoran@potentialtech.com>
Date: 2017/5/24 22:52 (GMT+08:00)
To: stevenchang1213 <stevenchang1213@gmail.com>
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?

On Wed, 24 May 2017 21:02:45 +0800
stevenchang1213 <stevenchang1213@gmail.com> wrote:

> hello, at most 40% of total memory; the official docs also say so.

The docs say, "it is unlikely that an allocation of more than 40% of RAM
to shared_buffers will work better than a smaller amount" which is a far
cry from defining a maximum amount, or even warning that there are dangers
for high values. Also, that is in the same paragraph that starts with
"If you have a dedicated database server with 1GB or more of RAM ..." so
I don't need to check the RCS logs to predict that that paragraph hasn't
been updated in a while.

> you can testify it using pg_prewarm and pgfincore.

Not sure how those tools are going to predict whether I'm going to see
database stalls or other performance inversions from adding more
shared_buffers. For the purposes of seeing if shared_buffers are being
used effectively, I need only graph the block hits and misses to see
that a huge number of cache pages are satisfying requests, but also that
the amount of cache misses is still high enough for me to know that my
working set does _not_ fit in shared_buffers. What I _don't_ know is
whether increasing shared_buffers (to say, 128G on a 750G machine) is
still going to result in the same, weird performance inversion I saw
back in the 9.2 days.
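For what it's worth, the hit/miss graphing described above can be reproduced from the cumulative counters in pg_stat_database (a sketch; note that blks_read counts reads requested from the kernel, which may still be served by the OS page cache rather than disk):

```sql
-- Approximate shared_buffers hit ratio per database from cumulative stats
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4) AS hit_ratio
FROM pg_stat_database
WHERE blks_hit + blks_read > 0;
```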

> btw, is NUMA supported? If so, extra care is necessary when starting the db cluster.

Egad how I hate Linux's default NUMA policy. But I do know how to manage
it, and it's not part of the issue.
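For readers who do hit the same NUMA behavior, one commonly suggested mitigation (assumptions: a Linux host with numactl installed; the data directory path is a placeholder) is to disable zone reclaim and interleave the postmaster's shared memory across nodes at startup:

```shell
# Disable NUMA zone reclaim, a common recommendation for database servers
sysctl -w vm.zone_reclaim_mode=0

# Start PostgreSQL with its shared memory interleaved across NUMA nodes
# (/var/lib/postgresql/data is a placeholder data directory)
numactl --interleave=all pg_ctl -D /var/lib/postgresql/data start
```

Interleaving trades a little local-node latency for avoiding the pathological case where the entire shared_buffers arena lands on one node's memory.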

> Sent from my Samsung Galaxy smartphone.
> -------- Original message --------
> From: Bill Moran <wmoran@potentialtech.com>
> Date: 2017/5/24 20:24 (GMT+08:00)
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?
>
> A few years ago, I was working with "big" servers. At least, they were
> big for that age, with *128G* of RAM!!!1 Holy mackerel, right?!!?
>
> Anyway, at that time, I tried allocating 64G to shared buffers and we
> had a bunch of problems with inconsistent performance, including "stall"
> periods where the database would stop responding for 2 or 3 seconds.
> After trying all sorts of tuning options that didn't help, the problem
> finally went away after reducing shared_buffers to 32G. I speculated, at
> the time, that the shared buffer code hit performance issues managing
> that much memory, but I never had the opportunity to really follow up
> on it.
>
> Now, this was back in 2012 or thereabouts. Seems like another lifetime.
> Probably PostgreSQL 9.2 at that time.
>
> Nowadays, 128G is a "medium sized" server. I just got access to one
> with 775G. It would appear that I could order from Dell with 1.5T of
> RAM if I'm willing to sell my house ...
>
> Yet, all the docs and advice I'm able to find online seem to have been
> written pre 2008 and say things like "if your server has more than 1G
> of RAM ..."
>
> I feel like it's time for a documentation update ;) But I, personally
> don't have the experience recently enough to know what sort of
> recommendations to make.
>
> What are people's experience with modern versions of Postgres on hardware
> this size? Do any of the experts have specific recommendations on large
> shared_buffers settings? Any developers care to comment on any work
> that's been done since 2012 to make large values work better?
>
> --
> Bill Moran <wmoran@potentialtech.com>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


--
Bill Moran <wmoran@potentialtech.com>
