On Thu, Feb 18, 2016 at 6:48 PM, drum.lucas@gmail.com
<drum.lucas@gmail.com> wrote:
> Thanks for the reply, Scott.
>
>
> On 19 February 2016 at 13:47, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>>
>> So how big is your data set? Is it much bigger, about the same, or
>> much smaller than your shared_buffers? The problem with a giant
>
>
> The total DB size is 1.9 TB
Yeah, 50GB of shared_buffers isn't gonna hold the whole thing, so being
that big is probably counterproductive. I'd drop it to somewhere between
1 and 10GB, and test your workload at each size.
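For example (values are hypothetical starting points, not recommendations;
only your own benchmarks can pick the winner):

```ini
# postgresql.conf -- candidate shared_buffers sizes, one per test run
shared_buffers = 2GB     # low end of the suggested 1-10GB range
#shared_buffers = 4GB
#shared_buffers = 8GB    # high end; watch for flat or negative returns

# each change needs a server restart before re-running the workload
```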
>> Also are you using a pooler? I would take it as no. Note that your
>> connections go from 30 or so to over 140 during a spike. A computer,
>> based on number of concurrent iops it can handle etc, will have a
>> performance graph that climbs as you hit a certain peak number of
>> active connections. On a machine like yours I'd expect that peak to be
>> between 4 and 20. Restricting active connections to a number in that
>> range makes the machine faster in terms of throughput, and keeps it
>> from slowly tipping over as you go further and further past its peak
>> number.
>>
>> pgbouncer is super easy to setup and it can handle huge numbers of
>> idle connections (10,000 etc) while keeping the db running at its
>> fastest. My advice? Get a pooler in there.
>
>
> I'm not using a pooler, but I'll have a look at it.
Cool. Pick a pool size (4 to 20 etc) that corresponds to the best
throughput (transactions per second etc).
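One way to find that sweet spot is a pgbench sweep over client counts
(sketch only -- assumes a scratch database named pgbench that you've
initialized with pgbench -i; adjust the range to your hardware):

```
# Compare reported tps at each concurrency level; the peak suggests
# a good pool size.  -c clients, -j worker threads, -T seconds per run
for c in 4 8 12 16 20; do
  echo "clients=$c"
  pgbench -c "$c" -j 2 -T 60 pgbench | grep tps
done
```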
> Should I decrease my max_connections as well?
Not before you put a connection pooler in place. Right now lowering it
will likely create as many problems as it solves, with "failed to
connect" error messages etc. Once a pooler is in place, it's nice to set
max_connections to about 2x what you expect to see coming through the
pooler.
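Putting the two together, a minimal sketch (db name, paths, and the pool
size of 10 are all placeholders -- check the pgbouncer docs for the auth
setup that fits your environment):

```ini
; pgbouncer.ini -- minimal sketch
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; transaction pooling reuses servers best
default_pool_size = 10       ; whatever your throughput testing settles on
max_client_conn = 10000      ; idle client connections are cheap to hold

; postgresql.conf side, sized against that pool:
; max_connections = 20       ; ~2x the pool, with headroom for admin sessions
```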
--
To understand recursion, one must first understand recursion.