On 03/01/2012 07:58 PM, Claudio Freire wrote:
> On Thu, Mar 1, 2012 at 9:28 PM, Peter van Hardenberg <pvh@pvh.ca> wrote:
>>> Setting work_mem to hundreds of MB in a 4G system is suicide. Even
>>> tens can be dangerous.
>>>
>> Why do you say that? We've had work_mem happily at 100MB for years. Is
>> there a particular degenerate case you're concerned about?
> Me too.
>
> But I've analyzed the queries I'll be sending to the database and I've
> carefully bound the effective amount of memory used given the load
> I'll be experiencing.
>
> Saying that it should be set to 100M without considering those
> matters is the suicide part. work_mem applies to each sort operation.
> Suppose, just for the sake of argument, that each connection is
> performing 5 such sorts (i.e., 5 joins of big tables - not unthinkable);
> if you then have max_connections at the default of 100, the system
> could request as much as 50G of RAM.
>
> I set work_mem higher in my database system since I *know* most of the
> connections will not perform any merge or hash joins, nor will they
> sort the output, so they won't use work_mem even once. The ones that
> will, I have limited on the application side to a handful, hence I
> *know* that the 50G theoretical maximum will not be reached.
>
> Can the OP say that? I have no reason to think so. Hence I don't
> suggest 100M is OK on a 4G system.
Well, obviously you need to know your workload. Nobody said otherwise.
cheers
andrew
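
For reference, the worst-case arithmetic in Claudio's example can be sketched in a few lines. This is only an illustration of the upper bound he describes (the function name and the 5-sorts-per-connection figure are hypothetical, not anything PostgreSQL reports):

```python
# Worst-case work_mem arithmetic, as a sketch.
# work_mem is a per-sort/hash-operation limit, so the theoretical
# peak is work_mem * operations_per_connection * max_connections.

def worst_case_work_mem_gb(work_mem_mb, sorts_per_connection, max_connections):
    """Upper bound on memory (GB) if every connection runs
    sorts_per_connection concurrent sort/hash operations, each
    allowed up to work_mem_mb megabytes."""
    return work_mem_mb * sorts_per_connection * max_connections / 1024

# 100MB work_mem, 5 sorts per connection, default max_connections of 100:
print(worst_case_work_mem_gb(100, 5, 100))  # 48.828125 - the ~50G in the thread
```

Of course, as the thread notes, this bound is only reached if every connection actually runs that many memory-hungry operations at once; knowing your workload is what makes a given setting safe or not.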