On Mon, Dec 5, 2016 at 12:41 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> It's not quite the same thing, because control->max_total_segment_size
>> is a total of the memory used by all allocations plus the associated
>> bookkeeping overhead, not the amount of memory used by a single
>> allocation.
>
> Really?  Why doesn't it start out at zero then?

It seems I misspoke.  It's an upper limit on the total amount of
memory that could be used, not the amount actually used.
> Given your later argumentation, I wonder why we're trying to implement
> any kind of limit at all, rather than just operating on the principle
> that it's the kernel's problem to enforce a limit. In short, maybe
> removing max_total_segment_size would do fine.

Well, if we did that, then we'd have to remove dsa_set_size_limit().
I don't want to do that, because I think it's useful for the calling
code to be able to ask this code to enforce a limit that may be less
than the point at which allocations would start failing. We do that
sort of thing all the time (e.g. work_mem, max_locks_per_transaction)
for good reasons. Let's not re-engineer this feature now on the
strength of "it produces a compiler warning". I think the easiest
thing to do here is change SIZE_MAX to (Size) -1. If there are deeper
problems that need to be addressed, we can consider those separately.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company