Discussion: DSA failed to allocate memory
Hi,

I am using dynamic shared memory areas (DSA) to manage some variable-length shared memory, and I've found that in some cases allocation fails even though there are enough contiguous pages. The steps to reproduce are as follows:

1. Create a dsa area with a 1MB DSM segment.
2. Set its size limit to 1MB.
3. Allocate 4KB of memory repeatedly until allocation fails.
4. Free all the memory allocated in step 3.
5. Repeat steps 3 and 4.

The first time I run step 3, 240 4KB allocations succeed. But when I free everything and allocate again, no memory can be allocated at all, even though there are 252 contiguous free pages. IMO this should not be expected to happen, right? The call stack is as follows:

#0  get_best_segment (area=0x200cc70, npages=16) at dsa.c:1972
#1  0x0000000000b46b36 in ensure_active_superblock (area=0x200cc70, pool=0x7fa7b51f46f0, size_class=33) at dsa.c:1666
#2  0x0000000000b46555 in alloc_object (area=0x200cc70, size_class=33) at dsa.c:1460
#3  0x0000000000b44f05 in dsa_allocate_extended (area=0x200cc70, size=4096, flags=2) at dsa.c:795

I read the relevant code and found that get_best_segment re-bins the segment into bin 4 during the first run of step 3. But after freeing everything and running step 3 again, get_best_segment starts searching from the first bin that *might* have enough contiguous pages, as computed by contiguous_pages_to_segment_bin(); for a superblock of 16 pages, contiguous_pages_to_segment_bin() returns 5. So the second time, get_best_segment searches bins 5 through 16, but the suitable segment has been re-binned into bin 4, which is never checked. In the end, get_best_segment returns NULL and dsa_allocate_extended returns an invalid dsa pointer.

Maybe we can use one of the following methods to fix it:
1. Re-bin the segment into the suitable bin when dsa_free is called.
2. Make get_best_segment search all bins.

I wrote a simple test program, attached, that reproduces it. Any thoughts?

--
Best Regards
Dongming (https://www.aliyun.com/)
Attachments
On Mon, Jan 24, 2022 at 4:59 AM Dongming Liu <ldming101@gmail.com> wrote:
> Maybe we can use one of the following methods to fix it:
> 1. re-bin segment to suitable segment index when called dsa_free
> 2. get_best_segment search all bins

(2) is definitely the wrong idea. The comments say:

/*
 * What is the lowest bin that holds segments that *might* have n contiguous
 * free pages?  There is no point in looking in segments in lower bins; they
 * definitely can't service a request for n free pages.
 */
#define contiguous_pages_to_segment_bin(n) Min(fls(n), DSA_NUM_SEGMENT_BINS - 1)

So it's OK for a segment to be in a bin that suggests that it has more consecutive free pages than it really does. But it's NOT OK for a segment to be in a bin that suggests it has fewer consecutive pages than it really does. If dsa_free() is putting things back into the wrong place, that's what we need to fix.

--
Robert Haas
EDB: http://www.enterprisedb.com
> So it's OK for a segment to be in a bin that suggests that it has more
> consecutive free pages than it really does. But it's NOT ok for a
> segment to be in a bin that suggests it has fewer consecutive pages
> than it really does. If dsa_free() is putting things back into the
> wrong place, that's what we need to fix.
I'm trying to move segments into appropriate bins in dsa_free().
In 0001-Re-bin-segment-when-dsa-memory-is-freed.patch, I extract
the re-bin segment logic into a separate function called rebin_segment,
call it to move the segment to the appropriate bin when DSA memory is
freed. Otherwise, when allocating memory, a segment with enough
contiguous pages may sit in a lower bin, so a suitable segment
may not be found.
For testing, I ported the test_dsa patch from [1] and added an OOM case
that allocates memory until OOM, frees it all, and then allocates again,
comparing the number of successful allocations before and after.
Any thoughts?
Attachments
On Fri, Mar 18, 2022 at 3:30 PM Dongming Liu <ldming101@gmail.com> wrote:
>> So it's OK for a segment to be in a bin that suggests that it has more
>> consecutive free pages than it really does. But it's NOT ok for a
>> segment to be in a bin that suggests it has fewer consecutive pages
>> than it really does. If dsa_free() is putting things back into the
>> wrong place, that's what we need to fix.
>
> I'm trying to move segments into appropriate bins in dsa_free().
>
> In 0001-Re-bin-segment-when-dsa-memory-is-freed.patch, I extract
> the re-bin segment logic into a separate function called rebin_segment,
> call it to move the segment to the appropriate bin when dsa memory is
> freed. Otherwise, when allocating memory, a segment with enough
> contiguous pages may sit in a lower bin, so a suitable segment
> may not be found.
>
> For testing, I ported the test_dsa patch from [1] and added an OOM case
> that allocates memory until OOM, frees it all, and then allocates again,
> comparing the number of successful allocations before and after.
>
> Any thoughts?
Best Regards,
Dongming
Attachments
On Mon, Mar 28, 2022 at 8:14 PM Dongming Liu <ldming101@gmail.com> wrote:
> On Fri, Mar 18, 2022 at 3:30 PM Dongming Liu <ldming101@gmail.com> wrote:
>> I'm trying to move segments into appropriate bins in dsa_free().
>> In 0001-Re-bin-segment-when-dsa-memory-is-freed.patch, I extract
>> the re-bin segment logic into a separate function called rebin_segment,
>> call it to move the segment to the appropriate bin when dsa memory is
>> freed. Otherwise, when allocating memory, due to the segment with
>> enough contiguous pages is in a smaller bin, a suitable segment
>> may not be found to allocate memory.
>>
>> Fot test, I port the test_dsa patch from [1] and add an OOM case to
>> test memory allocation until OOM, free and then allocation, compare
>> the number of allocated memory before and after.

Hi Dongming,

Thanks for the report, and for working on the fix. Can you please create a commitfest entry (if you haven't already)? I plan to look at this soon, after the code freeze.

Are you proposing that the test_dsa module should be added to the tree? If so, some trivial observations: "#ifndef HAVE_INT64_TIMESTAMP" isn't needed anymore (see commit b6aa17e0, which is in all supported branches), the year should be updated, and we use size_t instead of Size in new code.
On Mon, Mar 28, 2022 at 3:53 PM Thomas Munro <thomas.munro@gmail.com> wrote:
> Hi Dongming,
>
> Thanks for the report, and for working on the fix. Can you please
> create a commitfest entry (if you haven't already)? I plan to look at
> this soon, after the code freeze.
I created a commitfest entry https://commitfest.postgresql.org/38/3607/.
Thanks for your review.
> Are you proposing that the test_dsa module should be added to the
> tree? If so, some trivial observations: "#ifndef
> HAVE_INT64_TIMESTAMP" isn't needed anymore (see commit b6aa17e0, which
> is in all supported branches), the year should be updated, and we use
> size_t instead of Size in new code.
I have removed the HAVE_INT64_TIMESTAMP related code.

Most of the code for test_dsa comes from your patch [1], and I added some test cases.
In addition, I added a few OOM test cases that allocate a fixed size of
memory until allocation fails, run that twice, and compare the amount of
memory allocated each time. These cases fail on the current master branch.
Attachments
Thomas Munro <thomas.munro@gmail.com> writes:
> Thanks for the report, and for working on the fix. Can you please
> create a commitfest entry (if you haven't already)? I plan to look at
> this soon, after the code freeze.

Hi Thomas, are you still intending to look at this DSA bug fix? It's been sitting idle for months.

regards, tom lane
On Fri, Jan 20, 2023 at 11:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Thomas Munro <thomas.munro@gmail.com> writes:
> > Thanks for the report, and for working on the fix. Can you please
> > create a commitfest entry (if you haven't already)? I plan to look at
> > this soon, after the code freeze.
>
> Hi Thomas, are you still intending to look at this DSA bug fix?
> It's been sitting idle for months.

Yeah. I think the analysis looks good, but I'll do some testing next week with the aim of getting it committed. Looks like it now needs Meson changes, but I'll look after that as my penance.
On Fri, Jan 20, 2023 at 11:02 PM Thomas Munro <thomas.munro@gmail.com> wrote:
> Yeah. I think the analysis looks good, but I'll do some testing next
> week with the aim of getting it committed. Looks like it now needs
> Meson changes, but I'll look after that as my penance.

Here's an updated version that I'm testing...

Changes to the main patch:
* Adjust a few comments
* pgindent
* Explained a bit more in the commit message

I'm wondering about this bit in rebin_segment():

+    if (segment_map->header == NULL)
+        return;

Why would we be rebinning an uninitialised/unused segment? Does something in your DSA-client code (I guess you have an extension?) hit this case? The tests certainly don't; I'm not sure how the case could be reached.

Changes to the test:
* Update copyright year
* Size -> size_t
* pgindent
* Add Meson glue
* Re-alphabetise the makefile
* Make sure we get BGWH_STOPPED while waiting for bgworkers to exit
* Background worker main function return type is fixed (void)
* results[1] -> results[FLEXIBLE_ARRAY_MEMBER]
* getpid() -> MyProcPid

I wonder if this code would be easier to understand, but not materially less efficient, if we re-binned eagerly when allocating too, so the bin is always correct/optimal. Checking fpm_largest() again after allocating should be cheap, I guess (it just reads a member variable that we already paid the cost of maintaining). We don't really seem to amortise much; we just transfer the rebinning work to the next caller to consider the segment. I haven't tried out that theory though.
Attachments
On Mon, Feb 20, 2023 at 5:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:
> I'm wondering about this bit in rebin_segment():
>
> +    if (segment_map->header == NULL)
> +        return;
>
> Why would we be rebinning an uninitialised/unused segment?

Answering my own question: because destroy_superblock() can do that. So I think destroy_superblock() should test for that case, not rebin_segment(). See attached.
Attachments
Pushed. I wasn't sure it was worth keeping the test in the tree. It's here in the mailing list archives for future reference.