Apart from the above, I added a local memory context to hold the memory needed to form the tuple for each line. The context is reset after every HBA line, so that per-line allocations do not accumulate when pg_hba.conf is very large.
The revert_hba_context_release_in_backend patch, apart from reverting commit 1e24cf64, also changes tokenize_file() to allocate the new context under CurrentMemoryContext instead of TopMemoryContext.
1. Have you considered re-loading the HBA file in a local context upon each call to this function, instead of keeping it in the backend's memory? I do not expect that a revert of 1e24cf645d24aab3ea39a9d259897fd0cae4e4b6 would be accepted, as its commit message refers to potential security problems with keeping this data in backend memory:
... This saves a
probably-usually-negligible amount of space per running backend. It also
avoids leaving potentially-security-sensitive data lying around in memory
in processes that don't need it. You'd have to be unusually paranoid to
think that that amounts to a live security bug, so I've not gone so far as
to forcibly zero the memory; but there surely isn't a good reason to keep
this data around.
2. I also wonder why JSONB arrays are used for the database/user columns instead of TEXT[]?
3. What happens with special keywords in the database column, like sameuser/samerole/samegroup, and with special values in the user column?
4. Would it be possible to also include the raw, unparsed line from the HBA file? The line number alone is probably enough when you have access to the host, but to show the results to someone else you might need to copy the raw line manually. Not a big deal anyway.
5. Some tests demonstrating possible output would be really nice to have.