On Tue, Sep 21, 2021 at 9:08 AM Jeevan Ladhe <jeevan.ladhe@enterprisedb.com> wrote:
> Yes, you are right here, and I could verify this fact with an experiment.
> When autoflush is 1, the file gets less compressed i.e. the compressed file
> is of more size than the one generated when autoflush is set to 0.
> But, as of now, I couldn't think of a solution as we need to really advance the
> bytes written to the output buffer so that we can write into the output buffer.
I don't understand why you think we need to do that. What happens if you just change prefs->autoFlush from 1 to 0? What I think will happen is that you'll call LZ4F_compressUpdate() a bunch of times without it producing any output, and then suddenly one of the calls will produce a bunch of output all at once. But so what? I don't see how anything in bbsink_lz4_archive_contents() would get broken by that.
It would be a problem if LZ4F_compressUpdate() didn't produce anything and also didn't buffer the data internally, and expected us to keep the input around. That we would have difficulty doing, because we wouldn't be calling LZ4F_compressUpdate() if we didn't need to free up some space in that sink's input buffer. But if it buffers the data internally, I don't know why we care.
If I set prefs->autoFlush to 0, then LZ4F_compressUpdate() returns the error ERROR_dstMaxSize_tooSmall after a few iterations.
After digging a bit into the source of LZ4F_compressUpdate() in the LZ4 repository, I see that it throws this error when the destination buffer capacity, which in our case is mysink->base.bbs_next->bbs_buffer_length, is less than the compress bound that it computes internally by calling LZ4F_compressBound() for buffered_bytes + the input buffer (CHUNK_SIZE in this case). Not sure