Re: libpq: Process buffered SSL read bytes to support records >8kB on async API
From: Lars Kanis
Subject: Re: libpq: Process buffered SSL read bytes to support records >8kB on async API
Date:
Msg-id: 68cb19fc-4a1c-473e-99e6-d83b99c2e245@greiz-reinsdorf.de
In reply to: Re: libpq: Process buffered SSL read bytes to support records >8kB on async API (Jacob Champion <jacob.champion@enterprisedb.com>)
Responses: Re: libpq: Process buffered SSL read bytes to support records >8kB on async API
List: pgsql-hackers
Thank you, Jacob, for verifying this issue!

> Gory details of the packet sizes, if it's helpful:
> - max TLS record size is 12k, because it made the math easier
> - server sends DataRow of 32006 bytes, followed by DataRow of 806
>   bytes, followed by CommandComplete/ReadyForQuery
> - so there are three TLS records on the wire containing
>   1) DataRow 1 fragment 1 (12k bytes)
>   2) DataRow 1 fragment 2 (12k bytes)
>   3) DataRow 1 fragment 3 (7430 bytes) + DataRow 2 (806 bytes)
>      + CommandComplete + ReadyForQuery

How did you verify the issue on the server side - with YugabyteDB or with a modified Postgres server? I'd like to verify the GSSAPI part, and I'm familiar with the Postgres server only.

> I agree that PQconsumeInput() needs to ensure that the transport
> buffers are all drained. But I'm not sure this is a complete solution;
> doesn't GSS have the same problem? And are there any other sites that
> need to make the same guarantee before returning?

Which other sites do you mean? The synchronous transfer already works, since the select() is short-circuited when there are pending bytes: [1]

> I need to switch away from this for a bit. Would you mind adding this
> to the next Commitfest as a placeholder?

No problem; registered: https://commitfest.postgresql.org/50/5251/

--
Regards, Lars

[1] https://github.com/postgres/postgres/blob/77761ee5dddc0518235a51c533893e81e5f375b9/src/interfaces/libpq/fe-misc.c#L1070
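For readers following the thread: below is a minimal sketch of the kind of select()-driven async libpq loop this issue affects. It is not part of the patch; the connection string and the 32000-byte query are illustrative only, and whether the stall actually triggers depends on the server emitting TLS records larger than libpq's 8kB input buffer, which is exactly what the question above about a modified Postgres server is getting at.

    /*
     * A minimal sketch (not part of the patch) of the usual select()-driven
     * libpq read loop.  If the server sends a TLS record larger than libpq's
     * 8kB input buffer, the tail of that record ends up decrypted but
     * buffered inside OpenSSL; the kernel socket then has nothing left to
     * read, so the select() below never wakes up again unless
     * PQconsumeInput() drains the transport buffer completely.
     */
    #include <stdio.h>
    #include <sys/select.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("sslmode=require dbname=postgres");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }
        PQsetnonblocking(conn, 1);

        /* a row wide enough to span several TLS records */
        PQsendQuery(conn, "SELECT repeat('x', 32000)");

        while (PQisBusy(conn))
        {
            int     sock = PQsocket(conn);
            fd_set  rfds;

            FD_ZERO(&rfds);
            FD_SET(sock, &rfds);

            /*
             * Blocks here if the remainder of a large TLS record is already
             * sitting in OpenSSL's buffer rather than in the kernel socket.
             */
            if (select(sock + 1, &rfds, NULL, NULL, NULL) < 0)
            {
                perror("select");
                return 1;
            }

            if (!PQconsumeInput(conn))
            {
                fprintf(stderr, "PQconsumeInput: %s", PQerrorMessage(conn));
                return 1;
            }
        }

        for (PGresult *res; (res = PQgetResult(conn)) != NULL; PQclear(res))
            printf("status: %s\n", PQresStatus(PQresultStatus(res)));

        PQfinish(conn);
        return 0;
    }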
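For context on [1]: the synchronous path avoids the problem because it asks the TLS layer whether already-decrypted bytes are buffered before waiting on the socket, and skips the select() if any are pending. The following is only a rough illustration of that idea, assuming OpenSSL; wait_for_read(), ssl and sock are made-up names for this sketch, and the real check sits around the fe-misc.c line cited in [1].

    /*
     * Rough illustration of the short-circuit in [1]; the names here are
     * invented for this sketch and are not libpq's internals.
     */
    #include <stdbool.h>
    #include <sys/select.h>
    #include <openssl/ssl.h>

    static bool
    wait_for_read(SSL *ssl, int sock)
    {
        fd_set rfds;

        /* Decrypted bytes already buffered by OpenSSL: no need to wait. */
        if (ssl != NULL && SSL_pending(ssl) > 0)
            return true;

        FD_ZERO(&rfds);
        FD_SET(sock, &rfds);
        return select(sock + 1, &rfds, NULL, NULL, NULL) > 0;
    }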