Idea: quicker abort after loss of client connection

From Tom Lane
Subject Idea: quicker abort after loss of client connection
Date
Msg-id 21746.991785662@sss.pgh.pa.us
Responses Re: Idea: quicker abort after loss of client connection  (ncm@zembu.com (Nathan Myers))
Re: Idea: quicker abort after loss of client connection  (Bruce Momjian <pgman@candle.pha.pa.us>)
List pgsql-hackers
Currently, if the client application dies (== closes the connection),
the backend will observe this and exit when it next returns to the
outer loop and tries to read a new command.  However, we might detect
the loss of connection much sooner; for example, if we are doing a
SELECT that outputs large amounts of data, we will see failures from
send().

We have deliberately avoided trying to abort as soon as the connection
drops, for fear that that might cause unexpected problems.  However,
it's moderately annoying to see the postmaster log fill with
"pq_flush: send() failed" messages when something like this happens.

It occurs to me that a fairly safe way to abort after loss of connection
would be for pq_flush or pq_recvbuf to set QueryCancel when they detect
a communications problem.  This would not immediately abort the query in
progress, but would ensure a cancel at the next safe time in the
per-tuple loop.  You wouldn't get very much more output before that
happened, typically.
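
A minimal sketch of what that could look like, assuming the existing
QueryCancel flag; apart from QueryCancel, pq_flush, and send(), the
names and buffer handling below are stand-ins, not the real backend code:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* The existing flag checked at the next safe point in the per-tuple
     * loop; in the backend it is defined elsewhere, declared here only
     * so the sketch is self-contained. */
    volatile bool QueryCancel = false;

    /* Sketch of the proposed behavior for pq_flush(): on a send()
     * failure, request a query cancel instead of only logging the
     * error, so the query aborts at the next safe time. */
    static int
    pq_flush_sketch(int sock, const char *buf, size_t len)
    {
        while (len > 0)
        {
            ssize_t n = send(sock, buf, len, 0);

            if (n < 0)
            {
                if (errno == EINTR)
                    continue;       /* interrupted by a signal: just retry */

                /* Connection is presumably gone: arrange a soft abort
                 * rather than filling the log with failure messages. */
                QueryCancel = true;
                return EOF;
            }
            buf += n;
            len -= (size_t) n;
        }
        return 0;
    }

pq_recvbuf could set the flag the same way on a recv() failure.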

Thoughts?  Is there anything about this that might be unsafe?  Should
QueryCancel be set after *any* failure of recv() or send(), or only
if certain errno codes are detected (and if so, which ones)?
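
One purely illustrative answer to the errno question would be to limit
the new behavior to codes that clearly mean the peer is gone, and keep
treating transient conditions as retryable; which codes belong in the
first group is exactly the open question:

    #include <errno.h>
    #include <stdbool.h>

    /* Illustrative helper: decide whether an errno value from send()
     * or recv() means the client connection is lost for good. */
    static bool
    connection_lost_errno(int err)
    {
        switch (err)
        {
            case EPIPE:          /* peer closed the socket; writes now fail */
            case ECONNRESET:     /* peer reset the connection */
                return true;

            case EINTR:          /* interrupted by a signal: retry instead */
            case EAGAIN:         /* would block: not a lost connection */
                return false;

            default:
                return false;    /* be conservative about anything else */
        }
    }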
        regards, tom lane

