Suppose that the server is executing a lengthy query and the client breaks the connection. The operating system knows the connection is gone, but PostgreSQL doesn't notice, because it isn't trying to read from or write to the socket; it's not paying attention to the socket at all. In theory, the query could be one that runs for a million years, continuing to chew up CPU and I/O, or at the very least a connection slot, essentially forever. That's sad.
I don't have a terribly specific idea about how to improve this, but is there some way that we could, at least periodically, check the socket to see whether it's dead? Noticing the demise of the client after a configurable interval (maybe 60s by default?) would be infinitely better than never.
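For concreteness, here's a minimal sketch of the sort of check I'm imagining, something the backend could run on a timer between chunks of query execution. This assumes a Linux-ish environment (POLLRDHUP is Linux-only; a portable variant could peek at the socket with recv() and MSG_DONTWAIT instead), and the helper's name is just for illustration:

#define _GNU_SOURCE             /* for POLLRDHUP */
#include <poll.h>
#include <stdbool.h>

/*
 * Hypothetical helper: probe the client socket for liveness without
 * consuming any data that might be queued on it.  poll() with a zero
 * timeout just samples the current socket state; POLLRDHUP is reported
 * when the peer has shut down its end of the connection.
 */
static bool
client_connection_lost(int sock)
{
    struct pollfd pfd;

    pfd.fd = sock;
    pfd.events = POLLRDHUP;
    pfd.revents = 0;

    if (poll(&pfd, 1, 0) > 0 &&
        (pfd.revents & (POLLRDHUP | POLLHUP | POLLERR)) != 0)
        return true;        /* peer closed, hung up, or errored out */

    return false;
}

Note that checking for plain readability wouldn't be enough: the client might legitimately have data sitting in the buffer (a Terminate message it sent just before closing, say), so the check has to distinguish "the peer has something to say" from "the peer is gone," which is what POLLRDHUP is for.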