Re: pgbench could not send data to client: Broken pipe

From: Kevin Grittner
Subject: Re: pgbench could not send data to client: Broken pipe
Date:
Msg-id: 4C88D1F1020000250003544F@gw.wicourts.gov
In reply to: Re: pgbench could not send data to client: Broken pipe  (Greg Smith <greg@2ndquadrant.com>)
Responses: Re: pgbench could not send data to client: Broken pipe  (Greg Smith <greg@2ndquadrant.com>)
List: pgsql-performance
Greg Smith <greg@2ndquadrant.com> wrote:
> Kevin Grittner wrote:
>> Of course, the only way to really know some of these numbers is
>> to test your actual application on the real hardware under
>> realistic load; but sometimes you can get a reasonable
>> approximation from early tests or "gut feel" based on experience
>> with similar applications.
>
> And that latter part only works if your gut is as accurate as
> Kevin's.  For most people, even a rough direct measurement is much
> more useful than any estimate.

:-)  Indeed, when I talk about "'gut feel' based on experience with
similar applications" I'm think of something like, "When I had a
query with the same number of joins against tables about this size
with the same number and types of key columns, metrics showed that
it took n ms and was CPU bound, and this new CPU and RAM hardware
benchmarks twice as fast, so I'll ballpark this at 2/3 the runtime
as a gut feel, and follow up with measurements as soon as
practical."  That may not have been entirely clear....

> So the incoming query in this not completely contrived case (I
> just picked the numbers to make the math even) takes the same
> amount of time to deliver a result either way.

I'm gonna quibble with you here.  Even if the last request
completes at the same time either way (which discounts the very
real contention and context-switch costs), if you release the
thundering
herd of requests all at once they will all finish at about the same
time as that last request, while a queue allows a stream of
responses throughout.  Since results start coming back almost
immediately, and stream through evenly, your *average response time*
is nearly cut in half with the queue.  And that's without figuring
the network congestion issues of having all those requests complete
at the same time.
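
To put rough numbers on that (a toy model, not a measurement; the
request count and service time below are made up to keep the
arithmetic easy), here's a quick Python sketch of the two
completion patterns:

# Toy model: N identical, CPU-bound requests on one core, each
# needing SERVICE_MS of processor time.  Hypothetical numbers.
N = 100          # concurrent client requests
SERVICE_MS = 10  # CPU time each request needs

# Thundering herd: release all N at once.  With fair time-slicing
# every request is still running until the very end, so each one
# finishes at roughly N * SERVICE_MS (ignoring contention and
# context-switch overhead, which only makes this worse).
herd_avg = N * SERVICE_MS

# Queue (pool of one connection): the k-th request finishes at
# k * SERVICE_MS, so responses stream back throughout the run.
queue_avg = sum(k * SERVICE_MS for k in range(1, N + 1)) / N

print(f"thundering herd: average response {herd_avg} ms")
print(f"queued:          average response {queue_avg} ms")
# The last request completes at N * SERVICE_MS = 1000 ms either
# way, but the queued average is (N + 1) / 2 * SERVICE_MS = 505 ms,
# just over half the herd's 1000 ms average.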

In my experience, the response-time benefit of shrinking your
connection pool to match the available resources is usually more
noticeable than the throughput improvement.  This directly
contradicts many people's intuition, revealing the downside of
"gut feel".

-Kevin
