Re: Why insertion throughput can be reduced with an increase of batch size?

From Adrian Klaver
Subject Re: Why insertion throughput can be reduced with an increase of batch size?
Date
Msg-id 2e81486a-beba-57b7-9bb0-5d6204b4d652@aklaver.com
Whole thread Raw
In response to Why insertion throughput can be reduced with an increase of batch size?  (Павел Филонов <filonovpv@gmail.com>)
List pgsql-general
On 08/21/2016 11:53 PM, Павел Филонов wrote:
> My greetings to everybody!
>
> I recently came across an observation that I cannot explain: why can
> insertion throughput drop as batch size increases?
>
> Brief description of the experiment.
>
>   * PostgreSQL 9.5.4 as server
>   * https://github.com/sfackler/rust-postgres library as client driver
>   * one relation with two indices (scheme in attach)
>
> Experiment steps:
>
>   * populate DB with 259200000 random records
>   * start insertion for 60 seconds with one client thread and batch size = m
>   * record insertions per second (ips) in clients code
>
> Plot median ips from m for m in [2^0, 2^1, ..., 2^15] (in attachment).
>
>
> In the figure we can see that from m = 128 to m = 256 throughput drops
> from 13,000 ips to 5,000.
>
> I hope someone can help me understand the reason for this behavior.

To add to Jeff's questions:

You say you are measuring the IPS in the client's code.

Where is the client: on the same machine, the same network, or a remote network?
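
One thing worth checking is how the driver actually batches: the thread does
not say, but a common approach is a single multi-row VALUES statement, where
every extra row adds one bind parameter per column. Below is a minimal Python
sketch of that approach (the table name and columns are hypothetical, not from
the experiment's schema); it only illustrates how the statement grows with m.
Note that PostgreSQL's extended-query protocol caps bind parameters at 65535
per statement, and very large statements also cost more to parse and plan.

```python
def build_batch_insert(table, columns, m):
    """Build a parameterized multi-row INSERT for a batch of m rows.

    Hypothetical helper for illustration only; table/column names are
    assumptions, not the schema from the attached experiment.
    """
    row = "(" + ", ".join(["%s"] * len(columns)) + ")"  # one row of placeholders
    rows = ", ".join([row] * m)                          # m rows in one VALUES list
    return f"INSERT INTO {table} ({', '.join(columns)}) VALUES {rows}"

# Each increase in m multiplies the bind-parameter count by len(columns),
# so doubling the batch size doubles both statement length and parse work.
sql = build_batch_insert("samples", ["ts", "value"], 3)
```

If the client instead sends m separate single-row INSERTs per batch, the
trade-offs are different (round trips dominate), so it is worth confirming
which shape the rust-postgres code actually produces before interpreting
the throughput curve.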

>
> --
> Best regards
> Filonov Pavel
>
>
>


--
Adrian Klaver
adrian.klaver@aklaver.com

