Discussion: INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

From
Tom DalPozzo
Date:
Hi,
I have two tables, t1 and t2, each with an indexed bigint id field and a 256-char data field; t1 always has 10,000 rows, while t2 keeps growing as explained below.

My libpq client continuously updates one row in t1 (each time targeting a different row) and inserts a new row into t2. All this happens in blocks of 1000 update-insert pairs per commit, to get better performance.
wal_sync_method is fsync, fsync is on; my conf file is attached.
I have a 3.8 GHz laptop with an EVO SSD.
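
For reference, a minimal sketch of what the client loop does (simplified: my real code differs in details, table/column names are as described above, and error handling is reduced to a bare connection check):

    /* sketch of the benchmark client, built with libpq:  cc client.c -lpq */
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        char sql[256];
        long n = 0;                       /* total rows inserted so far */

        for (;;) {
            PQclear(PQexec(conn, "BEGIN"));
            for (int i = 0; i < 1000; i++, n++) {
                /* update a different t1 row each time (t1 has 10,000 rows) */
                snprintf(sql, sizeof(sql),
                         "UPDATE t1 SET data = 'x' WHERE id = %ld", n % 10000);
                PQclear(PQexec(conn, sql));
                /* append a new row to t2 */
                snprintf(sql, sizeof(sql),
                         "INSERT INTO t2 VALUES (%ld, 'y')", n);
                PQclear(PQexec(conn, sql));
            }
            PQclear(PQexec(conn, "COMMIT"));  /* one commit per 1000 pairs */
        }

        PQfinish(conn);                   /* not reached */
        return 0;
    }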

Throughput is measured over every two executed blocks, i.e. every 2000 update-insert pairs.

For the first few minutes throughput is around 10K rows/s; then over the next few minutes it slowly drops to 4K rows/s, then slowly climbs back up, and so on, like a wave.
I don't understand this behaviour. Is it normal? What does it depend on?

Also, when I stop the client I see the SSD activity light still working heavily. It keeps going for quite a while unless I stop the PostgreSQL server, in which case it stops immediately. If I then restart the server, the light stays off.
I'm wondering whether this is normal. I'd like to be sure that my data are safe once committed.

Regards
Pupillo

P.S.: I put this question in general questions because my concern is not whether the performance is high or low.

Attachments

Re: INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

From
Adrian Klaver
Date:
On 12/02/2016 09:40 AM, Tom DalPozzo wrote:
> Hi,
> I have two tables, t1 and t2, each with an indexed bigint id field and
> a 256-char data field; t1 always has 10,000 rows, while t2 keeps
> growing as explained below.
>
> My libpq client continuously updates one row in t1 (each time
> targeting a different row) and inserts a new row into t2. All this
> happens in blocks of 1000 update-insert pairs per commit, to get
> better performance.
> wal_sync_method is fsync, fsync is on; my conf file is attached.
> I have a 3.8 GHz laptop with an EVO SSD.
>
> Throughput is measured over every two executed blocks, i.e. every
> 2000 update-insert pairs.
>
> For the first few minutes throughput is around 10K rows/s; then over
> the next few minutes it slowly drops to 4K rows/s, then slowly climbs
> back up, and so on, like a wave.
> I don't understand this behaviour. Is it normal? What does it depend on?

Have you looked at the Postgres log entries that cover these episodes?

Is there anything of interest there?
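
If checkpoint logging isn't already enabled, turning it on will make those episodes visible in the log (a postgresql.conf sketch; the setting takes effect on a reload):

    log_checkpoints = on    # log each checkpoint's start, duration and buffers written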

>
> Also, when I stop the client I see the SSD activity light still
> working heavily. It keeps going for quite a while unless I stop the
> PostgreSQL server, in which case it stops immediately. If I then
> restart the server, the light stays off.
> I'm wondering whether this is normal. I'd like to be sure that my
> data are safe once committed.
>
> Regards
> Pupillo
>
> P.S.: I put this question in general questions because my concern is
> not whether the performance is high or low.


--
Adrian Klaver
adrian.klaver@aklaver.com


Re: INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

From
Tomas Vondra
Date:
On Fri, 2016-12-02 at 13:45 -0800, Adrian Klaver wrote:
>
> On 12/02/2016 09:40 AM, Tom DalPozzo wrote:
> >
> > Hi,
> > I have two tables, t1 and t2, each with an indexed bigint id field
> > and a 256-char data field; t1 always has 10,000 rows, while t2
> > keeps growing as explained below.
> >
> > My libpq client continuously updates one row in t1 (each time
> > targeting a different row) and inserts a new row into t2. All this
> > happens in blocks of 1000 update-insert pairs per commit, to get
> > better performance.
> > wal_sync_method is fsync, fsync is on; my conf file is attached.
> > I have a 3.8 GHz laptop with an EVO SSD.
> >
> > Throughput is measured over every two executed blocks, i.e. every
> > 2000 update-insert pairs.
> >
> > For the first few minutes throughput is around 10K rows/s; then
> > over the next few minutes it slowly drops to 4K rows/s, then
> > slowly climbs back up, and so on, like a wave.
> > I don't understand this behaviour. Is it normal? What does it
> > depend on?
> Have you looked at the Postgres log entries that cover these
> episodes?
>
> Is there anything of interest there?
>
In particular, look at checkpoints. In the config file you've changed
checkpoint_timeout, but you haven't changed max_wal_size, so my guess
is that checkpoints happen every few minutes and run for about half
the time (thanks to checkpoint_completion_target = 0.5). That would be
consistent with the good/bad performance pattern.
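
For example, to make checkpoints rarer and less bursty, something like this is a common starting point (values are illustrative, not tuned for your box):

    # postgresql.conf -- illustrative values only
    checkpoint_timeout = 15min          # checkpoint at most every 15 minutes ...
    max_wal_size = 8GB                  # ... unless this much WAL accumulates first
    checkpoint_completion_target = 0.9  # spread the writes over 90% of the interval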

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

From
Tom DalPozzo
Date:
I tried tuning some parameters, without any appreciable change in this behaviour. I played with:
checkpoint_timeout
max_wal_size
shared_buffers
commit_delay
checkpoint_completion_target

No meaningful info found in the log file.
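
In case it's useful to others, checkpoint frequency can also be read from SQL instead of the log; a query along these lines (assuming the 9.x pg_stat_bgwriter columns) shows how many checkpoints were triggered by timeout versus forced by WAL volume:

    SELECT checkpoints_timed,        -- fired by checkpoint_timeout
           checkpoints_req,          -- forced by WAL volume (max_wal_size)
           checkpoint_write_time,    -- ms spent writing buffers
           checkpoint_sync_time      -- ms spent syncing files
    FROM pg_stat_bgwriter;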

Regards



2016-12-04 4:02 GMT+01:00 Tomas Vondra <tomas.vondra@2ndquadrant.com>:
On Fri, 2016-12-02 at 13:45 -0800, Adrian Klaver wrote:
>
> On 12/02/2016 09:40 AM, Tom DalPozzo wrote:
> >
> > Hi,
> > I have two tables, t1 and t2, each with an indexed bigint id field
> > and a 256-char data field; t1 always has 10,000 rows, while t2
> > keeps growing as explained below.
> >
> > My libpq client continuously updates one row in t1 (each time
> > targeting a different row) and inserts a new row into t2. All this
> > happens in blocks of 1000 update-insert pairs per commit, to get
> > better performance.
> > wal_sync_method is fsync, fsync is on; my conf file is attached.
> > I have a 3.8 GHz laptop with an EVO SSD.
> >
> > Throughput is measured over every two executed blocks, i.e. every
> > 2000 update-insert pairs.
> >
> > For the first few minutes throughput is around 10K rows/s; then
> > over the next few minutes it slowly drops to 4K rows/s, then
> > slowly climbs back up, and so on, like a wave.
> > I don't understand this behaviour. Is it normal? What does it
> > depend on?
> Have you looked at the Postgres log entries that cover these
> episodes?
>
> Is there anything of interest there?
>
In particular, look at checkpoints. In the config file you've changed
checkpoint_timeout, but you haven't changed max_wal_size, so my guess
is that checkpoints happen every few minutes and run for about half
the time (thanks to checkpoint_completion_target = 0.5). That would be
consistent with the good/bad performance pattern.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

From
Jeff Janes
Date:
On Fri, Dec 2, 2016 at 9:40 AM, Tom DalPozzo <t.dalpozzo@gmail.com> wrote:
Hi,
I have two tables, t1 and t2, each with an indexed bigint id field and a 256-char data field; t1 always has 10,000 rows, while t2 keeps growing as explained below.

My libpq client continuously updates one row in t1 (each time targeting a different row) and inserts a new row into t2. All this happens in blocks of 1000 update-insert pairs per commit, to get better performance.
wal_sync_method is fsync, fsync is on; my conf file is attached.
I have a 3.8 GHz laptop with an EVO SSD.

Throughput is measured over every two executed blocks, i.e. every 2000 update-insert pairs.

For the first few minutes throughput is around 10K rows/s; then over the next few minutes it slowly drops to 4K rows/s, then slowly climbs back up, and so on, like a wave.
I don't understand this behaviour. Is it normal? What does it depend on?

Yes, that is normal.  It is also very complicated.  It depends on pretty much everything: PostgreSQL, the kernel, the filesystem, the IO controller, firmware, hardware, whatever else is going on on the computer at the same time, etc.
 

Also, when I stop the client I see the SSD activity light still working heavily.

This is normal.  PostgreSQL writes the critical data to the WAL first, and then leisurely writes the changes out to the actual data files later.  In the case of a crash, the WAL is used to replay the data-file changes, which may or may not have made it to disk.
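
Incidentally, if you want that background work to finish on demand rather than at its own pace, a manual checkpoint forces all dirty buffers out to the data files immediately (superuser only):

    CHECKPOINT;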

It keeps going for quite a while unless I stop the PostgreSQL server, in which case it stops immediately.

Do you stop PostgreSQL with a fast or an immediate shutdown?
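
(For reference, the three shutdown modes as pg_ctl exposes them:

    pg_ctl stop -m smart      # wait for clients to disconnect, then checkpoint and stop
    pg_ctl stop -m fast       # disconnect clients, checkpoint, stop (the default in 9.5+)
    pg_ctl stop -m immediate  # abort without a checkpoint; WAL recovery runs at next start
)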
 
If I then restart the server, the light stays off.
I'm wondering whether this is normal. I'd like to be sure that my data are safe once committed.

If your kernel/fs/SSD doesn't lie about syncing the data, then your data is safe once committed. (It is possible there are bugs in PostgreSQL, of course, but nothing you report indicates you have found one).

If you really want to be sure that the full stack, from PostgreSQL down to the hardware on the SSD, is crash safe, the only real way is to do some "pull the plug" tests.

Cheers,

Jeff

Re: INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

From
Tom DalPozzo
Date:
Hi,
about the SSD light:
I guessed it was WAL -> actual-data-file traffic. That explains why the light stops blinking after I shut down the server (I did it via the kill command). But if so, I expected the light to start blinking again after restarting the server (to continue the WAL -> data-file activity).
Regards

2016-12-05 20:02 GMT+01:00 Jeff Janes <jeff.janes@gmail.com>:
On Fri, Dec 2, 2016 at 9:40 AM, Tom DalPozzo <t.dalpozzo@gmail.com> wrote:
Hi,
I have two tables, t1 and t2, each with an indexed bigint id field and a 256-char data field; t1 always has 10,000 rows, while t2 keeps growing as explained below.

My libpq client continuously updates one row in t1 (each time targeting a different row) and inserts a new row into t2. All this happens in blocks of 1000 update-insert pairs per commit, to get better performance.
wal_sync_method is fsync, fsync is on; my conf file is attached.
I have a 3.8 GHz laptop with an EVO SSD.

Throughput is measured over every two executed blocks, i.e. every 2000 update-insert pairs.

For the first few minutes throughput is around 10K rows/s; then over the next few minutes it slowly drops to 4K rows/s, then slowly climbs back up, and so on, like a wave.
I don't understand this behaviour. Is it normal? What does it depend on?

Yes, that is normal.  It is also very complicated.  It depends on pretty much everything: PostgreSQL, the kernel, the filesystem, the IO controller, firmware, hardware, whatever else is going on on the computer at the same time, etc.

Also, when I stop the client I see the SSD activity light still working heavily.

This is normal.  PostgreSQL writes the critical data to the WAL first, and then leisurely writes the changes out to the actual data files later.  In the case of a crash, the WAL is used to replay the data-file changes, which may or may not have made it to disk.

It keeps going for quite a while unless I stop the PostgreSQL server, in which case it stops immediately.

Do you stop PostgreSQL with a fast or an immediate shutdown?

If I then restart the server, the light stays off.
I'm wondering whether this is normal. I'd like to be sure that my data are safe once committed.

If your kernel/fs/SSD doesn't lie about syncing the data, then your data is safe once committed. (It is possible there are bugs in PostgreSQL, of course, but nothing you report indicates you have found one.)

If you really want to be sure that the full stack, from PostgreSQL down to the hardware on the SSD, is crash safe, the only real way is to do some "pull the plug" tests.

Cheers,

Jeff

Re: INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

From
Jeff Janes
Date:
On Tue, Dec 6, 2016 at 2:44 AM, Tom DalPozzo <t.dalpozzo@gmail.com> wrote:
Hi,
about the SSD light:

I guessed it was WAL -> actual-data-file traffic. That explains why the light stops blinking after I shut down the server (I did it via the kill command).

Do you kill with -15 (the default) or -9?  And which process, the postgres master itself or just some random child?
 
But if so, I expected the light to start blinking again after restarting the server (to continue the WAL -> data-file activity).

The normal checkpoint is paced, so it trickles the data out slowly; that keeps the light on without actually stressing the system.

When you shut the system down, it does a fast checkpoint.  This gets the data written out as quickly as possible (since you are shutting down, it doesn't worry about interfering with performance for other users, as there are none), so once it is done you don't see the light anymore.  If you do a clean shutdown (kill -15 of the postgres master), this fast checkpoint happens at shutdown.  If you do an abort (kill -9), then the fast checkpoint happens at start-up, once recovery is finished but before the database is opened for regular use.
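
(To summarize the signal handling, as documented for the server; these go to the postmaster only:

    kill -TERM <postmaster_pid>   # smart shutdown: wait for clients to finish, then checkpoint
    kill -INT  <postmaster_pid>   # fast shutdown: disconnect clients, then checkpoint
    kill -QUIT <postmaster_pid>   # immediate shutdown: no checkpoint, WAL recovery at next start
    kill -KILL <postmaster_pid>   # avoid: like pulling the plug, and child processes are left behind
)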

Cheers,

Jeff

Re: INSERT - UPDATE throughput oscillating and SSD activity after stopping the client

От
Tom DalPozzo
Дата:
Hi,
I did: pkill -x postgres
so it should have sent SIGTERM.
Regards
Pupillo