Thread: Slow INSERT

From:
Michal Táborský
Date:

I am experiencing rather slow INSERTs on loaded server. The table I am
inserting to is:

CREATE TABLE pagestats
(
   page_id int4 NOT NULL,
   viewed timestamptz DEFAULT now(),
   session int4 NOT NULL
) WITH OIDS;

The table is populated with 700k rows. It is VACUUM ANALYZED every
night, though it is only INSERTED to and SELECTED from, no UPDATES or
DELETES. There are no indices, triggers or constraints attached to it.
There are about 5 inserts per second (sometimes more, but 10/s max).

The INSERT is:
INSERT INTO pagestats (page_id,session) VALUES (5701,1147421823)

Sometimes, it takes as long as 1300ms! Other queries are quite swift,
even complex SELECTs, and most of the INSERTs run fast. But occasionally
(every 50th or 100th INSERT) it takes forever (and stalls the webpage
from loading). The only special thing about this table is that it does
not have a PRIMARY KEY, but I should think that such a constraint would
only slow it down even more.

Any ideas what can be wrong?

--
Michal Taborsky
http://www.taborsky.cz


From:
Tom Lane
Date:

Michal Táborský <> writes:
> I am experiencing rather slow INSERTs on loaded server.
> ... There are no indices, triggers or constraints attached to it.

It's hard to see how inserting to such a simple table would be slow.

> Sometimes, it takes as long as 1300ms! Other queries are quite swift,
> even complex SELECTs, and most of the INSERTs run fast. But occasionally
> (every 50th or 100th INSERT) it takes forever (and stalls the webpage
> from loading).

Is the number of inserts between slowdowns perfectly repeatable?  My
first thought is that the fast case is associated with inserting onto a
page that is the same one last inserted to, and the slow case is
associated with finding a new page to insert onto (which, given that you
never UPDATE or DELETE, will always mean extending the file).  Given
that the table rows are fixed width, the number of rows that fit on a
page should be constant, so this theory cannot be right if the number of
inserts between slowdowns varies.
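The fixed-width argument can be sketched with back-of-the-envelope
arithmetic. The header, alignment, and pointer sizes below are
assumptions for a typical 7.4-era build (23-byte tuple header plus a
4-byte OID since the table is WITH OIDS, 8-byte MAXALIGN, 4-byte line
pointer, ~24-byte page header on an 8 KB page); they are illustrative,
not exact:

```python
# Rough estimate of rows per heap page for the pagestats table.
# All sizes are assumptions for a typical 7.4-era build.

PAGE_SIZE = 8192          # default PostgreSQL block size
PAGE_HEADER = 24          # assumed page header size
LINE_POINTER = 4          # per-tuple item pointer
TUPLE_HEADER = 23 + 4     # heap tuple header + OID (table is WITH OIDS)

def maxalign(n, align=8):
    """Round n up to the next multiple of align."""
    return (n + align - 1) // align * align

# timestamptz needs 8-byte alignment, so page_id (int4) is padded to 8;
# layout: 8 (page_id + pad) + 8 (viewed) + 4 (session) = 20 bytes.
data_aligned = maxalign(8 + 8 + 4)
tuple_size = maxalign(TUPLE_HEADER) + data_aligned
rows_per_page = (PAGE_SIZE - PAGE_HEADER) // (tuple_size + LINE_POINTER)
print(rows_per_page)
```

With these assumed sizes, roughly 136 rows fit on a page, so at 5-10
inserts per second a file extension would occur only every 14-27
seconds; a stall on every 50th-100th insert would not line up neatly
with that.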

Also, are all the inserts being issued by the same server process, or
are they scattered across multiple processes?  I'm not sure this theory
holds water unless all the inserts are done in the same process.

            regards, tom lane

From:
Michal Taborsky
Date:

Tom Lane wrote:
> It's hard to see how inserting to such a simple table would be slow.

Indeed.

> Is the number of inserts between slowdowns perfectly repeatable?  My
> first thought is that the fast case is associated with inserting onto a
> page that is the same one last inserted to, and the slow case is
> associated with finding a new page to insert onto (which, given that you
> never UPDATE or DELETE, will always mean extending the file).  Given
> that the table rows are fixed width, the number of rows that fit on a
> page should be constant, so this theory cannot be right if the number of
> inserts between slowdowns varies.

I ran some tests to support this hypothesis. Every 500th insert is a tad
slower, but it is insignificant (normally the INSERT lasts 1.5ms, every
500th is 9ms). During my tests (10 runs of 1000 INSERTS) I had
experienced only one "slow" insert (2000ms). It is clearly caused by
other processes running on this server, but such degradation of
performance is highly suspicious, because the server very rarely goes
over load 1.0. Just for the record, it is FreeBSD 4.9 and the system
never swaps.

> Also, are all the inserts being issued by the same server process, or
> are they scattered across multiple processes?  I'm not sure this theory
> holds water unless all the inserts are done in the same process.

Nope. It is a webserver, so these requests are pushed through several
persistent connections (20-30, depends on current load). This insert
occurs only once per pageload.
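With the inserts spread over many persistent connections, one way to
pin down which statements stall is to time each INSERT on the client
side and log only the outliers. This is a hypothetical sketch of such
instrumentation (execute_insert and the latencies are stand-ins, not
the real application code):

```python
import time

# Hypothetical client-side instrumentation: wrap each INSERT call,
# measure its wall-clock latency, and record only the rare outliers.

SLOW_MS = 100.0  # anything above this threshold is worth logging

def timed_insert(execute_insert, slow_log):
    """Run one insert callable and log its latency if it was slow."""
    start = time.perf_counter()
    execute_insert()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    if elapsed_ms >= SLOW_MS:
        slow_log.append(elapsed_ms)
    return elapsed_ms

# Simulated run: most inserts take ~1.5 ms, one stalls for 1300 ms.
slow_log = []
for fake_latency_ms in [1.5] * 99 + [1300.0]:
    timed_insert(lambda d=fake_latency_ms: time.sleep(d / 1000.0),
                 slow_log)
```

Logging the timestamp alongside each outlier would also make it easy
to correlate the stalls with server-side events afterwards.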

--
Michal Taborsky
http://www.taborsky.cz


From:
Michal Taborsky
Date:

Tom Lane wrote:
> Actually, the simpler theory is that the slowdown is caused by
> background checkpoint operations.  Now a checkpoint would slow
> *everything* down not only this one insert, so maybe that's not
> the right answer either, but it's my next idea.  You could check
> this to some extent by manually issuing a CHECKPOINT command and
> seeing if you get an insert hiccup.  Note though that closely
> spaced checkpoints will have less effect, because less I/O will
> be triggered when not much has changed since the last one.  So
> you'd want to wait a bit between experiments.

Aha! This is really the case. I've let the test run and issued manual
CHECKPOINT command. The command itself took about 3 secs and during that
time I had some slow INSERTS. So we know the reason.

I've read the discussion in the "Trying to minimize the impact of
checkpoints" thread and I gather that there is nothing I can do about
it. Well, we'll have to live with that, at least until 7.5.

Thanks for the help all the same.

--
Michal Taborsky
http://www.taborsky.cz


From:
Tom Lane
Date:

Michal Taborsky <> writes:
> I've read the discussion in "Trying to minimize the impact of
> checkpoints" thread and I get it, that there is nothing I can do about
> it. Well, we'll have to live with that, at least until 7.5.

You could experiment with the checkpoint interval (checkpoint_timeout).
A shorter interval will mean more total I/O (the same page will get
written out more often) but it should reduce the amount of I/O done by
any one checkpoint.  You might find that the extra overhead is worth it
to reduce the spikes.
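For reference, the knobs involved live in postgresql.conf. The values
below are illustrative starting points for such an experiment, not
recommendations (the 7.4-era defaults are 300 seconds and 3 segments):

```
# postgresql.conf -- checkpoint spacing (illustrative values, not advice)
checkpoint_timeout = 120     # seconds between forced checkpoints (default 300)
checkpoint_segments = 3      # WAL segments between checkpoints (default 3)
```

Note that checkpoint_segments also forces a checkpoint once that much
WAL has been written, so under heavy write load it, rather than the
timeout, may be what actually fires.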

But 7.5 should provide a much better answer, yes.

            regards, tom lane

From:
Tom Lane
Date:

Michal Taborsky <> writes:
> I ran some tests to support this hypothesis. Every 500th insert is a tad
> slower, but it is insignificant (normally the INSERT lasts 1.5ms, every
> 500th is 9ms). During my tests (10 runs of 1000 INSERTS) I had
> experienced only one "slow" insert (2000ms). It is clearly caused by
> other processes running on this server, but such degradation of
> performance is highly suspicious, because the server very rarely goes
> over load 1.0.

Actually, the simpler theory is that the slowdown is caused by
background checkpoint operations.  Now a checkpoint would slow
*everything* down, not only this one insert, so maybe that's not
the right answer either, but it's my next idea.  You could check
this to some extent by manually issuing a CHECKPOINT command and
seeing if you get an insert hiccup.  Note though that closely
spaced checkpoints will have less effect, because less I/O will
be triggered when not much has changed since the last one.  So
you'd want to wait a bit between experiments.

            regards, tom lane