Re: Processor usage/tuning question

From: israel
Subject: Re: Processor usage/tuning question
Msg-id: be938b518ed0e6873d694436f86f51c4@ravnalaska.net
In reply to: Re: Processor usage/tuning question  (Andy Colson <andy@squeakycode.net>)
Responses: Re: Processor usage/tuning question  (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-general
On 10/03/2014 6:28 pm, Andy Colson wrote:
> On 10/03/2014 04:40 PM, Alan Hodgson wrote:
>> On Friday, October 03, 2014 11:24:31 AM Israel Brewster wrote:
>>> I have a Postgresql 9.3.5 server running on CentOS 6.5. In looking at
>>> some stats today, I saw that it was handling about 4-5
>>> transactions/second (according to the SELECT
>>> sum(xact_commit+xact_rollback) FROM pg_stat_database; query), and an
>>> instance of the postmaster process was consistently showing 40%-80%
>>> utilization to handle this. I didn't think anything of that (the
>>> machine has plenty of capacity) until I mentioned it to a friend of
>>> mine, who said that utilization level seemed high for that many
>>> transactions. So if that level of utilization IS high, what might I
>>> need to tune to bring it down to a more reasonable level?
>>>
>> You probably have some read queries not properly indexed that are
>> sequentially scanning that 1.2 million row table over and over again.
>> Enable slow query logging and see what's going on.
>>
>
> Yep, do that... and then:
>
> https://wiki.postgresql.org/wiki/Slow_Query_Questions
>
> -Andy
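
(For anyone finding this later: the slow query logging Alan and Andy
suggested is a one-line postgresql.conf change; the 1000ms threshold below
is just an example value, not necessarily what I used:)

log_min_duration_statement = 1000   # log any statement that runs >= 1000ms
# apply without a full restart, e.g. from psql:
# SELECT pg_reload_conf();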

Thank you all for the advice. It looks like the load is due to a query
that is taking around 1300ms to complete - a query that is run by every
client connected (probably half a dozen or so, although I don't have
specific numbers), every fifteen seconds or so. As you can imagine, that
keeps the server rather busy :-) Specifically, it looks like the time is
due to a sort (PARTITION BY tail ORDER BY pointtime DESC) that operates
on around 100,000 rows.
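
(Rough back-of-the-envelope math, assuming those numbers: six clients x
1.3 s of query time every 15 s is about 7.8 s of work per 15-second
window, i.e. roughly half of one core, which lines up with the 40%-80%
utilization I was seeing.)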

The lovely details:

The query in question is the following:

SELECT *
FROM (SELECT tail, to_char(pointtime, 'MM/DD/YYYY HH24:MI:SS'),
             lat, lng, altitude, heading, speed, source, pointtime,
             ROW_NUMBER() OVER (PARTITION BY tail ORDER BY pointtime DESC) AS row
      FROM data
      WHERE tail IN (<list of about 55 values or so>)
        AND pointtime >= '<timestamp of 24 hours prior to current UTC time>'::timestamp) s1
WHERE s1.row <= 5
ORDER BY tail, pointtime DESC

In English, it boils down to: get the five most recent data points for
each listed tail number. I look at the last 24 hours of data because it
is quite possible that a tail number has no recent data points.

One obvious optimization is to look at a smaller time range. That would
definitely speed up the query, but at the risk of not getting any data
points for one or more of the requested tail numbers (that risk already
exists, but looking back 24 hours keeps it fairly small for us).
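
Another angle I might try (an untested sketch on my part, not something
from the thread; it leans on the existing tailtime_idx and on LATERAL,
which 9.3 supports): rewrite the window-function query as a LATERAL join,
so each tail number becomes its own small index-backed top-5 scan instead
of one big sort over the whole 24-hour window:

SELECT d.*
FROM unnest(ARRAY['<tail 1>', '<tail 2>' /* ... ~55 values */]) AS t(tail)
CROSS JOIN LATERAL (
    SELECT tail, to_char(pointtime, 'MM/DD/YYYY HH24:MI:SS'),
           lat, lng, altitude, heading, speed, source, pointtime
    FROM data
    WHERE data.tail = t.tail
      AND pointtime >= '<timestamp of 24 hours prior to current UTC time>'::timestamp
    ORDER BY pointtime DESC   -- tailtime_idx (tail, pointtime DESC) matches this
    LIMIT 5
) d
ORDER BY d.tail, d.pointtime DESC;

No ROW_NUMBER(), so nothing has to sort all ~110,000 rows; whether the
planner really does one short index scan per tail is something I'd have
to verify with EXPLAIN ANALYZE.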

The table description:
tracking=# \d data
                                      Table "public.data"
   Column   |            Type             |                     Modifiers
-----------+-----------------------------+---------------------------------------------------
  id        | bigint                      | not null default nextval('data_id_seq'::regclass)
  tail      | character varying(16)       | not null
  timerecp  | timestamp without time zone | not null default now()
  altitude  | integer                     |
  pointtime | timestamp without time zone |
  lat       | numeric(7,5)                | not null
  lng       | numeric(8,5)                | not null
  speed     | integer                     |
  heading   | integer                     |
  source    | character varying(64)       |
  syncd     | boolean                     | default false
Indexes:
     "data_pkey" PRIMARY KEY, btree (id)
     "pointtime_idx" btree (pointtime)
     "syncd_idx" btree (syncd)
     "tail_idx" btree (tail)
     "tailtime_idx" btree (tail, pointtime DESC)
     "timerecp_idx" btree (timerecp)

tracking=#

Adding the two-column sorted index didn't seem to affect the query time
much.

The table currently contains 1,303,951 rows, and any given 24-hour period
has around 110,000 rows.

The results of the explain analyze command can be seen here:
http://explain.depesz.com/s/H5w (nice site, btw. I'll have to be sure to
bookmark it), where it clearly shows that the sort on
data.tail, data.pointtime is the largest time sink (if I am reading it
right).

Postgres version is PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu,
compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit

This is the first time I have dug into this particular query. I want to
say it wasn't this slow in my testing, but then the server wasn't under
load in my testing either, and I probably had a lot less data (everything
works, so it's been a while since I looked). Hardware is dual quad-core
2.5 GHz Xeon processors, 16 GB of RAM, and an SSD RAID 10 holding the
database. All of this is new as of about four months ago.

And to recap the postgres memory settings:
shared_buffers: 4GB
effective_cache_size: 12GB
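
One thing I still need to check (a guess on my part; the 64MB figure
below is purely illustrative, not a recommendation): whether that sort is
spilling to disk. work_mem isn't in the recap above, and the 9.3 default
is only 1MB. If the plan shows "Sort Method: external merge  Disk: ...",
bumping it per-session should let a ~110,000-row sort stay in memory:

SHOW work_mem;            -- defaults to 1MB on 9.3 if never changed
SET work_mem = '64MB';    -- per sort, per session; illustrative value only
-- then re-run EXPLAIN (ANALYZE, BUFFERS) on the query above and look for
-- "Sort Method: quicksort  Memory: ..." instead of an external merge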

So, basically, what it boils down to is: "is there a way to speed up that
sort?" I want to say I've seen a number of similar questions here
recently, so I'll spend some time perusing those.

Thanks again!

