Re: Message queue table..

From Chris Browne
Subject Re: Message queue table..
Date
Msg-id 60mynros15.fsf@dba2.int.libertyrms.com
In reply to Message queue table..  (Jesper Krogh <jesper@krogh.cc>)
List pgsql-performance
jesper@krogh.cc (Jesper Krogh) writes:
> I have this "message queue" table.. currently with 8m+
> records. Picking the top priority messages seem to take quite
> long.. it is just a matter of searching the index.. (just as explain
> analyze tells me it does).
>
> Can anyone digest further optimizations out of this output? (All
> records have funcid=4)
>
> # explain analyze SELECT job.jobid, job.funcid, job.arg, job.uniqkey,
> job.insert_time, job.run_after, job.grabbed_until, job.priority,
> job.coalesce FROM workqueue.job  WHERE (job.funcid = 4) AND
> (job.run_after <= 1208442668) AND (job.grabbed_until <= 1208442668)
> AND (job.coalesce = 'Efam') ORDER BY funcid, priority ASC LIMIT 1
> ;

There might be value in having one or more extra indices...

Here are *plausible* candidates:

1.  If "funcid = 4" is highly selective (e.g., you are always running
this query, and funcid is often <> 4), then you might add partial
indexes such as:

  create index job_funcid_run_after on workqueue.job (run_after) where funcid = 4;
  create index job_funcid_grabbeduntil on workqueue.job (grabbed_until) where funcid = 4;
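
A further variant (just a sketch; the index name and column order are my
guesses, not something tested against your data) would be a single
partial, multicolumn index that also covers the "coalesce" match and lets
the ORDER BY on priority be read straight off the index:

  -- "coalesce" quoted to keep it clearly distinct from the COALESCE function;
  -- equality column first, then the sort column, then the range columns
  create index job_funcid4_coalesce_priority on workqueue.job
      ("coalesce", priority, run_after, grabbed_until) where funcid = 4;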

2.  Straight indices like the following:

   create index job_run_after on workqueue.job(run_after);
   create index job_grabbed_until on workqueue.job(grabbed_until);
   create index job_funcid on workqueue.job(funcid);
   create index job_coalesce on workqueue.job(coalesce);

Note that it is _possible_ (though by no means guaranteed) that several
of these might prove useful in combination, if you're running 8.1+,
where PostgreSQL supports bitmap index scans.
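
If you do create more than one of them, re-running the original EXPLAIN
ANALYZE is the simplest way to see whether the planner actually combines
them; on 8.1+ you would hope to see Bitmap Index Scan nodes joined under
a BitmapAnd.  For instance (the same query as above, just trimmed to a
couple of output columns):

  explain analyze SELECT job.jobid, job.priority
  FROM workqueue.job  WHERE (job.funcid = 4) AND
  (job.run_after <= 1208442668) AND (job.grabbed_until <= 1208442668)
  AND (job.coalesce = 'Efam') ORDER BY funcid, priority ASC LIMIT 1;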

Another possibility...

3.  You might change your process to handle multiple records per "run",
so that you instead run the query (perhaps via a cursor?) with LIMIT
[Something Bigger than 1].
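
A rough sketch of that (the cursor name and fetch sizes are arbitrary,
and it assumes the client can keep a transaction open while it works
through the batch):

  BEGIN;
  DECLARE job_batch CURSOR FOR
    SELECT job.jobid, job.funcid, job.arg, job.uniqkey, job.insert_time,
           job.run_after, job.grabbed_until, job.priority, job.coalesce
    FROM workqueue.job
    WHERE (job.funcid = 4) AND (job.run_after <= 1208442668)
      AND (job.grabbed_until <= 1208442668) AND (job.coalesce = 'Efam')
    ORDER BY funcid, priority ASC LIMIT 100;
  -- pull jobs a handful at a time instead of re-running the search per job
  FETCH 10 FROM job_batch;
  -- ... process those, then FETCH again until done ...
  CLOSE job_batch;
  COMMIT;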

It does seem mighty expensive to run a 245ms query to find just one
record.  It seems quite likely that you could return the top 100 rows
(LIMIT 100) without finding that it takes noticeably longer.

Returning 100 tuples in 245ms seems rather more acceptable, no?  :-)
--
(format nil "~S@~S" "cbbrowne" "linuxfinances.info")
http://linuxdatabases.info/info/linuxdistributions.html
Rules of the Evil Overlord #32. "I will not fly into a rage and kill a
messenger who brings me bad news  just to illustrate how evil I really
am. Good messengers are hard to come by."
<http://www.eviloverlord.com/>
