Re: Slow query and indexes...

From: PFC
Subject: Re: Slow query and indexes...
Date:
Msg-id: op.trz1uv0acigqcu@apollo13
In reply to: Re: Slow query and indexes...  ("Jonas Henriksen" <jonas.f.henriksen@gmail.com>)
Responses: Re: Slow query and indexes...  (Jim Nasby <decibel@decibel.org>)
List: pgsql-general
> Thanks for a good answer, I'll try to find a workaround. The number of
> data_loggers will change, but not too frequently. I was actually hoping
> to make a view showing the latest data for each logger, maybe I can
> manage that with a stored procedure thingy...

    - Create a table which contains your list of loggers (since it's good
normalization anyway, you probably have it already) and have your data
table's logger_id REFERENCE it
    - You now have a simple way to get the list of loggers (just select from
the loggers table which will contain 3 rows)
    - Then, to get the most recent record for each logger_id, you do:

SELECT l.logger_id,
       (SELECT d.id FROM data d WHERE d.logger_id = l.logger_id
         ORDER BY d.logger_id DESC, d.date_time DESC LIMIT 1) AS last_record_id
  FROM loggers l
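
    Since you mentioned wanting a view showing the latest data for each
logger, you can simply wrap that query in one; no stored procedure needed.
Rough sketch, not tested (the view name latest_data is just my choice, and
it assumes your real data table has the logger_id and date_time columns
used above, plus an index on (logger_id, date_time)):

CREATE VIEW latest_data AS
SELECT l.logger_id,
       (SELECT d.id FROM data d
         WHERE d.logger_id = l.logger_id
         ORDER BY d.logger_id DESC, d.date_time DESC
         LIMIT 1) AS last_record_id
  FROM loggers l;

Since a view is just expanded inline by the planner, selecting from it
should produce the same backward index scan per logger as the bare query.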

    2-minute example:

forum_bench=> CREATE TABLE loggers (id SERIAL PRIMARY KEY, name TEXT );
CREATE TABLE

forum_bench=> INSERT INTO loggers (name) VALUES ('logger 1'),('logger
2'),('logger 3');
INSERT 0 3

forum_bench=> CREATE TABLE data (id SERIAL PRIMARY KEY, logger_id INTEGER
NOT NULL REFERENCES loggers( id ));
CREATE TABLE

forum_bench=> INSERT INTO data (logger_id) SELECT 1+floor(random()*3) FROM
generate_series(1,1000000);

forum_bench=> SELECT logger_id, count(*) FROM data GROUP BY logger_id;
  logger_id | count
-----------+--------
          3 | 333058
          2 | 333278
          1 | 333664


NOTE: here I use id rather than the timestamp to get the latest record.
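
    If you'd rather pick the latest row by its timestamp, the same trick
works, as long as the index matches the ORDER BY. Untested sketch: the toy
table here has no date_time column, so this assumes the one from your real
table, and data_by_logger_time is just a name I made up:

CREATE INDEX data_by_logger_time ON data (logger_id, date_time);

SELECT l.id,
       (SELECT d.id FROM data d
         WHERE d.logger_id = l.id
         ORDER BY d.logger_id DESC, d.date_time DESC
         LIMIT 1) AS last_record_id
  FROM loggers l;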

forum_bench=> EXPLAIN ANALYZE SELECT logger_id, max(id) FROM data GROUP BY
logger_id;
                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
  HashAggregate  (cost=19166.82..19169.32 rows=200 width=8) (actual time=1642.556..1642.558 rows=3 loops=1)
    ->  Seq Scan on data  (cost=0.00..14411.88 rows=950988 width=8) (actual time=0.028..503.308 rows=1000000 loops=1)
  Total runtime: 1642.610 ms

forum_bench=> CREATE INDEX data_by_logger ON data (logger_id, id);
CREATE INDEX

forum_bench=> EXPLAIN ANALYZE SELECT l.id, (SELECT d.id FROM data d WHERE
d.logger_id=l.id ORDER BY d.logger_id DESC, d.id DESC LIMIT 1) FROM
loggers l;
                                                                      QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
  Seq Scan on loggers l  (cost=0.00..3128.51 rows=1160 width=4) (actual time=0.044..0.074 rows=3 loops=1)
    SubPlan
      ->  Limit  (cost=0.00..2.68 rows=1 width=8) (actual time=0.020..0.020 rows=1 loops=3)
            ->  Index Scan Backward using data_by_logger on data d  (cost=0.00..13391.86 rows=5000 width=8) (actual time=0.018..0.018 rows=1 loops=3)
                  Index Cond: (logger_id = $0)
  Total runtime: 0.113 ms
(6 rows)

forum_bench=> SELECT l.id, (SELECT d.id FROM data d WHERE d.logger_id=l.id
ORDER BY d.logger_id DESC, d.id DESC LIMIT 1) FROM loggers l;
  id | ?column?
----+----------
   1 |   999999
   2 |  1000000
   3 |   999990
(3 rows)
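
    And if you want the whole latest row for each logger rather than just
its id, you can join that result back against data on its primary key.
Again only a sketch, not benchmarked here:

SELECT d.*
  FROM data d
  JOIN (SELECT (SELECT d2.id FROM data d2
                 WHERE d2.logger_id = l.id
                 ORDER BY d2.logger_id DESC, d2.id DESC
                 LIMIT 1) AS last_id
          FROM loggers l) AS latest
    ON d.id = latest.last_id;

Each logger should still cost one backward index scan plus one primary-key
lookup, so it stays cheap as the data table grows.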
