Thread: long running query running too long


long running query running too long

From:
"Todd Fulton"
Date:

Hi All,

I really like this list.  Thank you for all the invaluable information!  May I ask a question?

I've got a table with about 8 million rows and growing.  I must run reports daily off this table, and another smaller one.  It's a typical reporting query -- joins, groupings, and aggregates included.  This particular report takes about 10 minutes on average, and that time keeps growing.  I've created all the indices I think are necessary.

Any advice on how I can get this puppy to go faster?  Hardware changes are not an option at this point, so I'm hoping there is something else I can poke at.  Anyone?

Todd


POSTGRESQL CONF:

 

#log_connections = on
#fsync = off
#max_connections = 64

# Any option can also be given as a command line switch to the
# postmaster, e.g., 'postmaster -c log_connections=on'. Some options
# can be set at run-time with the 'SET' SQL command.

# See /usr/share/doc/postgresql/README.postgresql.conf.gz for a full list
# of the allowable options

debug_level = 0
log_connections = on
log_pid = on
log_timestamp = on
syslog = 0
# if syslog is 0, turn silent_mode off!
silent_mode = off
syslog_facility = LOCAL0
trace_notify = off
max_connections = 128
# shared_buffers must be at least twice max_connections, and not less than 16
shared_buffers = 256
# TCP/IP access is allowed by default, but the default access given in
# pg_hba.conf will permit it only from localhost, not other machines.
tcpip_socket = 1

EXPLAIN ANALYZE for the query:

prod=# explain analyze SELECT t.tgpid, t.directoryname, t.templateid, count(*) AS requested FROM (spk_tgp t JOIN spk_tgplog l ON ((t.tgpid = l.tgpid))) GROUP BY t.tgpid, t.directoryname, t.templateid;
NOTICE:  QUERY PLAN:

Aggregate  (cost=2740451.66..2820969.41 rows=805178 width=48) (actual time=460577.85..528968.17 rows=1875 loops=1)
  ->  Group  (cost=2740451.66..2800839.97 rows=8051775 width=48) (actual time=460577.57..516992.19 rows=8117748 loops=1)
        ->  Sort  (cost=2740451.66..2740451.66 rows=8051775 width=48) (actual time=460577.55..474657.59 rows=8117748 loops=1)
              ->  Hash Join  (cost=128.26..409517.83 rows=8051775 width=48) (actual time=11.45..85332.88 rows=8117748 loops=1)
                    ->  Seq Scan on spk_tgplog l  (cost=0.00..187965.75 rows=8051775 width=8) (actual time=0.03..28926.67 rows=8125690 loops=1)
                    ->  Hash  (cost=123.41..123.41 rows=1941 width=40) (actual time=11.28..11.28 rows=0 loops=1)
                          ->  Seq Scan on spk_tgp t  (cost=0.00..123.41 rows=1941 width=40) (actual time=0.06..7.60 rows=1880 loops=1)
Total runtime: 529542.66 msec

Re: long running query running too long

From:
PC Drew
Date:
On Feb 17, 2004, at 10:06 AM, Todd Fulton wrote:
>
>
> I’ve got a table with about 8 million rows and growing.  I must run
> reports daily off this table, and another smaller one.  Typical query
> – joins, groupings and aggregates included.  This certain report takes
> about 10 minutes on average and is getting longer.  I’ve created all
> the indices I think are necessary.
>
>

What indexes have you created?  The query is not using any indexes, so
there might be a problem there.  Can you disable seqscans temporarily
to test this?
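If you want to try that as a quick, reversible experiment, the planner
setting can be toggled for just your session (a sketch using the query
you posted; note that enable_seqscan = off only discourages sequential
scans, it doesn't forbid them):

SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT t.tgpid, t.directoryname, t.templateid, count(*) AS requested
  FROM spk_tgp t JOIN spk_tgplog l ON (t.tgpid = l.tgpid)
  GROUP BY t.tgpid, t.directoryname, t.templateid;
SET enable_seqscan = on;   -- restore the default when done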

>
> prod=# explain analyze SELECT t.tgpid, t.directoryname, t.templateid,
> count(*) AS requested FROM (spk_tgp t JOIN spk_tgplog l ON ((t.tgpid =
> l.tgpid))) GROUP BY t.tgpid, t.directoryname, t.templateid;


Can you please send the results of the following commands:

psql=# \d spk_tgp

and

psql=# \d spk_tgplog


You might also want to try using a sub-query instead of a join.  I'm
assuming that the spk_tgplog table has a lot of rows and spk_tgp has
very few rows.  It might make sense to try something like this:

EXPLAIN ANALYZE
SELECT t.tgpid, t.directoryname, t.templateid, r.requested
FROM (SELECT tgpid, count(*) AS requested FROM spk_tgplog GROUP BY
tgpid) r, spk_tgp t
WHERE r.tgpid = t.tgpid;
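The idea behind this shape: the inner SELECT collapses the big log
table down to one row per tgpid before the join, so the join (and any
later sorting) only has to handle a couple of thousand rows instead of
millions.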

--
PC Drew


Re: long running query running too long

From:
"Todd Fulton"
Date:
Hey!  I think I have appropriate indexes, but might not.  You're
absolutely right on my join -- spk_tgplog has the 8.5 million rows,
spk_tgp around 2400.  I'll try the sub-select.  Here is the output you
asked for:

spank_prod=# \d spk_tgp;
                                          Table "spk_tgp"
     Column     |           Type           |                          Modifiers
----------------+--------------------------+--------------------------------------------------------------
 tgpid          | bigint                   | not null
 directoryname  | character varying(64)    | not null
 directoryurl   | character varying(1028)  | not null
 submiturl      | character varying(1028)  |
 submitdate     | date                     |
 acceptdate     | date                     |
 templateid     | character varying(64)    | not null
 reciprocalcode | character varying(2056)  |
 notes          | character varying(2056)  |
 createdate     | timestamp with time zone | not null default ('now'::text)::timestamp(6) with time zone
 modifydate     | timestamp with time zone | not null default ('now'::text)::timestamp(6) with time zone
 requested      | integer                  |
 hostid         | integer                  | default 1
Indexes: idx_spk_tgp_tgpid
Primary key: pk_spk_tgp

spank_prod=# \d idx_spk_tgp_tgpid
       Index "idx_spk_tgp_tgpid"
    Column     |         Type
---------------+-----------------------
 tgpid         | bigint
 directoryname | character varying(64)
btree

spank_prod=# \d spk_tgplog;
                                         Table "spk_tgplog"
    Column     |           Type           |                          Modifiers
---------------+--------------------------+--------------------------------------------------------------
 remoteaddress | character varying(32)    | not null
 tgpid         | bigint                   | not null
 referer       | character varying(256)   |
 createdate    | timestamp with time zone | not null default ('now'::text)::timestamp(6) with time zone
Indexes: idx_spk_tgplog_createdate,
         idx_spk_tgplog_tgpid

spank_prod=# \d idx_spk_tgplog_createdate
   Index "idx_spk_tgplog_createdate"
   Column   |           Type
------------+--------------------------
 createdate | timestamp with time zone
btree

spank_prod=# \d idx_spk_tgplog_tgpid
Index "idx_spk_tgplog_tgpid"
 Column |  Type
--------+--------
 tgpid  | bigint
btree



Todd







Re: long running query running too long

From:
PC Drew
Date:
On Feb 17, 2004, at 1:41 PM, Todd Fulton wrote:
>
> spank_prod=# \d idx_spk_tgp_tgpid
>        Index "idx_spk_tgp_tgpid"
>     Column     |         Type
> ---------------+-----------------------
>  tgpid         | bigint
>  directoryname | character varying(64)
> btree
>

A couple of things to note:

1.  What version of PostgreSQL are you running?  I'm currently running
7.3.4 and my output of \d on a table shows more index information than
yours does.  If you're running anything earlier than 7.3, I'd
definitely recommend that you upgrade.

2.  Why are you using a multicolumn index in this case?  You might want
to read the page in the documentation that discusses multi-column
indexes specifically (there's a sketch covering both points after the
link below):

http://www.postgresql.org/docs/7.4/interactive/indexes-multicolumn.html
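For #1, the server will tell you its exact version; for #2, if the join
key alone is what the reports need, a single-column index is worth
testing.  This is a sketch only -- it reuses the index and table names
from your \d output, assumes pk_spk_tgp doesn't already cover tgpid,
and belongs on a copy of the data first:

SELECT version();                                   -- exact server version

DROP INDEX idx_spk_tgp_tgpid;                       -- the multicolumn index
CREATE INDEX idx_spk_tgp_tgpid ON spk_tgp (tgpid);  -- join key only
ANALYZE spk_tgp;                                    -- refresh planner stats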

In any case, the index may not be used at all here.  Does anyone know
if indexes are used in a case like this:

> spk_tgp t JOIN spk_tgplog l ON (t.tgpid = l.tgpid)

My hunch is that it's not used.  My understanding is that an index acts
more as a shortcut so the database doesn't have to go through the
entire table to look for specific values.  When joining two tables,
however, you inherently have to go through the entire table.  If anyone
can clarify this, that'd be great.

--
PC Drew


Re: long running query running too long

From:
Tom Lane
Date:
"Todd Fulton" <pongo@jah.net> writes:
> prod=# explain analyze SELECT t.tgpid, t.directoryname, t.templateid,
> count(*) AS requested FROM (spk_tgp t JOIN spk_tgplog l ON ((t.tgpid =
> l.tgpid))) GROUP BY t.tgpid, t.directoryname, t.templateid;
> NOTICE:  QUERY PLAN:

> Aggregate  (cost=2740451.66..2820969.41 rows=805178 width=48) (actual
> time=460577.85..528968.17 rows=1875 loops=1)
>   ->  Group  (cost=2740451.66..2800839.97 rows=8051775 width=48) (actual
> time=460577.57..516992.19 rows=8117748 loops=1)
>         ->  Sort  (cost=2740451.66..2740451.66 rows=8051775 width=48)
> (actual time=460577.55..474657.59 rows=8117748 loops=1)
>               ->  Hash Join  (cost=128.26..409517.83 rows=8051775
> width=48) (actual time=11.45..85332.88 rows=8117748 loops=1)
>                     ->  Seq Scan on spk_tgplog l  (cost=0.00..187965.75
> rows=8051775 width=8) (actual time=0.03..28926.67 rows=8125690 loops=1)
>                     ->  Hash  (cost=123.41..123.41 rows=1941 width=40)
> (actual time=11.28..11.28 rows=0 loops=1)
>                           ->  Seq Scan on spk_tgp t  (cost=0.00..123.41
> rows=1941 width=40) (actual time=0.06..7.60 rows=1880 loops=1)
> Total runtime: 529542.66 msec

The join itself is being done fine --- I doubt there is another option
that will go faster, given the difference in the table sizes.  Note the
join step completes in only 85 seconds.  What is killing you is the
sorting/grouping operation.  You could try increasing sort_mem to see
if that makes it go any faster, but I suspect the best answer would be to
update to PG 7.4.  7.4 will probably use hash aggregation for this and
avoid the sort altogether.
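A minimal sketch of the sort_mem experiment (in 7.x the value is in
kilobytes and applies per sort; 65536 -- i.e., 64MB -- is only an
illustrative number, so size it to the RAM you can spare):

SET sort_mem = 65536;
EXPLAIN ANALYZE SELECT t.tgpid, t.directoryname, t.templateid, count(*) AS requested
  FROM spk_tgp t JOIN spk_tgplog l ON (t.tgpid = l.tgpid)
  GROUP BY t.tgpid, t.directoryname, t.templateid;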

            regards, tom lane