Discussion: performance tuning queries
Hi All;
I'm looking for tips/ideas for performance tuning some specific queries.
These are generally large tables on a highly active OLTP system
(100,000 - 200,000 plus queries per day)
First off, any thoughts on tuning inserts into large tables? I have a large
table with an insert like this:
insert into public.bigtab1 (text_col1, text_col2, id) values ...
QUERY PLAN
------------------------------------------
Result (cost=0.00..0.01 rows=1 width=0)
(1 row)
The query cost is low, but this is one of the slowest statements according to pgfouine.
Next we have a select count(*) that is also one of the top offenders:
select count(*) from public.tab3 where user_id=31
and state='A'
and amount>0;
QUERY PLAN
-----------------------------------------------------------------------------------------------------
Aggregate (cost=3836.53..3836.54 rows=1 width=0)
-> Index Scan using order_user_indx on tab3 (cost=0.00..3834.29
rows=897 width=0)
Index Cond: (idx_user_id = 31406948::numeric)
Filter: ((state = 'A'::bpchar) AND (amount > 0::numeric))
(4 rows)
We have an index on user_id but not on state or amount.
Should we add an index on amount?
Thoughts?
> First off, any thoughts on tuning inserts into large tables? I have a
> large
> table with an insert like this:
>
> insert into public.bigtab1 (text_col1, text_col2, id) values ...
>
> QUERY PLAN
> ------------------------------------------
> Result (cost=0.00..0.01 rows=1 width=0)
> (1 row)
>
> The query cost is low, but this is one of the slowest statements according
> to pgfouine.
Possible causes of slow inserts (a quick catalog check for the first few is sketched after this list):
- slow triggers?
- slow foreign key checks? (missing index on the referenced table?)
- a functional index on a slow function?
- crummy hardware (5 MB/s RAID cards, etc.)
- too many indexes?
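A way to check the first few of these from the system catalogs (a sketch; assumes a reasonably recent PostgreSQL):

    -- user-defined triggers on the insert target
    SELECT tgname FROM pg_trigger
    WHERE tgrelid = 'public.bigtab1'::regclass AND NOT tgisinternal;

    -- indexes that every insert must maintain
    SELECT indexrelid::regclass FROM pg_index
    WHERE indrelid = 'public.bigtab1'::regclass;

    -- foreign key constraints whose checks fire on each insert
    SELECT conname, confrelid::regclass FROM pg_constraint
    WHERE conrelid = 'public.bigtab1'::regclass AND contype = 'f';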
> Next we have a select count(*) that is also one of the top offenders:
>
> select count(*) from public.tab3 where user_id=31
> and state='A'
> and amount>0;
>
> QUERY PLAN
> -----------------------------------------------------------------------------------------------------
> Aggregate (cost=3836.53..3836.54 rows=1 width=0)
> -> Index Scan using order_user_indx on tab3 (cost=0.00..3834.29
> rows=897 width=0)
> Index Cond: (idx_user_id = 31406948::numeric)
> Filter: ((state = 'A'::bpchar) AND (amount > 0::numeric))
> (4 rows)
>
> We have an index on user_id but not on state or amount.
>
> Should we add an index on amount?
Can we see EXPLAIN ANALYZE?
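For the query in question, that would be something like:

    EXPLAIN ANALYZE
    SELECT count(*) FROM public.tab3
    WHERE user_id = 31 AND state = 'A' AND amount > 0;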
In this case the ideal index would be a multicolumn index on (user_id,
state), (user_id, amount), or (user_id, state, amount), but choosing
between the three depends on your data...
You could do:

SELECT count(*), state, amount>0 FROM public.tab3 WHERE user_id=31
GROUP BY state, amount>0;

And post the results.
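Depending on what that shows, the candidates would look something like this (a sketch; the index names are just illustrative):

    CREATE INDEX tab3_user_state_idx ON public.tab3 (user_id, state);
    CREATE INDEX tab3_user_amount_idx ON public.tab3 (user_id, amount);
    CREATE INDEX tab3_user_state_amount_idx ON public.tab3 (user_id, state, amount);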
On Wed, 26.11.2008 at 21:21:04 -0700, Kevin Kempter wrote:
> Next we have a select count(*) that is also one of the top offenders:
>
> select count(*) from public.tab3 where user_id=31
> and state='A'
> and amount>0;
>
> QUERY PLAN
> -----------------------------------------------------------------------------------------------------
> Aggregate (cost=3836.53..3836.54 rows=1 width=0)
> -> Index Scan using order_user_indx on tab3 (cost=0.00..3834.29
> rows=897 width=0)
> Index Cond: (idx_user_id = 31406948::numeric)
> Filter: ((state = 'A'::bpchar) AND (amount > 0::numeric))
> (4 rows)
>
> We have an index on user_id but not on state or amount.
>
> Should we add an index on amount?

Depends.
- Is the index on user_id a unique index?
- How many different values are in the table for state? I.e., maybe an
index on state can help.
- How many rows in the table have amount > 0? If almost all rows contain
an amount > 0, an index can't help in this case.

Andreas
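If those answers show that state = 'A' together with amount > 0 is selective, one option worth testing (a sketch; the index name is illustrative) is a partial index:

    CREATE INDEX tab3_user_active_idx ON public.tab3 (user_id)
    WHERE state = 'A' AND amount > 0;

Note this only pays off if the query always uses exactly those predicates.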
Kevin Kempter wrote:
> Hi All;
>
> I'm looking for tips/ideas for performance tuning some specific queries.
> These are generally large tables on a highly active OLTP system
> (100,000 - 200,000 plus queries per day)
>
> First off, any thoughts on tuning inserts into large tables? I have a large
> table with an insert like this:
>
> insert into public.bigtab1 (text_col1, text_col2, id) values ...
>
> QUERY PLAN
> ------------------------------------------
> Result (cost=0.00..0.01 rows=1 width=0)
> (1 row)
>
> The query cost is low, but this is one of the slowest statements according
> to pgfouine.

Do you insert multiple values in one transaction, or one transaction per
insert?
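If it is one transaction per INSERT, batching many rows per transaction (and/or a multi-row VALUES list, or COPY for bulk loads) is usually the first thing to try. A minimal sketch, with made-up values:

    BEGIN;
    INSERT INTO public.bigtab1 (text_col1, text_col2, id) VALUES
        ('a', 'b', 1),
        ('c', 'd', 2),
        ('e', 'f', 3);
    COMMIT;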