I tried creating the following index:
CREATE INDEX messageq17
ON messageq_table
USING btree
(staff_ty, staff_id, entity_id)
WHERE inactive = false;
Records with 'inactive = false' should make up a smaller proportion of the overall dataset ('active' would be much
easier, but this is legacy), and far more of the queries will specify this clause. The results are very promising.
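For reference, a query of the shape below should be able to use the partial index (the column names are taken from
the index definition; the literal values are hypothetical):

```sql
-- The planner can use messageq17 only when the query's WHERE clause
-- implies the index predicate (inactive = false).
SELECT entity_id
FROM messageq_table
WHERE staff_ty = 1       -- hypothetical value
  AND staff_id = 42      -- hypothetical value
  AND inactive = false;  -- must match the index predicate
```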
I will also try changing the settings and report back.
Thanks again guys,
-----Original Message-----
From: Kevin Grittner [mailto:Kevin.Grittner@wicourts.gov]
Sent: 03 August 2012 15:34
To: Russell Keane; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] query using incorrect index
Russell Keane <Russell.Keane@inps.co.uk> wrote:
> "log_min_duration_statement";"1ms"
> "shared_buffers";"32MB"
> "work_mem";"1MB"
Those are pretty low values even for a 4GB machine. I suggest the following changes and additions, based on the fact
that you seem to have the active portion of the database fully cached.
shared_buffers = '160MB'
work_mem = '8MB'
seq_page_cost = 0.1
random_page_cost = 0.1
cpu_tuple_cost = 0.03
effective_cache_size = '2GB'
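Most of these can be tried out in a single session with SET before committing them to postgresql.conf, for example:

```sql
-- Session-local trial of the suggested planner settings.
-- Note: shared_buffers cannot be changed this way; it requires
-- a postgresql.conf edit and a server restart.
SET work_mem = '8MB';
SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;
SET cpu_tuple_cost = 0.03;
SET effective_cache_size = '2GB';
-- then re-run EXPLAIN ANALYZE on the problem query
```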
> Explain analyse with both indexes present but without the limit (uses
> the correct index):
> "Total runtime: 0.092 ms"
Part of the problem is that it thinks it will find a matching row fairly quickly, and having done so using the index it
chose will mean it is the *right* row. The problem is that there are no matching rows, so it has to scan the entire
index. More fine-grained statistics
*might* help. If other techniques don't help, you can rewrite the query slightly to create an optimization fence, but
that should be a last resort. I agree with Robert that if you have a lot of queries that select on "incoming" and/or
"inactive", a conditional index (with a WHERE clause in its definition) is likely to be very helpful.
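One common form of such a fence is to wrap the selective part of the query in a CTE, which the planner of that era
did not inline. A sketch, with hypothetical column values:

```sql
-- Sketch only: forcing the filter to be evaluated first prevents the
-- planner from choosing a limit-driven scan on the wrong index.
-- Column values are hypothetical; adapt to the real query.
WITH filtered AS (
    SELECT *
    FROM messageq_table
    WHERE staff_ty = 1
      AND staff_id = 42
      AND inactive = false
)
SELECT * FROM filtered LIMIT 1;
```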
-Kevin