On Sat, 12 Jan 2013 02:47:26 +0100
"T. E. Lawrence" <t.e.lawrence@icloud.com> wrote:
> Hello,
>
> I have a pretty standard query with two tables:
>
> SELECT a.id FROM table_a a, table_b b WHERE ... AND ... AND b.value=...;
>
> With the last "AND b.value=..." the query is extremely slow (I did not wait for it to end, but more than a minute),
> because the value column is not indexed (it contains items longer than 8K).
You can construct your own home-made index: add a new column to table_b holding the first 8-16 bytes/chars of b.value,
use this column in your query, and then refine against the complete b.value. Don't forget to create an index on it too.
You can keep this column updated with a trigger.
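A minimal sketch of that approach (the column, index, and trigger names, the 16-character prefix length, and the '...search value...' literal are all placeholders, not from the original post):

    -- Add a short, indexable prefix column to table_b
    ALTER TABLE table_b ADD COLUMN value_prefix text;
    UPDATE table_b SET value_prefix = substr(value, 1, 16);
    CREATE INDEX table_b_value_prefix_idx ON table_b (value_prefix);

    -- Keep the prefix in sync with a trigger
    CREATE OR REPLACE FUNCTION sync_value_prefix() RETURNS trigger AS $$
    BEGIN
      NEW.value_prefix := substr(NEW.value, 1, 16);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER table_b_value_prefix_trg
      BEFORE INSERT OR UPDATE OF value ON table_b
      FOR EACH ROW EXECUTE PROCEDURE sync_value_prefix();

    -- Query: filter on the indexed prefix, then refine on the full value
    SELECT a.id
    FROM table_a a, table_b b
    WHERE ...
      AND b.value_prefix = substr('...search value...', 1, 16)
      AND b.value = '...search value...';

The cheap indexed equality on value_prefix narrows the candidate rows first; the full comparison on b.value then only touches that small set.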
Perhaps you can use a partial index for the b.value column; I have never used that feature, so the documentation (or
others on the list) can point you to how to do it.
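For reference, a partial index covers only the rows matching a predicate, while an expression index on a prefix of the column may be the closer fit here; both sketches below use assumed index names and an assumed size cutoff:

    -- Expression index: index only the first 16 chars of value
    CREATE INDEX table_b_value_prefix_expr_idx ON table_b (substr(value, 1, 16));

    -- Partial index: index value only for rows small enough to fit in an index entry
    CREATE INDEX table_b_value_part_idx ON table_b (value)
      WHERE octet_length(value) < 2000;

Note that the expression index is only used if the query repeats the same expression, e.g. `AND substr(b.value, 1, 16) = substr('...', 1, 16)`, and the partial index is only used when the query's WHERE clause implies the index predicate.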
> However the previous conditions "WHERE ... AND ... AND" should have already reduced the candidate rows to just a few
> (table_b contains over 50m rows). And indeed, removing the last "AND b.value=..." speeds the query up to just a
> millisecond.
>
> Is there a way to instruct PostgreSQL to evaluate the initial "WHERE ... AND ... AND" first and then apply the last
> "AND b.value=..." to the (very small) result?
>
> Thank you and kind regards,
> T.
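Regarding the evaluation-order question: the planner does not guarantee predicate order, but in current PostgreSQL a CTE acts as an optimization fence, so you can force the small intermediate result to be computed first (a sketch only; the elided conditions are left as in your query):

    WITH small_set AS (          -- CTE is evaluated on its own, not flattened
        SELECT a.id, b.value
        FROM table_a a, table_b b
        WHERE ... AND ...        -- the selective conditions run first
    )
    SELECT id FROM small_set WHERE value = ...;

Wrapping the selective part in a subquery with `OFFSET 0` is another commonly used fence with the same effect.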
-- 
Eduardo Morras <emorrasg@yahoo.es>