sequential scan performance
| From | Michael Engelhart |
|---|---|
| Subject | sequential scan performance |
| Date | |
| Msg-id | D520F8B3-20D6-4272-A6D6-8B690871DE73@mac.com |
| Responses | Re: sequential scan performance (4 replies) |
| List | pgsql-performance |
Hi -
I have a table of about 3 million rows of city "aliases" that I need
to query using LIKE - for example:
select * from city_alias where city_name like '%FRANCISCO'
When I do an EXPLAIN ANALYZE on the above query, the result is:
Seq Scan on city_alias (cost=0.00..59282.31 rows=2 width=42)
(actual time=73.369..3330.281 rows=407 loops=1)
Filter: ((name)::text ~~ '%FRANCISCO'::text)
Total runtime: 3330.524 ms
(3 rows)
This is a query that our system needs to run a LOT. Is there any way
to improve its performance, either by changing the query
or by configuring the database deployment? We have an index on
city_name, but with the % wildcard at the front of the pattern
PostgreSQL can't use the index.
Thanks for any help.
Mike
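[Editorial sketch, not part of the original message: since the pattern here is anchored only at the end, one common workaround is to index the reversed column, turning the suffix match into a prefix match that a btree index can serve. This assumes PostgreSQL 9.1+ for the built-in reverse() function and reuses the city_alias / city_name names from the question; the index name is made up.]

-- Index the reversed city name; text_pattern_ops lets LIKE 'prefix%'
-- use the btree index even under a non-C locale.
CREATE INDEX city_alias_name_rev_idx
    ON city_alias (reverse(city_name) text_pattern_ops);

-- Rewrite the suffix search as a prefix search on the reversed value:
-- '%FRANCISCO' becomes reverse(city_name) LIKE 'OCSICNARF%'
SELECT *
FROM city_alias
WHERE reverse(city_name) LIKE reverse('FRANCISCO') || '%';

For patterns with wildcards on both ends, a trigram index (the pg_trgm contrib module) would be the more general option in newer PostgreSQL versions.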