Re: Searching in varchar column having 100M records
| From | Tomas Vondra |
|---|---|
| Subject | Re: Searching in varchar column having 100M records |
| Date | |
| Msg-id | 20190719183946.ls5vmus7ukhbl63w@development |
| In reply to | Re: Searching in varchar column having 100M records (mayank rupareliya <mayankjr03@gmail.com>) |
| Replies | Re: Searching in varchar column having 100M records |
| List | pgsql-performance |
On Fri, Jul 19, 2019 at 07:43:26PM +0530, mayank rupareliya wrote:

>Well, you haven't shown us the execution plan, so it's hard to check why
>it did not help much and give you further advice.
>
>This is the latest query execution with explain after adding indexing on
>both columns.
>
>Aggregate  (cost=174173.57..174173.58 rows=1 width=8) (actual time=65087.657..65087.658 rows=1 loops=1)
>  ->  Bitmap Heap Scan on fields  (cost=1382.56..174042.61 rows=52386 width=0) (actual time=160.340..65024.533 rows=31857 loops=1)
>        Recheck Cond: ((field)::text = 'Champlin'::text)
>        Heap Blocks: exact=31433
>        ->  Bitmap Index Scan on index_field  (cost=0.00..1369.46 rows=52386 width=0) (actual time=125.078..125.079 rows=31857 loops=1)
>              Index Cond: ((field)::text = 'Champlin'::text)
>Planning Time: 8.595 ms
>Execution Time: 65093.508 ms

That very clearly does not use the index-only scan, so it's not surprising
it's not any faster. You need to find out why the planner makes that
decision.

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
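[Editor's note] A minimal sketch of how one might start investigating why the planner chooses a bitmap scan instead of an index-only scan. The table name `fields` and index name `index_field` are taken from the plan above; the `count(*)` query is an assumption reconstructed from the Aggregate node and the `'Champlin'` condition, since the original statement is not shown in the thread.

```sql
-- Assumed objects: table "fields", index "index_field" (from the plan above).

-- 1. Index-only scans still fetch heap pages that are not marked all-visible
--    in the visibility map. Check how much of the table is all-visible.
SELECT relname, relpages, relallvisible
FROM pg_class
WHERE relname = 'fields';

-- 2. Update the visibility map and planner statistics, then retry the query.
VACUUM (ANALYZE) fields;

-- 3. Compare plans: discourage the bitmap scan in this session only and see
--    whether an index-only scan is chosen and whether it is actually faster.
SET enable_bitmapscan = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM fields WHERE field = 'Champlin';  -- assumed query shape
RESET enable_bitmapscan;
```

If `relallvisible` is far below `relpages`, the planner has little reason to prefer an index-only scan, because most index entries would still require heap fetches; a `VACUUM` typically changes that picture.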