RE: index skipped in favor of seq scan.
From | ryan.a.roemmich@mail.sprint.com
---|---
Subject | RE: index skipped in favor of seq scan.
Date |
Msg-id | H0001cb70b98181f.0994776942.kcopmp06@MHS
In response to | index skipped in favor of seq scan. (ryan.a.roemmich@mail.sprint.com)
Responses | Re: index skipped in favor of seq scan.
List | pgsql-general
On the other end of the spectrum, there are many addresses with only one entry. When I use one of these addresses in the WHERE clause, the query takes just as long as it does for the address with 150K rows. If a sequential scan is the better choice for 150K rows out of 800K, what about 1 out of 800K? It seems that once my table grew to this size, the index was no longer used at all. If that's true, is there any point in having the index?

-----Original Message-----
From: peter.e [mailto:peter_e@gmx.net]
Sent: Monday, July 09, 2001 4:26 PM
To: ryan.a.roemmich
Cc: pgsql-general
Subject: Re: [GENERAL] index skipped in favor of seq scan.

ryan.a.roemmich@mail.sprint.com writes:
> I am working on putting syslog logs into a database; I'm parsing the
> logs and using the key information for my fields. With my test data of
> ~200K rows, the optimizer used the b-tree index that I created for an
> oft-used WHERE clause. When the table grew to over 800K rows, the index
> was no longer used. The field in question contains IP addresses, but
> uses varchar. The values are _not_ unique. One particular address has
> 150K entries. How can I keep my WHERE queries speedy?

For 150K out of 800K rows, a sequential scan is definitely the better
choice. If you can prove otherwise, please post data. For problems with
the optimizer in general, you should post the schema, the queries, and
the EXPLAIN output.

--
Peter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
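As Peter suggests, EXPLAIN is the way to see which plan the optimizer actually picks for each case. A minimal sketch of such a check follows; the table and column names are assumed here for illustration, not taken from the thread. Note that the planner relies on statistics gathered by ANALYZE (or VACUUM ANALYZE in PostgreSQL of that era) to know the value distribution, so a rare address can only get an index scan if the statistics are current:

```sql
-- Hypothetical schema matching the thread's description (names assumed):
CREATE TABLE syslog_entries (
    logged_at  timestamp,
    source_ip  varchar(15),   -- IP addresses stored as varchar; values not unique
    message    text
);
CREATE INDEX syslog_source_ip_idx ON syslog_entries (source_ip);

-- Refresh planner statistics so it can see the skewed distribution:
ANALYZE syslog_entries;

-- Plan for a rare address (one row out of 800K):
EXPLAIN SELECT * FROM syslog_entries WHERE source_ip = '10.0.0.99';

-- Plan for the common address (~150K rows out of 800K):
EXPLAIN SELECT * FROM syslog_entries WHERE source_ip = '10.1.2.3';
```

With current statistics, one would expect an index scan for the rare value and a sequential scan for the common one; if both queries show a sequential scan, stale statistics are a likely culprit.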