Hi,
I have a table with a column called "state". Each row can be in one
of four states, let's call them 'new', 'pending', 'ok', and 'bad'.
On average, about 95% of the rows will be 'bad', with the remaining
5% being in one of the other three states. If the table has 50K rows
scan. If the table has 50K rows and I just want to pull out the 'ok'
rows, I don't want to do a sequential scan. To pull out the 'bad' rows,
obviously, a sequential scan is fine.
I've heard that a btree index performs badly in this situation. Is
a hash index appropriate? I've heard bad things about hash indexes in
PostgreSQL.
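
For concreteness, here is a minimal sketch of the scenario. SQLite (via
Python) stands in for PostgreSQL just to make the example self-contained,
and the table name, index name, and exact distribution are assumptions.
It uses a partial index, one technique often suggested for heavily
skewed columns like this, since it covers only the rare rows:

```python
import sqlite3

# Hypothetical reconstruction of the scenario: a 50K-row table whose
# "state" column is ~95% 'bad'.  All names here are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, state TEXT NOT NULL)")
conn.executemany(
    "INSERT INTO items (state) VALUES (?)",
    [("ok" if i % 20 == 0 else "bad",) for i in range(50_000)],
)

# A partial index stores entries only for the rare rows, so it stays
# small and the planner can use it for the selective query instead of
# scanning the whole table.
conn.execute("CREATE INDEX items_ok ON items (state) WHERE state = 'ok'")

n_ok = conn.execute("SELECT count(*) FROM items WHERE state = 'ok'").fetchone()[0]
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT count(*) FROM items WHERE state = 'ok'"
).fetchall()
print(n_ok)   # 2500 rows, i.e. 5% of 50,000
print(plan)   # the plan detail should name items_ok rather than a bare table scan
```

PostgreSQL supports the same idea with CREATE INDEX ... WHERE, so the
index need never carry entries for the 95% of rows that are 'bad'.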
Regards,
David.
Roaring Penguin Software Inc. | http://www.roaringpenguin.com
GPG fingerprint: C523 771C 3710 0F54 B2D2 4B0D C6EF 6991 34AB 95BA
GPG public key: http://www.roaringpenguin.com/dskoll-key-2002.txt ID: 34AB95BA