It is, but it takes only 32 msec because the query has already run and the useful pages are cached. And since my values are random, as soon as I look up some new values they get cached too and are no longer "new".
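One way to see how much of that 32 msec is coming from cache rather than disk is to ask EXPLAIN for buffer statistics; "shared hit" means the page was already in PostgreSQL's buffer cache, while "read" means it had to be fetched. A minimal sketch (table and column names are placeholders, not from the actual schema):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM big_table WHERE some_col = 12345;

If repeated runs show mostly "shared hit" and few "read" lines, the timing reflects a warm cache rather than real disk performance.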
According to my experience, I would vote for a too-slow filesystem.
What I was hoping for was some general insight from the EXPLAIN ANALYZE output: maybe extra or different indices would help, or there is some better method for finding one row out of 100 million. I realize I am asking a vague question which probably can't be solved as presented.
Hmm... perhaps you could try denormalizing the table and then using multicolumn indices?
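For what it's worth, a multicolumn index only helps if the query filters on the leading column(s) of the index. A rough sketch, assuming the lookup filters on two columns of a denormalized table (names are made up for illustration):

    CREATE INDEX idx_big_table_a_b ON big_table (col_a, col_b);

    -- can use the index (leading column is constrained):
    SELECT * FROM big_table WHERE col_a = 42 AND col_b = 'x';

    -- generally cannot use it efficiently (col_a unconstrained):
    SELECT * FROM big_table WHERE col_b = 'x';

So the column order in the index should match the most common access pattern.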