I came across this issue while writing a report-like query over two not
very large tables. I've tried several methods to resolve it (see
below), but now I'm really stuck...
It looks like you are wishing to optimize for all-in-memory situations,
in which case the traditional advice is to reduce random_page_cost to
something close to 1. AFAICS all the rowcount estimates you're seeing
are spot on, or as close to spot on as you could realistically hope for,
and so the problem lies with the cost parameters. Fooling with the
statistics is not going to help if the rowcount estimates are already
accurate.
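For what it's worth, the usual way to try this is at the session level first, so nothing sticks if it doesn't help. A sketch (the query itself is a placeholder for your report query, and the effective_cache_size value is just an example to adjust to your machine):

```sql
-- Session-level experiment: lower random_page_cost toward seq_page_cost
-- to tell the planner that random I/O is cheap (all-in-memory case).
SET random_page_cost = 1.1;
SET effective_cache_size = '4GB';  -- example value; set to your actual cache

-- Then re-check the plan and real timings:
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- your report query here
```

SET only affects the current session; once you find settings that work, you can persist them with ALTER SYSTEM or in postgresql.conf.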
I tried changing random_page_cost to 1.1 or values close to that, and
increasing/decreasing effective_cache_size, but Postgres still prefers
the merge-join plan.
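One way to see whether the planner's preference is actually wrong is to forcibly disable merge joins for a single transaction and compare the real execution times of the two plans. A sketch (again, the SELECT is a placeholder for the actual report query):

```sql
-- Compare the plans head-to-head: SET LOCAL reverts at transaction end,
-- so this cannot leak into other sessions or queries.
BEGIN;
SET LOCAL enable_mergejoin = off;
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- your report query here
ROLLBACK;
```

If the non-merge-join plan is not actually faster under EXPLAIN ANALYZE, the planner's choice is right and the cost parameters aren't the problem; if it is faster, the ratio of the two estimated costs versus the two actual times gives a hint about which cost parameter is off.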