So it looks like when the LIMIT crosses a certain threshold (somewhere north of 2^16), Postgres decides to do a Seq Scan instead of an Index Scan.
I've already lowered random_page_cost to 2. Maybe I should lower it to 1.5? Actually, 60K rows should be plenty for my purposes anyway.
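(For anyone following along, here's roughly what I'd try next; note this is just a sketch, and the session-level SET is so I can experiment without affecting anything else:)

```sql
-- Lower the planner's estimate of random I/O cost for this session only.
SET random_page_cost = 1.5;

-- Re-run the query at the problem limit to see if the plan flips back
-- to an Index Scan. ("events" is a placeholder table name here, not
-- the real one from my query.)
EXPLAIN ANALYZE SELECT * FROM events ORDER BY created_at LIMIT 70000;

-- If it helps, the setting can be made permanent in postgresql.conf,
-- or per-database with:
-- ALTER DATABASE mydb SET random_page_cost = 1.5;
```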
On Wed, Feb 1, 2012 at 10:35 AM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Wed, Feb 1, 2012 at 11:19 AM, Alessandro Gagliardi <alessandro@path.com> wrote:
> > Interestingly, increasing the limit does not seem to increase the runtime in
> > a linear fashion. When I run it with a limit of 60000 I get a runtime
> > of 14991 ms. But if I run it with a limit of 70000 I get a runtime of 77744
> > ms. I assume that that's because I'm hitting a memory limit and paging out.
> > Is that right?
>
> Hard to say. More likely your query plan changes at that point. Run
> the queries with "explain analyze" in front of them to find out.
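That's what confirmed it for me, something along these lines (again with a placeholder table name, since my actual query isn't shown here):

```sql
-- Run the same query on either side of the threshold and compare the
-- top node of each plan: one should show "Index Scan", the other
-- "Seq Scan", along with actual (not just estimated) row counts
-- and timings.
EXPLAIN ANALYZE SELECT * FROM events ORDER BY created_at LIMIT 60000;
EXPLAIN ANALYZE SELECT * FROM events ORDER BY created_at LIMIT 70000;
```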