Tom Lane <tgl@sss.pgh.pa.us> writes:
> Um ... I *am* an optimizer-geek.
That explains so much :)
> I stand by my comment that there's a tradeoff between the potential gain
> from an optimization and the time spent to find it.
Well, there are always tradeoffs in engineering. I'm just trying to push a
little in one direction and make you rethink your assumptions.
In the past Postgres users were largely running OLTP systems (mostly web
sites) on ad-hoc queries that are parsed and planned from scratch every time,
with the parameters interpolated directly into the query string. That's crap.
With the new FE binary protocol, and a bit of a push to the driver writers, a
good high-performance (and secure) system ought to be able to run entirely on
prepared queries that are prepared once per backend process and executed
thousands of times.
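
To be concrete, this is the pattern I mean, sketched at the SQL level (the
table and statement names are made up; a driver would do the equivalent over
the wire with the protocol's Parse/Bind/Execute messages):

    -- planned once per backend process:
    PREPARE get_user (int) AS
        SELECT name, email FROM users WHERE id = $1;

    -- then executed as often as you like, skipping parse and plan:
    EXECUTE get_user(42);
    EXECUTE get_user(43);

The parameter value never gets interpolated into the query string, which is
also where the security win comes from.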
> PG is at a disadvantage compared to typical compilation scenarios, in
> that a compiler assumes its output will be executed many times, while
> SQL queries often are planned and then executed but once. There's been
> some talk of working harder when planning a "prepared statement", but
> so far I've not seen very many places where I'd really want to alter
> the planner's behavior on that basis.
I think that's backwards, actually. The queries where you would want to work
super extra hard, spending a few seconds or even minutes checking possible
plans, are the ones that will run for hours. Those are more likely to be DSS
queries: unprepared, ad-hoc queries that will be executed only once.
For OLTP queries I think Postgres can afford to spend small (subsecond)
constant amounts of planning time even on optimizations that are unlikely to
return big benefits, because OLTP systems should be running entirely prepared
queries, so the cost is paid only once per backend.
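
For what it's worth, some knobs for spending extra planner effort on the DSS
case already exist; I'm imagining something along these lines (the values are
just for illustration) before firing off the monster query:

    -- search more join orders exhaustively instead of bailing
    -- out to GEQO or leaving the join order fixed:
    SET geqo_threshold = 20;
    SET from_collapse_limit = 20;
    SET join_collapse_limit = 20;
    SELECT ...;  -- hours of runtime dwarf the extra planning time

A few extra seconds of planning is noise against a query that runs for hours.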
--
greg