Greg,
> We've hashed through this area before, but for Lance's benefit I'll
> reiterate my dissenting position on this subject. If you're building a
> "tool for dummies", my opinion is that you shouldn't ask any of this
> information. I think there's an enormous benefit to providing something
> that takes basic sizing information and gives conservative guidelines
> based on that--as you say, "safe, middle-of-the-road values"--that are
> still way, way more useful than the default values. The risk in trying to
> make a complicated tool that satisfies all the users Josh is aiming his
> more sophisticated effort at is that you'll lose the newbies.
The problem is that there are no "safe, middle-of-the-road" values for some
things, particularly max_connections and work_mem. In particular, reporting
applications and OLTP/Web applications call for very different conf profiles.
We're talking about order-of-magnitude differences here, not just a few
percent. e.g.:
Web app, typical machine:
max_connections = 200
work_mem = 256kB
default_statistics_target = 100
autovacuum = on

Reporting app, same machine:
max_connections = 20
work_mem = 32MB
default_statistics_target = 500
autovacuum = off
Possibly we could make the language of the "application type" selection less
technical, but I don't see it as dispensable even for a basic tool.
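To make the point concrete, here's a minimal sketch (not anybody's actual tool; the function and dict names are made up) of how even a "dummies" tool could branch on application type alone, using the two profiles above:

```python
# Sketch: pick conservative postgresql.conf values from application type.
# Profile values are the ones quoted in this message.
PROFILES = {
    "web": {          # many short OLTP/Web connections
        "max_connections": 200,
        "work_mem": "256kB",
        "default_statistics_target": 100,
        "autovacuum": "on",
    },
    "reporting": {    # few long analytical queries
        "max_connections": 20,
        "work_mem": "32MB",
        "default_statistics_target": 500,
        "autovacuum": "off",
    },
}

def conf_lines(app_type):
    """Render one profile as postgresql.conf assignments."""
    return ["%s = %s" % (k, v) for k, v in sorted(PROFILES[app_type].items())]

print("\n".join(conf_lines("web")))
```

One question ("is this a web app or a reporting app?") selects all four values; no other input is needed for this part.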
> I wouldn't even bother asking how many CPUs somebody has for what Lance is
> building. The kind of optimizations you'd do based on that are just too
> complicated to expect a tool to get them right and still be accessible to
> a novice.
The CPU count affects the various cpu_cost parameters, but I can buy the idea
that this should only be part of the "advanced" tool.
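For reference, these are the planner parameters in question, with PostgreSQL's stock defaults (how a tool would scale them by CPU count is its own heuristic, not something defined here):

```
cpu_tuple_cost = 0.01            # cost of processing one row
cpu_index_tuple_cost = 0.005     # cost of processing one index entry
cpu_operator_cost = 0.0025       # cost of one operator/function call
```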
--
Josh Berkus
PostgreSQL @ Sun
San Francisco