Gavin Sherry wrote:
> On Thu, 16 Jun 2005, Hans-Jürgen Schönig wrote:
>
>
>>>2) By no fault of its own, autovacuum's level of granularity is the table
>>>level. For people dealing with non-trivial amounts of data (and we're not
>>>talking gigabytes or terabytes here), this is a serious drawback. Vacuum
>>>at peak times can cause very intense IO bursts -- even with the
>>>enhancements in 8.0. I don't think the solution to the problem is to give
>>>users the impression that it is solved and then vacuum their tables during
>>>peak periods. I cannot stress this enough.
>>
>>
>>I completely agree with Gavin - integrating this kind of thing into the
>>background writer, or integrating it with the FSM, would be the ideal solution.
>>
>>I guess everybody who has already vacuumed a 2 TB relation will agree
>>here. VACUUM is not a problem for small "my cat Minka" databases.
>>However, it has been a real problem on large, heavy-load databases. I
>>have even seen people splitting large tables and joining them with a view
>>to avoid long vacuums and long CREATE INDEX operations (I am not joking
>>- this is serious).
>
>
> I think this gets away from my point a little. People with 2 TB tables can
> take care of themselves, as can people who've taken the time to partition
> their tables to speed up vacuum. I'm more concerned about the majority of
> people who fall in the middle -- between the hobbyist and the high end
> data centre.
>
> Thanks,
>
> Gavin
I think your approach will help all of them.
If we had some sort of autovacuum (which is packaged with most distros
anyway - having it in the core is nice as well) and a mechanism to
improve reallocation / vacuum speed, we would have solved all of these
problems.
I do think that people with 2 TB tables can take care of themselves. The
question is, however, whether the database can do what they want ...
Thanks a lot,
Hans
--
Cybertec Geschwinde u Schoenig
Schoengrabern 134, A-2020 Hollabrunn, Austria
Tel: +43/664/393 39 74
www.cybertec.at, www.postgresql.at