> On Fri, 18 Feb 2005 22:35:31 -0500, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> pgsql@mohawksoft.com writes:
>> > I think there should be a 100% no data loss fail safe.
>>
>> Possibly we need to recalibrate our expectations here. The current
>> situation is that PostgreSQL will not lose data if:
>>
>> 1. Your disk drive doesn't screw up (eg, lie about write complete,
>>    or just plain die on you).
>> 2. Your kernel and filesystem don't screw up.
>> 3. You follow the instructions about routine vacuuming.
>> 4. You don't hit any bugs that we don't know about.
>>
> I'm not an expert but a happy user. My opinion is:
> 1) There is nothing we can do about #1 and #2.
> 2) #4 is not a big problem, because of the speed with which
> developers fix bugs once they are found.
>
> 3) All databases have some type of maintenance routine; in Informix,
> for example, we have UPDATE STATISTICS, and there are others for
> Oracle. Of course those exist for performance reasons, but vacuum
> serves that purpose too, and additionally protects us from XID
> wraparound.
> So having a maintenance routine in PostgreSQL is not bad. *Bad* is
> to have a DBA(1) with no clue about the tool he is using. Tools that
> do too much are an incentive to hire *no clue* people.
>
> (1) DBA: DataBase Administrator or DataBase Annihilator???
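For what it's worth, a rough sketch of that maintenance routine: you can watch how close each database is getting to XID wraparound via the `age()` of `pg_database.datfrozenxid`, and a database-wide vacuum pushes the counter back. This assumes you can query the system catalogs from psql; adjust to taste.

```sql
-- Show how many transactions each database is from its last
-- database-wide freeze; the closer age() gets to ~2 billion,
-- the closer you are to wraparound trouble.
SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
 ORDER BY xid_age DESC;

-- A database-wide VACUUM (run while connected to each database)
-- advances datfrozenxid and resets the clock:
VACUUM;
```

This is just a monitoring sketch, not a substitute for scheduled vacuuming of every database.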
PostgreSQL is such an awesome project. The only thing it seems to suffer
from is a disregard for its users.