Re: implicit abort harmful?

From: Csaba Nagy
Subject: Re: implicit abort harmful?
Date:
Msg-id: 1054545362.30137.17.camel@coppola.ecircle.de
In response to: Re: implicit abort harmful?  ("Ian Harding" <ianh@tpchd.org>)
Responses: Bulk copying of data from one table to another  ("shoaib" <shoaibm@vmoksha.com>)
List: pgsql-general
Hi all,

This is heading toward a flame war. Too bad, because it's an important
feature.
I do miss the ability to continue a transaction after a partial
failure. There's a performance penalty in always checking data integrity
before an insert when only 3 records out of a million are wrong. On the
other hand, if the implementation of statement-level undo
(sub-transactions, undo, etc.) would also carry a performance penalty,
then it's all the same, except the code would be cleaner...
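
To make it concrete, the kind of pre-checking I mean looks something like
this (just a sketch, with made-up table and column names); the bad rows are
filtered out up front, at the cost of an extra scan per constraint:

    INSERT INTO target (id, parent_id, payload)
    SELECT i.id, i.parent_id, i.payload
    FROM incoming i
    WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = i.id)         -- skip duplicate keys
      AND EXISTS     (SELECT 1 FROM parent p WHERE p.id = i.parent_id); -- skip FK violations

The three bad rows out of a million never reach the target table, but every
good row pays for the two extra sub-selects.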

It's kind of amusing to see all the Postgres developers getting irritated
when this feature is requested over and over again. Understandable,
given the complexity of the problem. Maybe you should add a specific FAQ
item: "continuing a transaction after an error: don't ask for this feature
unless you can contribute it!!!"

Comparing with MS SQL is also wrong; it's kind of "look, there are worse
implementations than ours, so we're OK". Why not compare with the
working implementations? I won't give any examples (they do exist),
because in my experience Postgres developers are allergic to that too.

Please understand that I do appreciate the effort all postgres
developers invested in this product which I am using, and I am satisfied
with it.

It's just not constructive to keep convincing yourself and everybody that
a workaround is better than the requested feature whenever the
feature is complex and hard to implement, or you simply don't use it...

Cheers,
Csaba.


On Fri, 2003-05-30 at 22:07, Ian Harding wrote:
> Try Microsoft SQL Server (tm).  It will not roll back a transaction unless explicitly told to do so.  It will also
> not inform you of any error, unless you specifically include code to check the status of the error code after every
> single DML statement.  If you do so in a subtransaction, you also have to keep track of how far nested you are and
> whether a prior error has been detected.  Stored procedures have to be written to return error status as well, and
> nested stored procedure calls likewise must keep track of the religiously checked error code status for earlier
> procedures and act (or not act) accordingly.
>
> I really don't think you are going to talk anyone around here into that scenario....
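
For illustration, the per-statement checking described above looks roughly
like this in SQL Server's classic T-SQL; this is only a sketch, and the table
name is invented:

    BEGIN TRANSACTION

    INSERT INTO target (id) VALUES (1)
    IF @@ERROR <> 0 GOTO on_error       -- a check like this must follow every DML statement

    INSERT INTO target (id) VALUES (1)  -- duplicate key: the statement fails,
    IF @@ERROR <> 0 GOTO on_error       -- but the transaction quietly survives

    COMMIT TRANSACTION
    RETURN

    on_error:
    ROLLBACK TRANSACTION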
>
> Ian Harding
> Programmer/Analyst II
> Tacoma-Pierce County Health Department
> iharding@tpchd.org
> Phone: (253) 798-3549
> Pager: (253) 754-0002
>
>
> >>> "Wayne Armstrong" <wdarmst@bacchus.com.au> 05/30/03 12:18PM >>>
> ** Reply to message from Jan Wieck <JanWieck@Yahoo.com> on Fri, 30 May 2003
> 15:06:01 -0400
> Hi,
> Subtransactions here are likely as onerous as the solution that I bet gets
> used most of the time in this scenario, that is to commit after every insert.
> And it's not a matter of good or bad coding here. There are cases where the
> response to an SQL or insert error should be a rollback. There are just as many
> cases where (given that the application is informed there was a problem) the
> problem can safely be ignored. Again, the decision should be up to the
> application, not the database manager. It is the DBMS's prerogative to reject the
> statement, not the transaction.
>
> Regards,
> Wayne
>
> > Wayne Armstrong wrote:
> > > ** Reply to message from Jan Wieck <JanWieck@Yahoo.com> on Thu, 29 May 2003
> > > 20:25:56 -0400
> > > Hi Jan,
> > > Yup, staging tables are probably going to be the answer in this case.
> > > However, my point is, you shouldn't need workarounds for this (or for the
> > > more general "insert, on failure update, on failure ignore" type of logic used in a
> > > lot of import scenarios).
> > > Taking the decision whether or not to roll back a transaction (SQL error or not) away
> > > from the application, and enforcing it in the dbm, results in kluges,
> > > workarounds, and a generally less robust dbm.
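
As a concrete illustration, that "insert, on failure update" pattern is
usually written as the classic two-statement workaround below (a sketch with
invented names; it is not safe against concurrent writers without extra
locking):

    -- Try to update the existing row first; if nothing was updated, insert it.
    UPDATE target SET payload = 'new value' WHERE id = 42;

    INSERT INTO target (id, payload)
    SELECT 42, 'new value'
    WHERE NOT EXISTS (SELECT 1 FROM target WHERE id = 42);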
> >
> > You obviously didn't search the mail archives too much, did you? If
> > INSERT ... SELECT ... fails halfway through due to a duplicate key
> > error - how do you get rid of the tuples inserted so far?
> >
> > This problem is well known, has been discussed often, and is not yet
> > solvable because we do not have subtransactions. They are coming, but I
> > don't know if the proposed implementation will cope well with 120
> > million single INSERTs each running in its own subtransaction.
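
For a rough idea of what per-row subtransactions look like, here is a sketch
using the PL/pgSQL exception handling that arrived in later releases (each
EXCEPTION block runs in its own subtransaction; the tables staging(id, payload)
and final(id, payload) are invented for the example):

    CREATE FUNCTION load_staging() RETURNS void AS $$
    DECLARE
        r RECORD;
    BEGIN
        FOR r IN SELECT id, payload FROM staging LOOP
            BEGIN
                INSERT INTO final (id, payload) VALUES (r.id, r.payload);
            EXCEPTION WHEN unique_violation OR foreign_key_violation THEN
                -- drop the offending row and keep going
                RAISE NOTICE 'skipping row %', r.id;
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

One implicit subtransaction per row is exactly the cost being worried about
for a 120-million-row load.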
> >
> > And I disagree with your last statement. It doesn't result in a less
> > robust dbm; it results in more complex applications that (if written by
> > more primitive coders) will instruct a perfectly robust dbm to do the
> > wrong thing to the data - from a business point of view.
> >
> >
> > Jan
> >
> > >
> > > Regards,
> > > Wayne
> > >
> > >> Martijn van Oosterhout wrote:
> > >> > On Sun, May 25, 2003 at 10:04:30AM +1000, Wayne Armstrong wrote:
> > >> >
> > >> >> During the import of 120 thousand records from an ISAM file system, say 3 records
> > >> >> fail integrity checks (files in non-database systems tend not to have
> > >> >> referential integrity implemented on them except at an application level,
> > >> >> meaning not at all :). The desired result is to drop the records failing
> > >> >> integrity checks.
> > >> >
> > >> >
> > >> > Yes, I have that problem too. I actually wrote a script that took an input
> > >> > file and automatically reissued queries that succeeded but got rolled back.
> > >> > I called it quickloader.pl :)
> > >> >
> > >> > Probably the biggest problem is that you can't use COPY to load the data.
> > >> > I've thought about loading into another table and transferring later, but I
> > >> > haven't sorted out the details.
> > >>
> > >> The general idea would be to set up a table that has exactly the same
> > >> structure as the final target table, but with no constraints at all. As
> > >> long as your data passes all input functions, you can even COPY it in.
> > >>
> > >> Now you run check queries that show you all tuples in that staging table
> > >> that would fail constraints on the final table. Fix those and you can do
> > >>
> > >>      INSERT INTO final SELECT * FROM staging;
> > >>
> > >> If step one fails because of data that doesn't pass the input functions
> > >> of our data types, you have to go through another level of staging with
> > >> a table that has text fields only and move it by explicit casting after
> > >> cleaning up those problems.
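
Spelled out, the staging approach might look like this (a sketch; the table
names, the file path, and the assumption that the final table's only
constraint is a primary key on id are all invented for the example):

    -- Same columns as the target, but no constraints at all.
    CREATE TABLE staging AS SELECT * FROM final WHERE false;

    -- Bulk load with COPY; nothing can violate a constraint yet.
    COPY staging FROM '/tmp/import.dat';

    -- Check queries: rows that would collide with existing keys,
    -- and duplicates within the load itself.
    SELECT s.* FROM staging s JOIN final f ON f.id = s.id;
    SELECT id FROM staging GROUP BY id HAVING count(*) > 1;

    -- After fixing or deleting the offenders:
    INSERT INTO final SELECT * FROM staging;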
> > >>
> > >>
> > >> Jan
> > >>
> > >> --
> > >> #======================================================================#
> > >> # It's easier to get forgiveness for being wrong than for being right. #
> > >> # Let's break this rule - forgive me.                                  #
> > >> #================================================== JanWieck@Yahoo.com #
> >
> >
> >
> > --
> > #======================================================================#
> > # It's easier to get forgiveness for being wrong than for being right. #
> > # Let's break this rule - forgive me.                                  #
> > #================================================== JanWieck@Yahoo.com #
>
>


