On COPY's atomicity -- looking for a definitive answer from a core
developer, not a user's guess, please.
Suppose I COPY a large amount of data, say 100 records.
The first 99 records are fine for the target, but the 100th is not -- it
has a malformed record or violates a target constraint.
The whole COPY is then aborted, and the 99 good records do not make
it into the target table.
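For concreteness, a minimal sketch of the scenario I mean (the table,
file name, and constraint are made up for illustration):

```sql
-- Hypothetical setup: a table with a uniqueness constraint.
CREATE TABLE target (id int PRIMARY KEY, val text);

-- /tmp/data.csv holds 99 valid rows, then one row that violates
-- the primary key (a duplicate id).
COPY target FROM '/tmp/data.csv' WITH (FORMAT csv);
-- ERROR:  duplicate key value violates unique constraint "target_pkey"

-- Afterwards none of the 99 good rows are visible:
SELECT count(*) FROM target;  -- 0
```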
My question is: where have those 99 records been living on the
database server while the 100th has not yet arrived, and the need to
throw the accumulated data away has not yet arisen?
There must be some limits on the space and/or row counts consumed by
the new, uncommitted data while the COPY is still in progress.
What are they?
Say I am COPYing 100 TB of data and the bad records are near the
end of the feed -- how will this all error out?
Thanks,
-- Alex