"Creager, Robert S" wrote:
>
> I've a question. I have often seen the 'trick' of dropping an index,
> importing a large amount of data, then re-creating the index to speed up
> the import. The obvious problem with this is that from the time the index
> is dropped until re-creation finishes, a large db is going to be
> essentially worthless to queries that use those indexes. I know nothing
> about the backend and how it does 'stuff', so I may be asking something
> absurd here. Why, when using transactions, are indexes updated on every
> insert? It seems logical (to someone who doesn't know better) that the
> indexes could be updated at COMMIT.
>
> Please don't hurt me too bad...
> Rob
>
I imagine it's because the transaction might do a SELECT on data it just
inserted/updated. That SELECT may well be answered by an index scan, so the
index has to reflect the uncommitted rows too.
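
A minimal sketch of that read-your-own-writes case, using Python's sqlite3
purely as a stand-in for any SQL backend (the table and index names here are
invented for illustration):

```python
import sqlite3

# Autocommit mode so we can issue BEGIN/COMMIT explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.execute("CREATE INDEX t_id_idx ON t (id)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (42, 'hello')")
# This lookup can be served via t_id_idx. If index maintenance were
# deferred until COMMIT, an index scan here would miss the row the
# same transaction just inserted.
row = conn.execute("SELECT val FROM t WHERE id = 42").fetchone()
print(row[0])  # hello
conn.commit()
```

So deferring all index updates to COMMIT would break queries inside the
transaction itself, unless the backend fell back to scanning unindexed
pending rows separately.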
--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio. http://www.targabot.com