Brad Pepers <brad@linuxcanada.com> writes:
> ALPESH KOTHARI wrote:
>> Storing 1000 such records takes as long as 90
>> seconds. I don't have any prior experience using a
>> database. So, is this much time OK?
> You've turned off auto-commit, right, and just do one commit at
> the end? Otherwise it's doing a lot of work for each record
> added. In general, when you are doing bulk inserts, you want
> to turn off some of the database features to gain speed.
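(To make that concrete: with autocommit off, the whole load becomes
one transaction. A minimal sketch at the psql prompt, using a made-up
table "measurements":)

    BEGIN;
    -- hypothetical example table and values
    INSERT INTO measurements VALUES (1, 42.0);
    INSERT INTO measurements VALUES (2, 43.5);
    -- ... the remaining 998 rows ...
    COMMIT;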
Other commonly used tricks for speeding up bulk inserts are:
1. Use "COPY FROM STDIN" to load all the records in one command, instead
of a series of INSERT commands. This reduces parsing, planning, etc
overhead a great deal. (If you do this then it's not necessary to fool
around with autocommit.)
2. If you are loading a freshly created table, the fastest way is to
create the table, bulk-load with COPY, then create any indexes needed
for the table. Creating an index on pre-existing data is quicker than
updating it incrementally as each record is loaded. This isn't useful
for adding to an existing table, of course. See the second sketch
after this list.
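For point 1, a minimal sketch at the psql prompt (same made-up
"measurements" table; in COPY's default text format the columns are
separated by tab characters, and a line containing just "\." ends the
data):

    COPY measurements FROM STDIN;
    1	42.0
    2	43.5
    \.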
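And for point 2, the create/load/index order looks roughly like this
(the column and index names are made up for illustration):

    CREATE TABLE measurements (id integer, reading float8);
    COPY measurements FROM STDIN;
    1	42.0
    2	43.5
    \.
    -- build the index only after all the data is loaded
    CREATE INDEX measurements_id_idx ON measurements (id);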
regards, tom lane