On 03/11/10 08:46, Mladen Gogala wrote:
> I wrote a little Perl script, intended to test the difference that
> array insert makes with PostgreSQL. Imagine my surprise when a single
> record insert into a local database was faster than batches of 100
> records. Here are the two respective routines:
Interesting - I'm seeing a modest but repeatable improvement with bigger
array sizes (using the attached program to insert pgbench_accounts) on an
older dual-core AMD box with a single SATA drive running Ubuntu 10.04 i686:
rows      arraysize  elapsed(s)
1000000   1          161
1000000   10         115
1000000   100        110
1000000   1000       109
This is *despite* the fact that tracing the executed SQL (by setting
log_min_duration_statement = 0) shows *no* difference between the runs
(i.e. 1000000 individual INSERT executions are performed in each case).
I'm guessing that some Perl driver overhead is being saved here.
I'd be interested to see if you can reproduce the same or similar effect.
What might also be interesting is doing each INSERT with an array-load
of bind variables appended to the VALUES clause - as this results in only
one INSERT call per "array" of values.
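For what it's worth, the multi-row VALUES variant could be sketched roughly
like this (in Python rather than Perl, purely for illustration - the helper
name is mine, the columns assume pgbench_accounts, and the DBI-style "?"
placeholders would need adjusting for whichever driver you actually use):

```python
def build_multirow_insert(table, columns, rows):
    """Build one INSERT ... VALUES (...), (...), ... statement covering a
    whole batch of rows, plus the flattened bind-parameter list, so the
    driver makes a single execute call per batch instead of one per row."""
    # One placeholder group per row, e.g. "(?, ?, ?, ?)"
    group = "(" + ", ".join(["?"] * len(columns)) + ")"
    values_clause = ", ".join([group] * len(rows))
    sql = f"INSERT INTO {table} ({', '.join(columns)}) VALUES {values_clause}"
    # Flatten the rows into one parameter list matching placeholder order
    params = [value for row in rows for value in row]
    return sql, params

# Example: a 2-row batch for pgbench_accounts
sql, params = build_multirow_insert(
    "pgbench_accounts",
    ["aid", "bid", "abalance", "filler"],
    [(1, 1, 0, ""), (2, 1, 0, "")],
)
# sql  -> INSERT INTO pgbench_accounts (aid, bid, abalance, filler)
#         VALUES (?, ?, ?, ?), (?, ?, ?, ?)
# then: dbh.execute(sql, params)  # one round trip for the whole batch
```

The statement text changes with the batch size, so a prepared-once plan is
only reused within batches of the same size - worth keeping in mind when
comparing against the per-row prepared INSERT.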
Cheers
Mark