Steve Eckmann <eckmann@computer.org> writes:
> We also found that we could improve MySQL performance significantly
> using MySQL's "INSERT" command extension allowing multiple value-list
> tuples in a single command; the rate for MyISAM tables improved to
> about 2600 objects/second. PostgreSQL doesn't support that language
> extension. Using the COPY command instead of INSERT might help, but
> since rows are being generated on the fly, I don't see how to use COPY
> without running a separate process that reads rows from the
> application and uses COPY to write to the database.
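For what it's worth, a separate process isn't strictly necessary: rows generated on the fly can be collected into an in-memory file-like buffer and handed to the driver's COPY support. A minimal sketch of the buffer-building side (the table name "objects" is hypothetical, and the driver call is shown only as a comment since it needs a live connection):

```python
import io

def rows_to_copy_buffer(rows):
    # Build a tab-separated, newline-terminated buffer in COPY's
    # default text format from rows produced on the fly.
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join(str(col) for col in row) + "\n")
    buf.seek(0)
    return buf

buf = rows_to_copy_buffer([(1, "alpha"), (2, "beta")])
# With a driver such as psycopg2, this buffer could then be fed to
# COPY, e.g.: cur.copy_from(buf, "objects")
print(buf.getvalue())
```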
Can you conveniently alter your application to batch INSERT commands
into transactions? I.e.:
BEGIN;
INSERT ...;
... maybe 100 or so inserts ...
COMMIT;
BEGIN;
... lather, rinse, repeat ...
This cuts down the transactional overhead quite a bit. A downside is
that you lose multiple rows if any INSERT fails, but then the same would
be true of multiple VALUES lists per INSERT.
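The batching pattern above can be sketched in application code; here sqlite3 stands in for PostgreSQL purely to keep the example self-contained (the BEGIN/COMMIT structure is the same with any driver, and the table name "objects" and batch size of 100 are just illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # autocommit mode; we issue BEGIN/COMMIT ourselves
conn.execute("CREATE TABLE objects (id INTEGER, name TEXT)")

rows = [(i, "obj%d" % i) for i in range(1000)]
batch_size = 100

conn.execute("BEGIN")
for i, row in enumerate(rows, start=1):
    conn.execute("INSERT INTO objects VALUES (?, ?)", row)
    if i % batch_size == 0:
        # Commit every 100 or so inserts, then start a new transaction,
        # so each COMMIT amortizes its overhead over the whole batch.
        conn.execute("COMMIT")
        conn.execute("BEGIN")
conn.execute("COMMIT")  # flush the final (possibly partial) batch

count = conn.execute("SELECT count(*) FROM objects").fetchone()[0]
print(count)  # → 1000
```

If any INSERT in a batch fails, the whole uncommitted batch is lost, as noted above, so the batch size trades throughput against how much work a failure can discard.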
regards, tom lane