speeding up inserts
From: Chris Ochs
Subject: speeding up inserts
Date:
Msg-id: 009701c3cfeb$b7e3ce10$d9072804@chris2
Responses: Re: speeding up inserts
           Re: speeding up inserts
List: pgsql-general
First of all, we are still running SAP DB at the moment but are in the process of moving to PostgreSQL, so it seemed like a good place to post this kind of question.

In our environment we do transaction processing: each transaction accounts for 10-30 inserts and 3-4 selects. We also have users who work through a management interface, running all sorts of queries on the data once it is in the database. Most of the user queries are selects, with a few updates and even fewer inserts. The basic problem is that transaction times are critical; one second is a big deal. The data a transaction inserts does not have to land in the database instantly, it can be delayed (up to a point, anyway).

Since there is only so much you can do to speed up inserts, I have been testing a different way of getting the data from the application into the database. The application now builds the queries as it runs, but instead of executing them against the database it writes them all out to a single file at the end of the transaction. This is a huge performance boost. A separate daemon then runs the disk queue once every second and performs all the inserts. If for some reason the main application can't write to disk, it falls back to inserting directly.

Is this a crazy way to handle this? No matter what I have tried, opening and writing a single line to a file on disk is far faster than any database I have used. I even tried using BerkeleyDB as the queue instead of flat files, but that wasn't a whole lot faster than using the cached database handles (our application runs under mod_perl).

Chris
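P.S. For anyone who wants to try this, here is a rough sketch of the two halves in Python (our real code is Perl under mod_perl; the spool directory, file naming, and psycopg2 connection string below are just placeholders, not what we actually use):

import glob
import os
import time
import uuid

SPOOL_DIR = "/var/spool/txnqueue"  # hypothetical path

def queue_statements(statements):
    # Writer side, called once per transaction: a single
    # open/write/close. Write under a temp name first, then rename,
    # so the daemon never picks up a half-written file (rename is
    # atomic on POSIX filesystems).
    name = "%d-%s.sql" % (int(time.time()), uuid.uuid4().hex)
    tmp = os.path.join(SPOOL_DIR, name + ".tmp")
    with open(tmp, "w") as f:
        f.write("\n".join(statements) + "\n")
    os.rename(tmp, os.path.join(SPOOL_DIR, name))

def run_queue(conn):
    # Daemon side, called once a second: replay each queued file
    # inside one database transaction, then delete it. Batching
    # many inserts per commit is where the speedup comes from.
    for path in sorted(glob.glob(os.path.join(SPOOL_DIR, "*.sql"))):
        with open(path) as f:
            statements = [line for line in f.read().splitlines() if line]
        cur = conn.cursor()
        for stmt in statements:
            cur.execute(stmt)
        conn.commit()
        os.unlink(path)

if __name__ == "__main__":
    import psycopg2  # assumed DB-API driver; any would do
    conn = psycopg2.connect("dbname=app")  # placeholder DSN
    while True:
        run_queue(conn)
        time.sleep(1)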