Performance of batch COMMIT
| From | Benjamin Arai |
|---|---|
| Subject | Performance of batch COMMIT |
| Date | |
| Msg-id | 007801c604d4$9a965880$d7cc178a@uni |
| Replies | Re: Performance of batch COMMIT |
| List | pgsql-general |
Each week I have to update a very large database. Currently I run a COMMIT about every 1000 queries. This vastly improved performance, but I am wondering whether it can be improved further. I could send all of the queries to a file, but COPY doesn't support plain queries such as UPDATE, so I don't think that is going to help. The only time I have to run a COMMIT is when I need to create a new table. The server has 4GB of memory, and everything else is fast. The only postgresql.conf variable I have changed is the shared memory setting (shared_buffers).
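For readers unfamiliar with the pattern, here is a minimal sketch of the batching described above, assuming a hypothetical `accounts` table (the poster's actual schema and statements are not shown in the message). Grouping many UPDATEs between an explicit BEGIN and COMMIT means the per-transaction overhead, notably the WAL flush at commit, is paid once per batch rather than once per statement:

```sql
-- Sketch of the batch-COMMIT pattern, not the poster's actual script.
-- Table and column names (accounts, balance, id) are hypothetical.
BEGIN;
UPDATE accounts SET balance = balance * 1.05 WHERE id = 1;
UPDATE accounts SET balance = balance * 1.05 WHERE id = 2;
-- ... roughly 1000 statements per batch ...
COMMIT;
```

COPY itself only loads rows, as the poster notes, but one workaround sometimes used for bulk updates (not something the post describes doing) is to COPY the new values into a staging table and then apply them with a single set-based UPDATE ... FROM:

```sql
-- Hedged sketch of a COPY-based workaround; all names are hypothetical.
CREATE TEMP TABLE staging (id integer, balance numeric);

-- Server-side COPY requires superuser; psql's \copy is the
-- client-side equivalent.
COPY staging FROM '/path/to/new_values.dat';

-- One set-based statement replaces many per-row UPDATEs.
UPDATE accounts
   SET balance = staging.balance
  FROM staging
 WHERE accounts.id = staging.id;
```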
Would sending all of the queries in a single query string increase performance (see the sketch after these questions)?
What is the optimal batch size for commits?
Are there any postgresql.conf variables that should be tweaked?
Anybody have any suggestions?
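For reference, the "single query string" idea in the first question would look roughly like the sketch below. When a semicolon-separated string is sent as one simple-query message (for example, through a single PQexec() call in libpq), PostgreSQL executes it in one network round trip and, absent explicit transaction commands, as one implicit transaction. The statement names are again hypothetical:

```sql
-- All three statements below are sent to the server as one string.
-- Without an explicit BEGIN/COMMIT they run as a single implicit
-- transaction, so a failure in any statement rolls back all of them.
UPDATE accounts SET balance = balance * 1.05 WHERE id = 1;
UPDATE accounts SET balance = balance * 1.05 WHERE id = 2;
UPDATE accounts SET balance = balance * 1.05 WHERE id = 3;
```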