Re: performance of insert/delete/update
From | Robert Treat |
---|---|
Subject | Re: performance of insert/delete/update |
Date | |
Msg-id | 1038338746.17245.51.camel@camel |
In reply to | Re: performance of insert/delete/update (Ron Johnson <ron.l.johnson@cox.net>) |
List | pgsql-performance |
On Mon, 2002-11-25 at 23:27, Ron Johnson wrote:
> On Mon, 2002-11-25 at 21:30, Tom Lane wrote:
> > Ron Johnson <ron.l.johnson@cox.net> writes:
> > > On Mon, 2002-11-25 at 18:23, scott.marlowe wrote:
> > >> The next factor that makes for fast inserts of large amounts of data in a
> > >> transaction is MVCC. With Oracle and many other databases, transactions
> > >> are written into a separate log file, and when you commit, they are
> > >> inserted into the database as one big group. This means you write your
> > >> data twice, once into the transaction log, and once into the database.
> >
> > > You are just deferring the pain. Whereas others must flush from log
> > > to "database files", they do not have to VACUUM or VACUUM ANALYZE.
> >
> > Sure, it's just shuffling the housekeeping work from one place to
> > another. The thing that I like about Postgres' approach is that we
> > put the housekeeping in a background task (VACUUM) rather than in the
> > critical path of foreground transaction commit.
>
> If you have a quiescent point somewhere in the middle of the night...

You seem to be implying that running vacuum analyze causes some large
performance issues, but it's just not the case. I run a 24x7 operation,
and I have a few tables that "turn over" within 15 minutes. On these
tables I run vacuum analyze every 5 - 10 minutes and really there is
little/no performance penalty.

Robert Treat
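[Editor's note: the frequent-vacuum regimen Robert describes can be sketched as a scheduled job. This is a hypothetical illustration, not his actual setup; the database name `mydb` and table name `hot_table` are assumptions, and at the time of this thread a cron job like this was a common way to keep high-churn tables vacuumed.]

```shell
# Hypothetical crontab entry: VACUUM ANALYZE a high-turnover table
# every 5 minutes. VACUUM reclaims dead row versions left behind by
# MVCC updates and deletes; ANALYZE refreshes planner statistics so
# query plans track the table's rapidly changing contents.
*/5 * * * *  psql -d mydb -c 'VACUUM ANALYZE hot_table;'
```

Because a plain (non-FULL) VACUUM does not take an exclusive lock, reads and writes against `hot_table` continue while it runs, which is why the penalty Robert observes is small.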