Re: speeding up inserts
From: Chris Ochs
Subject: Re: speeding up inserts
Date:
Msg-id: 018901c3d09a$5784fc60$d9072804@chris2
In reply to: speeding up inserts ("Chris Ochs" <chris@paymentonline.com>)
List: pgsql-general
> "Chris Ochs" <chris@paymentonline.com> writes: > > Is this a crazy way to handle this? > > Depends. Do you care if you lose that data (if the system crashes > before your daemon can insert it into the database)? I think the > majority of the win you are seeing comes from the fact that the data > doesn't actually have to get to disk --- your "write to file" never > gets further than kernel disk buffers in RAM. > > I would think that you could get essentially the same win by aggregating > your database transactions into bigger ones. From a reliability point > of view you're doing that anyway --- whatever work the daemon processes > at a time is the real transaction size. > > regards, tom lane > The transactions are as big as they can be, all the data is committed at once. I'm guessing that for any database to be as fast as I want it, it just needs bigger/better hardware, which isnt' an option at the moment. I was also thinking about data loss with the disk queue. Right now it's such a small risk, but as we do more transactions it gets bigger. So right now yes it's an acceptable risk given the chance of it happening and what a worst case scenario would look like. but at a point it wouldnt' be. Chris