Re: Indexes and Primary Keys on Rapidly Growing Tables
From | Alessandro Gagliardi |
---|---|
Subject | Re: Indexes and Primary Keys on Rapidly Growing Tables |
Date | |
Msg-id | CAAB3BBJRh47qLa6sjEU3KyUhMGLX2=vQE=T6i4fwu6P+rzKuCw@mail.gmail.com |
In reply to | Re: Indexes and Primary Keys on Rapidly Growing Tables (Josh Berkus <josh@agliodbs.com>) |
Responses | Re: Indexes and Primary Keys on Rapidly Growing Tables |
List | pgsql-performance |
I was thinking about that (as per your presentation last week), but my problem is that when I'm building up a series of inserts, if one of them fails (very likely in this case due to a unique_violation) I have to roll back the entire transaction. I asked about this in the novice forum and was advised to use SAVEPOINTs. That seems a little clunky to me but may be the best way. Would it be realistic to expect this to increase performance ten-fold?
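For illustration, a minimal sketch of the SAVEPOINT pattern being discussed, assuming a hypothetical buffer table `events_buffer` with a unique key on `id` (the error handling is driven by the client, which issues ROLLBACK TO SAVEPOINT when an insert fails):

```sql
BEGIN;
INSERT INTO events_buffer (id, payload) VALUES (1, 'first');

-- Set a savepoint before each risky insert so a unique_violation
-- only discards this one row, not the whole batch.
SAVEPOINT sp;
INSERT INTO events_buffer (id, payload) VALUES (1, 'duplicate');
-- On error the client issues:  ROLLBACK TO SAVEPOINT sp;
-- On success it issues:        RELEASE SAVEPOINT sp;

COMMIT;  -- all inserts that were not rolled back are committed together
```

One savepoint per row adds overhead, so whether batching still wins depends on the duplicate rate; the speedup comes from amortizing the commit cost over many rows.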
On Mon, Feb 20, 2012 at 3:30 PM, Josh Berkus <josh@agliodbs.com> wrote:
On 2/20/12 2:06 PM, Alessandro Gagliardi wrote:
> . But first I just want to know if people
> think that this might be a viable solution or if I'm barking up the wrong
> tree.

Batching is usually helpful for inserts, especially if there's a unique
key on a very large table involved.

I suggest also making the buffer table UNLOGGED, if you can afford to.
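A sketch of the UNLOGGED buffer approach (table and column names are hypothetical): an unlogged table skips WAL writes, which makes bulk inserts cheaper, at the cost that its contents are lost after a crash. Rows are periodically moved into the durable main table in a single set-based statement.

```sql
-- Staging table with the same columns as the main table.
-- UNLOGGED: no WAL, fast inserts, not crash-safe.
CREATE UNLOGGED TABLE events_buffer (LIKE events INCLUDING DEFAULTS);

-- Flush the buffer into the main table, skipping rows whose
-- primary key already exists (avoids unique_violation errors).
INSERT INTO events (id, payload)
SELECT b.id, b.payload
FROM events_buffer b
WHERE NOT EXISTS (SELECT 1 FROM events e WHERE e.id = b.id);

TRUNCATE events_buffer;
```

Deduplicating in the SELECT sidesteps the per-row SAVEPOINT dance entirely, since conflicts are filtered out before the insert into the indexed main table.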
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance