Re: Postgres for a "data warehouse", 5-10 TB

From: Robert Klemme
Subject: Re: Postgres for a "data warehouse", 5-10 TB
Date:
Msg-id: j4lens$hp0$1@dough.gmane.org
In reply to: Re: Postgres for a "data warehouse", 5-10 TB  (Marti Raudsepp <marti@juffo.org>)
Responses: Re: Postgres for a "data warehouse", 5-10 TB  (Andy Colson <andy@squeakycode.net>)
List: pgsql-performance
On 11.09.2011 19:02, Marti Raudsepp wrote:
> On Sun, Sep 11, 2011 at 17:23, Andy Colson <andy@squeakycode.net> wrote:
>> On 09/11/2011 08:59 AM, Igor Chudov wrote:
>>> By the way, does that INSERT UPDATE functionality or something like this exist in Postgres?
>> You have two options:
>> 1) write a function like:
>> create function doinsert(_id integer, _value text) returns void as
>> 2) use two sql statements:
>
> Unfortunately both of these options have caveats. Depending on your
> I/O speed, you might need to use multiple loader threads to saturate
> the write bandwidth.
>
> However, neither option is safe from race conditions. If you need to
> load data from multiple threads at the same time, they won't see each
> other's inserts (until commit) and thus cause unique violations. If
> you could somehow partition their operation by some key, so threads
> are guaranteed not to conflict with each other, then that would be perfect.
> The 2nd option given by Andy is probably faster.
>
> You *could* code a race-condition-safe function, but that would be a
> no-go on a data warehouse, since each call needs a separate
> subtransaction which involves allocating a transaction ID.

Wouldn't it be sufficient to reverse the order for race-condition safety?
Pseudo code:

begin
   insert ...
catch
   update ...
   if not found error
end

Speed is another matter though...
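For concreteness, the insert-first pseudocode could be written in PL/pgSQL roughly as below. Table and column names (t, id, value) are illustrative assumptions, and note that the inner BEGIN ... EXCEPTION block still opens a subtransaction per call, which is exactly the overhead Marti flagged:

CREATE FUNCTION doinsert(_id integer, _value text) RETURNS void AS $$
BEGIN
    BEGIN
        -- Try the insert first; a concurrent committed row triggers
        -- a unique_violation, which we turn into an update.
        INSERT INTO t (id, value) VALUES (_id, _value);
    EXCEPTION WHEN unique_violation THEN
        UPDATE t SET value = _value WHERE id = _id;
        IF NOT FOUND THEN
            -- The conflicting row vanished between insert and update.
            RAISE EXCEPTION 'row with id % disappeared mid-upsert', _id;
        END IF;
    END;
END;
$$ LANGUAGE plpgsql;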

Kind regards

    robert

