Re: Improvement for query planner? (no, not about count(*) again ;-))

From: Andres Freund
Subject: Re: Improvement for query planner? (no, not about count(*) again ;-))
Date:
Msg-id: 20200720205041.32grdowjochgagpq@alap3.anarazel.de
In reply to: Re: Improvement for query planner? (no, not about count(*) again ;-))  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-bugs
Hi,

On 2020-07-20 13:58:19 -0400, Tom Lane wrote:
> Tobias Völk <tobias.voelk@t-online.de> writes:
> > I’ve asked postgres to make an unlogged newtable(name text primary key)
> > consisting of the unique names and executed:
> 
> > Insert into newtable(name) select name1 from games on conflict do nothing;
> 
> ON CONFLICT is a really, really expensive way to eliminate duplicates.
> It's meant to handle situations where two or more sessions might
> concurrently insert duplicate keys, which means that (a) there's not
> really any way to detect the situation in advance or optimize it,
> and (b) we don't expect it to happen that much anyhow.
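
For comparison, the same result can be had much more cheaply by deduplicating
in the SELECT itself; a sketch, reusing the table and column names from the
quoted report:

-- Deduplicate in the SELECT instead of relying on ON CONFLICT to
-- discard duplicates one row at a time.
INSERT INTO newtable (name)
SELECT DISTINCT name1
FROM games;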

And it's explicitly not about handling conflicts between rows inserted
in the same statement. In fact, one gets an error when ON CONFLICT
.. DO UPDATE affects a row modified in the same statement:

CREATE TABLE conflict(key text primary key, data text not null);
INSERT INTO conflict VALUES ('a', 'a1'),('a', 'a2'),('b', 'b2') ON CONFLICT (key) DO UPDATE set data = excluded.data;
ERROR:  21000: ON CONFLICT DO UPDATE command cannot affect row a second time
HINT:  Ensure that no rows proposed for insertion within the same command have duplicate constrained values.
LOCATION:  ExecOnConflictUpdate, nodeModifyTable.c:1590
Time: 1.174 ms
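
A sketch of one way around that, assuming the intent is "one row per key
wins": deduplicate the proposed rows within the statement, e.g. with
DISTINCT ON, before ON CONFLICT sees them:

INSERT INTO conflict (key, data)
SELECT DISTINCT ON (key) key, data
FROM (VALUES ('a', 'a1'), ('a', 'a2'), ('b', 'b2')) AS v(key, data)
ORDER BY key, data DESC  -- decides which duplicate survives per key
ON CONFLICT (key) DO UPDATE SET data = excluded.data;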

Greetings,

Andres Freund


